If we assemble a zend_string manually, we need to end it with a NUL
byte ourselves.
We also fix the size calculation for that zend_string; there is no need
for the extra byte for each part, and we don't have to multiply by two,
since we're using DnsQuery_A(), not DnsQuery_W() (in which case we
would have to do a character set conversion anyway). This avoids
over-allocation, and the need to explicitly set the string length.
Finally, we use the proper access macro for zend_strings.
Closes GH-7427.
Quoting from UPGRADING:
- A leading dollar in a quoted string can now be escaped: "\${" will now be
interpreted as a string with contents `${`.
- Backslashes in double quoted strings are now more consistently treated as
escape characters. Previously, "foo\\" followed by something other than a
newline was not considered a terminated string. It is now interpreted as a
string with contents `foo\`. However, as an exception, the string "foo\"
followed by a newline will continue to be treated as a valid string with
contents `foo\` rather than an unterminated string. This exception exists to
support naive uses of Windows file paths such as "C:\foo\".
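As a hypothetical illustration of the quoted rules (assuming, as the
Windows-path example suggests, that they concern INI quoted strings;
parse_ini_string() runs input through the same scanner as php.ini):

    <?php
    $ini = <<<'INI'
    a = "\${"
    b = "foo\\"
    c = "C:\foo\"
    INI;
    var_dump(parse_ini_string($ini));
    // Expected contents, roughly: a => '${', b => 'foo\', c => 'C:\foo\'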
Closes GH-7420.
This is to match the way that we handle UCS-2. When a BOM is found at
the beginning of a 'UCS-2' string (NOT 'UCS-2BE' or 'UCS-2LE'), we take
note of the intended byte order and handle the string accordingly, but
do NOT emit a BOM to the output. Rather, we just use the default byte
order for the requested output encoding.
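For example (an illustrative sketch of the behavior described above,
not output copied from a real run):

    <?php
    // A BOM at the start of generic 'UCS-2' input selects the byte
    // order and is consumed rather than passed through:
    echo mb_convert_encoding("\xFE\xFF\x00\x41", 'UTF-8', 'UCS-2'); // A (BE BOM)
    echo mb_convert_encoding("\xFF\xFE\x41\x00", 'UTF-8', 'UCS-2'); // A (LE BOM)
    // Output in 'UCS-2' uses the default (big-endian) byte order,
    // with no BOM:
    echo bin2hex(mb_convert_encoding('A', 'UCS-2', 'UTF-8')); // 0041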
Some might argue that if the input string used a BOM, and we are
emitting output in a text encoding where both big-endian and
little-endian byte orders are possible, we should include a BOM in the
output string. To such hypothetical debaters of minutiae, I can only
offer a shoulder shrug. No reasonable program which handles UCS-2
and UCS-4 text should require a BOM.
Really, the concept of the BOM is a poor idea and should not have been
included in Unicode. Standardizing on a single byte order would have
been much better, similar to 'network byte order' for the Internet
Protocol. But this is not the place to speak at length of such things.
Sigh. I included tests in the test suite for ISO-2022-JP-MS which
were intended to check this case, but those tests were faulty and
didn't actually test what they were supposed to.
Fixing the tests revealed that there were still bugs in this area.
There was a bit of legacy code here which suggests that the original
author of mbstring intended to allow conversion of Unicode Private Use
Area codepoints to ISO-2022-JP-KDDI. However, that code never worked.
It set the output variable to values which were not matched by any
of the 'if' clauses below, which meant that nothing was actually
emitted to the output. In other words, if one tried to convert Unicode
to ISO-2022-JP-KDDI, and the Unicode string contained PUA codepoints,
they would be quietly 'swallowed' and disappear.
I don't know what ISO-2022-JP-KDDI byte sequences the author wanted
to map those PUA codepoints to, and anyway, this use case is so obscure
that there is little point in worrying about it. However, it is better
to remove the non-functioning code than to leave it in.
This means that if one now tries to convert PUA codepoints to
ISO-2022-JP-KDDI, those codepoints will be treated as erroneous rather
than silently ignored.
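A sketch of the visible change (U+E000 is just an arbitrary PUA
codepoint chosen for illustration; the exact output depends on the
substitution character setting):

    <?php
    // Previously the PUA codepoint was silently dropped; now it is
    // treated as an error, so with the default substitution
    // character it should come out as '?':
    echo mb_convert_encoding("a\u{E000}b", 'ISO-2022-JP-KDDI', 'UTF-8'); // a?b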
After mb_substitute_character("long"), mbstring will respond to
erroneous input by inserting 'long' error markers into the output.
Depending on the situation, these error markers will look like
BAD+XXXX (for general bad input), U+XXXX (when the input is OK, but it
converts to Unicode codepoints which cannot be represented in the
output encoding), or an encoding-specific marker like JISX+XXXX or
W932+XXXX.
We have almost no tests for this feature. Add a bunch of tests to
ensure that all our legacy encoding handlers work in a reasonable
way when 'long' error markers are enabled.
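For reference, this is roughly how the feature looks from userland
(the exact marker spelling depends on the encodings involved):

    <?php
    mb_substitute_character("long");
    // Input is valid, but U+00E9 cannot be represented in ASCII:
    echo mb_convert_encoding("caf\u{E9}", 'ASCII', 'UTF-8'); // cafU+00E9
    // Input itself is invalid (truncated UTF-8 sequence):
    echo mb_convert_encoding("a\xC3", 'UTF-8', 'UTF-8'); // aBAD+C3 (roughly)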
Some text encodings supported by mbstring (such as UCS-4) accept 4-byte
characters. When mbstring encounters an illegal byte sequence for the
encoding it is using, it should emit an 'illegal character' marker,
which can be a single character like '?', an HTML hexadecimal
entity, or a marker string like 'BAD+XXXX'.
Because of the use of signed integers to hold 4-byte characters,
illegal 4-byte sequences with a 'negative' value (one with the high
bit set) were not handled correctly when emitting the illegal char
marker. The result was that such illegal sequences were just skipped
over (and the marker was not emitted to the output). Fix that.
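A sketch of the case in question (0xFFFFFFFF has the high bit set,
so it went negative when stored in a signed integer):

    <?php
    mb_substitute_character("long");
    // Before the fix this produced an empty string; with the fix the
    // illegal sequence should yield an error marker instead:
    echo mb_convert_encoding("\xFF\xFF\xFF\xFF", 'UTF-8', 'UCS-4');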
The "wchar" encoding isn't really an encoding -- it's what we
internally use as the representation of decoded characters.
In practice, it tends to behave a lot like the 8bit encoding when
used from userland, because input code units end up being treated
as code points.
This patch removes the wchar encoding from the public encoding
list and reserves it for internal use only.
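From userland, the effect should look like this (illustrative):

    <?php
    // 'wchar' no longer appears in the public list of encodings...
    var_dump(in_array('wchar', mb_list_encodings(), true)); // bool(false)
    // ...and presumably can no longer be requested by name:
    mb_convert_encoding('foo', 'wchar', 'UTF-8'); // throws ValueError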