(chained zlib filters silently fail with large amounts of data)
Use the same buffer size zlib uses internally to avoid
Z_DATA_ERROR on massively compressed data
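For illustration, a minimal sketch of such a round trip through chained
zlib filters; the payload and temp file are made up, not the actual test
data, and with the old, smaller internal buffer this kind of decode could
fail with Z_DATA_ERROR on big inputs:

    <?php
    // Deflate a large, highly compressible payload twice, then decode it
    // back through two chained zlib.inflate stream filters.
    $original = str_repeat('The quick brown fox jumps over the lazy dog. ', 100000);

    $tmp = tempnam(sys_get_temp_dir(), 'zlib');
    file_put_contents($tmp, gzdeflate(gzdeflate($original, 9), 9));

    // php://filter applies both inflate filters while reading the file back.
    $decoded = file_get_contents('php://filter/read=zlib.inflate|zlib.inflate/resource=' . $tmp);

    var_dump($decoded === $original); // expect bool(true)
    unlink($tmp);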
While running these tests on HHVM, I've run into a lot of parallelism issues.
I'm backporting all the fixes I had to do in
https://github.com/facebook/hiphop-php/blob/master/hphp/tools/import_zend_test.py#L650
to PHP core.
Most of these changes were just filenames that were shared between
tests, but I did more surgery on the fixed ports. I can appreciate port
31337 as much as the next nerd, but random ports are better for tests.
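For the fixed-port changes, here is a minimal sketch of the idea (the
helper name is hypothetical): bind to port 0 so the OS picks a free
ephemeral port, and tests running in parallel don't collide.

    <?php
    function create_test_server(): array
    {
        $server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
        if ($server === false) {
            die("failed to bind: $errstr ($errno)\n");
        }
        // stream_socket_get_name() reports what was actually bound,
        // e.g. "127.0.0.1:49731"; pull the ephemeral port out of it.
        $local = stream_socket_get_name($server, false);
        $port  = (int) substr(strrchr($local, ':'), 1);
        return [$server, $port];
    }

    [$server, $port] = create_test_server();
    echo "test server listening on 127.0.0.1:$port\n";
    fclose($server);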
* pull-request/320:
this is test 5 not 6
fix race condition
more shared names that create race conditions
change to a unique filename
more shared filenames
yet another shared filename
don't share a filename to stop race conditions
fix race condition for 2-4 and normalize names for others
fix race condition when running tests in parallel
clean up after test
Fix #64572: Clean up after the test
Fix #64572: Clean up after the test
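Several of the commits above replace a filename that was shared between
tests. A minimal sketch of the pattern (the names are illustrative, not
the actual test files): derive the temp file from the test's own name plus
a unique suffix instead of reusing one shared literal across tests.

    <?php
    $filename = __DIR__ . '/' . basename(__FILE__, '.php') . '_' . uniqid('', true) . '.tmp';

    file_put_contents($filename, "data written by this test only\n");
    var_dump(file_get_contents($filename));

    // Clean up after the test so parallel and repeated runs start from scratch.
    unlink($filename);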
* bug55544.phpt - VT vs. EXT at the start of the data block,
but the data can still be decoded correctly
* bug_52944.phpt works with the corrupted data and produces somewhat
different output
Most likely the ASM optimization under Windows is responsible
for this behaviour.
The test is known to fail on Windows with zlib versions < 1.2.7 (the current
dependency is 1.2.5); with 1.2.7 it works. As it's primarily a zlib 1.2.5
issue on Windows, skip it for now.
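A SKIPIF-style sketch of that skip, assuming the ZLIB_VERSION constant
exposed by the zlib extension in later PHP releases (the original
5.x-era test may detect the library version differently):

    <?php
    if (!extension_loaded('zlib')) {
        die('skip zlib extension not available');
    }
    // Skip on Windows when the linked zlib is older than 1.2.7.
    if (substr(PHP_OS, 0, 3) === 'WIN' && version_compare(ZLIB_VERSION, '1.2.7', '<')) {
        die('skip known to fail with zlib < 1.2.7 on Windows');
    }
    echo "run the zlib decode checks\n";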
Using ob_gzhandler will complain about headers already sent
when no compression is applied.
The Vary header should only be sent on the PHP_OUTPUT_HANDLER_START
event.
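As an illustration of the intended behaviour, a plain pass-through
userland handler (not ob_gzhandler itself) that adds its header only
during the PHP_OUTPUT_HANDLER_START phase, so later flushes never run
into "headers already sent" warnings:

    <?php
    ob_start(function (string $buffer, int $phase): string {
        // Only the first invocation of the handler may still send headers.
        if (($phase & PHP_OUTPUT_HANDLER_START) && !headers_sent()) {
            header('Vary: Accept-Encoding');
        }
        return $buffer; // leave the buffer untouched in this sketch
    });

    echo "chunk one\n";
    ob_flush();        // later phases must not try to add the header again
    echo "chunk two\n";
    ob_end_flush();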