buffers.
When selecting for read, the streams are examined first; if any of them have
pending data in their read buffers, no actual select(2) call is performed;
instead, the streams with buffered data are returned immediately, just as they
would be by a regular select call.
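In outline, the check amounts to something like this (an illustrative sketch,
not the actual stream_select() code; writepos/readpos are the stream's
read-buffer indices and the helper name is mine):

#include "php.h"
#include "php_streams.h"

/* A stream whose read buffer still holds unread bytes is reported ready
 * at once, so no select(2) call is needed for it. */
static int count_ready_from_buffers(php_stream *streams[], int count)
{
    int i, ready = 0;

    for (i = 0; i < count; i++) {
        if (streams[i]->writepos > streams[i]->readpos) {
            ready++; /* pending read data: readable without select(2) */
        }
    }
    return ready;
}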
Prevent erroneous warning in stream_select when obtaining the fd.
php_stream_gets is now a macro which calls php_stream_get_line. The latter
has an optional argument to return the number of bytes in the line.
Functions like fgetcsv() and fgetss() can be made binary-safe by calling
php_stream_get_line directly.
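For example (a hedged sketch; the helper name is mine and the exact macro
arguments may differ slightly between versions):

#include "php.h"
#include "php_streams.h"

/* Binary-safe: use the returned length instead of strlen(), so embedded
 * NUL bytes do not truncate the line. */
static void read_one_line(php_stream *stream)
{
    char buf[1024];
    size_t line_len = 0;

    if (php_stream_get_line(stream, buf, sizeof(buf), &line_len)) {
        /* line_len is the true byte count, even if buf contains '\0' */
        php_printf("read %lu bytes\n", (unsigned long) line_len);
    }
}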
# HEADS UP: You will need to make clean after updating your CVS, as the
# binary signature has changed.
When the buffer did not contain enough data to satisfy a read, fgets advanced
the buf pointer to the position where the next chunk should be stored. It then
returned that advanced pointer instead of a pointer to the start of the
buffer.
Also added some infrastructure for making fgets grow the buffer on demand to
the correct line size. Since the streams layer uses reasonable chunk sizes,
the performance of the reallocs should be pretty good; in the best case, the
line is already found completely in the buffer, so the returned buffer will be
allocated to precisely the correct size.
In the worst case, where the buffer only contains part of the line, we get a
realloc per buffer fill. The reallocs are either the size of the remainder
of the line, or the chunk_size (if the buffer still does not contain a
complete line). Each realloc adds an extra byte for a NUL terminator.
I think this will perform quite well using the default chunk size of 8K.
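The strategy, sketched very roughly (this reads straight from the stream for
simplicity; the real code works against the stream's internal read buffer and
does not consume bytes past the EOL; the function name is mine):

#include "php.h"
#include "php_streams.h"
#include <string.h>

/* Grow the result buffer by at most chunk_size per refill until an EOL is
 * found or EOF is hit; +1 byte is reserved for the NUL terminator.
 * Caller frees the result with efree(). */
static char *get_line_growing(php_stream *stream, size_t chunk_size, size_t *retlen)
{
    char *buf = NULL, *eol = NULL;
    size_t have = 0;
    ssize_t n;

    while (!eol && !php_stream_eof(stream)) {
        buf = erealloc(buf, have + chunk_size + 1); /* +1 for the NUL */
        n = php_stream_read(stream, buf + have, chunk_size);
        if (n <= 0) {
            break;
        }
        eol = memchr(buf + have, '\n', (size_t) n);
        have = eol ? (size_t) (eol - buf) + 1 : have + (size_t) n;
    }
    if (buf) {
        buf[have] = '\0';
    }
    if (retlen) {
        *retlen = have;
    }
    return buf;
}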
Data is sent to the kernel using write(2). fsync'ing a
file descriptor is not required -- writing to an fd has the same
effect as calling fflush after each fwrite.
I've moved EOF detection into the streams layer; a stream reader
implementation should set stream->eof when it detects EOF.
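For instance (a hedged sketch; ops signatures differ slightly between
versions, and keeping the fd in stream->abstract is just an assumption here):

#include "php.h"
#include "php_streams.h"
#include <stdint.h>
#include <unistd.h>

/* A reader implementation flags EOF on the stream itself. */
static size_t my_fd_read(php_stream *stream, char *buf, size_t count)
{
    int fd = (int) (intptr_t) stream->abstract; /* assumed: fd kept in abstract */
    ssize_t n = read(fd, buf, count);

    if (n == 0) {
        stream->eof = 1; /* let the streams layer see EOF */
    }
    return n > 0 ? (size_t) n : 0;
}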
Fixed test for user streams - it still fails but that is due to an output
buffering bug.
With regard to sockets, the behaviour should be aligned with PHP 4.2 now.
This has been verified to some degree.
If the underlying stream operations block when no new data is readable,
we need to take extra precautions.
If there is buffered data available, we check for an EOL. If one exists,
we pass the data straight back to the caller. This saves a call
to the read implementation and avoids blocking where it
is not necessary at all.
If the stream buffer contains more data than the caller requested,
we can also avoid that costly step and simply return that data.
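Both fast paths boil down to a check like this (illustrative helper only;
readbuf/readpos/writepos are the stream's internal buffer fields and the
function name is mine):

#include "php.h"
#include "php_streams.h"
#include <string.h>

/* How much buffered data does the stream hold, and is a complete line
 * already among it?  In either case a read can be satisfied without
 * calling the (possibly blocking) read implementation. */
static size_t buffered_bytes(php_stream *stream, int *have_eol)
{
    size_t avail = stream->writepos - stream->readpos;

    *have_eol = avail > 0 &&
        memchr(stream->readbuf + stream->readpos, '\n', avail) != NULL;
    return avail;
}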
second and subsequent events.
Implement very simple recursion protection for user streams written
like this:
class urlEncodeStream {
    var $fp = NULL;

    function stream_open($path, $mode, $options, &$opened_path)
    {
        $this->fp = fopen($path, $mode); // <-- this recurses infinitely
        return is_resource($this->fp);
    }
}
file_register_wrapper('urlencode', 'urlEncodeStream');
$fp = fopen('urlencode:///tmp/outputfile.txt', 'w');
Noticed by: Yasuo.
Eliminate similar code from network.c.
Implement an fgets equivalent at the streams level, which can detect
mac, dos and unix line endings and handle them appropriately.
The default behaviour is unix (and dos) line endings.
An ini option to control this behaviour will follow.
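The detection itself amounts to something like this (a rough sketch; the
real code performs the scan on the stream's internal read buffer, and the
function name is mine):

#include <stddef.h>
#include <string.h>

/* Return a pointer to the first EOL in the buffer and its length:
 * dos "\r\n", mac "\r" or unix "\n"; NULL if no EOL is present. */
static const char *find_eol(const char *p, size_t len, size_t *eol_len)
{
    const char *cr = memchr(p, '\r', len);
    const char *lf = memchr(p, '\n', len);

    if (cr && lf == cr + 1) {        /* dos */
        *eol_len = 2;
        return cr;
    }
    if (cr && (!lf || cr < lf)) {    /* mac */
        *eol_len = 1;
        return cr;
    }
    if (lf) {                        /* unix */
        *eol_len = 1;
        return lf;
    }
    return NULL;
}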
# Don't forget to make clean!
# I've done some testing but would appreciate feedback from
# people with scripts/extensions that seek around a lot.