A MemoryError is now raised when the list cannot be created.
There is a test, but as the comment says, it really only
works for 32 bit systems. I don't know how to improve
the test for other systems (i.e., 64 bit or systems
where the data size != addressable size,
e.g. 64 bit data, but 48 bit addressable memory)
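A rough sketch of the kind of check the test makes (the sizes here are
illustrative assumptions, and it is only meaningful on 32 bit platforms):

    import sys
    if sys.maxint == 2147483647:            # 32 bit platform
        try:
            x = [None] * (sys.maxint / 4)   # ~2GB of pointers; cannot be allocated
        except MemoryError:
            pass
        else:
            raise AssertionError('expected a MemoryError')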
instead of calling the getaddrlist() method, since the latter doesn't
work with multiple calls (it will return the empty list for the second
and subsequent calls).
Closes SF bug #555035. Include a unittest.
of the PyUNIT version of the same file. This helps people understand that
this version is the same as the version from the independent PyUNIT
release (confusion was indicated on the PyUNIT mailing list).
email package's Parser to handle the three common line endings.
Certain protocols such as IMAP define CRLF line endings and it doesn't
make sense for the client app to have to normalize the line endings
before handing the message off to the Parser.
_parsebody(): Be more flexible in the matching of line endings for
finding the MIME separators. Accept any of \r, \n and \r\n. Note
that we do /not/ change the line endings in the payloads, we just
accept any of those three around MIME boundaries.
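For example, a message handed over verbatim by an IMAP server now parses
without the client touching the line endings first:

    from email.Parser import Parser
    text = 'Subject: hi\r\nFrom: a@example.com\r\n\r\nbody line 1\r\nbody line 2\r\n'
    msg = Parser().parsestr(text)
    print msg['subject']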
single byte character sets. Also fixed a semantic problem with the
constructor's default arguments. Specifically,
__init__(): Change the maxlinelen argument default to None instead of
MAXLINELEN. The semantics should have been (and now are) that if
maxlinelen is given it is always honored. If it isn't given, but
header_name is given, then the maximum line length is calculated. If
neither are given then the default 76 characters is used.
_split(): If the character set is a single byte character set then we
can split the line at the maxlinelen because we know that encoding the
header won't increase its length. If the charset isn't a single byte
charset then we use the quicker divide-and-conquer line splitting
algorithm as before.
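A small usage sketch of the constructor semantics described above (the header
text is arbitrary):

    from email.Header import Header
    s = 'A very long subject that will need to be wrapped onto several lines'
    h1 = Header(s, 'us-ascii', maxlinelen=60)         # explicit limit: always honored
    h2 = Header(s, 'us-ascii', header_name='Subject') # limit computed from the field name
    h3 = Header(s, 'us-ascii')                        # neither given: default of 76
    print h1.encode()
    print h2.encode()
    print h3.encode()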
for the email package. The former is now just a shell project that
has some extra files for packaging for independent use (e.g. setup.py
and README).
Added a compatibility layer so that the same API can be used in Python
2.1 and 2.2/2.3 with the major differences shuffled off into helper
modules (_compat21.py and _compat22.py).
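A minimal sketch of the kind of version dispatch this implies (the helper
module names are from this entry; the exact import style is an assumption):

    import sys
    if sys.hexversion >= 0x02020000:
        from email import _compat22 as _compat    # Python 2.2/2.3 features
    else:
        from email import _compat21 as _compat    # Python 2.1 fallbacks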
Also bumped the package version number to 2.0.3 for some fixes to be
checked in momentarily.
Scot Stevenson. Could be a bug fix candidate, but probably doesn't
matter much unless a certain blue-nosed cat suddenly becomes corporeal
and starts emailing some stmp.py (sic) fronted mailer.
returned a proxy for __class__ whose __bases__ was also a proxy. The
merge_class_dict() helper for dir() assumed incorrectly that __bases__
would always be a tuple and used the in-line tuple API on the proxy.
I will backport this to 2.2 as well.
test if 'callable' has not been supplied is to test for None instead of
False. The previous correction to 'if callable()' was wrong because an unusable
callback would be ignored rather than raising an exception.
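In other words (the names here are hypothetical, only the test itself matters):

    def default_action():
        return 'default'

    def register(callable=None):
        if callable is None:        # only "not supplied" gets the default
            callable = default_action
        return callable()           # an unusable callback still raises, as it should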
and the .seed() and .whseed() methods failed to reset it. In other
words, setting the seed didn't completely determine the sequence of
results produced by random.gauss(). It does now. Programs repeatedly
mixing calls to a seed method with calls to gauss() may see different
results now.
Bugfix candidate (random.gauss() has always been broken in this way),
even though it may change results.
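A quick check of the behaviour described above:

    import random
    random.seed(42)
    first = [random.gauss(0.0, 1.0) for i in range(3)]
    random.seed(42)
    second = [random.gauss(0.0, 1.0) for i in range(3)]
    assert first == second      # holds now that seeding also resets gauss's state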
This now does a dynamic analysis of which elements are so frequently
repeated as to constitute noise. The primary benefit is an enormous
speedup in find_longest_match, as the innermost loop can have hundreds
of times fewer potential matches to worry about, in cases where the
sequences have many duplicate elements. In effect, this zooms in on
sequences of non-ubiquitous elements now.
While I like what I've seen of the effects so far, I still consider
this experimental. Please give it a try!
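A usage sketch of the case this targets (the data is arbitrary; no particular
output is promised):

    import difflib
    a = 'x' * 1000 + 'one rare needle'
    b = 'x' * 1000 + 'one rare needle, slightly changed'
    sm = difflib.SequenceMatcher(None, a, b)
    print sm.find_longest_match(0, len(a), 0, len(b))
    print round(sm.ratio(), 3)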
On Win2K it thought 'foo' started at byte offset 0 instead of at the
pagesize, and on Win98 it thought 'foo' didn't exist at all. Somehow
or other this is related to the new "in memory file" gimmicks in
bsddb, but the old bsddb we use on Windows sucks so bad anyway I don't
want to bother digging deeper. Flushing the file in test_mmap after
writing to it makes the problem go away, so good enough.
build's "undetected error" problems were originally detected with
extension types, but we can whitebox test the same situations with
new-style classes.
Also add a test that Python doesn't die with SIGXFSZ if it exceeds the
file rlimit. (Assuming this will also test the behavior when the 2GB
limit is exceeded on a platform that doesn't have large file support.)
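Roughly the shape of such a test, assuming a POSIX platform with RLIMIT_FSIZE
(the limit and filename are arbitrary):

    import resource
    soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
    resource.setrlimit(resource.RLIMIT_FSIZE, (1024, hard))
    f = open('/tmp/fsize-test', 'wb')
    try:
        try:
            f.write('x' * 4096)     # exceeds the 1K limit
            f.flush()
        except IOError:
            pass                    # expected; the interpreter itself survives
    finally:
        try:
            f.close()
        except IOError:
            pass
        resource.setrlimit(resource.RLIMIT_FSIZE, (soft, hard))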
closes SF #514433
can now pass 'None' as the filename for the bsddb.*open functions,
and you'll get an in-memory temporary store.
docs are ripped out of the bsddb dbopen man page. Fred may want to
clean them up.
Considering this for 2.2, but not 2.1.
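For example, with one of the *open functions:

    import bsddb
    db = bsddb.hashopen(None, 'c')    # no filename: in-memory temporary store
    db['spam'] = 'eggs'
    print db['spam']
    db.close()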
http://www.python.org/sf/444708
This adds the optional argument of str.strip() to unicode.strip() too,
and makes it possible to call str.strip() with a unicode argument and
unicode.strip() with a str argument.
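A quick illustration of the mixed-type calls this enables:

    print repr('xxhixx'.strip(u'x'))    # str method, unicode characters to strip
    print repr(u'xxhixx'.strip('x'))    # unicode method, str characters to strip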
- islink() now returns true for alias files
- walk() no longer follows aliases while traversing
- realpath() implemented, returning an alias-free pathname.
As this could conceivably break existing code, I think it isn't a bugfix candidate.
test data: this test fails on Windows now if universal newlines are
enabled (which they aren't yet, by default). I don't know whether the
test will also fail on Linux now.
SF bug #522264 reported by Evelyn Mitchell.
The code included a comment about "STAR STAR" which was translated
into the code as the bogus attribute token.STARSTAR. This name never
caused an attribute error because it was never retrieved. The code
was based on an old version of the grammar that specified kwargs as
two tokens ('*' '*'). I checked as far back as 2.1 and didn't find
this production.
The fix is simple, because token.DOUBLESTAR is the only token
allowed. Also update the grammar fragment in com_arglist().
XXX I'll bet lots of other grammar fragments in comments are out of
date, probably in this module and in compile.c.
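Which the token module itself confirms:

    import token
    print hasattr(token, 'DOUBLESTAR'), hasattr(token, 'STARSTAR')   # DOUBLESTAR exists; STARSTAR does not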
Close a file before trying to unlink it, and apparently Cygwin needs
writes to an mmap'ed file to get flushed before they're visible.
Bugfix candidate, but I think only for the 2.2 line (it's testing
features that I think were new in 2.2).
Change type_get_doc (the get function for __doc__) to look in tp_dict
more often, and if it finds a descriptor in tp_dict, to call it (with
a NULL instance). This means you can add a __doc__ descriptor to a
new-style class that returns instance docs when called on an instance,
and class docs when called on a class -- or the same docs in either
case, but lazily computed.
I'll also check this into the 2.2 maintenance branch.
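For example (the descriptor here is illustrative):

    class _LazyDoc(object):
        def __get__(self, obj, cls=None):
            if obj is None:                  # looked up on the class itself
                return 'class docs'
            return 'instance docs for %r' % (obj,)

    class Spam(object):
        __doc__ = _LazyDoc()

    print Spam.__doc__      # class docs
    print Spam().__doc__    # instance docs for <...>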
If a str or unicode method returns the original object,
make sure that for str and unicode subclasses the original
will not be returned.
This should prevent SF bug http://www.python.org/sf/460020
from reappearing.
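For example:

    class MyStr(str):
        pass

    s = MyStr('hello')
    t = s.strip()               # nothing to strip, but a real str comes back
    print type(t) is str        # True; previously s itself (a MyStr) was returned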
PyNumber_InPlaceMultiply insisted on calling sq_inplace_repeat if it
existed, even if nb_inplace_multiply also existed and the arguments
weren't right for sq_inplace_repeat. Change this to only use
sq_inplace_repeat if nb_inplace_multiply isn't defined.
Bugfix candidate.
double call to AddressList.getaddrlist(), and /that/ always returns an
empty list for the second and subsequent calls.
Instead, instantiate an AddressList directly, and get the parsed
addresses out of the addresslist attribute.
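That is, roughly:

    from rfc822 import AddressList
    a = AddressList('Alice <alice@example.com>, bob@example.com')
    print a.addresslist         # list of (realname, email) pairs, reusable at will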
which requires that if there are ehlo parameters returned with an ehlo
keyword (in the response to EHLO), the keyword and parameters must be
delimited by an ASCII space. Thus responses like
250-AUTH=LOGIN
should be ignored as non-conformant to the RFC (the `=' isn't allowed
in the ehlo keyword).
This is a bug fix candidate.
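For illustration only (this pattern is an assumption for the example, not the
expression used in smtplib):

    import re
    ehlo_line = re.compile(r'(?P<feature>[A-Za-z0-9][A-Za-z0-9\-]*)(?: (?P<params>.*))?$')
    for line in ('SIZE 51200000', 'AUTH LOGIN PLAIN', 'AUTH=LOGIN'):
        m = ehlo_line.match(line)
        if m:
            print m.group('feature'), '->', m.group('params')
        else:
            print 'ignored, no ASCII space after the keyword:', line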
Add a method zfill to str, unicode and UserString and change
Lib/string.py accordingly.
This activates the zfill version in unicodeobject.c that was
commented out and implements the same in stringobject.c. It also
adds the test for unicode support in Lib/string.py back in and
uses repr() instead of str() (as it was before Lib/string.py 1.62).
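For example:

    import UserString
    print '42'.zfill(5)                         # 00042
    print u'-3'.zfill(6)                        # -00003
    print UserString.UserString('7').zfill(3)   # 007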
In DatagramRequestHandler.setup(), the wfile initialization should be
StringIO.StringIO(), not StringIO.StringIO(self.packet).
Bugfix candidate (all the way back to Python 1.5.2 :-).
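A sketch of what the corrected setup() amounts to (self.request is the
(packet, socket) pair SocketServer hands the handler):

    import StringIO

    def setup(self):
        self.packet, self.socket = self.request
        self.rfile = StringIO.StringIO(self.packet)   # readable copy of the datagram
        self.wfile = StringIO.StringIO()              # reply buffer must start empty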
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP 278 for details.
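For example, on an interpreter built with universal newline support:

    f = open('dos_or_mac.txt', 'U')     # a hypothetical file with \r\n or \r endings
    for line in f.readlines():
        assert not line.endswith('\r')  # every line comes back ending in '\n' only
    f.close()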
Add optional arg to string methods strip(), lstrip(), rstrip().
The optional arg specifies characters to delete.
Also for UserString.
Still to do:
- Misc/NEWS
- LaTeX docs (I did the docstrings though)
- Unicode methods, and Unicode support in the string methods.
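For example, with str and UserString (the unicode variants are still on the
list above):

    import UserString
    print 'www.python.org'.lstrip('w.')                 # python.org
    print '--spam--'.strip('-')                         # spam
    print UserString.UserString('--spam--').strip('-')  # spam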
The test function's signature should be
test(methodname, input, output, *args)
but the output argument was omitted. This caused all tests to fail,
because the expected output was passed as the initial argument to the
method call. But because of the way the test works (it compares the
results for a regular string to the results for a UserString instance
with the same value, and it's OK if both raise the same exception) the
test never failed!
I've fixed this, and also cleaned up a few warts in the verbose
output. Finally, I've made it possible to run the test stand-alone in
verbose mode by passing -v as a command line argument.
Now, the test will report a failure related to zfill. That's not my
fault, that's a legitimate problem: the string_tests.py file contains
a test for the zfill() method (just added) but this method is not
implemented. The responsible party will surely fix this soon now.
non-us-ascii character sets in headers and bodies. Some API changes
(with DeprecationWarnings for the old APIs). Better RFC-compliant
implementations of base64 and quoted-printable.
Updated test cases. Documentation updates to follow (after I finish
writing them ;).
Change pickling format for bools to use a backwards compatible
encoding. This means you can pickle True or False on Python 2.3
and Python 2.2 or before will read it back as 1 or 0. The code
used for pickling bools before would create pickles that could
not be read in previous Python versions.
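For example, on 2.3:

    import pickle
    p = pickle.dumps(True)      # default text (protocol 0) pickle
    print repr(p)               # an int-style encoding older picklers understand
    print pickle.loads(p)       # True here; 2.2 and earlier read it back as 1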