Of course, that means we need to handle the cases where the two arrays
passed to
bn_mul_recursive() and bn_mul_part_recursive() differ in size.
I haven't yet changed the comments that describe bn_mul_recursive()
and bn_mul_part_recursive(). I want this to be tested by more people
before I consider this change final. Please test away!
So these macros probably shouldn't be used like that at all. This change
removes the misleading comment and also adds an implicit trailing
semi-colon to the DECLARE macros so that they, too, don't require one.
This change adds DECLARE and IMPLEMENT macros for defining wrapper functions
for "hash" and "cmp" callbacks that are specific to the underlying item type
in a hash-table. This prevents
function pointer casting altogether, and also provides some type-safety
because the macro does per-variable casting from the (void *) type used in
LHASH itself to the type declared in the macro - and if that doesn't match the
prototype expected by the "hash" or "cmp" function then a compiler error will
result.
NB: DECLARE macros are not required unless predeclared forms are required
(either in a header file, or further up in a C file than the implementation
needs to be). The IMPLEMENT macros must occur after the type-specific
hash/cmp callbacks are declared. Also, the IMPLEMENT and DECLARE macros are
such that
they can be prefixed with "static" if desired and a trailing semi-colon should
be appended (making it look more like a regular declaration and easier on
auto-formatting text-editors too).
Now that these macros are defined, I will next be committing changes to a
number of places in the library where the casting was doing bad things. After
that, the final step will be to make the analogous changes for the lh_doall
and lh_doall_arg functions (more specifically, their callback parameters).
The bn_cmp_part_words bug was only caught in the BN_mod_mul() test,
not in the BN_mul() test, so apparently the choice of parameters in
some cases is bad.
The callback prototypes (and associated casts) used in the lhash code are
about as horrible and evil as they can
be. For starters, the callback prototypes contain empty parameter lists.
Yuck.
This first change defines clearer prototypes - including "typedef"'d
function pointer types to use as "hash" and "compare" callbacks, as well as
the callbacks passed to the lh_doall and lh_doall_arg iteration functions.
Now at least more explicit (and clear) casting is required in all of the
dependent code - and that should be included in this commit.
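As a rough sketch, the sort of typedefs this amounts to (the exact names
are assumptions, not necessarily the ones used in lhash.h):

    /* "hash" and "compare" callbacks used by the lhash code itself */
    typedef unsigned long (*LHASH_HASH_FN_TYPE)(const void *);
    typedef int (*LHASH_COMP_FN_TYPE)(const void *, const void *);

    /* callbacks passed to the lh_doall and lh_doall_arg iterators */
    typedef void (*LHASH_DOALL_FN_TYPE)(void *);
    typedef void (*LHASH_DOALL_ARG_FN_TYPE)(void *, void *);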
The next step will be to hunt down and obliterate some of the function
pointer casting being used when it's not necessary - a particularly evil
variant exists in the implementation of lh_doall.
But even if this is avoided, there are still segmentation violations
(during one of the BN_free()s at the end of test_kron
in some cases, in other cases during BN_kronecker, or
later in BN_sqrt; choosing a different exponentiation
algorithm in bntest.c appears to influence when the SIGSEGV
takes place).
So we have to reduce the random numbers used in test_mont.
Before this change, test_mont failed in [debug-]solaris-sparcv9-gcc
configurations ("Montgomery multiplication test failed!" because
the multiplication result obtained with Montgomery multiplication
differed from the result obtained by BN_mod_mul).
Substituting the old version of bn_gcd.c (BN_mod_inverse) did not avoid
the problem.
The strange thing is that I did not observe any problems
when using debug-solaris-sparcv8-gcc and solaris-sparcv9-cc,
as well as when compiling OpenSSL 0.9.6 in the solaris-sparcv9-gcc
configuration on the same system.
This caused a segmentation fault in calls to malloc, so I cleaned up
bn_lib.c a little so that it is easier to see what is going on.
The bug turned out to be an off-by-one error in BN_bin2bn.
The RSA structure's "ex_data" is now initialised before the RSA_METHOD's
"init()" handler is called, and is cleaned up after the RSA_METHOD's
"finish()" handler is called. Custom RSA_METHODs may wish to
initialise contexts and other specifics in the RSA structure upon creation
and that was previously not possible - "ex_data" is where that stuff
should go and it was being initialised too late for it to be used.
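As a hedged sketch of what this makes possible (the context type, the
ex_data index handling and all function names other than the standard RSA
ex_data ones are purely illustrative):

    #include <openssl/rsa.h>
    #include <openssl/crypto.h>

    struct my_rsa_ctx { int dev_handle; };     /* hypothetical context */
    static int my_ex_idx = -1;

    /* RSA_METHOD init() handler: by the time this runs, the RSA
     * structure's ex_data has already been initialised, so a private
     * context can be attached here and fetched later with
     * RSA_get_ex_data() from the other method handlers. */
    static int my_rsa_init(RSA *rsa)
    {
        struct my_rsa_ctx *ctx;

        if (my_ex_idx < 0)
            my_ex_idx = RSA_get_ex_new_index(0, "my_rsa_ctx",
                                             NULL, NULL, NULL);
        if (my_ex_idx < 0 || (ctx = OPENSSL_malloc(sizeof(*ctx))) == NULL)
            return 0;
        ctx->dev_handle = -1;
        return RSA_set_ex_data(rsa, my_ex_idx, ctx);
    }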
These new files will not be included literally in OpenSSL, but I intend
to integrate most of their contents. Most file names will change,
and when the integration is done, the superfluous files will be deleted.
Submitted by: Lenka Fibikova <fibikova@exp-math.uni-essen.de>
I'm a little bit nervous about bn_div_words, as I don't know what it's
supposed to return on overflow. For now, I trust the rest of the
system to give it numbers that will not cause any overflow...
BN_mul() is now correctly constified, avoids two realloc()s that aren't
really necessary, and saves memory to boot. This required a small
change in bn_mul_part_recursive() and the addition of variants of
bn_cmp_words(), bn_add_words() and bn_sub_words() that can take arrays
with differing sizes.
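As a rough illustration of what such a "differing sizes" comparison amounts
to (this is a sketch of the semantics, not the actual code; cl is taken as
the number of words the arrays have in common and dl as the difference in
length, with bn_cmp_words() being the existing equal-length compare):

    #include <openssl/bn.h>

    /* bn_cmp_words() is internal; its prototype is repeated here only to
     * keep the sketch self-contained. */
    int bn_cmp_words(const BN_ULONG *a, const BN_ULONG *b, int n);

    static int cmp_part_words_sketch(const BN_ULONG *a, const BN_ULONG *b,
                                     int cl, int dl)
    {
        int i;

        if (dl > 0) {
            /* a has dl extra high words; any non-zero one means a > b */
            for (i = cl + dl - 1; i >= cl; i--)
                if (a[i] != 0)
                    return 1;
        } else if (dl < 0) {
            /* b has -dl extra high words; any non-zero one means a < b */
            for (i = cl - dl - 1; i >= cl; i--)
                if (b[i] != 0)
                    return -1;
        }
        /* the remaining cl words are compared as before */
        return bn_cmp_words(a, b, cl);
    }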
The test results show performance that very closely matches that of the
original code from before my constification. This may seem like a
very small win from a performance point of view, but if one remembers
that the variants of bn_cmp_words(), bn_add_words() and bn_sub_words()
are not at all optimized for the moment (and there's no corresponding
assembler code), and that their use may be just as non-optimal, I'm
pretty confident there are possibilities...
This code needs reviewing!
Applications can find themselves in a situation where they've initialised
the ENGINE, loaded keys (which are then linked to that ENGINE), and
performed other checks (such as verifying certificate chains, etc.). At
that point, if the application goes multi-threaded or multi-process, it
creates problems for any ENGINE
implementations that are either not thread/process safe or that perform
optimally when they do not have to perform locking and other contention
management tasks at "run-time".
This defines a new ENGINE_ctrl() command that can be supported by engines
at their discretion. If ENGINE_ctrl(..., ENGINE_CTRL_HUP,...) returns an
error then the caller should check if the *_R_COMMAND_NOT_IMPLEMENTED error
reason was set - it may just be that the engine doesn't support or need the
HUP command, or it could be that the attempted reinitialisation failed. A
crude alternative is to ignore the return value from ENGINE_ctrl() (and
clear any errors with ERR_clear_error()) and perform a test operation
immediately after the "HUP". Very crude indeed.
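As a hedged sketch of that calling pattern (return-value handling is
simplified, and the exact reason macro checked here is an assumption - an
external engine may well set a reason code from its own error library):

    #include <openssl/engine.h>
    #include <openssl/err.h>

    /* Ask an engine to "HUP" (eg. after a fork), treating a missing HUP
     * implementation as harmless. */
    static int engine_hup(ENGINE *e)
    {
        if (ENGINE_ctrl(e, ENGINE_CTRL_HUP, 0, NULL, NULL) > 0)
            return 1;          /* reinitialisation succeeded */
        if (ERR_GET_REASON(ERR_peek_error()) ==
                ENGINE_R_CTRL_COMMAND_NOT_IMPLEMENTED) {
            ERR_clear_error(); /* engine doesn't support or need HUP */
            return 1;
        }
        return 0;              /* the attempted reinitialisation failed */
    }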
ENGINEs can support this command to close and reopen connections, files,
handles, or whatever as an alternative to run-time locking when such things
would otherwise be needed. In such a case, it's advisable for the engine
implementations to support locking by default but disable it after the
arrival of a HUP command, or any other indication by the application that
locking is not required. NB: This command exists to allow an ENGINE to
reinitialise without the ENGINE's functional reference count having to sink
down to zero and back up - which is what is normally required for the
finish() and init() handlers to get invoked. It would also be a bad idea
for engine_lib to catch this command itself and interpret it by calling the
engine's init() and finish() handlers directly, because reinitialisation
may need special handling on a case-by-case basis that is distinct from a
finish/init pair - eg. calling a finish() handler may invalidate the state
stored inside individual keys that have already been loaded for this engine.
Two functions did expansion on their "in" parameters (BN_mul() and
BN_sqr()). The problem was solved by making bn_dup_expand(), which is
a mix of bn_expand2() and BN_dup().
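Conceptually, that combination amounts to something like the following
sketch (not the actual implementation; it assumes the internal bn_expand2()
mentioned above):

    #include <openssl/bn.h>

    /* Copy b and make sure the copy has room for 'words' words, so the
     * caller's "in" parameter is never expanded in place. */
    static BIGNUM *dup_expand_sketch(const BIGNUM *b, int words)
    {
        BIGNUM *r = BN_dup(b);

        if (r != NULL && bn_expand2(r, words) == NULL) {
            BN_free(r);
            r = NULL;
        }
        return r;
    }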
load the "external" built-in engines (those that require DSO). This
makes linking with libdl or other dso libraries non-mandatory.
Change 'openssl engine' accordingly.
Change the engine header files so that some declarations (which even
differed between the duplicates!) aren't duplicated, and make sure
engine_int.h includes
engine.h. That way, there should be no way of missing the needed
info.
automatically; however, some code was still referring to the original
pointer rather than the internal one (and thus to NULL instead of the
created pointer).
Make it possible to translate library names by only adding ".so" to them,
without prepending them with "lib". Add the flag
DSO_FLAG_NAME_TRANSLATION_EXT_ONLY for that purpose.
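A hedged usage sketch (the DSO_load() calling pattern shown - NULL dso and
default method - is an assumption, as is the module name):

    #include <openssl/dso.h>

    /* With the new flag, "mymodule" is translated to "mymodule.so"
     * rather than "libmymodule.so". */
    static DSO *load_module(void)
    {
        return DSO_load(NULL, "mymodule", NULL,
                        DSO_FLAG_NAME_TRANSLATION_EXT_ONLY);
    }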
appropriate filename translation on the host system. Apart from this point,
users should also note that there's a slight change in the API functions
too. The DSO now contains its own to-be-converted filename
("dso->filename"), and at the time the DSO loads, the "dso->loaded_filename"
value is set to the translated form. As such, this also provides an implicit
way of determining if the DSO is currently loaded or not. Except, perhaps,
VMS .... :-)
The various DSO_METHODs have been updated for this mechanism except VMS,
which is deliberately broken for now; Richard is going to look at how to
fit it in (the source comments in there explain "the issue").
Basically, the new callback scheme allows the filename conversion to
(a) be turned off altogether through the use of the
DSO_FLAG_NO_NAME_TRANSLATION flag,
(b) be handled in the default way using the default DSO_METHOD's converter,
(c) be overridden per-DSO by setting the override callback,
(d) be a mix of (b) and (c), as sketched after this list - eg. implement
    an override callback that:
(i) checks if we're win32 "if(strstr(dso->meth->name, "win32"))..."
and if so, convert "blah" into "blah32.dll" (the default is
otherwise to make it "blah.dll").
(ii) default to the normal behaviour - eg. we're not on win32, so
finish with (return dso->meth->dso_name_converter(dso,NULL)).
(e) be retried a number of times by writing a new DSO_METHOD where the
"dso_load()" handler will call the converter repeatedly. Then the
custom converter could use state information in the DSO to suggest
different conversions or paths each time it is invoked.
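A hedged sketch of (d) (the converter's signature, the way it gets
registered as the per-DSO override, and the allocation conventions are all
assumptions here):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/dso.h>

    static char *my_name_converter(DSO *dso, const char *filename)
    {
        /* (i) on the win32 method, turn "blah" into "blah32.dll" */
        if (strstr(dso->meth->name, "win32") != NULL) {
            size_t len = strlen(filename) + strlen("32.dll") + 1;
            char *translated = OPENSSL_malloc(len);

            if (translated != NULL)
                snprintf(translated, len, "%s32.dll", filename);
            return translated;
        }
        /* (ii) otherwise, default to the normal behaviour */
        return dso->meth->dso_name_converter(dso, NULL);
    }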
NCONF_get_number_e() is defined (_e for "error checking") and is
promoted strongly. The old NCONF_get_number is kept around for
binary backward compatibility.
Actually, it's a feature that it goes looking at environment
variables. It's just a pity that it's at the cost of the error
checking... I'll see if I can come up with a better interface for
this.
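For what it's worth, a small usage sketch of NCONF_get_number_e() (the
configuration section and key names are only examples):

    #include <stdio.h>
    #include <openssl/conf.h>
    #include <openssl/err.h>

    /* Unlike the old NCONF_get_number(), the _e variant reports failure
     * explicitly instead of leaving the caller to guess. */
    static long get_default_bits(CONF *conf)
    {
        long bits = 0;

        if (!NCONF_get_number_e(conf, "req", "default_bits", &bits)) {
            ERR_print_errors_fp(stderr);
            return -1;
        }
        return bits;
    }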
On VMS, output is done in a record-oriented fashion. That means that every
write() will write a
separate record, which will be read separately by the programs trying
to read from it. This can be very confusing.
The solution is to put a BIO filter in the way that will buffer text
until a linefeed is reached, and then write everything a line at a
time, so every record written will be an actual line, not chunks of
lines and not (usually doesn't happen, but I've seen it once) several
lines in one record. Voila, BIO_f_linebuffer() is born.
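A hedged sketch of how such a filter would be used (the surrounding setup
is assumed; 'out' stands for whatever file or fd BIO is already in place):

    #include <openssl/bio.h>

    /* Push a line-buffering filter on top of an existing output BIO so
     * that each record written is a complete line. */
    static BIO *add_linebuffer(BIO *out)
    {
        BIO *lb = BIO_new(BIO_f_linebuffer());

        if (lb == NULL)
            return NULL;
        return BIO_push(lb, out); /* writes to the returned BIO go line-wise */
    }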
Since we're so close to release time, I'm making this VMS-only for
now, just to make sure no code is needlessly broken by this. After
the release, this BIO method will be enabled on all other platforms as
well.
The problem is in BN_mod_mul_montgomery, which calls bn_sqr_recursive
without much preparation.
bn_sqr_recursive requires the length of its argument to be
a power of 2, which is not always the case here.
There's no reason for not using BN_sqr -- if a simpler
approach to squaring made sense, then why not change
BN_sqr? (Using BN_sqr should also speed up DH where g is chosen
such that it becomes small [e.g., 2] when converted
to Montgomery representation.)
Case closed :-)
The GetCursorInfo-based code doesn't quite work on WinNT 4 earlier than
SP6. It works fine on
Windows 98 and Windows 2000.
I'm disabling it for now. What's really needed is some kind of check
to see if GetCursorInfo is safe to call, or alternatively, GetCursor
or GetCursorPos could be used, according to Jeffrey.
- Make sure PCURSORINFO is defined even on systems that do not provide it.
- Change the reference to Peter Gutmann's paper.
- Make sure we don't walk the whole heap lists for performance reasons.
Jeffrey Altman suggests following Peter Gutmann's advice to keep it
to 50 heap entries per heap list.