In automated QA environments there may be tests that are known to fail,
and being able to skip them is useful to remove known failures from the
test log.
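A minimal sketch of how such a skip list could look, assuming a hypothetical
SKIP_TESTS environment variable holding a comma-separated list of test names
(the variable name and helper are illustrative, not the actual interface):

    import os

    # Hypothetical: SKIP_TESTS="test-foo,test-bar" lists tests known to fail
    skip_list = set(filter(None, os.getenv('SKIP_TESTS', '').split(',')))

    def should_skip(test_path):
        # Compare basenames so the list does not need full paths
        return os.path.basename(test_path) in skip_list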
Signed-off-by: Ross Burton <ross.burton@arm.com>
When reading test logs manually it is a lot easier if the tests are sorted by
name rather than in on-disk order.
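For illustration, a small sketch of collecting the test binaries and ordering
them by name instead of relying on the directory's readdir() order (the path
and the name pattern are just examples):

    from pathlib import Path

    tests_dir = Path('/usr/lib/systemd/tests')
    # sorted() gives a stable, name-ordered list regardless of on-disk order
    for test in sorted(tests_dir.glob('test-*')):
        print(test.name)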
Signed-off-by: Ross Burton <ross.burton@arm.com>
When the interpreter is missing, we get an exit code of 127. Let's treat those
tests as skipped too. If we could run the test far enough for it to do the
check itself, it would return 77 anyway.
$ test/asdf; echo $?
exec: Failed to execute process 'test/asdf': The file specified the interpreter '/bin/asdf', which is not an executable command.
127
$ test/asdf; echo $?
/usr/bin/env: ‘/bin/asdf’: No such file or directory
127
This should resolve the problem that TEST-02 or Debian's 'unit-tests' fail
when python3 is not installed. Installing python3 via the mechanism that is
used to construct TEST images, i.e. the dracut dependency-chasing scheme, would
be a lot of work for python with its modules in multiple locations and hundreds
of little files. So I think it is OK to just skip the test there, and also in
other cases where python is not available.
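A sketch of how the return codes could be classified, treating both 77 (the
test skipped itself) and 127 (shell or interpreter not found) as skips; the
function and result names are illustrative:

    import subprocess

    def classify(test_path):
        rc = subprocess.run([test_path], stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT).returncode
        if rc == 0:
            return 'PASS'
        if rc in (77, 127):   # 77: skipped by the test, 127: interpreter missing
            return 'SKIP'
        return 'FAIL'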
/usr/lib/systemd/tests may contain more than the unit tests. For example on
SUSE we also install the integration tests there.
Putting the unit tests in a dedicated directory named 'unit-tests' makes the
layout cleaner.
Note that `run-unit-tests.py` has not been moved, so we don't need to adjust
(Fedora) packaging and users also don't need to descend into the subdirectory.
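For illustration, a sketch of how the runner, still living in
/usr/lib/systemd/tests, could find the binaries in the new subdirectory (the
path handling is an assumption, not the actual implementation):

    import sys
    from pathlib import Path

    # run-unit-tests.py itself stays in /usr/lib/systemd/tests
    unittestdir = Path(__file__).resolve().parent / 'unit-tests'
    if not unittestdir.is_dir():
        sys.exit(f'{unittestdir} does not exist, cannot run the unit tests')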
Printing stdout and stderr as separate blocks for a failed test makes it harder
to interpret what the specific problem was; instead, let's print the lines in
the order they were produced when the test was run.
Also save the output of failed tests to a file if ARTIFACT_DIRECTORY is defined.
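A sketch of capturing the output interleaved and saving it for failed tests,
assuming ARTIFACT_DIRECTORY points to a writable directory (the log file
naming is illustrative):

    import os
    import subprocess
    from pathlib import Path

    def run_test(test_path):
        # stderr is redirected into stdout so lines stay in the order produced
        result = subprocess.run([test_path], stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, universal_newlines=True)
        if result.returncode not in (0, 77):
            print(result.stdout)
            artifact_dir = os.getenv('ARTIFACT_DIRECTORY')
            if artifact_dir:
                # Keep the failed test's output around for later inspection
                log = Path(artifact_dir, Path(test_path).name + '.log')
                log.write_text(result.stdout)
        return result.returncode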
We have "installed tests", but don't provide an easy way to run them.
The protocol is very simple: each test must return 0 for success, 77 means
"skipped", anything else is an error. In addition, we want to print test
output only if the test failed.
I wrote this simple script. It is pretty basic, but implements the functions
listed above. Since it is written in python, it should be easy to add option
parsing (like running only specific tests, or running unsafe tests, etc.).
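For illustration, a minimal runner loop implementing that protocol: run each
binary, count 0 as success and 77 as skipped, and print output only for
failures (the directory and the output format are simplified, with no
colouring or option parsing):

    #!/usr/bin/env python3
    import subprocess
    import sys
    from pathlib import Path

    tests_dir = Path('/usr/lib/systemd/tests')   # example install location
    failed = []

    for test in sorted(tests_dir.glob('test-*')):
        result = subprocess.run([str(test)], stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, universal_newlines=True)
        if result.returncode == 0:
            print(f'PASS: {test.name}')
        elif result.returncode == 77:
            print(f'SKIP: {test.name}')
        else:
            print(f'FAIL: {test.name}')
            print(result.stdout)        # only failed tests dump their output
            failed.append(test.name)

    sys.exit(1 if failed else 0)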
I looked at the following alternatives:
- Ubuntu root-unittests [1]: this works, but just dumps all output to the
terminal and has no coloring.
- @ssahani's test runner [2]
It uses the unittest library, the test suite is implemented as a class, and it
doesn't implement any of the functions listed above.
- cram [3,4]
cram runs our tests, but does not understand the "ignore the output" part,
has no support for our magic skip code (it uses a hardcoded 80 instead),
and seems dead upstream.
- meson test
Here the idea would be to provide an almost-empty meson.build file under
/usr/lib/systemd/tests/ that would just define all the tests. This would
allow us to reuse the test runner we use normally. Unfortunately meson requires
a build directory and configuration to be done before running tests. This
would be possible, but seems like a lot of effort just to run a few binaries.
[1] 242c96addb/debian/tests/root-unittests
[2] https://github.com/systemd/systemd-fedora-ci/blob/master/upstream/systemd-upstream-tests.py
[3] https://bitheap.org/cram/
[4] https://pypi.org/project/pytest-cram/
Fixes #10069.