It's like CIFuzz, but unlike CIFuzz it's compatible with forks, and it should
make it possible to run the fuzzers to make sure that backported patches are
backported correctly, without introducing new bugs and regressions.
It was copy-pasted directly from OSS-Fuzz, where it makes sense to strip the
binaries down to just enough to get nice backtraces, but when the fuzzers are
built and run locally with gdb it would be nice to have a bit more than that.
This was initially discovered in elfutils, where I used the same flags and was
surprised when I couldn't comfortably step through the fuzzer, which led to the
same change there: https://github.com/google/oss-fuzz/pull/7092 :-)
The scheme is very similar to libsystemd-shared.so: instead of building a
static library, we build a shared library from the same objects and link the
two users to it. Both systemd and systemd-analyze consist mostly of the fairly
large libcore code, so we save a bit on installation size:
(-Og, no strip)
-rwxr-xr-x 5238864 Dec 14 12:52 /var/tmp/inst1/usr/lib/systemd/systemd
-rwxr-xr-x 5399600 Dec 14 12:52 /var/tmp/inst1/usr/bin/systemd-analyze
-rwxr-xr-x 244912 Dec 14 13:17 /var/tmp/inst2/usr/lib/systemd/systemd
-rwxr-xr-x 461224 Dec 14 13:17 /var/tmp/inst2/usr/bin/systemd-analyze
-rwxr-xr-x 5271568 Dec 14 13:17 /var/tmp/inst2/usr/lib/systemd/libsystemd-core-250.so
(-Og, strip)
-rwxr-xr-x 2522080 Dec 14 13:19 /var/tmp/inst1/usr/lib/systemd/systemd
-rwxr-xr-x 2604160 Dec 14 13:19 /var/tmp/inst1/usr/bin/systemd-analyze
-rwxr-xr-x 113304 Dec 14 13:19 /var/tmp/inst2/usr/lib/systemd/systemd
-rwxr-xr-x 207656 Dec 14 13:19 /var/tmp/inst2/usr/bin/systemd-analyze
-rwxr-xr-x 2648520 Dec 14 13:19 /var/tmp/inst2/usr/lib/systemd/libsystemd-core-250.so
So for systemd itself we grow a bit (2522080 → 2648520+113304=2761824), but
overall we save. Most of the savings come from the test binaries that link to
libcore, if they are installed, because there are 15 of them:
$ du -s /var/tmp/inst?
220096 /var/tmp/inst1
122960 /var/tmp/inst2
I also considered making systemd-analyze a symlink to /usr/lib/systemd/systemd
and turning systemd into a multicall binary. We did something like this with
udevd and udevadm. But that solution doesn't fit well in this case.
systemd-analyze has a bunch of functionality that is not used in systemd,
so the systemd binary would need to grow quite a bit. And we're likely to
add new types of verification or introspection features in analyze, and this
baggage would only grow. In addition, there are the test binaries which also
benefit from this.
00db9a114e ("docs: generate table from header using a script") got the
descriptions for the partition types mixed up. After that change, the
spec claimed, for example, that the /usr partition should contain
"dm-verity integrity hash data for the matching root partition", and
that the /usr verity partition should be of type "Any native, optionally
in LUKS". This made the spec an extremely confusing read before I
figured out what must have happened!
I've gone through the table as it existed prior to 00db9a114e, and moved
the descriptions around in the script that generates the table until
they matched up with what they used to be. Then I regenerated the
table from the fixed script.
This adds a helper script:
$ python3 tools/list-discoverable-partitions.py <src/shared/gpt.h
<!-- generated with tools/list-discoverable-partitions.py -->
| Name | Partition Type UUID | Allowed File Systems | Explanation |
|------|---------------------|----------------------|-------------|
| _Root Partition (Alpha)_ | `6523f8ae-3eb1-4e2a-a05a-18b695ae656f` | [Root Partition] | [Root Partition more] |
| _Root Partition (ARC)_ | `d27f46ed-2919-4cb8-bd25-9531f3c16534` | ditto | ditto |
...
The output can be pasted into the markdown file. I think this works better than
trying to match the two lists by hand.
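For illustration, the formatting step boils down to emitting one markdown row
per partition type; here is a minimal python sketch of just that step, with
made-up example entries rather than the real data parsed from src/shared/gpt.h:
#!/usr/bin/env python3
# Illustrative sketch only: the entries below are examples, not the data
# that the real script extracts from src/shared/gpt.h.
ENTRIES = [
    # (name, uuid, allowed file systems, explanation)
    ('Root Partition (Alpha)', '6523f8ae-3eb1-4e2a-a05a-18b695ae656f',
     '[Root Partition]', '[Root Partition more]'),
    ('Root Partition (ARC)', 'd27f46ed-2919-4cb8-bd25-9531f3c16534',
     'ditto', 'ditto'),
]
print('<!-- generated with tools/list-discoverable-partitions.py -->')
print('| Name | Partition Type UUID | Allowed File Systems | Explanation |')
print('|------|---------------------|----------------------|-------------|')
for name, uuid, filesystems, explanation in ENTRIES:
    print(f'| _{name}_ | `{uuid}` | {filesystems} | {explanation} |')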
When using "capture : true" in custom_target()s the mode of the source
file is not preserved when the generated file is not installed and so
needs to be tweaked manually. Switch from output capture to creating the
target file and copy the permissions from the input file.
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Imports are sorted in the usual fashion: stdlib first.
literal_eval() parses strings/numbers/lists/sets/dicts, and nothing else, while
eval() will execute arbitrary python code. Using literal_eval() is generally
more correct, because it avoids the risk of side effects from the parsed
expression.
In this case, we generate the parsed strings ourselves, so it's very unlikely
to have anything unexpected in the expressions. But let's do the correct thing
anyway.
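For illustration (a generic sketch, not the tool's actual code):
import ast

expr = "{'name': 'systemd', 'ids': [1, 2, 3]}"
# Safe: only literals (strings, numbers, tuples, lists, dicts, sets,
# booleans, None) are accepted; anything else raises an exception.
data = ast.literal_eval(expr)
# Unsafe: eval(expr) would execute arbitrary python code, e.g.
# eval("__import__('os').system('...')"), hence the risk of side effects.
print(data['ids'])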
It makes it easier to process the license automatically like other files.
The text of the license in tools/chromiumos/LICENSE matches
https://spdx.org/licenses/BSD-3-Clause.html exactly.
Format the output in a manner that can be copy-pasted as-is into NEWS, i.e.
with 8 spaces of indentation and wrapped at 80 columns.
Before:
$ tools/git-contrib.sh
Ben Stockett,
Carl Lei,
Frantisek Sumsal,
Gibeom Gwon,
Hugo Osvaldo Barrera,
James Hilliard,
Jan Palus,
Lennart Poettering,
Luca Boccassi,
Luca BRUNO,
Mike Gilbert,
nassir90,
nl6720,
Raul Tambre,
Yegor Alexeyev,
Yu Watanabe,
Zbigniew Jędrzejewski-Szmek,
After:
Contributions from: Ben Stockett, Carl Lei, Frantisek Sumsal,
Gibeom Gwon, Hugo Osvaldo Barrera, James Hilliard, Jan Palus,
Lennart Poettering, Luca Boccassi, Luca BRUNO, Mike Gilbert,
nassir90, nl6720, Raul Tambre, Yegor Alexeyev, Yu Watanabe,
Zbigniew Jędrzejewski-Szmek
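The wrapping itself is simple; for illustration, a python sketch of the same
formatting (the real tool is a shell script, and the names here are just
examples):
import textwrap

names = ['Ben Stockett', 'Carl Lei', 'Frantisek Sumsal']  # example data
text = 'Contributions from: ' + ', '.join(sorted(names, key=str.lower))
# 8 spaces of indentation, wrapped at 80 columns, as used in NEWS.
print(textwrap.fill(text, width=80,
                    initial_indent=' ' * 8,
                    subsequent_indent=' ' * 8))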
Lines in the dumps are ordered by some pseudo-random hashmap entry order, which
makes it hard to diff two outputs. This sorts the entries alphabetically, also
sorts items within the entries, and suppresses timestamps and other fields
which always vary.
We could sort the output inside of systemd itself, but it'd make things more
complex, and we probably don't need output to be sorted in most cases. It also
wouldn't be enough, because timestamps and such would still need to be ignored
to do a nice diff. So I think doing the sorting and suppression in a python
helper is a better approach.
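Roughly, the helper could look like this (a sketch; the actual field names and
the pattern for volatile fields are assumptions):
import re
import sys

# Fields that differ between runs and would only add noise to a diff.
VOLATILE = re.compile(r'\b(timestamp|monotonic|realtime)=\S+')

def normalize(line):
    return VOLATILE.sub(r'\1=<suppressed>', line)

# Sort the entries so that two dumps can be compared with plain diff(1).
for entry in sorted(normalize(line.rstrip('\n')) for line in sys.stdin):
    print(entry)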
m4 was hugely popular in the past, because autotools, automake, flex, bison, and
many other things used it. But nowadays it is much less popular, and might not
even be installed in the buildroot. (m4 is small, so that doesn't make a big
difference.)
(FWIW, Fedora dropped make from the buildroot now,
https://fedoraproject.org/wiki/Changes/Remove_make_from_BuildRoot. I think it's
reasonable to assume that m4 will be dropped at some point too.)
The main reason to drop m4 is that the syntax is not very nice, and we should
minimize the number of different syntaxes that we use. We still have two
(configure_file() with @FOO@ and jinja2 templates with {{foo}} and the
pythonesque conditional expressions), but at least we don't need m4 (with
m4_dnl and `quotes').
m4 was nice in '85, but the syntax feels a bit dated. Since we use python for
meson, let's use a popular python templating engine to replace some m4 usage.
A little nicety is that typos are caught:
FAILED: sysusers.d/systemd-remote.conf
/usr/bin/meson --internal exe --capture sysusers.d/systemd-remote.conf -- /home/zbyszek/src/systemd/tools/meson-render-jinja2.py config.h ../sysusers.d/systemd-remote.conf.j2
Traceback (most recent call last):
File "/home/zbyszek/src/systemd/tools/meson-render-jinja2.py", line 28, in <module>
print(render(sys.argv[2], defines))
File "/home/zbyszek/src/systemd/tools/meson-render-jinja2.py", line 24, in render
return template.render(defines)
File "/usr/lib/python3.9/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/usr/lib/python3.9/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/usr/lib/python3.9/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "<template>", line 8, in top-level template code
jinja2.exceptions.UndefinedError: 'HAVE_MICROHTTP' is undefined
This checking mirrors what 349cc4a507 did for C defines.
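For reference, a minimal sketch of such a renderer (not the actual script; the
key point is jinja2's StrictUndefined, which is what turns a typoed name into a
hard error):
#!/usr/bin/env python3
# Sketch: render a .j2 template using the #define values from config.h.
import ast
import re
import sys

import jinja2

def parse_config_h(path):
    defines = {}
    for line in open(path):
        m = re.match(r'#define\s+(\w+)\s+(.*)', line)
        if m:
            try:
                defines[m.group(1)] = ast.literal_eval(m.group(2))
            except (ValueError, SyntaxError):
                defines[m.group(1)] = m.group(2)
    return defines

def render(path, defines):
    env = jinja2.Environment(undefined=jinja2.StrictUndefined,
                             trim_blocks=True,
                             keep_trailing_newline=True)
    return env.from_string(open(path).read()).render(defines)

if __name__ == '__main__':
    print(render(sys.argv[2], parse_config_h(sys.argv[1])), end='')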
This reverts commit a2031de849.
The patch itself seems OK, but it exposes a bug in lxml or libxml2-2.9.12 which
was just released. This is being resolved in
https://gitlab.gnome.org/GNOME/libxml2/-/issues/255, but it might be a while. So
let's revert this for now to unbreak our CI.
Fixes #19601.
I occasionally do 'build/man/man systemd.directives' when working on man pages,
and it's annoyingly slow. By parallelizing the parsing of the xml, we can make
it a bit faster.
This is still rather inefficient: only the parsing part is parallelized, the
xml is still produced serially at the end, which is hard to avoid.
$ ninja -C build man/systemd.directives.xml
before:
8.20s user 0.21s system 99% cpu 8.460 total
8.33s user 0.18s system 98% cpu 8.619 total
8.72s user 0.19s system 98% cpu 9.019 total
after:
13.99s user 0.73s system 345% cpu 4.262 total
14.15s user 0.35s system 348% cpu 4.161 total
14.33s user 0.35s system 339% cpu 4.321 total
I.e. it uses almost twice as much cpu, but cuts the wallclock time (on a
2-core/4-thread cpu) roughly in half, which is an overall win if you're just
trying to render the man page.
The change from list and .append() to set and .add() is something that could
have been done before too, but it's noticeable now. It cuts down on the
serialization/deserialization time (by about 0.2 s).
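A sketch of the parallelization approach (simplified; file names and the
extraction logic are placeholders):
import multiprocessing

from lxml import etree

def extract_directives(path):
    # Parse one page and return plain, picklable data (strings), so only
    # the extracted results cross the process boundary.
    tree = etree.parse(path)
    return [e.text for e in tree.iter('varname') if e.text]

if __name__ == '__main__':
    pages = ['man/systemd.unit.xml', 'man/systemd.service.xml']  # placeholders
    directives = set()
    with multiprocessing.Pool() as pool:
        for names in pool.map(extract_directives, pages):
            directives.update(names)  # a set instead of list.append()
    print(len(directives), 'unique directives')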
Add a build script to compile bpf source code. A program in restricted C is
compiled into an object file. The object file is then converted into a BPF
skeleton [0] header file.
If built with the custom meson build rule, the target header will reside in the
build/ directory (not in the source tree), e.g. the path for socket_bind is:
`build/src/core/bpf/socket_bind/socket-bind.skel.h`
The script runs the following phases:
* clang to generate *.o from restricted C
* llvm-strip to remove useless DWARF info
* BPF skeleton generation with bpftool
These phases are logged to stderr for debugging purposes.
To include BTF debug information, the -g option is passed to clang.
[0] https://lwn.net/Articles/806911/
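Roughly, the three phases map to commands like these (a python sketch, not the
script's exact invocations; flags and paths may differ):
import subprocess
import sys

def log(msg):
    print(msg, file=sys.stderr)  # phases are logged to stderr

def build_skeleton(src_c, out_o, out_h):
    # 1. clang: restricted C -> BPF object file; -g also emits BTF info.
    log(f'compiling {src_c} -> {out_o}')
    subprocess.check_call(['clang', '-O2', '-g', '-target', 'bpf',
                           '-c', src_c, '-o', out_o])
    # 2. llvm-strip: drop the DWARF sections the skeleton does not need.
    log(f'stripping {out_o}')
    subprocess.check_call(['llvm-strip', '-g', out_o])
    # 3. bpftool: generate the skeleton header from the object file.
    log(f'generating {out_h}')
    with open(out_h, 'w') as f:
        subprocess.check_call(['bpftool', 'gen', 'skeleton', out_o], stdout=f)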
When executed in test mode, "OUTDATED" is appropriate. But when executed to
actually update the text, those pages are the opposite once the tool has run:
up to date, not outdated.
668b3a42fe allowed update-dbus-docs.py to start running on CentOS 8 (instead
of skipping). But subprocess.check_output()'s text argument didn't exist until
Python 3.7, and C8 is still running Python 3.6. Use universal_newlines instead
for backwards compatibility.
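For reference, the two spellings behave the same, only their availability
differs (the command here is just an example):
import subprocess

# Python >= 3.7 only:
#   out = subprocess.check_output(['systemctl', '--version'], text=True)
# Also works on Python 3.6 (e.g. CentOS 8):
out = subprocess.check_output(['systemctl', '--version'], universal_newlines=True)
print(out.splitlines()[0])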