# Integration tests

## Running the integration tests with meson + mkosi

To run the integration tests with meson + mkosi, make sure you're running the
latest version of mkosi. See
[`docs/HACKING.md`](https://github.com/systemd/systemd/blob/main/docs/HACKING.md)
for more specific details. `mkosi` must be available in `$PATH` when
reconfiguring meson so that it is picked up properly.

We also need to enable the required meson options:

```shell
$ meson setup --reconfigure build -Dremote=enabled
```

To make sure `mkosi` doesn't try to build systemd from source during the image build
process, you can add the following to `mkosi.local.conf`:

```
[Content]
Environment=NO_BUILD=1
```

You might also want to use the `PackageDirectories=` or `Repositories=` option to provide
mkosi with a directory or repository containing the systemd packages that should be
installed instead. If the repository containing the systemd packages is not a built-in
repository known to mkosi, you can use the `SandboxTrees=` option to write an extra
repository definition to `/etc`, which is used when building the image.
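
For instance, a local configuration along these lines could be added to
`mkosi.local.conf` (the paths here are hypothetical; substitute your own package
directory and repository definition):

```
[Content]
# Directory containing locally built systemd packages to install into the image.
PackageDirectories=../systemd-packages

# Extra tree overlaid on the sandbox; place a repository definition in it
# (e.g. under etc/) if the repository is not built into mkosi.
SandboxTrees=mkosi.sandbox/
```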

Next, we can build the integration test image with meson:

```shell
$ meson compile -C build mkosi
```

By default, the `mkosi` meson target that builds the integration test image depends on
other meson targets that build the various systemd tools used during the image build, to
make sure they are up to date. If you instead want the systemd tools already installed on
the host to be used, you can run `mkosi` manually to build the image. To build the
integration test image without meson, run the following:

```shell
$ mkosi -f
```

Note that by default `build/` is assumed to be the meson build directory used to run the
integration tests. If you want to use another directory as the meson build directory, you
will have to configure the mkosi build directory (`BuildDirectory=`), cache directory
(`CacheDirectory=`) and output directory (`OutputDirectory=`) to point to the other
directory using `mkosi.local.conf`.
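
As a sketch, assuming a hypothetical build directory `build-other/`, something like the
following could go into `mkosi.local.conf` (option section placement may differ between
mkosi versions; see the mkosi documentation):

```
[Output]
OutputDirectory=build-other/mkosi.output
CacheDirectory=build-other/mkosi.cache
BuildDirectory=build-other/mkosi.builddir
```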

After the image has been built, the integration tests can be run with:

```shell
$ SYSTEMD_INTEGRATION_TESTS=1 meson test -C build --no-rebuild --suite integration-tests --num-processes "$(($(nproc) / 4))"
```

As usual, specific tests can be run in meson by appending the name of the test,
which is usually the name of the directory, e.g.

```shell
$ SYSTEMD_INTEGRATION_TESTS=1 meson test -C build --no-rebuild -v TEST-01-BASIC
```

See `meson introspect build --tests` for a list of tests.

To interactively debug a failing integration test, the `--interactive` option
(`-i`) for `meson test` can be used. Note that this requires meson v1.5.0 or
newer:

```shell
$ SYSTEMD_INTEGRATION_TESTS=1 meson test -C build --no-rebuild -i TEST-01-BASIC
```

Due to limitations in meson, the integration tests do not yet depend on the
mkosi target, which means the mkosi target has to be manually rebuilt before
running the integration tests. To rebuild the image and rerun a test, the
following command can be used:

```shell
$ meson compile -C build mkosi && SYSTEMD_INTEGRATION_TESTS=1 meson test -C build --no-rebuild -v TEST-01-BASIC
```

The integration tests use the same mkosi configuration that's used when you run
mkosi in the systemd repository, so any local modifications to the mkosi
configuration (e.g. in `mkosi.local.conf`) are automatically picked up and used
by the integration tests as well.

## Iterating on an integration test

To iterate on an integration test, let's first get a shell in the integration test environment by running
the following:

```shell
$ meson compile -C build mkosi && SYSTEMD_INTEGRATION_TESTS=1 TEST_SHELL=1 meson test -C build --no-rebuild -i TEST-01-BASIC
```

This will get us a shell in the integration test environment after booting the machine without running the
integration test itself. After booting, we can verify the integration test passes by running it manually,
for example with `systemctl start TEST-01-BASIC`.

Now you can extend the test in whatever way you like to add more coverage of existing
features or to add coverage for a new feature. Once you've finished writing the logic and
want to rerun the test, run the following on the host:

```shell
$ mkosi -t none
```

This will rebuild the distribution packages without rebuilding the entire integration test image. Next, run
the following in the integration test machine:

```shell
$ systemctl soft-reboot
$ systemctl start TEST-01-BASIC
```

A soft-reboot is required to make sure all the leftover state from the previous run of the test is cleaned
up by soft-rebooting into the btrfs snapshot we made before running the test. After the soft-reboot,
re-running the test will first install the new packages we just built, make a new snapshot and finally run
the test again. You can keep running the loop of `mkosi -t none`, `systemctl soft-reboot` and
`systemctl start ...` until the changes to the integration test are working.

If you're debugging a failing integration test (running `meson test --interactive` without `TEST_SHELL`),
there's no need to run `systemctl start ...`, running `systemctl soft-reboot` on its own is sufficient to
rerun the test.

## Running the integration tests the old fashioned way

The extended testsuite only works as root (UID 0). It consists of the subdirectories
named `test/TEST-??-*`, each of which contains a description of an OS image and
a test which consists of systemd units and scripts to execute in this image.
The same image is used for execution under `systemd-nspawn` and `qemu`.

To run the extended testsuite do the following:

```shell
$ ninja -C build  # Avoid building anything as root later
$ sudo test/run-integration-tests.sh
ninja: Entering directory `/home/zbyszek/src/systemd/build'
ninja: no work to do.
--x-- Running TEST-01-BASIC --x--
+ make -C TEST-01-BASIC clean setup run
make: Entering directory '/home/zbyszek/src/systemd/test/TEST-01-BASIC'
TEST-01-BASIC CLEANUP: Basic systemd setup
TEST-01-BASIC SETUP: Basic systemd setup
...
TEST-01-BASIC RUN: Basic systemd setup [OK]
make: Leaving directory '/home/zbyszek/src/systemd/test/TEST-01-BASIC'
--x-- Result of TEST-01-BASIC: 0 --x--
--x-- Running TEST-02-CRYPTSETUP --x--
+ make -C TEST-02-CRYPTSETUP clean setup run
```

If one of the tests fails, `$subdir/test.log` contains the log file of
the test.

To run just one of the cases:

```shell
$ sudo make -C test/TEST-01-BASIC clean setup run
```

### Specifying the build directory

If the build directory is not detected automatically, it can be specified
with `BUILD_DIR=`:

```shell
$ sudo BUILD_DIR=some-other-build/ test/run-integration-tests.sh
```

or

```shell
$ sudo make -C test/TEST-01-BASIC BUILD_DIR=../../some-other-build/ ...
```

Note that in the second case, the path is relative to the test case directory.
An absolute path may also be used in both cases.

### Testing installed binaries instead of built

To run the extended testsuite using the systemd installed on the system instead
of the systemd from a build, use `NO_BUILD=1`:

```shell
$ sudo NO_BUILD=1 test/run-integration-tests.sh
```

### Configuration variables

`TEST_NO_QEMU=1`: Don't run tests under qemu.

`TEST_QEMU_ONLY=1`: Run only tests that require qemu.

`TEST_NO_NSPAWN=1`: Don't run tests under systemd-nspawn.

`TEST_PREFER_NSPAWN=1`: Run all tests that do not require qemu under
systemd-nspawn.

`TEST_NO_KVM=1`: Disable qemu KVM auto-detection (may be necessary when you're
trying to run the *vanilla* qemu and have both qemu and qemu-kvm installed).

`QEMU_MEM=512M`: Configure amount of memory for qemu VMs (defaults to 512M).

`QEMU_SMP=1`: Configure number of CPUs for qemu VMs (defaults to 1).

`KERNEL_APPEND='...'`: Append additional parameters to the kernel command line.

`NSPAWN_ARGUMENTS='...'`: Specify additional arguments for systemd-nspawn.

`QEMU_TIMEOUT=infinity`: Set a timeout for tests under qemu (defaults to 1800
sec).

`NSPAWN_TIMEOUT=infinity`: Set a timeout for tests under systemd-nspawn
(defaults to 1800 sec).

`TEST_SHELL=1`: Configure the machine to be more *user-friendly* for
interactive debugging (e.g. by setting a usable default terminal, suppressing
the shutdown after the test, etc.).

`TEST_MATCH_SUBTEST=subtest`: If the test makes use of `run_subtests`, use this
variable to provide a POSIX extended regex to run only subtests matching the
expression.

`TEST_MATCH_TESTCASE=testcase`: Same as `$TEST_MATCH_SUBTEST` but for subtests
that make use of `run_testcases`.

The kernel and initrd can be specified with `$KERNEL_BIN` and `$INITRD`. (Fedora's
or Debian's default kernel path and initrd are used by default.)

The test framework tries to find your qemu binary automatically. If you want to
use a different one, specify it with `$QEMU_BIN`.

`TEST_SKIP`: Takes a space-separated list of tests to skip.

### Debugging the qemu image

If you want to log in to the testsuite virtual machine, use `TEST_SHELL=1`
and log in as root:

```shell
$ sudo make -C test/TEST-01-BASIC TEST_SHELL=1 run
```

The root password is empty.

## Ubuntu CI

New PRs submitted to the project are run through regression tests, and one set
of those is the 'autopkgtest' runs for several different architectures, called
'Ubuntu CI'. Part of that testing is to run all these tests. Sometimes these
tests are temporarily deny-listed from running in the 'autopkgtest' tests while
debugging a flaky test; that is done by creating a file in the test directory
named 'deny-list-ubuntu-ci'. For example, to prevent the TEST-01-BASIC test
from running in the 'autopkgtest' runs, create the file
'TEST-01-BASIC/deny-list-ubuntu-ci'.

The tests may also be disabled only for specific architectures, by creating a
deny-list file with the arch name appended, e.g.
'TEST-01-BASIC/deny-list-ubuntu-ci-arm64' to disable the TEST-01-BASIC test
only on test runs for the 'arm64' architecture.

Note that the arch naming does not follow 'uname -m'; these are Debian arch names:
https://wiki.debian.org/ArchitectureSpecificsMemo

For PRs that fix a currently deny-listed test, the PR should include removal
of the deny-list file.
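
Concretely, deny-listing boils down to creating empty marker files, e.g. (run from the
repository root; the `mkdir -p` is only there so the sketch also works outside a
systemd checkout):

```shell
# Deny-list TEST-01-BASIC in Ubuntu CI on all architectures.
mkdir -p test/TEST-01-BASIC
touch test/TEST-01-BASIC/deny-list-ubuntu-ci

# Deny-list it only on the arm64 architecture.
touch test/TEST-01-BASIC/deny-list-ubuntu-ci-arm64
```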

In case a test fails, the full set of artifacts, including the journal of the
failed run, can be downloaded from the artifacts.tar.gz archive which will be
reachable in the same URL parent directory as the logs.gz that gets linked on
the Github CI status.

The log URL can be derived following a simple algorithm; however, the test
completion timestamp is needed, and it's not easy to find without access to the
log itself. For example, a noble s390x job started on 2024-03-23 at 02:09:11
will be stored at the following URL:

https://autopkgtest.ubuntu.com/results/autopkgtest-noble-upstream-systemd-ci-systemd-ci/noble/s390x/s/systemd-upstream/20240323_020911_e8e88@/log.gz
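
The derivation above can be sketched as a small shell snippet (the variable names are
illustrative):

```shell
# Build the artifact URL for the example job above (noble, s390x,
# started 2024-03-23 at 02:09:11, hash prefix e8e88).
release=noble
arch=s390x
timestamp=20240323_020911
hash=e8e88

url="https://autopkgtest.ubuntu.com/results/autopkgtest-${release}-upstream-systemd-ci-systemd-ci/${release}/${arch}/s/systemd-upstream/${timestamp}_${hash}@/log.gz"
echo "$url"
```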

A list of file paths for recently completed test runs is available at:

https://autopkgtest.ubuntu.com/results/autopkgtest-noble-upstream-systemd-ci-systemd-ci/

Paths listed at this URL can be appended to it to download them. There are too many
results for the web server to list all at once, but there is a workaround: copy the
last line on the page and append it to the URL with a '?marker=' prefix, and the web
server will show the next page of results. For example:

https://autopkgtest.ubuntu.com/results/autopkgtest-noble-upstream-systemd-ci-systemd-ci/?marker=noble/amd64/s/systemd-upstream/20240616_211635_5993a@/result.tar

The 5 characters at the end of the last directory are not random, but the first
5 characters of a SHA1 hash generated from the set of parameters given to
the build plus the completion timestamp, such as:

```shell
$ echo -n 'systemd-upstream {"build-git": "https://salsa.debian.org/systemd-team/systemd.git#debian/master", "env": ["UPSTREAM_REPO=https://github.com/systemd/systemd.git", "CFLAGS=-O0", "DEB_BUILD_PROFILES=pkg.systemd.upstream noudeb", "TEST_UPSTREAM=1", "CONFFLAGS_UPSTREAM=--werror -Dslow-tests=true", "UPSTREAM_PULL_REQUEST=31444", "GITHUB_STATUSES_URL=https://api.github.com/repos/systemd/systemd/statuses/c27f600a1c47f10b22964eaedfb5e9f0d4279cd9"], "ppas": ["upstream-systemd-ci/systemd-ci"], "submit-time": "2024-02-27 17:06:27", "uuid": "02cd262f-af22-4f82-ac91-53fa5a9e7811"}' | sha1sum | cut -c1-5
```

To add new dependencies or new binaries to the packages used during the tests,
a merge request can be sent to: https://salsa.debian.org/systemd-team/systemd
targeting the 'upstream-ci' branch.

The cloud-side infrastructure, that is hooked into the Github interface, is
located at:

https://git.launchpad.net/autopkgtest-cloud/

A generic description of the testing infrastructure can be found at:

https://wiki.ubuntu.com/ProposedMigration/AutopkgtestInfrastructure

In case of infrastructure issues with this CI, things might go wrong in three
places:

- starting a job: this is done via a Github webhook, so check if the HTTP POSTs
  are failing on https://github.com/systemd/systemd/settings/hooks
- running a job: all currently running jobs are listed at
  https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream in case the PR
  does not show the status for some reason
- reporting the job result: this is done on Canonical's cloud infrastructure; if
  jobs are started and running but no status is visible on the PR, then it is
  likely that reporting back is not working

The CI job needs a PPA in order to be accepted, and the
upstream-systemd-ci/systemd-ci PPA is used. Note that this is necessary even
when there are no packages to backport, but by default a PPA won't have a
repository for a release if there are no packages built for it. To work around
this problem, when a new empty release is needed, the mark-suite-dirty tool from
https://git.launchpad.net/ubuntu-archive-tools can be used to force the PPA
to publish an empty repository, for example:

```shell
$ ./mark-suite-dirty -A ppa:upstream-systemd-ci/ubuntu/systemd-ci -s noble
```

will create an empty 'noble' repository that can be used for 'noble' CI jobs.

For infrastructure help, reaching out to 'qa-help' via the #ubuntu-quality
channel on libera.chat is an effective way to receive support in general.

Given access to the shared secret, tests can be re-run using the generic
retry-github-test tool:

https://git.launchpad.net/autopkgtest-cloud/tree/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/retry-github-test

A wrapper script that makes it easier to use is also available:

https://piware.de/gitweb/?p=bin.git;a=blob;f=retry-gh-systemd-Test

## Manually running a part of the Ubuntu CI test suite

In some situations one may want/need to run one of the tests run by Ubuntu CI
locally for debugging purposes. For this, you need a machine (or a VM) with
the same Ubuntu release as is used by Ubuntu CI (Jammy at the time of writing).

First of all, clone the Debian systemd repository and sync it with the code of
the PR (set by the `$UPSTREAM_PULL_REQUEST` env variable) you'd like to debug:

```shell
$ git clone https://salsa.debian.org/systemd-team/systemd.git
$ cd systemd
$ git checkout upstream-ci
$ TEST_UPSTREAM=1 UPSTREAM_PULL_REQUEST=12345 ./debian/extra/checkout-upstream
```

Now install necessary build & test dependencies:

```shell
# PPA with some newer Ubuntu packages required by upstream systemd
$ add-apt-repository -y --enable-source ppa:upstream-systemd-ci/systemd-ci
$ apt build-dep -y systemd
$ apt install -y autopkgtest debhelper genisoimage git qemu-system-x86 \
                 libcurl4-openssl-dev libfdisk-dev libtss2-dev libfido2-dev \
                 libssl-dev python3-pefile
```

Build systemd deb packages with debug info:

```shell
$ TEST_UPSTREAM=1 DEB_BUILD_OPTIONS="nocheck nostrip noopt" dpkg-buildpackage -us -uc
$ cd ..
```

Prepare a testbed image for autopkgtest (tweak the release as necessary):

```shell
$ autopkgtest-buildvm-ubuntu-cloud --ram-size 1024 -v -a amd64 -r jammy
```

And finally run the autopkgtest itself:

```shell
$ autopkgtest -o logs *.deb systemd/ \
              --env=TEST_UPSTREAM=1 \
              --timeout-factor=3 \
              --test-name=boot-and-services \
              --shell-fail \
              -- autopkgtest-virt-qemu --cpus 4 --ram-size 2048 autopkgtest-jammy-amd64.img
```

where `--test-name=` is the name of the test you want to run/debug. The
`--shell-fail` option will pause the execution in case the test fails and show
you information on how to connect to the testbed for further debugging.

## Manually running CodeQL analysis

This is mostly useful for debugging various CodeQL quirks.

Download the CodeQL Bundle from https://github.com/github/codeql-action/releases
and unpack it somewhere. From now on, this 'tutorial' assumes you have the
`codeql` binary from the unpacked archive in `$PATH` for brevity.

Switch to the systemd repository if not already:

```shell
$ cd <systemd-repo>
```

Create an initial CodeQL database:

```shell
$ CCACHE_DISABLE=1 codeql database create codeqldb --language=cpp -vvv
```

Disabling ccache is important, otherwise you might see CodeQL complaining:

```
No source code was seen and extracted to
/home/mrc0mmand/repos/@ci-incubator/systemd/codeqldb. This can occur if the
specified build commands failed to compile or process any code.
 - Confirm that there is some source code for the specified language in the
   project.
 - For codebases written in Go, JavaScript, TypeScript, and Python, do not
   specify an explicit --command.
 - For other languages, the --command must specify a "clean" build which
   compiles all the source code files without reusing existing build artefacts.
```

If you want to run all queries systemd uses in CodeQL, run:

```shell
$ codeql database analyze codeqldb/ --format csv --output results.csv .github/codeql-custom.qls .github/codeql-queries/*.ql -vvv
```

Note: this will take a while.

If you're interested in a specific check, the easiest way (without hunting down
the specific CodeQL query file) is to create a custom query suite. For example:

```shell
$ cat >test.qls <<EOF
- queries: .
  from: codeql/cpp-queries
- include:
    id:
        - cpp/missing-return
EOF
```

And then execute it in the same way as above:

```shell
$ codeql database analyze codeqldb/ --format csv --output results.csv test.qls -vvv
```

More about query suites here: https://codeql.github.com/docs/codeql-cli/creating-codeql-query-suites/

The results are then located in the `results.csv` file as a comma-separated
values list (obviously), which is the most human-friendly output format the
CodeQL utility provides (so far).

## Running Coverity locally

Note: this requires a Coverity license, as the public tool
[tarball](https://scan.coverity.com/download) doesn't contain cov-analyze and
friends, so the usefulness of this guide is somewhat limited.

Debugging certain pesky Coverity defects can be painful, especially since the
OSS Coverity instance has a very strict limit on how many builds we can send it
per day/week. So if you have access to a non-OSS Coverity license, knowing how
to debug defects locally might come in handy.

After installing the necessary tooling we need to populate the emit DB first:

```shell
$ rm -rf build cov
$ meson setup build -Dman=false
$ cov-build --dir=./cov ninja -C build
```

From there it depends whether you're interested in a specific defect or all of
them. For the latter, run:

```shell
$ cov-analyze --dir=./cov --wait-for-license
```

If you want to debug a specific defect, telling that to cov-analyze speeds
things up a bit:

```shell
$ cov-analyze --dir=./cov --wait-for-license --disable-default --enable ASSERT_SIDE_EFFECT
```

The final step is getting the actual report which can be generated in multiple
formats, for example:

```shell
$ cov-format-errors --dir ./cov --text-output-style multiline
$ cov-format-errors --dir=./cov --emacs-style
$ cov-format-errors --dir=./cov --html-output html-out
```

These generate a text report, an emacs-compatible text report, and an HTML
report, respectively.

Other useful options for `cov-format-errors` include `--file <file>` to show only
defects for a specific file, `--checker-regex DEFECT_TYPE` to filter out only a
specific defect (if this wasn't done already by cov-analyze), and many others;
see `--help` for an exhaustive list.

## Code coverage

We have a daily cron job in CentOS CI which runs all unit and integration tests,
collects coverage using gcov/lcov, and uploads the report to
[Coveralls](https://coveralls.io/github/systemd/systemd). In order to collect
the most accurate coverage information, some measures have to be taken regarding
sandboxing, namely:

 - ProtectSystem= and ProtectHome= need to be turned off
 - the $BUILD_DIR with necessary .gcno files needs to be present in the image
   and needs to be writable by all processes

The first point is relatively easy to handle and is handled automagically by
our test "framework" by creating necessary dropins.

Making the `$BUILD_DIR` accessible to _everything_ is slightly more complicated.
First and foremost, the `$BUILD_DIR` has a POSIX ACL that makes it writable
to everyone. However, this is not enough in some cases, like for services
that use DynamicUser=yes, since that implies ProtectSystem=strict, which can't
be turned off. A solution to this is to use `ReadWritePaths=$BUILD_DIR`, which
works for the majority of cases, but can't be turned on globally, since
ReadWritePaths= creates its own mount namespace which might break some
services. Hence, `ReadWritePaths=$BUILD_DIR` is enabled for all services
with the `test-` prefix (i.e. test-foo.service or test-foo-bar.service), both
in the system and the user managers.
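
As an illustrative sketch (the unit name and path are hypothetical, assuming
`/coverage/build` as the build directory), the generated drop-in for a `test-`
prefixed service would look roughly like this:

```
# test-foo.service.d/coverage-override.conf (illustrative)
[Service]
ProtectSystem=no
ProtectHome=no
ReadWritePaths=/coverage/build
```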

So, if you're considering writing an integration test that makes use of
DynamicUser=yes, or other sandboxing stuff that implies it, please prefix the
test unit (be it a static one or a transient one created via systemd-run) with
`test-`, unless the test unit needs to be able to install mount points in the
main mount namespace. In that case, use `IGNORE_MISSING_COVERAGE=yes` in the
test definition (i.e. `TEST-*-NAME/test.sh`), which will skip the post-test
check for missing coverage for the respective test.