Before c4b69e990f, if the socket fd was
passed from pid1, `udev_monitor_set_receive_buffer_size()` (now a
wrapper of `sd_device_monitor_set_receive_buffer_size()`) was not
called. Let's preserve the original logic.
Before c4b69e990f, if the socket fd was
passed from pid1, `udev_monitor_enable_receiving()` (now a wrapper
of `device_monitor_enable_receiving()`) was not called.
Let's preserve the original logic.
Fixes #10754.
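A minimal sketch of the intended behaviour in plain C, not the actual udevd code: only bump the receive buffer when we created the socket ourselves, and leave an fd handed over by pid1 untouched. The helper name and the "fd_passed_from_pid1" flag are illustrative assumptions.

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/socket.h>

    /* Sketch only: mirror the pre-c4b69e990f behaviour of not touching a
     * socket that was passed in from pid1. */
    static int maybe_set_receive_buffer(int fd, bool fd_passed_from_pid1) {
            int sz = 128 * 1024 * 1024;

            if (fd_passed_from_pid1)
                    return 0; /* pid1 already configured the socket */

            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE, &sz, sizeof(sz)) < 0 &&
                setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) < 0)
                    return -errno;

            return 0;
    }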
Otherwise we might install the socket unit early, but the service
backing it late, and then end up in strange loops when we enter rescue
mode, because we saw an event on /dev/rfkill but can neither dispatch
nor flush it.
Fixes: #9171
Unfortunately, f5f9a580dd didn't help much and now
the next subtest gets stuck from time to time. Let's skip
test-execute altogether so as not to bother anybody with
spurious failures.
https://github.com/systemd/systemd/issues/10696 is still open.
Everybody is welcome to share ideas :-)
config_parse_iaid(), dhcp_identifier_set_iaid() and sd_dhcp6_client_set_iaid() all
allow for the IAID to be zero. Also, RFC 3315 makes no mention that zero
would be invalid.
However, client_ensure_iaid() would take an IAID of zero as a sign that
the value was unset. Fix that by keeping track of whether the IAID has
been initialized.
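A minimal sketch of the idea in plain C, using hypothetical names ("Client", "iaid_set") rather than the actual sd-dhcp-client fields:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct Client {
            uint32_t iaid;
            bool iaid_set;   /* explicit flag, so that 0 stays a valid IAID */
    } Client;

    static void client_set_iaid(Client *c, uint32_t iaid) {
            c->iaid = iaid;
            c->iaid_set = true;
    }

    static void client_ensure_iaid(Client *c, uint32_t fallback) {
            /* Previously the check was effectively "c->iaid != 0", which
             * wrongly treated an explicitly configured IAID of zero as unset. */
            if (c->iaid_set)
                    return;

            client_set_iaid(c, fallback);
    }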
This way we can correctly ensure that when a unit that requires some
controller goes away, we propagate the removal of it all the way up, so
that the controller is turned off in all the parents too.
Previously we tried to be smart: when a new unit appeared and it only
added controllers to the cgroup mask we'd update the cached members mask
in all parents by ORing in the controller flags in their cached values.
Unfortunately this was quite broken, as we missed some conditions when
this cache had to be reset (for example, when a unit got unloaded),
moreover the optimization doesn't work when a controller is removed
anyway (as in that case there's no way around iterating through all
children to figure out whether any other, remaining child unit still needs it).
Hence, let's simplify the logic substantially: instead of updating the
cache on the right events (which we didn't get right), let's simply
invalidate the cache, and generate it lazily when we encounter it later.
This should actually result in better behaviour as we don't have to
calculate the new members mask for a whole subtree whenever we have the
suspicion something changed, but can delay it to the point where we
actually need the members mask.
This allows us to simplify things quite a bit, which is good, since
validating this cache for correctness is hard enough.
Fixes: #9512
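A minimal sketch of the lazy-invalidation pattern described above, with stand-in structures and field names ("members_mask_valid", parent/child pointers), not the actual systemd Unit type:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct Unit Unit;
    struct Unit {
            Unit *parent, *first_child, *next_sibling;
            uint64_t own_mask;      /* controllers this unit itself needs */
            uint64_t members_mask;  /* cached union over all descendants */
            bool members_mask_valid;
    };

    /* On any event that might change the answer (unit added, unloaded,
     * controller requirements changed), just mark the cache dirty all
     * the way up instead of trying to patch it in place. */
    static void unit_invalidate_members_mask(Unit *u) {
            for (; u; u = u->parent)
                    u->members_mask_valid = false;
    }

    /* Recompute lazily, only when somebody actually needs the value. */
    static uint64_t unit_get_members_mask(Unit *u) {
            if (u->members_mask_valid)
                    return u->members_mask;

            u->members_mask = 0;
            for (Unit *c = u->first_child; c; c = c->next_sibling)
                    u->members_mask |= c->own_mask | unit_get_members_mask(c);

            u->members_mask_valid = true;
            return u->members_mask;
    }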
After creating a cgroup we need to initialize its
"cgroup.subtree_control" file with the controllers its children want to
use. Currently we do so whenever the mkdir() on the cgroup succeeded,
i.e. when we know the cgroup is "fresh". Let's update the condition
slightly so that we also do so when we internally assume a cgroup doesn't
exist yet, even if it already does (maybe left-over from a previous
run).
This shouldn't change anything IRL, but it makes things a bit more robust.
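For context, on cgroupsv2 "initializing" that file means writing "+<controller>" tokens into it; a minimal, self-contained sketch (the path and controller set are just examples):

    #include <stdio.h>

    /* Sketch: enable the cpu and memory controllers for the children of
     * an example cgroup, roughly what has to happen (again) whenever we
     * believe the cgroup is fresh. Error handling kept minimal. */
    int main(void) {
            FILE *f = fopen("/sys/fs/cgroup/example.slice/cgroup.subtree_control", "we");
            if (!f)
                    return 1;

            fputs("+cpu +memory", f);
            return fclose(f) == 0 ? 0 : 1;
    }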
Previously this would manipulate the realization mask for invalidating
the realization. This is a bit ugly though as the realization mask's
primary purpose is to reflect in which hierarchies a cgroup currently
exists, and it's probably a good idea to keep that in sync with
realities.
We nowadays have an explicit field for invalidating cgroup
controller information, the "cgroup_invalidated_mask"; let's use this
one instead.
The effect is pretty much the same, as the main consumer of these masks
(unit_has_mask_realize()) checks both anyway.
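A rough, self-contained sketch of the change, with a stand-in struct whose field names follow the commit text rather than the real systemd Unit:

    #include <stdint.h>

    typedef struct Unit {
            uint64_t cgroup_realized_mask;     /* in which hierarchies the cgroup actually exists */
            uint64_t cgroup_invalidated_mask;  /* which controllers need their settings re-applied */
    } Unit;

    static void unit_invalidate_cgroup_sketch(Unit *u, uint64_t mask) {
            /* Previously: u->cgroup_realized_mask &= ~mask;
             * which pretended the cgroup wasn't realized in those
             * hierarchies, even though it still is. */
            u->cgroup_invalidated_mask |= mask;
    }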
This changes cg_enable_everywhere() to return which controllers are
enabled for the specified cgroup. This information is then used to
correctly track the enablement mask currently in effect for a unit.
Moreover, when we try to turn off a controller and this works, this
indicates that the parent unit might successfully turn it off now, too,
as our unit might have been what kept it busy.
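A sketch of what the caller side gains from this: an out parameter reporting the controllers that are actually enabled afterwards, which the unit can then cache. The signature and field names here are illustrative assumptions, not the exact systemd prototypes:

    #include <stdint.h>

    typedef uint64_t CGroupMask;

    typedef struct Unit {
            CGroupMask cgroup_enabled_mask;  /* controllers currently enabled for our children */
    } Unit;

    /* Assumed shape of the changed helper: make "target" the set of
     * enabled controllers along the path and report the resulting set. */
    int cg_enable_everywhere(CGroupMask supported, CGroupMask target,
                             const char *path, CGroupMask *ret_result);

    static int unit_update_enablement(Unit *u, CGroupMask supported,
                                      CGroupMask target, const char *path) {
            CGroupMask result;
            int r;

            r = cg_enable_everywhere(supported, target, path, &result);
            if (r < 0)
                    return r;

            /* Track what is actually in effect, not merely what we asked for. */
            u->cgroup_enabled_mask = result;
            return 0;
    }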
So far, when realizing cgroups, i.e. when syncing up the kernel
representation of the relevant cgroups with our own idea of them, we
would strictly work from the root to the leaves. This is generally a good approach, as
when controllers are enabled this has to happen in root-to-leaves order.
However, when controllers are disabled this has to happen in the
opposite order: in leaves-to-root order (this is because controllers can
only be enabled in a child if it is already enabled in the parent, and
if it shall be disabled in the parent then it has to be disabled in the
child first, otherwise it is considered busy when we attempt to remove
it in the parent).
To make things more complicated, when invalidating a unit's cgroup membership
systemd can actually turn off some controllers previously turned on at
the very same time as it turns on other controllers previously turned
off. In such a case we have to work leaves-to-root *and*
root-to-leaves right after each other. With this patch this is
implemented: we still generally operate root-to-leaves, but as soon as
we notice that we successfully turned off a controller previously turned on
for a cgroup we'll re-enqueue the cgroup realization for all parents of
a unit, thus implementing leaves-to-root where necessary.
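A minimal sketch of the re-enqueue step, with a hypothetical queue helper and parent pointer standing in for the real systemd machinery:

    #include <stdbool.h>

    typedef struct Unit Unit;
    struct Unit {
            Unit *parent;
            bool in_cgroup_realize_queue;
    };

    /* Assumed helper: schedule a unit so its cgroup gets realized again. */
    void unit_add_to_cgroup_realize_queue(Unit *u);

    /* After realizing one unit: if we managed to turn a controller off,
     * the parents may now be able to turn it off as well, so walk up and
     * re-enqueue them all, effectively realizing leaves-to-root for the
     * disable path while the normal pass stays root-to-leaves. */
    static void propagate_disable_upwards(Unit *u, bool turned_something_off) {
            if (!turned_something_off)
                    return;

            for (Unit *p = u->parent; p; p = p->parent)
                    unit_add_to_cgroup_realize_queue(p);
    }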
Services invoked by PID1 have $INVOCATION_ID initialized, hence let's do
that for scope units (where the payload process is invoked by us on the
client side) too, to minimize needless differences.
Fixes: #8082
I keep running "systemd-run -t /bin/bash" to quickly get a shell running
in service context. I suspect I am not the only one, hence let's add a
shortcut for it. While we are at it, let's make it smarter, and
automatically inherit the $SHELL of the invoking user as well as the
working directory, and let's imply --pty. --shell (or -S) is hence
equivalent to "-t -d $SHELL".
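For example, with $SHELL set to /bin/bash, the new shortcut
$ systemd-run -S
behaves like
$ systemd-run -t -d /bin/bash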
I find myself testing service management quite often with "systemd-run
-t /bin/bash". For that it is handy if the invoked shell would use the
working directory I am currently in. Hence introduce a shorthand for
that:
$ systemd-run -dt /bin/bash
This will automatically insert a WorkingDirectory= property into the
transient service, pointing to the working directory of the caller.
Let's tweak when precisely to apply cgroup attributes on the root
cgroup.
With this we now follow these rules (see the sketch after the list):
1. On cgroupsv2 we never apply any regular cgroup attributes to the host
root cgroup, since the attributes generally do not exist there.
2. On cgroupsv1 we do not apply any "weight" or "shares" style
attributes to the host root cgroup, since they don't make much sense
on the top level where there's only one group, hence no need to
compare weights against each other. The other attributes are applied
to the host root cgroup however.
3. In any case we don't apply attributes to the root of container
environments (and --user roots), under the assumption that this is
managed by the manager further up. (Note that on cgroupsv2 this is
even enforced by the kernel)
4. BPF pseudo-attributes are applied in all cases (since we can have as
many of them as we want)
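A rough, self-contained sketch of the resulting decision logic; the predicate helpers are stand-ins assumed for illustration, not actual systemd functions:

    #include <stdbool.h>

    /* Stand-in predicates, assumed for illustration only. */
    bool is_host_root_cgroup(void);   /* root cgroup of the host itself */
    bool is_local_root_cgroup(void);  /* root of our own subtree (container, --user) */
    bool all_unified(void);           /* running on pure cgroupsv2? */

    static bool should_apply_attribute(bool is_weight_or_shares, bool is_bpf) {
            if (is_bpf)
                    return true;                     /* rule 4: BPF pseudo-attributes always */

            if (is_local_root_cgroup() && !is_host_root_cgroup())
                    return false;                    /* rule 3: managed by the manager above us */

            if (is_host_root_cgroup()) {
                    if (all_unified())
                            return false;            /* rule 1: no such attributes on the v2 host root */
                    return !is_weight_or_shares;     /* rule 2: skip weight/shares on the v1 host root */
            }

            return true;                             /* everything else gets the attributes */
    }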
Let's emphasize that this function checks for the host root cgroup, i.e.
returns false for the root cgroup when we run in a container where
CLONE_NEWCGROUP is used. There has been some confusion around this
already, for example cgroup_context_apply() uses the function
incorrectly (which we'll fix in a later commit).
Just some refactoring, no change in behaviour.