When networkd detects a wlan interface, the interface may not yet be
connected to any access point and may enter the unmanaged state.
Previously, networkd did not reconfigure the interface after it connected
to an access point. This fixes that.
This moves the backing store to a separate tmpfs that we can nicely put
a size limit on, to make sure we can test maximization sanely: if we ask
for the home dir to be grown really large, it should effectively only be
grown up to the size of the backing tmpfs.
(While we are at it, also set a cheaper KDF so that we don't waste CI
cycles on hashing passwords that aren't secure anyway.)
Being able to invoke the call twice on the same HomeSetup object will
simplify auto-growing/auto-shrinking, since we can issue a resize
operation directly from activate/deactivate.
This will be useful when we want to issue a resize operation right when
activating, where the HomeSetup object should be destroyed only after
both operations are done.
A little cleanup to make the next change easier. We're not moving to a
new Entry object in the for loop, so there's no danger of the Entry
object window changing.
Similar to sd_journal_next(), if trying to access the data at an entry
item's offset results in EBADMSG, skip to the next entry item so that
we handle corruption better.
Fixes #21407
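The change itself is internal to sd-journal; purely as a sketch of the same "skip on EBADMSG" idea at the consumer level (here skipping the whole affected entry rather than just one item), assuming a readable local journal:
```
/* Hedged illustration, not the code from this commit: keep iterating
 * the journal even when a corrupted entry yields -EBADMSG. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <systemd/sd-journal.h>

int main(void) {
        sd_journal *j = NULL;
        int r;

        r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
        if (r < 0) {
                fprintf(stderr, "Failed to open journal: %s\n", strerror(-r));
                return 1;
        }

        while ((r = sd_journal_next(j)) > 0) {
                const void *data;
                size_t length;

                /* Enumerate the fields of the current entry. */
                while ((r = sd_journal_enumerate_data(j, &data, &length)) > 0)
                        printf("%.*s\n", (int) length, (const char *) data);

                if (r == -EBADMSG) {
                        /* Corrupted entry item: skip this entry, keep going. */
                        fprintf(stderr, "Skipping corrupted entry\n");
                        continue;
                }
                if (r < 0)
                        break;
        }

        sd_journal_close(j);
        return r < 0 ? 1 : 0;
}
```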
A message that contains no IFLA_PROP_LIST attribute does not mean the
interface has no alternative names.
E.g., the message created by inet6_fill_ifinfo() in net/ipv6/addrconf.c
does not contain IFLA_PROP_LIST.
Since the GNU `diff` utility uses grep-style regular expressions[0], i.e.
the BRE syntax, we need to tweak the regex to make it work properly
(most notably, in BRE the metacharacters need to be escaped).
```
$ diff a b
21c21
< Volume Key: 256bit
---
> Volume Key: 257bit
25c25
< Disk Ceiling: 323.2M
---
> Disk Ceiling: 323.1M
$ diff -I '^\s*Disk (Size|Free|Floor|Ceiling):' a b
21c21
< Volume Key: 256bit
---
> Volume Key: 257bit
25c25
< Disk Ceiling: 323.2M
---
> Disk Ceiling: 323.1M
$ diff -I '^\s*Disk \(Size\|Free\|Floor\|Ceiling\):' a b && echo OK
21c21
< Volume Key: 256bit
---
> Volume Key: 257bit
```
Caught in one of the nightly CentOS CI cron jobs.
[0] https://www.gnu.org/software/diffutils/manual/html_node/Specified-Lines.html
Previously, we discarded any kmsg messages coming from journald
itself to avoid infinite loops where the processing of a kmsg
message could cause journald to log one or more messages to
kmsg, which would then get read again by the kmsg handler, and so on.
However, if we completely disable logging whenever we're processing
a kmsg message coming from journald itself, we also prevent any
infinite loops, since we can be sure that journald won't accidentally
generate new log messages while processing a kmsg log message.
This change allows us to store all journald logs generated during
the processing of log messages from other services in the system
journal. Previously these could only be found in kmsg, which has
low retention, can't be queried using journalctl, and doesn't
persist across reboots.
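As a rough, hypothetical illustration of that idea (names and structure are invented, not journald's code), the loop-avoidance boils down to a re-entrancy guard around journald's own logging:
```
#include <stdbool.h>
#include <stdio.h>

/* Sketch only: while a kmsg message is being processed, journald's own
 * logging is suppressed, so processing can never generate new kmsg
 * messages that would be read back by the very same handler. */

static bool processing_kmsg = false;

/* stands in for journald's internal logging, which may end up in kmsg */
static void log_internal(const char *text) {
        if (processing_kmsg)
                return; /* suppressed: would otherwise risk a feedback loop */
        fprintf(stderr, "journald: %s\n", text);
}

static void process_kmsg_message(const char *text) {
        processing_kmsg = true;

        /* ... store the kmsg message in the journal; anything logged from
         * here cannot loop back into kmsg ... */
        printf("stored: %s\n", text);
        log_internal("this message is dropped while processing kmsg");

        processing_kmsg = false;
}

int main(void) {
        process_kmsg_message("some kernel log line");
        log_internal("logging works again outside of kmsg processing");
        return 0;
}
```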
meson-0.59.4-1.fc35.noarch says:
WARNING: You should add the boolean check kwarg to the run_command call.
It currently defaults to false,
but it will default to true in future releases of meson.
See also: https://github.com/mesonbuild/meson/issues/9300
We were already asserting that the intmax_t and uintmax_t types
are the same as int64_t and uint64_t. Pretty much everywhere in
the code base we use the latter types. In principle intmax_t could
be something different on some new architecture, and then the code would
fail to compile or behave differently. We actually do not want the code
to behave differently on those architectures, because that'd break
interoperability. So let's just use int64_t/uint64_t since that's what
we intend to use.
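Not the actual code from the tree, but the pattern looks roughly like this: a compile-time check that the maximum-width types really are 64 bit, and fixed-width types with their format macros everywhere else.
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustration only: assert the assumption once, then use the
 * fixed-width types and format macros throughout. */
_Static_assert(sizeof(intmax_t) == sizeof(int64_t), "intmax_t is not 64 bit");
_Static_assert(sizeof(uintmax_t) == sizeof(uint64_t), "uintmax_t is not 64 bit");

int main(void) {
        int64_t x = -42;
        uint64_t y = 42;

        /* PRIi64/PRIu64 keep format strings identical on all architectures */
        printf("%" PRIi64 " %" PRIu64 "\n", x, y);
        return 0;
}
```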
It seems that the implementation of long double on ppc64el doesn't really work:
a long double cast to an integer and back compares unequal to the original value.
Strangely, this happens even without optimization and with both gcc and clang,
so it seems to be an effect of how long double is implemented by the architecture.
Dumping the values shows the following pattern:
00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00 # long double v = 10;
00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39 # (long double)(intmax_t) v
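A minimal stand-alone reproducer of this round trip might look like the sketch below (illustrative only; on the double-double long double ABI used on ppc64el the comparison reportedly fails, while on most other architectures it prints "yes"):
```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
        long double v = 10;
        long double w = (long double) (intmax_t) v;

        printf("equal: %s\n", v == w ? "yes" : "no");

        /* dump the raw bytes, as in the pattern shown above */
        unsigned char b[sizeof w];
        memcpy(b, &w, sizeof w);
        for (size_t i = 0; i < sizeof w; i++)
                printf("%02x ", b[i]);
        putchar('\n');

        return 0;
}
```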
Instead of trying to make this work, I think it's most reasonable to switch to
normal doubles. Notably, we had no tests for floating-point behaviour. The
first test we added (for values that are not even outside the range of double)
showed failures.
Common implementations of JSON (in particular JavaScript) use 64-bit doubles.
If we stick to this, users are likely to be happy when they exchange data with
those tools. Exporting values that cannot be represented in other tools would
just cause interop problems.
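To make the interop limit concrete, here is a small illustrative example (not systemd code): an IEEE 754 double has a 53-bit significand, so integers above 2^53 can silently collapse to the same value once they pass through a double-based JSON implementation such as JavaScript's.
```
#include <stdint.h>
#include <stdio.h>

int main(void) {
        uint64_t a = UINT64_C(1) << 53;   /* 9007199254740992 */
        uint64_t b = a + 1;               /* 9007199254740993 */

        printf("distinct as integers: %s\n", a != b ? "yes" : "no");
        /* both convert to the same double, so this prints "no" */
        printf("distinct as doubles:  %s\n",
               (double) a != (double) b ? "yes" : "no");
        return 0;
}
```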
I don't think the extra precision would be much used. Long double seems to make
most sense as a transient format used in calculations to get extra precision in
operations, and not a storage or exchange format. So I expect low-level
numerical routines that have to know about hardware to make use of it, but it
shouldn't be used by our (higher-level) system library. In particular, we would
have to add tests for implementations conforming to IEEE 754, and those that
don't conform, and account for various implementation differences. It just
doesn't seem worth the effort.
https://en.wikipedia.org/wiki/Long_double#Implementations shows that the
situation is "complicated":
> On the x86 architecture, most C compilers implement long double as the 80-bit
> extended precision type supported by x86 hardware. An exception is Microsoft
> Visual C++ for x86, which makes long double a synonym for double. The Intel
> C++ compiler on Microsoft Windows supports extended precision, but requires
> the /Qlong‑double switch for long double to correspond to the hardware's
> extended precision format.
> Compilers may also use long double for the IEEE 754 quadruple-precision
> binary floating-point format (binary128). This is the case on HP-UX,
> Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on
> operating systems using the standard AAPCS calling conventions, such as
> Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but
> some processors have hardware support.
> On some PowerPC and SPARCv9 machines, long double is implemented as a
> double-double arithmetic, where a long double value is regarded as the exact
> sum of two double-precision values, giving at least a 106-bit precision; with
> such a format, the long double type does not conform to the IEEE
> floating-point standard. Otherwise, long double is simply a synonym for
> double (double precision), e.g. on 32-bit ARM, 64-bit ARM (AArch64) (on
> Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).
> With the GNU C Compiler, long double is 80-bit extended precision on x86
> processors regardless of the physical storage used for the type (which can be
> either 96 or 128 bits). On some other architectures, long double can be
> double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on
> SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as
> the nonstandard type __float128 rather than long double.
> Although the x86 architecture, and specifically the x87 floating-point
> instructions on x86, supports 80-bit extended-precision operations, it is
> possible to configure the processor to automatically round operations to
> double (or even single) precision. Conversely, in extended-precision mode,
> extended precision may be used for intermediate compiler-generated
> calculations even when the final results are stored at a lower precision
> (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision is
> the default; on several BSD operating systems (FreeBSD and OpenBSD),
> double-precision mode is the default, and long double operations are
> effectively reduced to double precision. (NetBSD 7.0 and later, however,
> defaults to 80-bit extended precision). However, it is possible to override
> this within an individual program via the FLDCW "floating-point load
> control-word" instruction. On x86_64, the BSDs default to 80-bit extended
> precision. Microsoft Windows with Visual C++ also sets the processor in
> double-precision mode by default, but this can again be overridden within an
> individual program (e.g. by the _controlfp_s function in Visual C++). The
> Intel C++ Compiler for x86, on the other hand, enables extended-precision
> mode by default. On IA-32 OS X, long double is 80-bit extended precision.
So, in short, the only thing that can be said is that nothing can be said. In
common scenarios, we are getting only a bit of extra precision (80 bits instead
of 64), while wasting space on padding. In other scenarios we are getting no extra
precision. And the variance in implementations is a big issue: we can expect
strange differences in behaviour between architectures, systems, compiler
versions, compilation options, and even the other things that the program is
doing.
Fixes #21390.
When user and network namespaces are enabled, the kernel
makes the global keys read-only and already makes the
namespaced ones available to the guest.
Follow-up for af493fb742.
The kernel sends the FRA_SUPPRESS_IFGROUP attribute with the value -1,
which must be handled by networkd.
For FRA_SUPPRESS_PREFIXLEN, we already handled -1, but ignored values
larger than 128. We should not configure rules with such a meaningless
value, but we should still manage such rules when they are received from
the kernel. They can occur when mistakenly created by other tools. If
networkd ignores them, it cannot remove them either.
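A hypothetical sketch of that interpretation (not the actual networkd code): (uint32_t) -1 means the attribute is unset, and prefix lengths larger than 128, while meaningless, are still remembered so that a rule created that way by another tool can later be matched and removed.
```
#include <stdint.h>
#include <stdio.h>

typedef struct Rule {
        int32_t suppress_prefixlen; /* -1 if unset */
} Rule;

static void rule_set_suppress_prefixlen(Rule *rule, uint32_t v) {
        if (v == UINT32_MAX) {
                rule->suppress_prefixlen = -1; /* kernel sent -1: unset */
                return;
        }

        /* Values > 128 are never configured by us, but are kept so that
         * the rule can still be identified and removed. */
        rule->suppress_prefixlen = (int32_t) v;
}

int main(void) {
        Rule r;

        rule_set_suppress_prefixlen(&r, UINT32_MAX);
        printf("unset -> %d\n", r.suppress_prefixlen);

        rule_set_suppress_prefixlen(&r, 200); /* bogus, but tracked */
        printf("200   -> %d\n", r.suppress_prefixlen);

        return 0;
}
```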