<?xml version='1.0'?> <!--*-nxml-*-->
<!DOCTYPE refentry PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
  "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
<!-- SPDX-License-Identifier: LGPL-2.1-or-later -->

<refentry id="bootup">

  <refentryinfo>
    <title>bootup</title>
    <productname>systemd</productname>
  </refentryinfo>

  <refmeta>
    <refentrytitle>bootup</refentrytitle>
    <manvolnum>7</manvolnum>
  </refmeta>

  <refnamediv>
    <refname>bootup</refname>
    <refpurpose>System bootup process</refpurpose>
  </refnamediv>

  <refsect1>
    <title>Description</title>

    <para>A number of different components are involved in the boot of a Linux system. Immediately after
    power-up, the system firmware will do minimal hardware initialization, and hand control over to a boot
    loader (e.g.
    <citerefentry><refentrytitle>systemd-boot</refentrytitle><manvolnum>7</manvolnum></citerefentry> or
    <ulink url="https://www.gnu.org/software/grub/">GRUB</ulink>) stored on a persistent storage device. This
    boot loader will then invoke an OS kernel from disk (or the network). On systems using EFI or other types
    of firmware, this firmware may also load the kernel directly.</para>

    <para>The kernel (optionally) mounts an in-memory file system, often generated by <citerefentry
    project='man-pages'><refentrytitle>dracut</refentrytitle><manvolnum>8</manvolnum></citerefentry>, which
    looks for the root file system. Nowadays this is implemented as an "initramfs" — a compressed CPIO
    archive that the kernel extracts into a tmpfs. In the past, normal file systems using an in-memory block
    device (ramdisk) were used, and the name "initrd" is still used to describe both concepts. It is the boot
    loader or the firmware that loads both the kernel and initrd/initramfs images into memory, but it is the
    kernel that interprets the image as a file system.
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry> may be used
    to manage services in the initrd, similarly to the real system.</para>

    <para>After the root file system is found and mounted, the initrd hands over control to the host's
    system manager (such as
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>) stored in
    the root file system, which is then responsible for probing all remaining hardware, mounting all
    necessary file systems and spawning all configured services.</para>

    <para>On shutdown, the system manager stops all services, unmounts all file systems (detaching the
    storage technologies backing them), and then (optionally) jumps back into the initrd code which
    unmounts/detaches the root file system and the storage it resides on. As a last step, the system is
    powered down.</para>

    <para>Additional information about the system boot process may be found in
    <citerefentry project='man-pages'><refentrytitle>boot</refentrytitle><manvolnum>7</manvolnum></citerefentry>.</para>
  </refsect1>

  <refsect1>
    <title>System Manager Bootup</title>

    <para>At boot, the system manager on the OS image is responsible for initializing the required file
    systems, services and drivers that are necessary for operation of the system. On
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry> systems,
    this process is split up into various discrete steps which are exposed as target units. (See
    <citerefentry><refentrytitle>systemd.target</refentrytitle><manvolnum>5</manvolnum></citerefentry>
    for detailed information about target units.) The boot-up process is highly parallelized so that the
    order in which specific target units are reached is not deterministic, but still adheres to a limited
    amount of ordering structure.</para>

    <para>When systemd starts up the system, it will activate all units that are dependencies of
    <filename>default.target</filename> (as well as recursively all dependencies of these dependencies).
    Usually, <filename>default.target</filename> is simply an alias of
    <filename>graphical.target</filename> or <filename>multi-user.target</filename>, depending on whether
    the system is configured for a graphical UI or only for a text console. To enforce minimal ordering
    between the units pulled in, a number of well-known target units are available, as listed on
    <citerefentry><refentrytitle>systemd.special</refentrytitle><manvolnum>7</manvolnum></citerefentry>.</para>

    <para>The following chart is a structural overview of these well-known units and their position in the
    boot-up logic. The arrows describe which units are pulled in and ordered before which other units.
    Units near the top are started before units nearer to the bottom of the chart.</para>

    <!-- note: do not use unicode ellipsis here, because docbook will replace that
      with three dots anyway, messing up alignment -->

    <programlisting>                             cryptsetup-pre.target veritysetup-pre.target
                                                  |
(various low-level                                v
 API VFS mounts:                 (various cryptsetup/veritysetup devices...)
 mqueue, configfs,                                |    |
 debugfs, ...)                                    v    |
 |                                  cryptsetup.target  |
 |  (various swap                                 |    |    remote-fs-pre.target
 |   devices...)                                  |    |     |        |
 |    |                                           |    |     |        v
 |    v                       local-fs-pre.target |    |     |  (network file systems)
 |  swap.target                       |           |    v     v                 |
 |    |                               v           |  remote-cryptsetup.target  |
 |    |  (various low-level  (various mounts and  |  remote-veritysetup.target |
 |    |   services: udevd,    fsck services...)   |             |              |
 |    |   tmpfiles, random                        |             |              |  remote-fs.target
 |    |   seed, sysctl, ...)          v           |             |              |
 |    |      |                 local-fs.target    |             | _____________/
 |    |      |                        |           |             |/
 \____|______|_______________   ______|___________/             |
                             \ /                                |
                              v                                 |
                       sysinit.target                           |
                              |                                 |
        _____________________/|\_____________________           |
       /              |       |      |               \          |
       |              |       |      |               |          |
       v              v       |      v               |          |
  (various       (various     |  (various            |          |
   timers...)      paths...)  |   sockets...)        |          |
       |              |       |      |               |          |
       v              v       |      v               |          |
 timers.target  paths.target  |  sockets.target      |          |
       |              |       |      |               v          |
       v              \______ | _____/         rescue.service   |
                             \|/                     |          |
                              v                      v          |
                         basic.target          <emphasis>rescue.target</emphasis>    |
                              |                                 |
               ________v____________________                    |
              /              |              \                   |
              |              |              |                   |
              v              v              v                   |
          display-    (various system  (various system          |
      manager.service     services         services)            |
              |         required for       |                    |
              |        graphical UIs)      v                    v
              |              |        <emphasis>multi-user.target</emphasis>
     emergency.service       |             |               |
              |              \____________ | _____________/
              v                           \|/
      <emphasis>emergency.target</emphasis>                     v
                                     <emphasis>graphical.target</emphasis></programlisting>

    <para>Target units that are commonly used as boot targets are <emphasis>emphasized</emphasis>. These
    units are good choices as goal targets, for example by passing them to the
    <varname>systemd.unit=</varname> kernel command line option (see
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>) or by
    symlinking <filename>default.target</filename> to them.</para>
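
    <para>The symlink method boils down to a single link in the file system. The sketch below reproduces
    the layout in a throwaway directory instead of a real system; the scratch root and the chosen target
    are illustrative only (on a real system, <command>systemctl set-default</command> manages this link
    under <filename>/etc/systemd/system/</filename>):</para>

```shell
# Emulate selecting the goal target by symlinking default.target.
root=$(mktemp -d)
mkdir -p "$root/etc/systemd/system"
ln -s /usr/lib/systemd/system/multi-user.target \
    "$root/etc/systemd/system/default.target"

# The goal target is now whatever the symlink points at.
readlink "$root/etc/systemd/system/default.target"
```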

    <para><filename>timers.target</filename> is pulled in by <filename>basic.target</filename>
    asynchronously. This allows timer units to depend on services which only become available later in
    boot.</para>
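
    <para>For example, a timer unit participates in this logic simply by being wanted by
    <filename>timers.target</filename>. The following unit is a minimal sketch; the unit name and
    schedule are made up for illustration:</para>

```ini
# example-cleanup.timer (hypothetical) — pulled in via timers.target
[Unit]
Description=Example daily cleanup timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

    <para>The matching <filename>example-cleanup.service</filename> is only started when the timer
    elapses, which may well be after boot has completed.</para>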
</refsect1>

  <refsect1>
    <title>User Manager Startup</title>

    <para>The system manager starts the <filename>user@<replaceable>uid</replaceable>.service</filename>
    unit for each user, which launches a separate unprivileged instance of <command>systemd</command> for
    each user — the user manager. Similarly to the system manager, the user manager starts units which are
    pulled in by <filename>default.target</filename>. The following chart is a structural overview of the
    well-known user units. For non-graphical sessions, <filename>default.target</filename> is used.
    Whenever the user logs into a graphical session, the login manager will start the
    <filename>graphical-session.target</filename> target that is used to pull in units required for the
    graphical session. A number of targets (shown on the right side) are started when specific hardware is
    available to the user.</para>
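
    <para>User units hook into this chart the same way system units hook into the system manager's: via
    their install target. A minimal sketch of a user service tied to the graphical session follows; the
    unit name and command are made up for illustration:</para>

```ini
# ~/.config/systemd/user/example-applet.service (hypothetical)
[Unit]
Description=Example session applet
PartOf=graphical-session.target
After=graphical-session-pre.target

[Service]
ExecStart=/usr/bin/true

[Install]
WantedBy=graphical-session.target
```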

    <programlisting>
   (various         (various          (various
    timers...)       paths...)         sockets...)    (sound devices)
        |                |                  |                |
        v                v                  v                v
  timers.target     paths.target      sockets.target    sound.target
        |                |                  |
        \______________ _|__________________/         (bluetooth devices)
                        \ /                                  |
                         v                                   v
                   basic.target                       bluetooth.target
                         |
              __________/ \_______                    (smartcard devices)
             /                    \                          |
             |                    |                          v
             |                    v                   smartcard.target
             v       graphical-session-pre.target
 (various user services)          |                      (printers)
            |                     v                          |
            |      (services for the graphical session)      v
            |                     |                     printer.target
            v                     v
     <emphasis>default.target</emphasis>    graphical-session.target</programlisting>
</refsect1>

  <refsect1>
    <title>Bootup in the initrd</title>

    <para>The initrd implementation can be set up using systemd as well. In this case, boot-up inside the
    initrd is structured as follows.</para>

    <para>systemd detects that it is run within an initrd by checking for the file
    <filename>/etc/initrd-release</filename>. The default target in the initrd is
    <filename>initrd.target</filename>. The bootup process is identical to the system manager bootup (see
    above) until it reaches <filename>basic.target</filename>. From there, systemd approaches the special
    target <filename>initrd.target</filename>. Before any file systems are mounted, it must be determined
    whether the system will resume from hibernation or proceed with normal boot. This is accomplished by
    <filename>systemd-hibernate-resume@.service</filename>, which must be finished before
    <filename>local-fs-pre.target</filename>, so no file systems can be mounted before the check is
    complete. When the root device becomes available, <filename>initrd-root-device.target</filename> is
    reached. If the root device can be mounted at <filename>/sysroot</filename>, the
    <filename>sysroot.mount</filename> unit becomes active and
    <filename>initrd-root-fs.target</filename> is reached. The service
    <filename>initrd-parse-etc.service</filename> scans <filename>/sysroot/etc/fstab</filename> for a
    possible <filename>/usr/</filename> mount point and additional entries marked with the
    <emphasis>x-initrd.mount</emphasis> option. All entries found are mounted below
    <filename>/sysroot</filename>, and <filename>initrd-fs.target</filename> is reached. The service
    <filename>initrd-cleanup.service</filename> isolates to
    <filename>initrd-switch-root.target</filename>, where cleanup services can run. As the very last
    step, <filename>initrd-switch-root.service</filename> is activated, which will cause the system to
    switch its root to <filename>/sysroot</filename>.</para>
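
    <para>The detection check described above is a plain file-existence test and can be sketched in
    shell; the scratch directory stands in for the initrd's root and is an assumption for illustration
    only:</para>

```shell
# systemd decides that it is running in an initrd by the presence of
# /etc/initrd-release (the initrd counterpart of /etc/os-release).
root=$(mktemp -d)
mkdir -p "$root/etc"
touch "$root/etc/initrd-release"

if [ -e "$root/etc/initrd-release" ]; then
    mode=initrd
else
    mode=host
fi
echo "$mode"
```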

    <programlisting>                         : (beginning identical to above)
                          :
                          v
                    basic.target
                          |                          emergency.service
      ___________________/|                                  |
     /                    |                                  v
     |         initrd-root-device.target             <emphasis>emergency.target</emphasis>
     |                    |
     |                    v
     |              sysroot.mount
     |                    |
     |                    v
     |         initrd-root-fs.target
     |                    |
     |                    v
     v         initrd-parse-etc.service
(custom initrd            |
 services...)             v
     |           (sysroot-usr.mount and
     |            various mounts marked
     |              with fstab option
     |             x-initrd.mount...)
     |                    |
     |                    v
     |             initrd-fs.target
     \___________________ |
                         \|
                          v
                    initrd.target
                          |
                          v
                 initrd-cleanup.service
                      isolates to
              initrd-switch-root.target
                          |
                          v
      ___________________/|
     /                    v
     |    initrd-udevadm-cleanup-db.service
     v                    |
(custom initrd            |
 services...)             |
     \___________________ |
                         \|
                          v
              initrd-switch-root.target
                          |
                          v
              initrd-switch-root.service
                          |
                          v
                 Transition to Host OS</programlisting>
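
    <para>An <filename>fstab</filename> entry that should already be mounted in the initrd (below
    <filename>/sysroot</filename>) is tagged with the option shown in the chart; the device and mount
    point here are made up for illustration:</para>

```text
# /sysroot/etc/fstab excerpt — x-initrd.mount marks this entry for the initrd
/dev/vda2  /var  ext4  defaults,x-initrd.mount  0  2
```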
</refsect1>

  <refsect1>
    <title>System Manager Shutdown</title>

    <para>System shutdown with systemd also consists of various target units with some minimal ordering
    structure applied:</para>

    <programlisting>                       (conflicts with   (conflicts with
                             all system        all file system
                             services)         mounts, swaps,
                                 |             cryptsetup/
                                 |             veritysetup
                                 |             devices, ...)
                                 |                  |
                                 v                  v
                          shutdown.target    umount.target
                                 |                  |
                                 \_______    ______/
                                        \  /
                                         v
                                (various low-level
                                     services)
                                         |
                                         v
                                   final.target
                                         |
            ___________________________/ \_________________
           /               |               |               \
           |               |               |               |
           v               |               |               |
 systemd-reboot.service    |               |               |
           |               v               |               |
           |  systemd-poweroff.service     |               |
           v               |               v               |
 <emphasis>reboot.target</emphasis>        |    systemd-halt.service      |
                           v               |               v
                <emphasis>poweroff.target</emphasis>        |     systemd-kexec.service
                                           v               |
                                  <emphasis>halt.target</emphasis>             |
                                                           v
                                                   <emphasis>kexec.target</emphasis></programlisting>

    <para>Commonly used system shutdown targets are <emphasis>emphasized</emphasis>.</para>

    <para>Note that
    <citerefentry><refentrytitle>systemd-halt.service</refentrytitle><manvolnum>8</manvolnum></citerefentry>,
    <filename>systemd-reboot.service</filename>, <filename>systemd-poweroff.service</filename> and
    <filename>systemd-kexec.service</filename> will transition the system and service manager (PID 1) into
    the second phase of system shutdown (implemented in the <filename>systemd-shutdown</filename> binary),
    which will unmount any remaining file systems, kill any remaining processes and release any other
    remaining resources, in a simple and robust fashion, without taking any service or unit concept into
    account anymore. At that point, regular applications and resources are generally terminated and
    released already; the second phase hence operates only as a safety net for everything that could not
    be stopped or released for some reason during the primary, unit-based shutdown phase described
    above.</para>
</refsect1>

  <refsect1>
    <title>See Also</title>
    <para>
      <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>,
      <citerefentry project='man-pages'><refentrytitle>boot</refentrytitle><manvolnum>7</manvolnum></citerefentry>,
      <citerefentry><refentrytitle>systemd.special</refentrytitle><manvolnum>7</manvolnum></citerefentry>,
      <citerefentry><refentrytitle>systemd.target</refentrytitle><manvolnum>5</manvolnum></citerefentry>,
      <citerefentry><refentrytitle>systemd-halt.service</refentrytitle><manvolnum>8</manvolnum></citerefentry>,
      <citerefentry project='man-pages'><refentrytitle>dracut</refentrytitle><manvolnum>8</manvolnum></citerefentry>
    </para>
  </refsect1>

</refentry>