perf tools fixes for v6.4: 2nd batch

 - Fix the BPF CO-RE naming convention used to check the availability of
   fields in 'union perf_mem_data_src' on the running kernel.

 - Remove the use of llvm-strip on BPF skel object files; it isn't needed,
   and this fixes a build breakage when the llvm package, which contains it
   in most distros, isn't installed.

 - Fix tools that use both evsel->{bpf_counter_list,bpf_filters} by removing
   them from a union.

 - Remove the extra "--" from the 'perf ftrace latency' --use-nsec option;
   previously it worked only when using the '-n' alternative.

 - Don't stop the build when binutils-devel and a C++ compiler aren't
   available to compile the alternative C++ demangle support code; disable
   that feature instead.

 - Sync the linux/in.h and coresight-pmu.h header copies with the kernel
   sources.

 - Fix the relative include path to cs-etm.h.
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQR2GiIUctdOfX2qHhGyPKLppCJ+JwUCZHY9egAKCRCyPKLppCJ+
 JzbBAQCv+i0j/+garWKCTSe33yztzQGS6jxu/YzQrbQO427DMwEA9fJgp/r2OQC5
 wMM5gng2fPaHe6Hs4cnPL/SzMxLC2gQ=
 =fHRR
 -----END PGP SIGNATURE-----

Merge tag 'perf-tools-fixes-for-v6.4-2-2023-05-30' into perf-tools-next

perf tools fixes for v6.4: 2nd batch

- Fix the BPF CO-RE naming convention used to check the availability of
  fields in 'union perf_mem_data_src' on the running kernel.

- Remove the use of llvm-strip on BPF skel object files; it isn't needed,
  and this fixes a build breakage when the llvm package, which contains it
  in most distros, isn't installed.

- Fix tools that use both evsel->{bpf_counter_list,bpf_filters} by removing
  them from a union.

- Remove the extra "--" from the 'perf ftrace latency' --use-nsec option;
  previously it worked only when using the '-n' alternative.

- Don't stop the build when binutils-devel and a C++ compiler aren't
  available to compile the alternative C++ demangle support code; disable
  that feature instead.

- Sync the linux/in.h and coresight-pmu.h header copies with the kernel
  sources.

- Fix the relative include path to cs-etm.h.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
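
As context for the first item above, a minimal sketch of how a BPF program
can use CO-RE to probe the running kernel for a field of
'union perf_mem_data_src'. This is not the actual tools/perf code: the
'___new' suffix is libbpf's naming convention for a local variant of a
kernel type, and 'mem_hops' is only an assumed example field.

  /* Illustrative sketch only -- not the tools/perf sources. */
  #include "vmlinux.h"
  #include <bpf/bpf_core_read.h>

  /* Local variant of the kernel union: libbpf ignores everything from the
   * "___" suffix onwards when matching against kernel BTF, so only the
   * field of interest needs to be declared here. */
  union perf_mem_data_src___new {
          __u64 val;
          struct {
                  __u64 mem_hops:3;       /* assumed field being probed */
          };
  };

  static inline int mem_hops_field_available(void)
  {
          /* Resolved against the running kernel's BTF at load time:
           * returns 1 if the field exists there, 0 otherwise. */
          return bpf_core_field_exists(union perf_mem_data_src___new,
                                       mem_hops);
  }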
Arnaldo Carvalho de Melo 2023-05-31 15:31:56 -03:00
commit d17ed982e4
854 changed files with 7961 additions and 4710 deletions


@ -364,6 +364,11 @@ Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org>
Nicolas Saenz Julienne <nsaenz@kernel.org> <nsaenzjulienne@suse.de>
Nicolas Saenz Julienne <nsaenz@kernel.org> <nsaenzjulienne@suse.com>
Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Nikolay Aleksandrov <razor@blackwall.org> <naleksan@redhat.com>
Nikolay Aleksandrov <razor@blackwall.org> <nikolay@redhat.com>
Nikolay Aleksandrov <razor@blackwall.org> <nikolay@cumulusnetworks.com>
Nikolay Aleksandrov <razor@blackwall.org> <nikolay@nvidia.com>
Nikolay Aleksandrov <razor@blackwall.org> <nikolay@isovalent.com>
Oleksandr Natalenko <oleksandr@natalenko.name> <oleksandr@redhat.com>
Oleksij Rempel <linux@rempel-privat.de> <bug-track@fisher-privat.net>
Oleksij Rempel <linux@rempel-privat.de> <external.Oleksij.Rempel@de.bosch.com>


@ -1706,6 +1706,10 @@ S: Panoramastrasse 18
S: D-69126 Heidelberg
S: Germany
N: Neil Horman
M: nhorman@tuxdriver.com
D: SCTP protocol maintainer.
N: Simon Horman
M: horms@verge.net.au
D: Renesas ARM/ARM64 SoC maintainer


@ -5,5 +5,5 @@ Changes
See https://wiki.samba.org/index.php/LinuxCIFSKernel for summary
information about fixes/improvements to CIFS/SMB2/SMB3 support (changes
to cifs.ko module) by kernel version (and cifs internal module version).
This may be easier to read than parsing the output of "git log fs/cifs"
by release.
This may be easier to read than parsing the output of
"git log fs/smb/client" by release.


@ -45,7 +45,7 @@ Installation instructions
If you have built the CIFS vfs as module (successfully) simply
type ``make modules_install`` (or if you prefer, manually copy the file to
the modules directory e.g. /lib/modules/2.4.10-4GB/kernel/fs/cifs/cifs.ko).
the modules directory e.g. /lib/modules/6.3.0-060300-generic/kernel/fs/smb/client/cifs.ko).
If you have built the CIFS vfs into the kernel itself, follow the instructions
for your distribution on how to install a new kernel (usually you
@ -66,15 +66,15 @@ If cifs is built as a module, then the size and number of network buffers
and maximum number of simultaneous requests to one server can be configured.
Changing these from their defaults is not recommended. By executing modinfo::
modinfo kernel/fs/cifs/cifs.ko
modinfo <path to cifs.ko>
on kernel/fs/cifs/cifs.ko the list of configuration changes that can be made
on kernel/fs/smb/client/cifs.ko the list of configuration changes that can be made
at module initialization time (by running insmod cifs.ko) can be seen.
Recommendations
===============
To improve security the SMB2.1 dialect or later (usually will get SMB3) is now
To improve security the SMB2.1 dialect or later (usually will get SMB3.1.1) is now
the new default. To use old dialects (e.g. to mount Windows XP) use "vers=1.0"
on mount (or vers=2.0 for Windows Vista). Note that the CIFS (vers=1.0) is
much older and less secure than the default dialect SMB3 which includes


@ -215,12 +215,14 @@ again.
reduce the compile time enormously, especially if you are running an
universal kernel from a commodity Linux distribution.
There is a catch: the make target 'localmodconfig' will disable kernel
features you have not directly or indirectly through some program utilized
since you booted the system. You can reduce or nearly eliminate that risk by
using tricks outlined in the reference section; for quick testing purposes
that risk is often negligible, but it is an aspect you want to keep in mind
in case your kernel behaves oddly.
There is a catch: 'localmodconfig' is likely to disable kernel features you
did not use since you booted your Linux -- like drivers for currently
disconnected peripherals or a virtualization software you have not used yet.
You can reduce or nearly eliminate that risk with tricks the reference
section outlines; but when building a kernel just for quick testing purposes
it is often negligible if such features are missing. But you should keep that
aspect in mind when using a kernel built with this make target, as it might
be the reason why something you only use occasionally stopped working.
[:ref:`details<configuration>`]
@ -271,6 +273,9 @@ again.
does nothing at all; in that case you have to manually install your kernel,
as outlined in the reference section.
If you are running a immutable Linux distribution, check its documentation
and the web to find out how to install your own kernel there.
[:ref:`details<install>`]
.. _another_sbs:
@ -291,29 +296,29 @@ again.
version you care about, as git otherwise might retrieve the entire commit
history::
git fetch --shallow-exclude=v6.1 origin
git fetch --shallow-exclude=v6.0 origin
If you modified the sources (for example by applying a patch), you now need
to discard those modifications; that's because git otherwise will not be able
to switch to the sources of another version due to potential conflicting
changes::
Now switch to the version you are interested in -- but be aware the command
used here will discard any modifications you performed, as they would
conflict with the sources you want to checkout::
git reset --hard
Now checkout the version you are interested in, as explained above::
git checkout --detach origin/master
git checkout --force --detach origin/master
At this point you might want to patch the sources again or set/modify a build
tag, as explained earlier; afterwards adjust the build configuration to the
new codebase and build your next kernel::
tag, as explained earlier. Afterwards adjust the build configuration to the
new codebase using olddefconfig, which will now adjust the configuration file
you prepared earlier using localmodconfig (~/linux/.config) for your next
kernel::
# reminder: if you want to apply patches, do it at this point
# reminder: you might want to update your build tag at this point
make olddefconfig
Now build your kernel::
make -j $(nproc --all)
Install the kernel as outlined above::
Afterwards install the kernel as outlined above::
command -v installkernel && sudo make modules_install install
@ -584,11 +589,11 @@ versions and individual commits at hand at any time::
curl -L \
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/clone.bundle \
-o linux-stable.git.bundle
git clone clone.bundle ~/linux/
git clone linux-stable.git.bundle ~/linux/
rm linux-stable.git.bundle
cd ~/linux/
git remote set-url origin
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
git remote set-url origin \
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
git fetch origin
git checkout --detach origin/master


@ -18,7 +18,6 @@ Block
kyber-iosched
null_blk
pr
request
stat
switching-sched
writeback_cache_control


@ -1,99 +0,0 @@
============================
struct request documentation
============================
Jens Axboe <jens.axboe@oracle.com> 27/05/02
.. FIXME:
No idea about what does mean - seems just some noise, so comment it
1.0
Index
2.0 Struct request members classification
2.1 struct request members explanation
3.0
2.0
Short explanation of request members
====================================
Classification flags:
= ====================
D driver member
B block layer member
I I/O scheduler member
= ====================
Unless an entry contains a D classification, a device driver must not access
this member. Some members may contain D classifications, but should only be
access through certain macros or functions (eg ->flags).
<linux/blkdev.h>
=============================== ======= =======================================
Member Flag Comment
=============================== ======= =======================================
struct list_head queuelist BI Organization on various internal
queues
``void *elevator_private`` I I/O scheduler private data
unsigned char cmd[16] D Driver can use this for setting up
a cdb before execution, see
blk_queue_prep_rq
unsigned long flags DBI Contains info about data direction,
request type, etc.
int rq_status D Request status bits
kdev_t rq_dev DBI Target device
int errors DB Error counts
sector_t sector DBI Target location
unsigned long hard_nr_sectors B Used to keep sector sane
unsigned long nr_sectors DBI Total number of sectors in request
unsigned long hard_nr_sectors B Used to keep nr_sectors sane
unsigned short nr_phys_segments DB Number of physical scatter gather
segments in a request
unsigned short nr_hw_segments DB Number of hardware scatter gather
segments in a request
unsigned int current_nr_sectors DB Number of sectors in first segment
of request
unsigned int hard_cur_sectors B Used to keep current_nr_sectors sane
int tag DB TCQ tag, if assigned
``void *special`` D Free to be used by driver
``char *buffer`` D Map of first segment, also see
section on bouncing SECTION
``struct completion *waiting`` D Can be used by driver to get signalled
on request completion
``struct bio *bio`` DBI First bio in request
``struct bio *biotail`` DBI Last bio in request
``struct request_queue *q`` DB Request queue this request belongs to
``struct request_list *rl`` B Request list this request came from
=============================== ======= =======================================


@ -1,8 +1,8 @@
.. SPDX-License-Identifier: GPL-2.0
=====
cdrom
=====
======
CD-ROM
======
.. toctree::
:maxdepth: 1


@ -32,7 +32,7 @@ properties:
maxItems: 1
iommus:
maxItems: 1
maxItems: 4
power-domains:
maxItems: 1


@ -82,6 +82,18 @@ properties:
Indicates if the DSI controller is driving a panel which needs
2 DSI links.
qcom,master-dsi:
type: boolean
description: |
Indicates if the DSI controller is the master DSI controller when
qcom,dual-dsi-mode enabled.
qcom,sync-dual-dsi:
type: boolean
description: |
Indicates if the DSI controller needs to sync the other DSI controller
with MIPI DCS commands when qcom,dual-dsi-mode enabled.
assigned-clocks:
minItems: 2
maxItems: 4


@ -49,6 +49,7 @@ properties:
properties:
data-lanes:
minItems: 1
maxItems: 2
required:


@ -21,11 +21,22 @@ properties:
st,can-primary:
description:
Primary and secondary mode of the bxCAN peripheral is only relevant
if the chip has two CAN peripherals. In that case they share some
of the required logic.
Primary mode of the bxCAN peripheral is only relevant if the chip has
two CAN peripherals in dual CAN configuration. In that case they share
some of the required logic.
Not to be used if the peripheral is in single CAN configuration.
To avoid misunderstandings, it should be noted that ST documentation
uses the terms master/slave instead of primary/secondary.
uses the terms master instead of primary.
type: boolean
st,can-secondary:
description:
Secondary mode of the bxCAN peripheral is only relevant if the chip
has two CAN peripherals in dual CAN configuration. In that case they
share some of the required logic.
Not to be used if the peripheral is in single CAN configuration.
To avoid misunderstandings, it should be noted that ST documentation
uses the terms slave instead of secondary.
type: boolean
reg:


@ -17,20 +17,11 @@ description:
properties:
clocks:
minItems: 3
items:
- description: PCIe bridge clock.
- description: PCIe bus clock.
- description: PCIe PHY clock.
- description: Additional required clock entry for imx6sx-pcie,
imx6sx-pcie-ep, imx8mq-pcie, imx8mq-pcie-ep.
maxItems: 4
clock-names:
minItems: 3
items:
- const: pcie
- const: pcie_bus
- enum: [ pcie_phy, pcie_aux ]
- enum: [ pcie_inbound_axi, pcie_aux ]
maxItems: 4
num-lanes:
const: 1


@ -31,6 +31,19 @@ properties:
- const: dbi
- const: addr_space
clocks:
minItems: 3
items:
- description: PCIe bridge clock.
- description: PCIe bus clock.
- description: PCIe PHY clock.
- description: Additional required clock entry for imx6sx-pcie,
imx6sx-pcie-ep, imx8mq-pcie, imx8mq-pcie-ep.
clock-names:
minItems: 3
maxItems: 4
interrupts:
items:
- description: builtin eDMA interrupter.
@ -49,6 +62,31 @@ required:
allOf:
- $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
- $ref: /schemas/pci/fsl,imx6q-pcie-common.yaml#
- if:
properties:
compatible:
enum:
- fsl,imx8mq-pcie-ep
then:
properties:
clocks:
minItems: 4
clock-names:
items:
- const: pcie
- const: pcie_bus
- const: pcie_phy
- const: pcie_aux
else:
properties:
clocks:
maxItems: 3
clock-names:
items:
- const: pcie
- const: pcie_bus
- const: pcie_aux
unevaluatedProperties: false


@ -40,6 +40,19 @@ properties:
- const: dbi
- const: config
clocks:
minItems: 3
items:
- description: PCIe bridge clock.
- description: PCIe bus clock.
- description: PCIe PHY clock.
- description: Additional required clock entry for imx6sx-pcie,
imx6sx-pcie-ep, imx8mq-pcie, imx8mq-pcie-ep.
clock-names:
minItems: 3
maxItems: 4
interrupts:
items:
- description: builtin MSI controller.
@ -77,6 +90,70 @@ required:
allOf:
- $ref: /schemas/pci/snps,dw-pcie.yaml#
- $ref: /schemas/pci/fsl,imx6q-pcie-common.yaml#
- if:
properties:
compatible:
enum:
- fsl,imx6sx-pcie
then:
properties:
clocks:
minItems: 4
clock-names:
items:
- const: pcie
- const: pcie_bus
- const: pcie_phy
- const: pcie_inbound_axi
- if:
properties:
compatible:
enum:
- fsl,imx8mq-pcie
then:
properties:
clocks:
minItems: 4
clock-names:
items:
- const: pcie
- const: pcie_bus
- const: pcie_phy
- const: pcie_aux
- if:
properties:
compatible:
enum:
- fsl,imx6q-pcie
- fsl,imx6qp-pcie
- fsl,imx7d-pcie
then:
properties:
clocks:
maxItems: 3
clock-names:
items:
- const: pcie
- const: pcie_bus
- const: pcie_phy
- if:
properties:
compatible:
enum:
- fsl,imx8mm-pcie
- fsl,imx8mp-pcie
then:
properties:
clocks:
maxItems: 3
clock-names:
items:
- const: pcie
- const: pcie_bus
- const: pcie_aux
unevaluatedProperties: false


@ -55,7 +55,9 @@ properties:
description: TDM TX current sense time slot.
'#sound-dai-cells':
const: 1
# The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
# compatibility but is deprecated.
enum: [0, 1]
required:
- compatible
@ -72,7 +74,7 @@ examples:
codec: codec@4c {
compatible = "ti,tas2562";
reg = <0x4c>;
#sound-dai-cells = <1>;
#sound-dai-cells = <0>;
interrupt-parent = <&gpio1>;
interrupts = <14>;
shutdown-gpios = <&gpio1 15 0>;


@ -57,7 +57,9 @@ properties:
- 1 # Falling edge
'#sound-dai-cells':
const: 1
# The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
# compatibility but is deprecated.
enum: [0, 1]
required:
- compatible
@ -74,7 +76,7 @@ examples:
codec: codec@41 {
compatible = "ti,tas2770";
reg = <0x41>;
#sound-dai-cells = <1>;
#sound-dai-cells = <0>;
interrupt-parent = <&gpio1>;
interrupts = <14>;
reset-gpio = <&gpio1 15 0>;


@ -50,7 +50,9 @@ properties:
description: TDM TX voltage sense time slot.
'#sound-dai-cells':
const: 1
# The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
# compatibility but is deprecated.
enum: [0, 1]
required:
- compatible
@ -67,7 +69,7 @@ examples:
codec: codec@38 {
compatible = "ti,tas2764";
reg = <0x38>;
#sound-dai-cells = <1>;
#sound-dai-cells = <0>;
interrupt-parent = <&gpio1>;
interrupts = <14>;
reset-gpios = <&gpio1 15 0>;


@ -8,7 +8,7 @@ Required properties:
"ti,tlv320aic32x6" TLV320AIC3206, TLV320AIC3256
"ti,tas2505" TAS2505, TAS2521
- reg: I2C slave address
- supply-*: Required supply regulators are:
- *-supply: Required supply regulators are:
"iov" - digital IO power supply
"ldoin" - LDO power supply
"dv" - Digital core power supply


@ -72,7 +72,6 @@ Documentation for filesystem implementations.
befs
bfs
btrfs
cifs/index
ceph
coda
configfs
@ -111,6 +110,7 @@ Documentation for filesystem implementations.
ramfs-rootfs-initramfs
relay
romfs
smb/index
spufs/index
squashfs
sysfs


@ -6,8 +6,7 @@ Ramfs, rootfs and initramfs
October 17, 2005
Rob Landley <rob@landley.net>
=============================
:Author: Rob Landley <rob@landley.net>
What is ramfs?
--------------


@ -147,6 +147,7 @@ replicas continue to be exactly same.
3) Setting mount states
-----------------------
The mount command (util-linux package) can be used to set mount
states::
@ -612,6 +613,7 @@ replicas continue to be exactly same.
6) Quiz
-------
A. What is the result of the following command sequence?
@ -673,6 +675,7 @@ replicas continue to be exactly same.
/mnt/1/test be?
7) FAQ
------
Q1. Why is bind mount needed? How is it different from symbolic links?
symbolic links can get stale if the destination mount gets
@ -841,6 +844,7 @@ replicas continue to be exactly same.
tmp usr tmp usr tmp usr
8) Implementation
-----------------
8A) Datastructure


@ -59,7 +59,7 @@ the root file system via SMB protocol.
Enables the kernel to mount the root file system via SMB that are
located in the <server-ip> and <share> specified in this option.
The default mount options are set in fs/cifs/cifsroot.c.
The default mount options are set in fs/smb/client/cifsroot.c.
server-ip
IPv4 address of the server.


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
====
fpga
FPGA
====
.. toctree::


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
=======
locking
Locking
=======
.. toctree::


@ -68,6 +68,9 @@ attribute-sets:
type: nest
nested-attributes: x509
multi-attr: true
-
name: peername
type: string
-
name: done
attributes:
@ -105,6 +108,7 @@ operations:
- auth-mode
- peer-identity
- certificate
- peername
-
name: done
doc: Handler reports handshake completion


@ -776,10 +776,11 @@ peer_notif_delay
Specify the delay, in milliseconds, between each peer
notification (gratuitous ARP and unsolicited IPv6 Neighbor
Advertisement) when they are issued after a failover event.
This delay should be a multiple of the link monitor interval
(arp_interval or miimon, whichever is active). The default
value is 0 which means to match the value of the link monitor
interval.
This delay should be a multiple of the MII link monitor interval
(miimon).
The valid range is 0 - 300000. The default value is 0, which means
to match the value of the MII link monitor interval.
prio
Slave priority. A higher number means higher priority.


@ -116,8 +116,8 @@ Contents:
udplite
vrf
vxlan
x25-iface
x25
x25-iface
xfrm_device
xfrm_proc
xfrm_sync


@ -53,6 +53,7 @@ fills in a structure that contains the parameters of the request:
struct socket *ta_sock;
tls_done_func_t ta_done;
void *ta_data;
const char *ta_peername;
unsigned int ta_timeout_ms;
key_serial_t ta_keyring;
key_serial_t ta_my_cert;
@ -71,6 +72,10 @@ instantiated a struct file in sock->file.
has completed. Further explanation of this function is in the "Handshake
Completion" sesction below.
The consumer can provide a NUL-terminated hostname in the @ta_peername
field that is sent as part of ClientHello. If no peername is provided,
the DNS hostname associated with the server's IP address is used instead.
The consumer can fill in the @ta_timeout_ms field to force the servicing
handshake agent to exit after a number of milliseconds. This enables the
socket to be fully closed once both the kernel and the handshake agent


@ -1,8 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
============================-
X.25 Device Driver Interface
============================-
============================
Version 1.1


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
======
pcmcia
PCMCIA
======
.. toctree::


@ -127,13 +127,32 @@ the value of ``Message-ID`` to the URL above.
Updating patch status
~~~~~~~~~~~~~~~~~~~~~
It may be tempting to help the maintainers and update the state of your
own patches when you post a new version or spot a bug. Please **do not**
do that.
Interfering with the patch status on patchwork will only cause confusion. Leave
it to the maintainer to figure out what is the most recent and current
version that should be applied. If there is any doubt, the maintainer
will reply and ask what should be done.
Contributors and reviewers do not have the permissions to update patch
state directly in patchwork. Patchwork doesn't expose much information
about the history of the state of patches, therefore having multiple
people update the state leads to confusion.
Instead of delegating patchwork permissions netdev uses a simple mail
bot which looks for special commands/lines within the emails sent to
the mailing list. For example to mark a series as Changes Requested
one needs to send the following line anywhere in the email thread::
pw-bot: changes-requested
As a result the bot will set the entire series to Changes Requested.
This may be useful when author discovers a bug in their own series
and wants to prevent it from getting applied.
The use of the bot is entirely optional, if in doubt ignore its existence
completely. Maintainers will classify and update the state of the patches
themselves. No email should ever be sent to the list with the main purpose
of communicating with the bot, the bot commands should be seen as metadata.
The use of the bot is restricted to authors of the patches (the ``From:``
header on patch submission and command must match!), maintainers themselves
and a handful of senior reviewers. Bot records its activity here:
https://patchwork.hopto.org/pw-bot.html
Review timelines
~~~~~~~~~~~~~~~~


@ -551,7 +551,6 @@ These are the steps:
* IOMMU_SUPPORT
* S390
* ZCRYPT
* S390_AP_IOMMU
* VFIO
* KVM


@ -1,5 +1,5 @@
=================================
brief tutorial on CRC computation
Brief tutorial on CRC computation
=================================
A CRC is a long-division remainder. You add the CRC to the message,


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
======
timers
Timers
======
.. toctree::


@ -363,7 +363,7 @@ Code Seq# Include File Comments
0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver
0xCD 01 linux/reiserfs_fs.h
0xCE 01-02 uapi/linux/cxl_mem.h Compute Express Link Memory Devices
0xCF 02 fs/cifs/ioctl.c
0xCF 02 fs/smb/client/cifs_ioctl.h
0xDB 00-0F drivers/char/mwave/mwavepub.h
0xDD 00-3F ZFCP device driver see drivers/s390/scsi/
<mailto:aherrman@de.ibm.com>


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 4
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc3
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -387,6 +387,7 @@
interrupt-names = "tx", "rx0", "rx1", "sce";
resets = <&rcc STM32F4_APB1_RESET(CAN2)>;
clocks = <&rcc 0 STM32F4_APB1_CLOCK(CAN2)>;
st,can-secondary;
st,gcan = <&gcan>;
status = "disabled";
};


@ -283,6 +283,88 @@
slew-rate = <2>;
};
};
can1_pins_a: can1-0 {
pins1 {
pinmux = <STM32_PINMUX('A', 12, AF9)>; /* CAN1_TX */
};
pins2 {
pinmux = <STM32_PINMUX('A', 11, AF9)>; /* CAN1_RX */
bias-pull-up;
};
};
can1_pins_b: can1-1 {
pins1 {
pinmux = <STM32_PINMUX('B', 9, AF9)>; /* CAN1_TX */
};
pins2 {
pinmux = <STM32_PINMUX('B', 8, AF9)>; /* CAN1_RX */
bias-pull-up;
};
};
can1_pins_c: can1-2 {
pins1 {
pinmux = <STM32_PINMUX('D', 1, AF9)>; /* CAN1_TX */
};
pins2 {
pinmux = <STM32_PINMUX('D', 0, AF9)>; /* CAN1_RX */
bias-pull-up;
};
};
can1_pins_d: can1-3 {
pins1 {
pinmux = <STM32_PINMUX('H', 13, AF9)>; /* CAN1_TX */
};
pins2 {
pinmux = <STM32_PINMUX('H', 14, AF9)>; /* CAN1_RX */
bias-pull-up;
};
};
can2_pins_a: can2-0 {
pins1 {
pinmux = <STM32_PINMUX('B', 6, AF9)>; /* CAN2_TX */
};
pins2 {
pinmux = <STM32_PINMUX('B', 5, AF9)>; /* CAN2_RX */
bias-pull-up;
};
};
can2_pins_b: can2-1 {
pins1 {
pinmux = <STM32_PINMUX('B', 13, AF9)>; /* CAN2_TX */
};
pins2 {
pinmux = <STM32_PINMUX('B', 12, AF9)>; /* CAN2_RX */
bias-pull-up;
};
};
can3_pins_a: can3-0 {
pins1 {
pinmux = <STM32_PINMUX('A', 15, AF11)>; /* CAN3_TX */
};
pins2 {
pinmux = <STM32_PINMUX('A', 8, AF11)>; /* CAN3_RX */
bias-pull-up;
};
};
can3_pins_b: can3-1 {
pins1 {
pinmux = <STM32_PINMUX('B', 4, AF11)>; /* CAN3_TX */
};
pins2 {
pinmux = <STM32_PINMUX('B', 3, AF11)>; /* CAN3_RX */
bias-pull-up;
};
};
};
};
};


@ -92,7 +92,7 @@
#define RETURN_READ_PMEVCNTRN(n) \
return read_sysreg(PMEVCNTR##n)
static unsigned long read_pmevcntrn(int n)
static inline unsigned long read_pmevcntrn(int n)
{
PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
return 0;
@ -100,14 +100,14 @@ static unsigned long read_pmevcntrn(int n)
#define WRITE_PMEVCNTRN(n) \
write_sysreg(val, PMEVCNTR##n)
static void write_pmevcntrn(int n, unsigned long val)
static inline void write_pmevcntrn(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
}
#define WRITE_PMEVTYPERN(n) \
write_sysreg(val, PMEVTYPER##n)
static void write_pmevtypern(int n, unsigned long val)
static inline void write_pmevtypern(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
}


@ -308,6 +308,29 @@ static int unwind_exec_pop_subset_r0_to_r3(struct unwind_ctrl_block *ctrl,
return URC_OK;
}
static unsigned long unwind_decode_uleb128(struct unwind_ctrl_block *ctrl)
{
unsigned long bytes = 0;
unsigned long insn;
unsigned long result = 0;
/*
* unwind_get_byte() will advance `ctrl` one instruction at a time, so
* loop until we get an instruction byte where bit 7 is not set.
*
* Note: This decodes a maximum of 4 bytes to output 28 bits data where
* max is 0xfffffff: that will cover a vsp increment of 1073742336, hence
* it is sufficient for unwinding the stack.
*/
do {
insn = unwind_get_byte(ctrl);
result |= (insn & 0x7f) << (bytes * 7);
bytes++;
} while (!!(insn & 0x80) && (bytes != sizeof(result)));
return result;
}
/*
* Execute the current unwind instruction.
*/
@ -361,7 +384,7 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
if (ret)
goto error;
} else if (insn == 0xb2) {
unsigned long uleb128 = unwind_get_byte(ctrl);
unsigned long uleb128 = unwind_decode_uleb128(ctrl);
ctrl->vrs[SP] += 0x204 + (uleb128 << 2);
} else {


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
/**
/*
* arch/arm/mac-sa1100/jornada720_ssp.c
*
* Copyright (C) 2006/2007 Kristoffer Ericson <Kristoffer.Ericson@gmail.com>
@ -26,6 +26,7 @@ static unsigned long jornada_ssp_flags;
/**
* jornada_ssp_reverse - reverses input byte
* @byte: input byte to reverse
*
* we need to reverse all data we receive from the mcu due to its physical location
* returns : 01110111 -> 11101110
@ -46,6 +47,7 @@ EXPORT_SYMBOL(jornada_ssp_reverse);
/**
* jornada_ssp_byte - waits for ready ssp bus and sends byte
* @byte: input byte to transmit
*
* waits for fifo buffer to clear and then transmits, if it doesn't then we will
* timeout after <timeout> rounds. Needs mcu running before its called.
@ -77,6 +79,7 @@ EXPORT_SYMBOL(jornada_ssp_byte);
/**
* jornada_ssp_inout - decide if input is command or trading byte
* @byte: input byte to send (may be %TXDUMMY)
*
* returns : (jornada_ssp_byte(byte)) on success
* : %-ETIMEDOUT on timeout failure


@ -23,6 +23,9 @@
@
ENTRY(do_vfp)
mov r1, r10
mov r3, r9
b vfp_entry
str lr, [sp, #-8]!
add r3, sp, #4
str r9, [r3]
bl vfp_entry
ldr pc, [sp], #8
ENDPROC(do_vfp)


@ -172,13 +172,14 @@ vfp_hw_state_valid:
@ out before setting an FPEXC that
@ stops us reading stuff
VFPFMXR FPEXC, r1 @ Restore FPEXC last
mov sp, r3 @ we think we have handled things
pop {lr}
sub r2, r2, #4 @ Retry current instruction - if Thumb
str r2, [sp, #S_PC] @ mode it's two 16-bit instructions,
@ else it's one 32-bit instruction, so
@ always subtract 4 from the following
@ instruction address.
mov lr, r3 @ we think we have handled things
local_bh_enable_and_ret:
adr r0, .
mov r1, #SOFTIRQ_DISABLE_OFFSET
@ -209,8 +210,9 @@ skip:
process_exception:
DBGSTR "bounce"
mov sp, r3 @ setup for a return to the user code.
pop {lr}
mov r2, sp @ nothing stacked - regdump is at TOS
mov lr, r3 @ setup for a return to the user code.
@ Now call the C code to package up the bounce to the support code
@ r0 holds the trigger instruction


@ -13,7 +13,7 @@
#define RETURN_READ_PMEVCNTRN(n) \
return read_sysreg(pmevcntr##n##_el0)
static unsigned long read_pmevcntrn(int n)
static inline unsigned long read_pmevcntrn(int n)
{
PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
return 0;
@ -21,14 +21,14 @@ static unsigned long read_pmevcntrn(int n)
#define WRITE_PMEVCNTRN(n) \
write_sysreg(val, pmevcntr##n##_el0)
static void write_pmevcntrn(int n, unsigned long val)
static inline void write_pmevcntrn(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
}
#define WRITE_PMEVTYPERN(n) \
write_sysreg(val, pmevtyper##n##_el0)
static void write_pmevtypern(int n, unsigned long val)
static inline void write_pmevtypern(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
}


@ -126,6 +126,10 @@
#define APPLE_CPU_PART_M1_FIRESTORM_MAX 0x029
#define APPLE_CPU_PART_M2_BLIZZARD 0x032
#define APPLE_CPU_PART_M2_AVALANCHE 0x033
#define APPLE_CPU_PART_M2_BLIZZARD_PRO 0x034
#define APPLE_CPU_PART_M2_AVALANCHE_PRO 0x035
#define APPLE_CPU_PART_M2_BLIZZARD_MAX 0x038
#define APPLE_CPU_PART_M2_AVALANCHE_MAX 0x039
#define AMPERE_CPU_PART_AMPERE1 0xAC3
@ -181,6 +185,10 @@
#define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX)
#define MIDR_APPLE_M2_BLIZZARD MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD)
#define MIDR_APPLE_M2_AVALANCHE MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE)
#define MIDR_APPLE_M2_BLIZZARD_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_PRO)
#define MIDR_APPLE_M2_AVALANCHE_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_PRO)
#define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
#define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
/* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */


@ -209,6 +209,7 @@ struct kvm_pgtable_visit_ctx {
kvm_pte_t old;
void *arg;
struct kvm_pgtable_mm_ops *mm_ops;
u64 start;
u64 addr;
u64 end;
u32 level;


@ -66,13 +66,10 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
return;
/* if PG_mte_tagged is set, tags have already been initialised */
for (i = 0; i < nr_pages; i++, page++) {
if (!page_mte_tagged(page)) {
for (i = 0; i < nr_pages; i++, page++)
if (!page_mte_tagged(page))
mte_sync_page_tags(page, old_pte, check_swap,
pte_is_tagged);
set_page_mte_tagged(page);
}
}
/* ensure the tags are visible before the PTE is set */
smp_wmb();


@ -288,7 +288,7 @@ static int aarch32_alloc_kuser_vdso_page(void)
memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
kuser_sz);
aarch32_vectors_page = virt_to_page(vdso_page);
aarch32_vectors_page = virt_to_page((void *)vdso_page);
return 0;
}


@ -81,26 +81,34 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
fpsimd_kvm_prepare();
/*
* We will check TIF_FOREIGN_FPSTATE just before entering the
* guest in kvm_arch_vcpu_ctxflush_fp() and override this to
* FP_STATE_FREE if the flag set.
*/
vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
vcpu_set_flag(vcpu, HOST_SVE_ENABLED);
/*
* We don't currently support SME guests but if we leave
* things in streaming mode then when the guest starts running
* FPSIMD or SVE code it may generate SME traps so as a
* special case if we are in streaming mode we force the host
* state to be saved now and exit streaming mode so that we
* don't have to handle any SME traps for valid guest
* operations. Do this for ZA as well for now for simplicity.
*/
if (system_supports_sme()) {
vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
vcpu_set_flag(vcpu, HOST_SME_ENABLED);
/*
* If PSTATE.SM is enabled then save any pending FP
* state and disable PSTATE.SM. If we leave PSTATE.SM
* enabled and the guest does not enable SME via
* CPACR_EL1.SMEN then operations that should be valid
* may generate SME traps from EL1 to EL1 which we
* can't intercept and which would confuse the guest.
*
* Do the same for PSTATE.ZA in the case where there
* is state in the registers which has not already
* been saved, this is very unlikely to happen.
*/
if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
vcpu->arch.fp_state = FP_STATE_FREE;
fpsimd_save_and_flush_cpu_state();


@ -177,9 +177,17 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
sve_guest = vcpu_has_sve(vcpu);
esr_ec = kvm_vcpu_trap_get_class(vcpu);
/* Don't handle SVE traps for non-SVE vcpus here: */
if (!sve_guest && esr_ec != ESR_ELx_EC_FP_ASIMD)
/* Only handle traps the vCPU can support here: */
switch (esr_ec) {
case ESR_ELx_EC_FP_ASIMD:
break;
case ESR_ELx_EC_SVE:
if (!sve_guest)
return false;
break;
default:
return false;
}
/* Valid trap. Switch the context: */


@ -58,8 +58,9 @@
struct kvm_pgtable_walk_data {
struct kvm_pgtable_walker *walker;
const u64 start;
u64 addr;
u64 end;
const u64 end;
};
static bool kvm_phys_is_valid(u64 phys)
@ -201,6 +202,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
.old = READ_ONCE(*ptep),
.arg = data->walker->arg,
.mm_ops = mm_ops,
.start = data->start,
.addr = data->addr,
.end = data->end,
.level = level,
@ -293,6 +295,7 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
struct kvm_pgtable_walker *walker)
{
struct kvm_pgtable_walk_data walk_data = {
.start = ALIGN_DOWN(addr, PAGE_SIZE),
.addr = ALIGN_DOWN(addr, PAGE_SIZE),
.end = PAGE_ALIGN(walk_data.addr + size),
.walker = walker,
@ -349,7 +352,7 @@ int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
}
struct hyp_map_data {
u64 phys;
const u64 phys;
kvm_pte_t attr;
};
@ -407,13 +410,12 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte)
static bool hyp_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
struct hyp_map_data *data)
{
u64 phys = data->phys + (ctx->addr - ctx->start);
kvm_pte_t new;
u64 granule = kvm_granule_size(ctx->level), phys = data->phys;
if (!kvm_block_mapping_supported(ctx, phys))
return false;
data->phys += granule;
new = kvm_init_valid_leaf_pte(phys, data->attr, ctx->level);
if (ctx->old == new)
return true;
@ -576,7 +578,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
}
struct stage2_map_data {
u64 phys;
const u64 phys;
kvm_pte_t attr;
u8 owner_id;
@ -794,20 +796,43 @@ static bool stage2_pte_executable(kvm_pte_t pte)
return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
}
static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
const struct stage2_map_data *data)
{
u64 phys = data->phys;
/*
* Stage-2 walks to update ownership data are communicated to the map
* walker using an invalid PA. Avoid offsetting an already invalid PA,
* which could overflow and make the address valid again.
*/
if (!kvm_phys_is_valid(phys))
return phys;
/*
* Otherwise, work out the correct PA based on how far the walk has
* gotten.
*/
return phys + (ctx->addr - ctx->start);
}
static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
struct stage2_map_data *data)
{
u64 phys = stage2_map_walker_phys_addr(ctx, data);
if (data->force_pte && (ctx->level < (KVM_PGTABLE_MAX_LEVELS - 1)))
return false;
return kvm_block_mapping_supported(ctx, data->phys);
return kvm_block_mapping_supported(ctx, phys);
}
static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
struct stage2_map_data *data)
{
kvm_pte_t new;
u64 granule = kvm_granule_size(ctx->level), phys = data->phys;
u64 phys = stage2_map_walker_phys_addr(ctx, data);
u64 granule = kvm_granule_size(ctx->level);
struct kvm_pgtable *pgt = data->mmu->pgt;
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
@ -841,8 +866,6 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
stage2_make_pte(ctx, new);
if (kvm_phys_is_valid(phys))
data->phys += granule;
return 0;
}


@ -204,7 +204,7 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu)
* Size Fault at level 0, as if exceeding PARange.
*
* Non-LPAE guests will only get the external abort, as there
* is no way to to describe the ASF.
* is no way to describe the ASF.
*/
if (vcpu_el1_is_32bit(vcpu) &&
!(vcpu_read_sys_reg(vcpu, TCR_EL1) & TTBCR_EAE))


@ -616,6 +616,10 @@ static const struct midr_range broken_seis[] = {
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_FIRESTORM_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_MAX),
{},
};


@ -47,7 +47,7 @@ static void flush_context(void)
int cpu;
u64 vmid;
bitmap_clear(vmid_map, 0, NUM_USER_VMIDS);
bitmap_zero(vmid_map, NUM_USER_VMIDS);
for_each_possible_cpu(cpu) {
vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
@ -182,8 +182,7 @@ int __init kvm_arm_vmid_alloc_init(void)
*/
WARN_ON(NUM_USER_VMIDS - 1 <= num_possible_cpus());
atomic64_set(&vmid_generation, VMID_FIRST_VERSION);
vmid_map = kcalloc(BITS_TO_LONGS(NUM_USER_VMIDS),
sizeof(*vmid_map), GFP_KERNEL);
vmid_map = bitmap_zalloc(NUM_USER_VMIDS, GFP_KERNEL);
if (!vmid_map)
return -ENOMEM;
@ -192,5 +191,5 @@ int __init kvm_arm_vmid_alloc_init(void)
void __init kvm_arm_vmid_alloc_free(void)
{
kfree(vmid_map);
bitmap_free(vmid_map);
}


@ -21,9 +21,10 @@ void copy_highpage(struct page *to, struct page *from)
copy_page(kto, kfrom);
if (kasan_hw_tags_enabled())
page_kasan_tag_reset(to);
if (system_supports_mte() && page_mte_tagged(from)) {
if (kasan_hw_tags_enabled())
page_kasan_tag_reset(to);
/* It's a new page, shouldn't have been tagged yet */
WARN_ON_ONCE(!try_page_mte_tagging(to));
mte_copy_page_tags(kto, kfrom);


@ -480,8 +480,8 @@ static void do_bad_area(unsigned long far, unsigned long esr,
}
}
#define VM_FAULT_BADMAP 0x010000
#define VM_FAULT_BADACCESS 0x020000
#define VM_FAULT_BADMAP ((__force vm_fault_t)0x010000)
#define VM_FAULT_BADACCESS ((__force vm_fault_t)0x020000)
static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
unsigned int mm_flags, unsigned long vm_flags,


@ -858,11 +858,17 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
}
static inline void __user *
get_sigframe(struct ksignal *ksig, size_t frame_size)
get_sigframe(struct ksignal *ksig, struct pt_regs *tregs, size_t frame_size)
{
unsigned long usp = sigsp(rdusp(), ksig);
unsigned long gap = 0;
return (void __user *)((usp - frame_size) & -8UL);
if (CPU_IS_020_OR_030 && tregs->format == 0xb) {
/* USP is unreliable so use worst-case value */
gap = 256;
}
return (void __user *)((usp - gap - frame_size) & -8UL);
}
static int setup_frame(struct ksignal *ksig, sigset_t *set,
@ -880,7 +886,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
return -EFAULT;
}
frame = get_sigframe(ksig, sizeof(*frame) + fsize);
frame = get_sigframe(ksig, tregs, sizeof(*frame) + fsize);
if (fsize)
err |= copy_to_user (frame + 1, regs + 1, fsize);
@ -952,7 +958,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
return -EFAULT;
}
frame = get_sigframe(ksig, sizeof(*frame));
frame = get_sigframe(ksig, tregs, sizeof(*frame));
if (fsize)
err |= copy_to_user (&frame->uc.uc_extra, regs + 1, fsize);


@ -130,6 +130,10 @@ config PM
config STACKTRACE_SUPPORT
def_bool y
config LOCKDEP_SUPPORT
bool
default y
config ISA_DMA_API
bool


@ -1 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
#
config LIGHTWEIGHT_SPINLOCK_CHECK
bool "Enable lightweight spinlock checks"
depends on SMP && !DEBUG_SPINLOCK
default y
help
Add checks with low performance impact to the spinlock functions
to catch memory overwrites at runtime. For more advanced
spinlock debugging you should choose the DEBUG_SPINLOCK option
which will detect uninitialized spinlocks too.
If unsure say Y here.


@ -48,6 +48,10 @@ void flush_dcache_page(struct page *page);
#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages)
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages)
#define flush_dcache_mmap_lock_irqsave(mapping, flags) \
xa_lock_irqsave(&mapping->i_pages, flags)
#define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \
xa_unlock_irqrestore(&mapping->i_pages, flags)
#define flush_icache_page(vma,page) do { \
flush_kernel_dcache_page_addr(page_address(page)); \


@ -413,12 +413,12 @@ extern void paging_init (void);
* For the 64bit version, the offset is extended by 32bit.
*/
#define __swp_type(x) ((x).val & 0x1f)
#define __swp_offset(x) ( (((x).val >> 6) & 0x7) | \
(((x).val >> 8) & ~0x7) )
#define __swp_offset(x) ( (((x).val >> 5) & 0x7) | \
(((x).val >> 10) << 3) )
#define __swp_entry(type, offset) ((swp_entry_t) { \
((type) & 0x1f) | \
((offset & 0x7) << 6) | \
((offset & ~0x7) << 8) })
((offset & 0x7) << 5) | \
((offset >> 3) << 10) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })


@ -7,10 +7,26 @@
#include <asm/processor.h>
#include <asm/spinlock_types.h>
#define SPINLOCK_BREAK_INSN 0x0000c006 /* break 6,6 */
static inline void arch_spin_val_check(int lock_val)
{
if (IS_ENABLED(CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK))
asm volatile( "andcm,= %0,%1,%%r0\n"
".word %2\n"
: : "r" (lock_val), "r" (__ARCH_SPIN_LOCK_UNLOCKED_VAL),
"i" (SPINLOCK_BREAK_INSN));
}
static inline int arch_spin_is_locked(arch_spinlock_t *x)
{
volatile unsigned int *a = __ldcw_align(x);
return READ_ONCE(*a) == 0;
volatile unsigned int *a;
int lock_val;
a = __ldcw_align(x);
lock_val = READ_ONCE(*a);
arch_spin_val_check(lock_val);
return (lock_val == 0);
}
static inline void arch_spin_lock(arch_spinlock_t *x)
@ -18,9 +34,18 @@ static inline void arch_spin_lock(arch_spinlock_t *x)
volatile unsigned int *a;
a = __ldcw_align(x);
while (__ldcw(a) == 0)
do {
int lock_val_old;
lock_val_old = __ldcw(a);
arch_spin_val_check(lock_val_old);
if (lock_val_old)
return; /* got lock */
/* wait until we should try to get lock again */
while (*a == 0)
continue;
} while (1);
}
static inline void arch_spin_unlock(arch_spinlock_t *x)
@ -29,15 +54,19 @@ static inline void arch_spin_unlock(arch_spinlock_t *x)
a = __ldcw_align(x);
/* Release with ordered store. */
__asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
__asm__ __volatile__("stw,ma %0,0(%1)"
: : "r"(__ARCH_SPIN_LOCK_UNLOCKED_VAL), "r"(a) : "memory");
}
static inline int arch_spin_trylock(arch_spinlock_t *x)
{
volatile unsigned int *a;
int lock_val;
a = __ldcw_align(x);
return __ldcw(a) != 0;
lock_val = __ldcw(a);
arch_spin_val_check(lock_val);
return lock_val != 0;
}
/*


@ -2,13 +2,17 @@
#ifndef __ASM_SPINLOCK_TYPES_H
#define __ASM_SPINLOCK_TYPES_H
#define __ARCH_SPIN_LOCK_UNLOCKED_VAL 0x1a46
typedef struct {
#ifdef CONFIG_PA20
volatile unsigned int slock;
# define __ARCH_SPIN_LOCK_UNLOCKED { 1 }
# define __ARCH_SPIN_LOCK_UNLOCKED { __ARCH_SPIN_LOCK_UNLOCKED_VAL }
#else
volatile unsigned int lock[4];
# define __ARCH_SPIN_LOCK_UNLOCKED { { 1, 1, 1, 1 } }
# define __ARCH_SPIN_LOCK_UNLOCKED \
{ { __ARCH_SPIN_LOCK_UNLOCKED_VAL, __ARCH_SPIN_LOCK_UNLOCKED_VAL, \
__ARCH_SPIN_LOCK_UNLOCKED_VAL, __ARCH_SPIN_LOCK_UNLOCKED_VAL } }
#endif
} arch_spinlock_t;


@ -25,7 +25,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
{
struct alt_instr *entry;
int index = 0, applied = 0;
int num_cpus = num_online_cpus();
int num_cpus = num_present_cpus();
u16 cond_check;
cond_check = ALT_COND_ALWAYS |


@ -399,6 +399,7 @@ void flush_dcache_page(struct page *page)
unsigned long offset;
unsigned long addr, old_addr = 0;
unsigned long count = 0;
unsigned long flags;
pgoff_t pgoff;
if (mapping && !mapping_mapped(mapping)) {
@ -420,7 +421,7 @@ void flush_dcache_page(struct page *page)
* to flush one address here for them all to become coherent
* on machines that support equivalent aliasing
*/
flush_dcache_mmap_lock(mapping);
flush_dcache_mmap_lock_irqsave(mapping, flags);
vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
addr = mpnt->vm_start + offset;
@ -460,7 +461,7 @@ void flush_dcache_page(struct page *page)
}
WARN_ON(++count == 4096);
}
flush_dcache_mmap_unlock(mapping);
flush_dcache_mmap_unlock_irqrestore(mapping, flags);
}
EXPORT_SYMBOL(flush_dcache_page);


@ -4,6 +4,8 @@
#include <linux/console.h>
#include <linux/kexec.h>
#include <linux/delay.h>
#include <linux/reboot.h>
#include <asm/cacheflush.h>
#include <asm/sections.h>


@ -446,11 +446,27 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
/*
* fdc: The data cache line is written back to memory, if and only if
* it is dirty, and then invalidated from the data cache.
*/
flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
unsigned long addr = (unsigned long) phys_to_virt(paddr);
switch (dir) {
case DMA_TO_DEVICE:
case DMA_BIDIRECTIONAL:
flush_kernel_dcache_range(addr, size);
return;
case DMA_FROM_DEVICE:
purge_kernel_dcache_range_asm(addr, addr + size);
return;
default:
BUG();
}
}


@ -122,13 +122,18 @@ void machine_power_off(void)
/* It seems we have no way to power the system off via
* software. The user has to press the button himself. */
printk(KERN_EMERG "System shut down completed.\n"
"Please power this system off now.");
printk("Power off or press RETURN to reboot.\n");
/* prevent soft lockup/stalled CPU messages for endless loop. */
rcu_sysrq_start();
lockup_detector_soft_poweroff();
for (;;);
while (1) {
/* reboot if user presses RETURN key */
if (pdc_iodc_getc() == 13) {
printk("Rebooting...\n");
machine_restart(NULL);
}
}
}
void (*pm_power_off)(void);


@ -47,6 +47,10 @@
#include <linux/kgdb.h>
#include <linux/kprobes.h>
#if defined(CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK)
#include <asm/spinlock.h>
#endif
#include "../math-emu/math-emu.h" /* for handle_fpe() */
static void parisc_show_stack(struct task_struct *task,
@ -291,24 +295,30 @@ static void handle_break(struct pt_regs *regs)
}
#ifdef CONFIG_KPROBES
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) {
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN && !user_mode(regs))) {
parisc_kprobe_break_handler(regs);
return;
}
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) {
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2 && !user_mode(regs))) {
parisc_kprobe_ss_handler(regs);
return;
}
#endif
#ifdef CONFIG_KGDB
if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
iir == PARISC_KGDB_BREAK_INSN)) {
if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) {
kgdb_handle_exception(9, SIGTRAP, 0, regs);
return;
}
#endif
#ifdef CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK
if ((iir == SPINLOCK_BREAK_INSN) && !user_mode(regs)) {
die_if_kernel("Spinlock was trashed", regs, 1);
}
#endif
if (unlikely(iir != GDB_BREAK_INSN))
parisc_printk_ratelimited(0, regs,
KERN_DEBUG "break %d,%d: pid=%d command='%s'\n",


@ -34,8 +34,6 @@ endif
BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
$(call cc-option,-mno-prefixed) $(call cc-option,-mno-pcrel) \
$(call cc-option,-mno-mma) \
$(call cc-option,-mno-spe) $(call cc-option,-mspe=no) \
-pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
$(LINUXINCLUDE)
@ -71,6 +69,10 @@ BOOTAFLAGS := -D__ASSEMBLY__ $(BOOTCFLAGS) -nostdinc
BOOTARFLAGS := -crD
BOOTCFLAGS += $(call cc-option,-mno-prefixed) \
$(call cc-option,-mno-pcrel) \
$(call cc-option,-mno-mma)
ifdef CONFIG_CC_IS_CLANG
BOOTCFLAGS += $(CLANG_FLAGS)
BOOTAFLAGS += $(CLANG_FLAGS)


@ -96,7 +96,7 @@ config CRYPTO_AES_PPC_SPE
config CRYPTO_AES_GCM_P10
tristate "Stitched AES/GCM acceleration support on P10 or later CPU (PPC)"
depends on PPC64 && CPU_LITTLE_ENDIAN
depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
select CRYPTO_LIB_AES
select CRYPTO_ALGAPI
select CRYPTO_AEAD


@ -205,7 +205,6 @@ extern void iommu_register_group(struct iommu_table_group *table_group,
int pci_domain_number, unsigned long pe_num);
extern int iommu_add_device(struct iommu_table_group *table_group,
struct device *dev);
extern void iommu_del_device(struct device *dev);
extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
unsigned long entry, unsigned long *hpa,
enum dma_data_direction *direction);
@ -229,10 +228,6 @@ static inline int iommu_add_device(struct iommu_table_group *table_group,
{
return 0;
}
static inline void iommu_del_device(struct device *dev)
{
}
#endif /* !CONFIG_IOMMU_API */
u64 dma_iommu_get_required_mask(struct device *dev);


@ -144,7 +144,7 @@ static bool dma_iommu_bypass_supported(struct device *dev, u64 mask)
/* We support DMA to/from any memory page via the iommu */
int dma_iommu_dma_supported(struct device *dev, u64 mask)
{
struct iommu_table *tbl = get_iommu_table_base(dev);
struct iommu_table *tbl;
if (dev_is_pci(dev) && dma_iommu_bypass_supported(dev, mask)) {
/*
@ -162,6 +162,8 @@ int dma_iommu_dma_supported(struct device *dev, u64 mask)
return 1;
}
tbl = get_iommu_table_base(dev);
if (!tbl) {
dev_err(dev, "Warning: IOMMU dma not supported: mask 0x%08llx, table unavailable\n", mask);
return 0;


@ -518,7 +518,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
/* Convert entry to a dma_addr_t */
entry += tbl->it_offset;
dma_addr = entry << tbl->it_page_shift;
dma_addr |= (s->offset & ~IOMMU_PAGE_MASK(tbl));
dma_addr |= (vaddr & ~IOMMU_PAGE_MASK(tbl));
DBG(" - %lu pages, entry: %lx, dma_addr: %lx\n",
npages, entry, dma_addr);
@ -905,6 +905,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
unsigned int order;
unsigned int nio_pages, io_order;
struct page *page;
int tcesize = (1 << tbl->it_page_shift);
size = PAGE_ALIGN(size);
order = get_order(size);
@ -931,7 +932,8 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
memset(ret, 0, size);
/* Set up tces to cover the allocated range */
nio_pages = size >> tbl->it_page_shift;
nio_pages = IOMMU_PAGE_ALIGN(size, tbl) >> tbl->it_page_shift;
io_order = get_iommu_order(size, tbl);
mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
mask >> tbl->it_page_shift, io_order, 0);
@ -939,7 +941,8 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
free_pages((unsigned long)ret, order);
return NULL;
}
*dma_handle = mapping;
*dma_handle = mapping | ((u64)ret & (tcesize - 1));
return ret;
}
@ -950,7 +953,7 @@ void iommu_free_coherent(struct iommu_table *tbl, size_t size,
unsigned int nio_pages;
size = PAGE_ALIGN(size);
nio_pages = size >> tbl->it_page_shift;
nio_pages = IOMMU_PAGE_ALIGN(size, tbl) >> tbl->it_page_shift;
iommu_free(tbl, dma_handle, nio_pages);
size = PAGE_ALIGN(size);
free_pages((unsigned long)vaddr, get_order(size));
@ -1168,23 +1171,6 @@ int iommu_add_device(struct iommu_table_group *table_group, struct device *dev)
}
EXPORT_SYMBOL_GPL(iommu_add_device);
void iommu_del_device(struct device *dev)
{
/*
* Some devices might not have IOMMU table and group
* and we needn't detach them from the associated
* IOMMU groups
*/
if (!device_iommu_mapped(dev)) {
pr_debug("iommu_tce: skipping device %s with no tbl\n",
dev_name(dev));
return;
}
iommu_group_remove_device(dev);
}
EXPORT_SYMBOL_GPL(iommu_del_device);
/*
* A simple iommu_table_group_ops which only allows reusing the existing
* iommu_table. This handles VFIO for POWER7 or the nested KVM.


@ -93,11 +93,12 @@ static int process_ISA_OF_ranges(struct device_node *isa_node,
}
inval_range:
if (!phb_io_base_phys) {
if (phb_io_base_phys) {
pr_err("no ISA IO ranges or unexpected isa range, mapping 64k\n");
remap_isa_base(phb_io_base_phys, 0x10000);
return 0;
}
return 0;
return -EINVAL;
}


@ -1040,8 +1040,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
pte_t entry, unsigned long address, int psize)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
_PAGE_RW | _PAGE_EXEC);
unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
_PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
unsigned long change = pte_val(entry) ^ pte_val(*ptep);
/*


@ -101,6 +101,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
bpf_hdr = jit_data->header;
proglen = jit_data->proglen;
extra_pass = true;
/* During extra pass, ensure index is reset before repopulating extable entries */
cgctx.exentry_idx = 0;
goto skip_init_ctx;
}


@ -265,6 +265,7 @@ config CPM2
config FSL_ULI1575
bool "ULI1575 PCIe south bridge support"
depends on FSL_SOC_BOOKE || PPC_86xx
depends on PCI
select FSL_PCI
select GENERIC_ISA_DMA
help


@ -865,28 +865,3 @@ void __init pnv_pci_init(void)
/* Configure IOMMU DMA hooks */
set_pci_dma_ops(&dma_iommu_ops);
}
static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
struct device *dev = data;
switch (action) {
case BUS_NOTIFY_DEL_DEVICE:
iommu_del_device(dev);
return 0;
default:
return 0;
}
}
static struct notifier_block pnv_tce_iommu_bus_nb = {
.notifier_call = pnv_tce_iommu_bus_notifier,
};
static int __init pnv_tce_iommu_bus_notifier_init(void)
{
bus_register_notifier(&pci_bus_type, &pnv_tce_iommu_bus_nb);
return 0;
}
machine_subsys_initcall_sync(powernv, pnv_tce_iommu_bus_notifier_init);


@ -91,19 +91,24 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
static void iommu_pseries_free_group(struct iommu_table_group *table_group,
const char *node_name)
{
struct iommu_table *tbl;
if (!table_group)
return;
tbl = table_group->tables[0];
#ifdef CONFIG_IOMMU_API
if (table_group->group) {
iommu_group_put(table_group->group);
BUG_ON(table_group->group);
}
#endif
iommu_tce_table_put(tbl);
/* Default DMA window table is at index 0, while DDW at 1. SR-IOV
* adapters only have table on index 1.
*/
if (table_group->tables[0])
iommu_tce_table_put(table_group->tables[0]);
if (table_group->tables[1])
iommu_tce_table_put(table_group->tables[1]);
kfree(table_group);
}
@ -1695,31 +1700,6 @@ static int __init disable_multitce(char *str)
__setup("multitce=", disable_multitce);
static int tce_iommu_bus_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
struct device *dev = data;
switch (action) {
case BUS_NOTIFY_DEL_DEVICE:
iommu_del_device(dev);
return 0;
default:
return 0;
}
}
static struct notifier_block tce_iommu_bus_nb = {
.notifier_call = tce_iommu_bus_notifier,
};
static int __init tce_iommu_bus_notifier_init(void)
{
bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
return 0;
}
machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
#ifdef CONFIG_SPAPR_TCE_IOMMU
struct iommu_group *pSeries_pci_device_group(struct pci_controller *hose,
struct pci_dev *pdev)


@ -22,7 +22,7 @@ KCOV_INSTRUMENT := n
$(obj)/%.pi.o: OBJCOPYFLAGS := --prefix-symbols=__pi_ \
--remove-section=.note.gnu.property \
--prefix-alloc-sections=.init
--prefix-alloc-sections=.init.pi
$(obj)/%.pi.o: $(obj)/%.o FORCE
$(call if_changed,objcopy)


@ -4,3 +4,5 @@ obj-$(CONFIG_RETHOOK) += rethook.o rethook_trampoline.o
obj-$(CONFIG_KPROBES_ON_FTRACE) += ftrace.o
obj-$(CONFIG_UPROBES) += uprobes.o decode-insn.o simulate-insn.o
CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_rethook.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_rethook_trampoline.o = $(CC_FLAGS_FTRACE)


@ -84,11 +84,8 @@ SECTIONS
__init_data_begin = .;
INIT_DATA_SECTION(16)
/* Those sections result from the compilation of kernel/pi/string.c */
.init.pidata : {
*(.init.srodata.cst8*)
*(.init__bug_table*)
*(.init.sdata*)
.init.pi : {
*(.init.pi*)
}
.init.bss : {


@ -469,19 +469,11 @@ config SCHED_SMT
config SCHED_MC
def_bool n
config SCHED_BOOK
def_bool n
config SCHED_DRAWER
def_bool n
config SCHED_TOPOLOGY
def_bool y
prompt "Topology scheduler support"
select SCHED_SMT
select SCHED_MC
select SCHED_BOOK
select SCHED_DRAWER
help
Topology scheduler support improves the CPU scheduler's decision
making when dealing with machines that have multi-threading,
@ -716,7 +708,6 @@ config EADM_SCH
config VFIO_CCW
def_tristate n
prompt "Support for VFIO-CCW subchannels"
depends on S390_CCW_IOMMU
depends on VFIO
select VFIO_MDEV
help
@ -728,7 +719,7 @@ config VFIO_CCW
config VFIO_AP
def_tristate n
prompt "VFIO support for AP devices"
depends on S390_AP_IOMMU && KVM
depends on KVM
depends on VFIO
depends on ZCRYPT
select VFIO_MDEV


@ -591,8 +591,6 @@ CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=y
CONFIG_VHOST_NET=m
CONFIG_VHOST_VSOCK=m
CONFIG_S390_CCW_IOMMU=y
CONFIG_S390_AP_IOMMU=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
@ -703,6 +701,7 @@ CONFIG_IMA_DEFAULT_HASH_SHA256=y
CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y
CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
CONFIG_INIT_STACK_NONE=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_PCRYPT=m


@ -580,8 +580,6 @@ CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=y
CONFIG_VHOST_NET=m
CONFIG_VHOST_VSOCK=m
CONFIG_S390_CCW_IOMMU=y
CONFIG_S390_AP_IOMMU=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
@ -686,6 +684,7 @@ CONFIG_IMA_DEFAULT_HASH_SHA256=y
CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y
CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
CONFIG_INIT_STACK_NONE=y
CONFIG_CRYPTO_FIPS=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set


@ -67,6 +67,7 @@ CONFIG_ZFCP=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_LSM="yama,loadpin,safesetid,integrity"
CONFIG_INIT_STACK_NONE=y
# CONFIG_ZLIB_DFLTCC is not set
CONFIG_XZ_DEC_MICROLZMA=y
CONFIG_PRINTK_TIME=y


@ -82,7 +82,7 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
* it cannot handle a block of data or less, but otherwise
* it can handle data of arbitrary size
*/
if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20)
if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20 || !MACHINE_HAS_VX)
chacha_crypt_generic(state, dst, src, bytes, nrounds);
else
chacha20_crypt_s390(state, dst, src, bytes,


@ -112,7 +112,7 @@ struct compat_statfs64 {
u32 f_namelen;
u32 f_frsize;
u32 f_flags;
u32 f_spare[4];
u32 f_spare[5];
};
/*


@ -30,7 +30,7 @@ struct statfs {
unsigned int f_namelen;
unsigned int f_frsize;
unsigned int f_flags;
unsigned int f_spare[4];
unsigned int f_spare[5];
};
struct statfs64 {
@ -45,7 +45,7 @@ struct statfs64 {
unsigned int f_namelen;
unsigned int f_frsize;
unsigned int f_flags;
unsigned int f_spare[4];
unsigned int f_spare[5];
};
#endif


@ -10,6 +10,7 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
# Do not trace early setup code
CFLAGS_REMOVE_early.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_rethook.o = $(CC_FLAGS_FTRACE)
endif


@ -1935,14 +1935,13 @@ static struct shutdown_action __refdata dump_action = {
static void dump_reipl_run(struct shutdown_trigger *trigger)
{
unsigned long ipib = (unsigned long) reipl_block_actual;
struct lowcore *abs_lc;
unsigned int csum;
csum = (__force unsigned int)
csum_partial(reipl_block_actual, reipl_block_actual->hdr.len, 0);
abs_lc = get_abs_lowcore();
abs_lc->ipib = ipib;
abs_lc->ipib = __pa(reipl_block_actual);
abs_lc->ipib_checksum = csum;
put_abs_lowcore(abs_lc);
dump_run(trigger);


@ -95,7 +95,7 @@ out:
static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
{
static cpumask_t mask;
int i;
unsigned int max_cpu;
cpumask_clear(&mask);
if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
@ -104,9 +104,10 @@ static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
if (topology_mode != TOPOLOGY_MODE_HW)
goto out;
cpu -= cpu % (smp_cpu_mtid + 1);
for (i = 0; i <= smp_cpu_mtid; i++) {
if (cpumask_test_cpu(cpu + i, &cpu_setup_mask))
cpumask_set_cpu(cpu + i, &mask);
max_cpu = min(cpu + smp_cpu_mtid, nr_cpu_ids - 1);
for (; cpu <= max_cpu; cpu++) {
if (cpumask_test_cpu(cpu, &cpu_setup_mask))
cpumask_set_cpu(cpu, &mask);
}
out:
cpumask_copy(dst, &mask);
@ -123,25 +124,26 @@ static void add_cpus_to_mask(struct topology_core *tl_core,
unsigned int core;
for_each_set_bit(core, &tl_core->mask, TOPOLOGY_CORE_BITS) {
unsigned int rcore;
int lcpu, i;
unsigned int max_cpu, rcore;
int cpu;
rcore = TOPOLOGY_CORE_BITS - 1 - core + tl_core->origin;
lcpu = smp_find_processor_id(rcore << smp_cpu_mt_shift);
if (lcpu < 0)
cpu = smp_find_processor_id(rcore << smp_cpu_mt_shift);
if (cpu < 0)
continue;
for (i = 0; i <= smp_cpu_mtid; i++) {
topo = &cpu_topology[lcpu + i];
max_cpu = min(cpu + smp_cpu_mtid, nr_cpu_ids - 1);
for (; cpu <= max_cpu; cpu++) {
topo = &cpu_topology[cpu];
topo->drawer_id = drawer->id;
topo->book_id = book->id;
topo->socket_id = socket->id;
topo->core_id = rcore;
topo->thread_id = lcpu + i;
topo->thread_id = cpu;
topo->dedicated = tl_core->d;
cpumask_set_cpu(lcpu + i, &drawer->mask);
cpumask_set_cpu(lcpu + i, &book->mask);
cpumask_set_cpu(lcpu + i, &socket->mask);
smp_cpu_set_polarization(lcpu + i, tl_core->pp);
cpumask_set_cpu(cpu, &drawer->mask);
cpumask_set_cpu(cpu, &book->mask);
cpumask_set_cpu(cpu, &socket->mask);
smp_cpu_set_polarization(cpu, tl_core->pp);
}
}
}


@ -16,7 +16,8 @@ mconsole-objs := mconsole_kern.o mconsole_user.o
hostaudio-objs := hostaudio_kern.o
ubd-objs := ubd_kern.o ubd_user.o
port-objs := port_kern.o port_user.o
harddog-objs := harddog_kern.o harddog_user.o
harddog-objs := harddog_kern.o
harddog-builtin-$(CONFIG_UML_WATCHDOG) := harddog_user.o harddog_user_exp.o
rtc-objs := rtc_kern.o rtc_user.o
LDFLAGS_pcap.o = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
@ -60,6 +61,7 @@ obj-$(CONFIG_PTY_CHAN) += pty.o
obj-$(CONFIG_TTY_CHAN) += tty.o
obj-$(CONFIG_XTERM_CHAN) += xterm.o xterm_kern.o
obj-$(CONFIG_UML_WATCHDOG) += harddog.o
obj-y += $(harddog-builtin-y) $(harddog-builtin-m)
obj-$(CONFIG_BLK_DEV_COW_COMMON) += cow_user.o
obj-$(CONFIG_UML_RANDOM) += random.o
obj-$(CONFIG_VIRTIO_UML) += virtio_uml.o


@ -0,0 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef UM_WATCHDOG_H
#define UM_WATCHDOG_H
int start_watchdog(int *in_fd_ret, int *out_fd_ret, char *sock);
void stop_watchdog(int in_fd, int out_fd);
int ping_watchdog(int fd);
#endif /* UM_WATCHDOG_H */


@ -47,6 +47,7 @@
#include <linux/spinlock.h>
#include <linux/uaccess.h>
#include "mconsole.h"
#include "harddog.h"
MODULE_LICENSE("GPL");
@ -60,8 +61,6 @@ static int harddog_out_fd = -1;
* Allow only one person to hold it open
*/
extern int start_watchdog(int *in_fd_ret, int *out_fd_ret, char *sock);
static int harddog_open(struct inode *inode, struct file *file)
{
int err = -EBUSY;
@ -92,8 +91,6 @@ err:
return err;
}
extern void stop_watchdog(int in_fd, int out_fd);
static int harddog_release(struct inode *inode, struct file *file)
{
/*
@ -112,8 +109,6 @@ static int harddog_release(struct inode *inode, struct file *file)
return 0;
}
extern int ping_watchdog(int fd);
static ssize_t harddog_write(struct file *file, const char __user *data, size_t len,
loff_t *ppos)
{

Some files were not shown because too many files have changed in this diff.