Linux 5.12-rc5

-----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAmBhB7AeHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGCPUH+KKkSoOlN2YNu1oc
 iy2nznwZoSQTk5ZLz7PypO/WWmmtgzudkObG7yqIURdrncsAkHR17Wu2P7rdBr1j
 Ma+VhF9MQ+xx+r86upH7c3gYfhyfdUMvzuLy0rwLQ1Yrzrb7xFcVkj3BHk54TAQA
 w05sRPuVJ3/c/HPYV2iXkkdnnMbXSTCebeDDwjFb9D3qagr4vcd/PjDHmGbfNF8R
 o6gLpbK5Ly6ww1nth9gGGUjzrW95yVItvcroP6vQWljxhuy+NE1lXRm8LsGhxqtW
 foFFptJup5nhSNJXWtQt/U3huVD6mZ3W3y9cOThPjXZRy2wva3I1IpBKoEFReUpG
 /Tq8EA==
 =tPUY
 -----END PGP SIGNATURE-----

Merge tag 'v5.12-rc5' into WIP.x86/core, to pick up recent NOP related changes

In particular we want to have this upstream commit:

  b908297047: ("bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG")

... before merging in x86/cpu changes and the removal of the NOP optimizations, and
applying PeterZ's !retpoline objtool series.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar 2021-04-02 12:33:16 +02:00
commit e855e80d00
783 changed files with 6218 additions and 3453 deletions

View File

@ -36,6 +36,7 @@ Andrew Morton <akpm@linux-foundation.org>
Andrew Murray <amurray@thegoodpenguin.co.uk> <amurray@embedded-bits.co.uk>
Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>
Andrew Vasquez <andrew.vasquez@qlogic.com>
Andrey Konovalov <andreyknvl@gmail.com> <andreyknvl@google.com>
Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
Andrey Ryabinin <ryabinin.a.a@gmail.com> <aryabinin@virtuozzo.com>
Andy Adamson <andros@citi.umich.edu>
@ -65,6 +66,8 @@ Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
Chao Yu <chao@kernel.org> <chao2.yu@samsung.com>
Chao Yu <chao@kernel.org> <yuchao0@huawei.com>
Chris Chiu <chris.chiu@canonical.com> <chiu@endlessm.com>
Chris Chiu <chris.chiu@canonical.com> <chiu@endlessos.org>
Christophe Ricard <christophe.ricard@gmail.com>
Christoph Hellwig <hch@lst.de>
Corey Minyard <minyard@acm.org>

View File

@ -33,7 +33,7 @@ Contact: xfs@oss.sgi.com
Description:
The current state of the log write grant head. It
represents the total log reservation of all currently
oustanding transactions, including regrants due to
outstanding transactions, including regrants due to
rolling transactions. The grant head is exported in
"cycle:bytes" format.
Users: xfstests

View File

@ -17,12 +17,12 @@ For ACPI on arm64, tables also fall into the following categories:
- Recommended: BERT, EINJ, ERST, HEST, PCCT, SSDT
- Optional: BGRT, CPEP, CSRT, DBG2, DRTM, ECDT, FACS, FPDT, IORT,
MCHI, MPST, MSCT, NFIT, PMTT, RASF, SBST, SLIT, SPMI, SRAT, STAO,
TCPA, TPM2, UEFI, XENV
- Optional: BGRT, CPEP, CSRT, DBG2, DRTM, ECDT, FACS, FPDT, IBFT,
IORT, MCHI, MPST, MSCT, NFIT, PMTT, RASF, SBST, SLIT, SPMI, SRAT,
STAO, TCPA, TPM2, UEFI, XENV
- Not supported: BOOT, DBGP, DMAR, ETDT, HPET, IBFT, IVRS, LPIT,
MSDM, OEMx, PSDT, RSDT, SLIC, WAET, WDAT, WDRT, WPBT
- Not supported: BOOT, DBGP, DMAR, ETDT, HPET, IVRS, LPIT, MSDM, OEMx,
PSDT, RSDT, SLIC, WAET, WDAT, WDRT, WPBT
====== ========================================================================
Table Usage for ARMv8 Linux

View File

@ -130,6 +130,9 @@ stable kernels.
| Marvell        | ARM-MMU-500     | #582743         | N/A                         |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| NVIDIA         | Carmel Core     | N/A             | NVIDIA_CARMEL_CNP_ERRATUM   |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| Freescale/NXP  | LS2080A/LS1043A | A-008585        | FSL_ERRATUM_A008585         |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+

View File

@ -21,6 +21,10 @@ properties:
- fsl,vf610-spdif
- fsl,imx6sx-spdif
- fsl,imx8qm-spdif
- fsl,imx8qxp-spdif
- fsl,imx8mq-spdif
- fsl,imx8mm-spdif
- fsl,imx8mn-spdif
reg:
maxItems: 1

View File

@ -267,7 +267,7 @@ DATA PATH
Tx
--
end_start_xmit() is called by the stack. This function does the following:
ena_start_xmit() is called by the stack. This function does the following:
- Maps data buffers (skb->data and frags).
- Populates ena_buf for the push buffer (if the driver and device are

View File

@ -52,7 +52,7 @@ purposes as a standard complementary tool. The system's view from
``devlink-dpipe`` should change according to the changes done by the
standard configuration tools.
For example, its quiet common to implement Access Control Lists (ACL)
For example, it's quite common to implement Access Control Lists (ACL)
using Ternary Content Addressable Memory (TCAM). The TCAM memory can be
divided into TCAM regions. Complex TC filters can have multiple rules with
different priorities and different lookup keys. On the other hand hardware

View File

@ -151,7 +151,7 @@ representor netdevice.
-------------
A subfunction devlink port is created but it is not active yet. That means the
entities are created on devlink side, the e-switch port representor is created,
but the subfunction device itself it not created. A user might use e-switch port
but the subfunction device itself is not created. A user might use e-switch port
representor to do settings, putting it into bridge, adding TC rules, etc. A user
might as well configure the hardware address (such as MAC address) of the
subfunction while subfunction is inactive.
@ -173,7 +173,7 @@ Terms and Definitions
* - Term
- Definitions
* - ``PCI device``
- A physical PCI device having one or more PCI bus consists of one or
- A physical PCI device having one or more PCI buses consists of one or
more PCI controllers.
* - ``PCI controller``
- A controller consists of potentially multiple physical functions,

View File

@ -50,7 +50,7 @@ Callbacks to implement
The NIC driver offering ipsec offload will need to implement these
callbacks to make the offload available to the network stack's
XFRM subsytem. Additionally, the feature bits NETIF_F_HW_ESP and
XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.
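For illustration, the registration described above usually amounts to filling
a struct xfrmdev_ops and advertising the feature bits at probe time. A minimal
sketch, assuming hypothetical foo_* handlers (the xdo_* hook names are the
ones declared in include/linux/netdevice.h):

    #include <linux/netdevice.h>

    /* Hypothetical handlers a driver would implement. */
    static int foo_xfrm_add_state(struct xfrm_state *x);
    static void foo_xfrm_del_state(struct xfrm_state *x);
    static bool foo_xfrm_offload_ok(struct sk_buff *skb, struct xfrm_state *x);

    static const struct xfrmdev_ops foo_xfrmdev_ops = {
            .xdo_dev_state_add      = foo_xfrm_add_state,
            .xdo_dev_state_delete   = foo_xfrm_del_state,
            .xdo_dev_offload_ok     = foo_xfrm_offload_ok,
    };

    static void foo_setup_ipsec_offload(struct net_device *netdev)
    {
            netdev->xfrmdev_ops = &foo_xfrmdev_ops;
            /* Signal the offload to the network stack's XFRM subsystem. */
            netdev->features |= NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM;
    }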

View File

@ -1495,7 +1495,8 @@ Fails if any VCPU has already been created.
Define which vcpu is the Bootstrap Processor (BSP). Values are the same
as the vcpu id in KVM_CREATE_VCPU. If this ioctl is not called, the default
is vcpu 0.
is vcpu 0. This ioctl has to be called before vcpu creation,
otherwise it will return an EBUSY error.
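A minimal userspace sketch of the ordering constraint (fd plumbing assumed,
error handling trimmed):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Pick the BSP after KVM_CREATE_VM but before any KVM_CREATE_VCPU;
     * once a vCPU exists, KVM_SET_BOOT_CPU_ID fails with EBUSY. */
    static int create_vm_with_bsp(int kvm_fd, unsigned long bsp_id)
    {
            int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

            if (vm_fd < 0 || ioctl(vm_fd, KVM_SET_BOOT_CPU_ID, bsp_id) < 0)
                    return -1;

            return ioctl(vm_fd, KVM_CREATE_VCPU, bsp_id);   /* vCPU fd */
    }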
4.42 KVM_GET_XSAVE
@ -4806,8 +4807,10 @@ If an MSR access is not permitted through the filtering, it generates a
allows user space to deflect and potentially handle various MSR accesses
into user space.
If a vCPU is in running state while this ioctl is invoked, the vCPU may
experience inconsistent filtering behavior on MSR accesses.
Note, invoking this ioctl while a vCPU is running is inherently racy. However,
KVM does guarantee that vCPUs will see either the previous filter or the new
filter, e.g. MSRs with identical settings in both the old and new filter will
have deterministic behavior.
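For illustration, a hedged userspace sketch of installing a filter (the
default-allow policy and single-MSR range are arbitrary choices; KVM copies
the bitmap during the ioctl, and the new filter replaces the old one as a
whole, per the guarantee above):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Deny read and write access to one MSR; everything else stays
     * allowed via KVM_MSR_FILTER_DEFAULT_ALLOW. */
    static int deny_one_msr(int vm_fd, __u32 msr)
    {
            __u8 bitmap = 0;        /* bit clear = access denied */
            struct kvm_msr_filter filter;

            memset(&filter, 0, sizeof(filter));
            filter.flags = KVM_MSR_FILTER_DEFAULT_ALLOW;
            filter.ranges[0].flags = KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE;
            filter.ranges[0].base = msr;
            filter.ranges[0].nmsrs = 1;
            filter.ranges[0].bitmap = &bitmap;

            return ioctl(vm_fd, KVM_X86_SET_MSR_FILTER, &filter);
    }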
4.127 KVM_XEN_HVM_SET_ATTR
--------------------------

View File

@ -1181,7 +1181,7 @@ M: Joel Fernandes <joel@joelfernandes.org>
M: Christian Brauner <christian@brauner.io>
M: Hridya Valsaraju <hridya@google.com>
M: Suren Baghdasaryan <surenb@google.com>
L: devel@driverdev.osuosl.org
L: linux-kernel@vger.kernel.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
F: drivers/android/
@ -2489,7 +2489,7 @@ N: sc27xx
N: sc2731
ARM/STI ARCHITECTURE
M: Patrice Chotard <patrice.chotard@st.com>
M: Patrice Chotard <patrice.chotard@foss.st.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
W: http://www.stlinux.com
@ -2522,7 +2522,7 @@ F: include/linux/remoteproc/st_slim_rproc.h
ARM/STM32 ARCHITECTURE
M: Maxime Coquelin <mcoquelin.stm32@gmail.com>
M: Alexandre Torgue <alexandre.torgue@st.com>
M: Alexandre Torgue <alexandre.torgue@foss.st.com>
L: linux-stm32@st-md-mailman.stormreply.com (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@ -3115,7 +3115,7 @@ C: irc://irc.oftc.net/bcache
F: drivers/md/bcache/
BDISP ST MEDIA DRIVER
M: Fabien Dessenne <fabien.dessenne@st.com>
M: Fabien Dessenne <fabien.dessenne@foss.st.com>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
@ -3675,7 +3675,7 @@ M: bcm-kernel-feedback-list@broadcom.com
L: linux-pm@vger.kernel.org
S: Maintained
T: git git://github.com/broadcom/stblinux.git
F: drivers/soc/bcm/bcm-pmb.c
F: drivers/soc/bcm/bcm63xx/bcm-pmb.c
F: include/dt-bindings/soc/bcm-pmb.h
BROADCOM SPECIFIC AMBA DRIVER (BCMA)
@ -5080,7 +5080,7 @@ S: Maintained
F: drivers/platform/x86/dell/dell-wmi.c
DELTA ST MEDIA DRIVER
M: Hugues Fruchet <hugues.fruchet@st.com>
M: Hugues Fruchet <hugues.fruchet@foss.st.com>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
@ -6006,7 +6006,6 @@ F: drivers/gpu/drm/rockchip/
DRM DRIVERS FOR STI
M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
M: Vincent Abriou <vincent.abriou@st.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -6014,10 +6013,9 @@ F: Documentation/devicetree/bindings/display/st,stih4xx.txt
F: drivers/gpu/drm/sti
DRM DRIVERS FOR STM
M: Yannick Fertre <yannick.fertre@st.com>
M: Philippe Cornu <philippe.cornu@st.com>
M: Yannick Fertre <yannick.fertre@foss.st.com>
M: Philippe Cornu <philippe.cornu@foss.st.com>
M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
M: Vincent Abriou <vincent.abriou@st.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -8116,7 +8114,6 @@ F: drivers/crypto/hisilicon/sec2/sec_main.c
HISILICON STAGING DRIVERS FOR HIKEY 960/970
M: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
L: devel@driverdev.osuosl.org
S: Maintained
F: drivers/staging/hikey9xx/
@ -8231,7 +8228,7 @@ F: include/linux/hugetlb.h
F: mm/hugetlb.c
HVA ST MEDIA DRIVER
M: Jean-Christophe Trotin <jean-christophe.trotin@st.com>
M: Jean-Christophe Trotin <jean-christophe.trotin@foss.st.com>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
@ -8521,6 +8518,7 @@ IBM Power SRIOV Virtual NIC Device Driver
M: Dany Madden <drt@linux.ibm.com>
M: Lijun Pan <ljp@linux.ibm.com>
M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
R: Thomas Falcon <tlfalcon@linux.ibm.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/ibm/ibmvnic.*
@ -10030,7 +10028,6 @@ F: scripts/leaking_addresses.pl
LED SUBSYSTEM
M: Pavel Machek <pavel@ucw.cz>
R: Dan Murphy <dmurphy@ti.com>
L: linux-leds@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pavel/linux-leds.git
@ -10906,7 +10903,6 @@ T: git git://linuxtv.org/media_tree.git
F: drivers/media/radio/radio-maxiradio*
MCAN MMIO DEVICE DRIVER
M: Dan Murphy <dmurphy@ti.com>
M: Pankaj Sharma <pankj.sharma@samsung.com>
L: linux-can@vger.kernel.org
S: Maintained
@ -11167,7 +11163,7 @@ T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/stv6111*
MEDIA DRIVERS FOR STM32 - DCMI
M: Hugues Fruchet <hugues.fruchet@st.com>
M: Hugues Fruchet <hugues.fruchet@foss.st.com>
L: linux-media@vger.kernel.org
S: Supported
T: git git://linuxtv.org/media_tree.git
@ -12538,7 +12534,7 @@ NETWORKING [MPTCP]
M: Mat Martineau <mathew.j.martineau@linux.intel.com>
M: Matthieu Baerts <matthieu.baerts@tessares.net>
L: netdev@vger.kernel.org
L: mptcp@lists.01.org
L: mptcp@lists.linux.dev
S: Maintained
W: https://github.com/multipath-tcp/mptcp_net-next/wiki
B: https://github.com/multipath-tcp/mptcp_net-next/issues
@ -14709,15 +14705,11 @@ F: drivers/net/ethernet/qlogic/qlcnic/
QLOGIC QLGE 10Gb ETHERNET DRIVER
M: Manish Chopra <manishc@marvell.com>
M: GR-Linux-NIC-Dev@marvell.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/staging/qlge/
QLOGIC QLGE 10Gb ETHERNET DRIVER
M: Coiby Xu <coiby.xu@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
S: Supported
F: Documentation/networking/device_drivers/qlogic/qlge.rst
F: drivers/staging/qlge/
QM1D1B0004 MEDIA DRIVER
M: Akihiro Tsukada <tskd08@gmail.com>
@ -16887,8 +16879,10 @@ F: tools/spi/
SPIDERNET NETWORK DRIVER for CELL
M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
M: Geoff Levand <geoff@infradead.org>
L: netdev@vger.kernel.org
S: Supported
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
F: drivers/net/ethernet/toshiba/spider_net*
@ -16942,7 +16936,8 @@ F: Documentation/devicetree/bindings/media/i2c/st,st-mipid02.txt
F: drivers/media/i2c/st-mipid02.c
ST STM32 I2C/SMBUS DRIVER
M: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
M: Pierre-Yves MORDRET <pierre-yves.mordret@foss.st.com>
M: Alain Volmat <alain.volmat@foss.st.com>
L: linux-i2c@vger.kernel.org
S: Maintained
F: drivers/i2c/busses/i2c-stm32*
@ -17040,7 +17035,7 @@ F: drivers/staging/vt665?/
STAGING SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: devel@driverdev.osuosl.org
L: linux-staging@lists.linux.dev
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
F: drivers/staging/
@ -17067,7 +17062,7 @@ F: kernel/jump_label.c
F: kernel/static_call.c
STI AUDIO (ASoC) DRIVERS
M: Arnaud Pouliquen <arnaud.pouliquen@st.com>
M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt
@ -17087,15 +17082,15 @@ T: git git://linuxtv.org/media_tree.git
F: drivers/media/usb/stk1160/
STM32 AUDIO (ASoC) DRIVERS
M: Olivier Moysan <olivier.moysan@st.com>
M: Arnaud Pouliquen <arnaud.pouliquen@st.com>
M: Olivier Moysan <olivier.moysan@foss.st.com>
M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml
F: sound/soc/stm/
STM32 TIMER/LPTIMER DRIVERS
M: Fabrice Gasnier <fabrice.gasnier@st.com>
M: Fabrice Gasnier <fabrice.gasnier@foss.st.com>
S: Maintained
F: Documentation/ABI/testing/*timer-stm32
F: Documentation/devicetree/bindings/*/*stm32-*timer*
@ -17105,7 +17100,7 @@ F: include/linux/*/stm32-*tim*
STMMAC ETHERNET DRIVER
M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
M: Alexandre Torgue <alexandre.torgue@st.com>
M: Alexandre Torgue <alexandre.torgue@foss.st.com>
M: Jose Abreu <joabreu@synopsys.com>
L: netdev@vger.kernel.org
S: Supported
@ -17847,7 +17842,6 @@ S: Maintained
F: drivers/thermal/ti-soc-thermal/
TI BQ27XXX POWER SUPPLY DRIVER
R: Dan Murphy <dmurphy@ti.com>
F: drivers/power/supply/bq27xxx_battery.c
F: drivers/power/supply/bq27xxx_battery_i2c.c
F: include/linux/power/bq27xxx_battery.h
@ -17982,7 +17976,6 @@ S: Odd Fixes
F: sound/soc/codecs/tas571x*
TI TCAN4X5X DEVICE DRIVER
M: Dan Murphy <dmurphy@ti.com>
L: linux-can@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/net/can/tcan4x5x.txt
@ -19135,7 +19128,7 @@ VME SUBSYSTEM
M: Martyn Welch <martyn@welchs.me.uk>
M: Manohar Vanga <manohar.vanga@gmail.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: devel@driverdev.osuosl.org
L: linux-kernel@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
F: Documentation/driver-api/vme.rst

View File

@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 12
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc5
NAME = Frozen Wasteland
# *DOCUMENTATION*

View File

@ -40,6 +40,9 @@
ethernet1 = &cpsw_emac1;
spi0 = &spi0;
spi1 = &spi1;
mmc0 = &mmc1;
mmc1 = &mmc2;
mmc2 = &mmc3;
};
cpus {

View File

@ -334,14 +334,6 @@
};
&pinctrl {
atmel,mux-mask = <
/* A B C */
0xFFFFFE7F 0xC0E0397F 0xEF00019D /* pioA */
0x03FFFFFF 0x02FC7E68 0x00780000 /* pioB */
0xffffffff 0xF83FFFFF 0xB800F3FC /* pioC */
0x003FFFFF 0x003F8000 0x00000000 /* pioD */
>;
adc {
pinctrl_adc_default: adc_default {
atmel,pins = <AT91_PIOB 15 AT91_PERIPH_A AT91_PINCTRL_NONE>;

View File

@ -84,8 +84,8 @@
pinctrl-0 = <&pinctrl_macb0_default>;
phy-mode = "rmii";
ethernet-phy@0 {
reg = <0x0>;
ethernet-phy@7 {
reg = <0x7>;
interrupt-parent = <&pioA>;
interrupts = <PIN_PD31 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";

View File

@ -210,9 +210,6 @@
micrel,led-mode = <1>;
clocks = <&clks IMX6UL_CLK_ENET_REF>;
clock-names = "rmii-ref";
reset-gpios = <&gpio_spi 1 GPIO_ACTIVE_LOW>;
reset-assert-us = <10000>;
reset-deassert-us = <100>;
};
@ -222,9 +219,6 @@
micrel,led-mode = <1>;
clocks = <&clks IMX6UL_CLK_ENET2_REF>;
clock-names = "rmii-ref";
reset-gpios = <&gpio_spi 2 GPIO_ACTIVE_LOW>;
reset-assert-us = <10000>;
reset-deassert-us = <100>;
};
};
};
@ -243,6 +237,22 @@
status = "okay";
};
&gpio_spi {
eth0-phy-hog {
gpio-hog;
gpios = <1 GPIO_ACTIVE_HIGH>;
output-high;
line-name = "eth0-phy";
};
eth1-phy-hog {
gpio-hog;
gpios = <2 GPIO_ACTIVE_HIGH>;
output-high;
line-name = "eth1-phy";
};
};
&i2c1 {
clock-frequency = <100000>;
pinctrl-names = "default";

View File

@ -14,5 +14,6 @@
};
&gpmi {
fsl,use-minimum-ecc;
status = "okay";
};

View File

@ -606,6 +606,15 @@
compatible = "microchip,sam9x60-pinctrl", "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl", "simple-bus";
ranges = <0xfffff400 0xfffff400 0x800>;
/* mux-mask corresponding to sam9x60 SoC in TFBGA228L package */
atmel,mux-mask = <
/* A B C */
0xffffffff 0xffe03fff 0xef00019d /* pioA */
0x03ffffff 0x02fc7e7f 0x00780000 /* pioB */
0xffffffff 0xffffffff 0xf83fffff /* pioC */
0x003fffff 0x003f8000 0x00000000 /* pioD */
>;
pioA: gpio@fffff400 {
compatible = "microchip,sam9x60-gpio", "atmel,at91sam9x5-gpio", "atmel,at91rm9200-gpio";
reg = <0xfffff400 0x200>;

View File

@ -7,6 +7,7 @@
#include <linux/module.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/irqchip.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
@ -162,7 +163,7 @@ static void __exception_irq_entry avic_handle_irq(struct pt_regs *regs)
* interrupts. It registers the interrupt enable and disable functions
* to the kernel for each interrupt source.
*/
void __init mxc_init_irq(void __iomem *irqbase)
static void __init mxc_init_irq(void __iomem *irqbase)
{
struct device_node *np;
int irq_base;
@ -220,3 +221,16 @@ void __init mxc_init_irq(void __iomem *irqbase)
printk(KERN_INFO "MXC IRQ initialized\n");
}
static int __init imx_avic_init(struct device_node *node,
struct device_node *parent)
{
void __iomem *avic_base;
avic_base = of_iomap(node, 0);
BUG_ON(!avic_base);
mxc_init_irq(avic_base);
return 0;
}
IRQCHIP_DECLARE(imx_avic, "fsl,avic", imx_avic_init);

View File

@ -22,7 +22,6 @@ void mx35_map_io(void);
void imx21_init_early(void);
void imx31_init_early(void);
void imx35_init_early(void);
void mxc_init_irq(void __iomem *);
void mx31_init_irq(void);
void mx35_init_irq(void);
void mxc_set_cpu_type(unsigned int type);

View File

@ -17,16 +17,6 @@ static void __init imx1_init_early(void)
mxc_set_cpu_type(MXC_CPU_MX1);
}
static void __init imx1_init_irq(void)
{
void __iomem *avic_addr;
avic_addr = ioremap(MX1_AVIC_ADDR, SZ_4K);
WARN_ON(!avic_addr);
mxc_init_irq(avic_addr);
}
static const char * const imx1_dt_board_compat[] __initconst = {
"fsl,imx1",
NULL
@ -34,7 +24,6 @@ static const char * const imx1_dt_board_compat[] __initconst = {
DT_MACHINE_START(IMX1_DT, "Freescale i.MX1 (Device Tree Support)")
.init_early = imx1_init_early,
.init_irq = imx1_init_irq,
.dt_compat = imx1_dt_board_compat,
.restart = mxc_restart,
MACHINE_END

View File

@ -22,17 +22,6 @@ static void __init imx25_dt_init(void)
imx_aips_allow_unprivileged_access("fsl,imx25-aips");
}
static void __init mx25_init_irq(void)
{
struct device_node *np;
void __iomem *avic_base;
np = of_find_compatible_node(NULL, NULL, "fsl,avic");
avic_base = of_iomap(np, 0);
BUG_ON(!avic_base);
mxc_init_irq(avic_base);
}
static const char * const imx25_dt_board_compat[] __initconst = {
"fsl,imx25",
NULL
@ -42,6 +31,5 @@ DT_MACHINE_START(IMX25_DT, "Freescale i.MX25 (Device Tree Support)")
.init_early = imx25_init_early,
.init_machine = imx25_dt_init,
.init_late = imx25_pm_init,
.init_irq = mx25_init_irq,
.dt_compat = imx25_dt_board_compat,
MACHINE_END

View File

@ -56,17 +56,6 @@ static void __init imx27_init_early(void)
mxc_set_cpu_type(MXC_CPU_MX27);
}
static void __init mx27_init_irq(void)
{
void __iomem *avic_base;
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "fsl,avic");
avic_base = of_iomap(np, 0);
BUG_ON(!avic_base);
mxc_init_irq(avic_base);
}
static const char * const imx27_dt_board_compat[] __initconst = {
"fsl,imx27",
NULL
@ -75,7 +64,6 @@ static const char * const imx27_dt_board_compat[] __initconst = {
DT_MACHINE_START(IMX27_DT, "Freescale i.MX27 (Device Tree Support)")
.map_io = mx27_map_io,
.init_early = imx27_init_early,
.init_irq = mx27_init_irq,
.init_late = imx27_pm_init,
.dt_compat = imx27_dt_board_compat,
MACHINE_END

View File

@ -14,6 +14,5 @@ static const char * const imx31_dt_board_compat[] __initconst = {
DT_MACHINE_START(IMX31_DT, "Freescale i.MX31 (Device Tree Support)")
.map_io = mx31_map_io,
.init_early = imx31_init_early,
.init_irq = mx31_init_irq,
.dt_compat = imx31_dt_board_compat,
MACHINE_END

View File

@ -27,6 +27,5 @@ DT_MACHINE_START(IMX35_DT, "Freescale i.MX35 (Device Tree Support)")
.l2c_aux_mask = ~0,
.map_io = mx35_map_io,
.init_early = imx35_init_early,
.init_irq = mx35_init_irq,
.dt_compat = imx35_dt_board_compat,
MACHINE_END

View File

@ -109,18 +109,6 @@ void __init imx31_init_early(void)
mx3_ccm_base = of_iomap(np, 0);
BUG_ON(!mx3_ccm_base);
}
void __init mx31_init_irq(void)
{
void __iomem *avic_base;
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "fsl,imx31-avic");
avic_base = of_iomap(np, 0);
BUG_ON(!avic_base);
mxc_init_irq(avic_base);
}
#endif /* ifdef CONFIG_SOC_IMX31 */
#ifdef CONFIG_SOC_IMX35
@ -158,16 +146,4 @@ void __init imx35_init_early(void)
mx3_ccm_base = of_iomap(np, 0);
BUG_ON(!mx3_ccm_base);
}
void __init mx35_init_irq(void)
{
void __iomem *avic_base;
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "fsl,imx35-avic");
avic_base = of_iomap(np, 0);
BUG_ON(!avic_base);
mxc_init_irq(avic_base);
}
#endif /* ifdef CONFIG_SOC_IMX35 */

View File

@ -88,34 +88,26 @@ static void __init sr_set_nvalues(struct omap_volt_data *volt_data,
extern struct omap_sr_data omap_sr_pdata[];
static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
static int __init sr_init_by_name(const char *name, const char *voltdm)
{
struct omap_sr_data *sr_data = NULL;
struct omap_volt_data *volt_data;
struct omap_smartreflex_dev_attr *sr_dev_attr;
static int i;
if (!strncmp(oh->name, "smartreflex_mpu_iva", 20) ||
!strncmp(oh->name, "smartreflex_mpu", 16))
if (!strncmp(name, "smartreflex_mpu_iva", 20) ||
!strncmp(name, "smartreflex_mpu", 16))
sr_data = &omap_sr_pdata[OMAP_SR_MPU];
else if (!strncmp(oh->name, "smartreflex_core", 17))
else if (!strncmp(name, "smartreflex_core", 17))
sr_data = &omap_sr_pdata[OMAP_SR_CORE];
else if (!strncmp(oh->name, "smartreflex_iva", 16))
else if (!strncmp(name, "smartreflex_iva", 16))
sr_data = &omap_sr_pdata[OMAP_SR_IVA];
if (!sr_data) {
pr_err("%s: Unknown instance %s\n", __func__, oh->name);
pr_err("%s: Unknown instance %s\n", __func__, name);
return -EINVAL;
}
sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
__func__, oh->name);
goto exit;
}
sr_data->name = oh->name;
sr_data->name = name;
if (cpu_is_omap343x())
sr_data->ip_type = 1;
else
@ -136,10 +128,10 @@ static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
}
}
sr_data->voltdm = voltdm_lookup(sr_dev_attr->sensor_voltdm_name);
sr_data->voltdm = voltdm_lookup(voltdm);
if (!sr_data->voltdm) {
pr_err("%s: Unable to get voltage domain pointer for VDD %s\n",
__func__, sr_dev_attr->sensor_voltdm_name);
__func__, voltdm);
goto exit;
}
@ -160,6 +152,20 @@ exit:
return 0;
}
static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
{
struct omap_smartreflex_dev_attr *sr_dev_attr;
sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
__func__, oh->name);
return 0;
}
return sr_init_by_name(oh->name, sr_dev_attr->sensor_voltdm_name);
}
/*
* API to be called from board files to enable smartreflex
* autocompensation at init.
@ -169,7 +175,42 @@ void __init omap_enable_smartreflex_on_init(void)
sr_enable_on_init = true;
}
static const char * const omap4_sr_instances[] = {
"mpu",
"iva",
"core",
};
static const char * const dra7_sr_instances[] = {
"mpu",
"core",
};
int __init omap_devinit_smartreflex(void)
{
const char * const *sr_inst;
int i, nr_sr = 0;
if (soc_is_omap44xx()) {
sr_inst = omap4_sr_instances;
nr_sr = ARRAY_SIZE(omap4_sr_instances);
} else if (soc_is_dra7xx()) {
sr_inst = dra7_sr_instances;
nr_sr = ARRAY_SIZE(dra7_sr_instances);
}
if (nr_sr) {
const char *name, *voltdm;
for (i = 0; i < nr_sr; i++) {
name = kasprintf(GFP_KERNEL, "smartreflex_%s", sr_inst[i]);
voltdm = sr_inst[i];
sr_init_by_name(name, voltdm);
}
return 0;
}
return omap_hwmod_for_each_by_class("smartreflex", sr_dev_init, NULL);
}

View File

@ -810,6 +810,16 @@ config QCOM_FALKOR_ERRATUM_E1041
If unsure, say Y.
config NVIDIA_CARMEL_CNP_ERRATUM
bool "NVIDIA Carmel CNP: CNP on Carmel semantically different than ARM cores"
default y
help
If CNP is enabled on Carmel cores, non-sharable TLBIs on a core will not
invalidate shared TLB entries installed by a different core, as it would
on standard ARM cores.
If unsure, say Y.
config SOCIONEXT_SYNQUACER_PREITS
bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
default y

View File

@ -198,6 +198,7 @@
ranges = <0x0 0x00 0x1700000 0x100000>;
reg = <0x00 0x1700000 0x0 0x100000>;
interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
dma-coherent;
sec_jr0: jr@10000 {
compatible = "fsl,sec-v5.4-job-ring",

View File

@ -348,6 +348,7 @@
ranges = <0x0 0x00 0x1700000 0x100000>;
reg = <0x00 0x1700000 0x0 0x100000>;
interrupts = <0 75 0x4>;
dma-coherent;
sec_jr0: jr@10000 {
compatible = "fsl,sec-v5.4-job-ring",

View File

@ -354,6 +354,7 @@
ranges = <0x0 0x00 0x1700000 0x100000>;
reg = <0x00 0x1700000 0x0 0x100000>;
interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
dma-coherent;
sec_jr0: jr@10000 {
compatible = "fsl,sec-v5.4-job-ring",

View File

@ -35,7 +35,7 @@
&i2c2 {
clock-frequency = <400000>;
pinctrl-names = "default";
pinctrl-names = "default", "gpio";
pinctrl-0 = <&pinctrl_i2c2>;
pinctrl-1 = <&pinctrl_i2c2_gpio>;
sda-gpios = <&gpio5 17 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;

View File

@ -67,7 +67,7 @@
&i2c1 {
clock-frequency = <400000>;
pinctrl-names = "default";
pinctrl-names = "default", "gpio";
pinctrl-0 = <&pinctrl_i2c1>;
pinctrl-1 = <&pinctrl_i2c1_gpio>;
sda-gpios = <&gpio5 15 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;

View File

@ -37,7 +37,7 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
} while (--n > 0);
sum += ((sum >> 32) | (sum << 32));
return csum_fold((__force u32)(sum >> 32));
return csum_fold((__force __wsum)(sum >> 32));
}
#define ip_fast_csum ip_fast_csum

View File

@ -66,7 +66,8 @@
#define ARM64_WORKAROUND_1508412 58
#define ARM64_HAS_LDAPR 59
#define ARM64_KVM_PROTECTED_MODE 60
#define ARM64_WORKAROUND_NVIDIA_CARMEL_CNP 61
#define ARM64_NCAPS 61
#define ARM64_NCAPS 62
#endif /* __ASM_CPUCAPS_H */

View File

@ -251,6 +251,8 @@ unsigned long get_wchan(struct task_struct *p);
extern struct task_struct *cpu_switch_to(struct task_struct *prev,
struct task_struct *next);
asmlinkage void arm64_preempt_schedule_irq(void);
#define task_pt_regs(p) \
((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)

View File

@ -55,6 +55,8 @@ void arch_setup_new_exec(void);
#define arch_setup_new_exec arch_setup_new_exec
void arch_release_task_struct(struct task_struct *tsk);
int arch_dup_task_struct(struct task_struct *dst,
struct task_struct *src);
#endif

View File

@ -525,6 +525,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
0, 0,
1, 0),
},
#endif
#ifdef CONFIG_NVIDIA_CARMEL_CNP_ERRATUM
{
/* NVIDIA Carmel */
.desc = "NVIDIA Carmel CNP erratum",
.capability = ARM64_WORKAROUND_NVIDIA_CARMEL_CNP,
ERRATA_MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
},
#endif
{
}

View File

@ -1321,7 +1321,10 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
* may share TLB entries with a CPU stuck in the crashed
* kernel.
*/
if (is_kdump_kernel())
if (is_kdump_kernel())
return false;
if (cpus_have_const_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
return false;
return has_cpuid_feature(entry, scope);

View File

@ -353,7 +353,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
* with the CLIDR_EL1 fields to avoid triggering false warnings
* when there is a mismatch across the CPUs. Keep track of the
* effective value of the CTR_EL0 in our internal records for
* acurate sanity check and feature enablement.
* accurate sanity check and feature enablement.
*/
info->reg_ctr = read_cpuid_effective_cachetype();
info->reg_dczid = read_cpuid(DCZID_EL0);

View File

@ -64,5 +64,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
*ppos += count;
return count;
}

View File

@ -57,6 +57,8 @@
#include <asm/processor.h>
#include <asm/pointer_auth.h>
#include <asm/stacktrace.h>
#include <asm/switch_to.h>
#include <asm/system_misc.h>
#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
#include <linux/stackprotector.h>

View File

@ -194,8 +194,9 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
#ifdef CONFIG_STACKTRACE
void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
struct task_struct *task, struct pt_regs *regs)
noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
void *cookie, struct task_struct *task,
struct pt_regs *regs)
{
struct stackframe frame;
@ -203,8 +204,8 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
start_backtrace(&frame, regs->regs[29], regs->pc);
else if (task == current)
start_backtrace(&frame,
(unsigned long)__builtin_frame_address(0),
(unsigned long)arch_stack_walk);
(unsigned long)__builtin_frame_address(1),
(unsigned long)__builtin_return_address(0));
else
start_backtrace(&frame, thread_saved_fp(task),
thread_saved_pc(task));

View File

@ -1448,6 +1448,22 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
struct range arch_get_mappable_range(void)
{
struct range mhp_range;
u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
u64 end_linear_pa = __pa(PAGE_END - 1);
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
/*
* Check for a wrap, it is possible because of randomized linear
* mapping the start physical address is actually bigger than
* the end physical address. In this case set start to zero
* because [0, end_linear_pa] range must still be able to cover
* all addressable physical addresses.
*/
if (start_linear_pa > end_linear_pa)
start_linear_pa = 0;
}
WARN_ON(start_linear_pa > end_linear_pa);
/*
* Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
@ -1455,8 +1471,9 @@ struct range arch_get_mappable_range(void)
* range which can be mapped inside this linear mapping range, must
* also be derived from its end points.
*/
mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
mhp_range.end = __pa(PAGE_END - 1);
mhp_range.start = start_linear_pa;
mhp_range.end = end_linear_pa;
return mhp_range;
}

View File

@ -9,7 +9,7 @@ int arch_check_ftrace_location(struct kprobe *p)
return 0;
}
/* Ftrace callback handler for kprobes -- called under preepmt disabed */
/* Ftrace callback handler for kprobes -- called under preepmt disabled */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
{

View File

@ -59,7 +59,7 @@ show_##name(struct device *dev, struct device_attribute *attr, \
char *buf) \
{ \
u32 cpu=dev->id; \
return sprintf(buf, "%lx\n", name[cpu]); \
return sprintf(buf, "%llx\n", name[cpu]); \
}
#define store(name) \
@ -86,9 +86,9 @@ store_call_start(struct device *dev, struct device_attribute *attr,
#ifdef ERR_INJ_DEBUG
printk(KERN_DEBUG "pal_mc_err_inject for cpu%d:\n", cpu);
printk(KERN_DEBUG "err_type_info=%lx,\n", err_type_info[cpu]);
printk(KERN_DEBUG "err_struct_info=%lx,\n", err_struct_info[cpu]);
printk(KERN_DEBUG "err_data_buffer=%lx, %lx, %lx.\n",
printk(KERN_DEBUG "err_type_info=%llx,\n", err_type_info[cpu]);
printk(KERN_DEBUG "err_struct_info=%llx,\n", err_struct_info[cpu]);
printk(KERN_DEBUG "err_data_buffer=%llx, %llx, %llx.\n",
err_data_buffer[cpu].data1,
err_data_buffer[cpu].data2,
err_data_buffer[cpu].data3);
@ -117,8 +117,8 @@ store_call_start(struct device *dev, struct device_attribute *attr,
#ifdef ERR_INJ_DEBUG
printk(KERN_DEBUG "Returns: status=%d,\n", (int)status[cpu]);
printk(KERN_DEBUG "capabilities=%lx,\n", capabilities[cpu]);
printk(KERN_DEBUG "resources=%lx\n", resources[cpu]);
printk(KERN_DEBUG "capabilities=%llx,\n", capabilities[cpu]);
printk(KERN_DEBUG "resources=%llx\n", resources[cpu]);
#endif
return size;
}
@ -131,7 +131,7 @@ show_virtual_to_phys(struct device *dev, struct device_attribute *attr,
char *buf)
{
unsigned int cpu=dev->id;
return sprintf(buf, "%lx\n", phys_addr[cpu]);
return sprintf(buf, "%llx\n", phys_addr[cpu]);
}
static ssize_t
@ -145,7 +145,7 @@ store_virtual_to_phys(struct device *dev, struct device_attribute *attr,
ret = get_user_pages_fast(virt_addr, 1, FOLL_WRITE, NULL);
if (ret<=0) {
#ifdef ERR_INJ_DEBUG
printk("Virtual address %lx is not existing.\n",virt_addr);
printk("Virtual address %llx is not existing.\n", virt_addr);
#endif
return -EINVAL;
}
@ -163,7 +163,7 @@ show_err_data_buffer(struct device *dev,
{
unsigned int cpu=dev->id;
return sprintf(buf, "%lx, %lx, %lx\n",
return sprintf(buf, "%llx, %llx, %llx\n",
err_data_buffer[cpu].data1,
err_data_buffer[cpu].data2,
err_data_buffer[cpu].data3);
@ -178,13 +178,13 @@ store_err_data_buffer(struct device *dev,
int ret;
#ifdef ERR_INJ_DEBUG
printk("write err_data_buffer=[%lx,%lx,%lx] on cpu%d\n",
printk("write err_data_buffer=[%llx,%llx,%llx] on cpu%d\n",
err_data_buffer[cpu].data1,
err_data_buffer[cpu].data2,
err_data_buffer[cpu].data3,
cpu);
#endif
ret=sscanf(buf, "%lx, %lx, %lx",
ret = sscanf(buf, "%llx, %llx, %llx",
&err_data_buffer[cpu].data1,
&err_data_buffer[cpu].data2,
&err_data_buffer[cpu].data3);

View File

@ -1824,7 +1824,7 @@ ia64_mca_cpu_init(void *cpu_data)
data = mca_bootmem();
first_time = 0;
} else
data = (void *)__get_free_pages(GFP_KERNEL,
data = (void *)__get_free_pages(GFP_ATOMIC,
get_order(sz));
if (!data)
panic("Could not allocate MCA memory for cpu %d\n",

View File

@ -176,7 +176,7 @@ SECTIONS
.fill : {
FILL(0);
BYTE(0);
. = ALIGN(8);
STRUCT_ALIGN();
}
__appended_dtb = .;
/* leave space for appended DTB */

View File

@ -7,7 +7,7 @@
#include <linux/bug.h>
#include <asm/cputable.h>
static inline bool early_cpu_has_feature(unsigned long feature)
static __always_inline bool early_cpu_has_feature(unsigned long feature)
{
return !!((CPU_FTRS_ALWAYS & feature) ||
(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long feature)
return static_branch_likely(&cpu_feature_keys[i]);
}
#else
static inline bool cpu_has_feature(unsigned long feature)
static __always_inline bool cpu_has_feature(unsigned long feature)
{
return early_cpu_has_feature(feature);
}

View File

@ -65,3 +65,14 @@ V_FUNCTION_END(__kernel_clock_getres)
V_FUNCTION_BEGIN(__kernel_time)
cvdso_call_time __c_kernel_time
V_FUNCTION_END(__kernel_time)
/* Routines for restoring integer registers, called by the compiler. */
/* Called with r11 pointing to the stack header word of the caller of the */
/* function, just beyond the end of the integer restore area. */
_GLOBAL(_restgpr_31_x)
_GLOBAL(_rest32gpr_31_x)
lwz r0,4(r11)
lwz r31,-4(r11)
mtlr r0
mr r1,r11
blr

View File

@ -93,7 +93,6 @@ config RISCV
select PCI_MSI if PCI
select RISCV_INTC
select RISCV_TIMER if RISCV_SBI
select SPARSEMEM_STATIC if 32BIT
select SPARSE_IRQ
select SYSCTL_EXCEPTION_TRACE
select THREAD_INFO_IN_TASK
@ -154,7 +153,8 @@ config ARCH_FLATMEM_ENABLE
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on MMU
select SPARSEMEM_VMEMMAP_ENABLE
select SPARSEMEM_STATIC if 32BIT && SPARSEMEM
select SPARSEMEM_VMEMMAP_ENABLE if 64BIT
config ARCH_SELECT_MEMORY_MODEL
def_bool ARCH_SPARSEMEM_ENABLE

View File

@ -31,6 +31,8 @@ config SOC_CANAAN
select SIFIVE_PLIC
select ARCH_HAS_RESET_CONTROLLER
select PINCTRL
select COMMON_CLK
select COMMON_CLK_K210
help
This enables support for Canaan Kendryte K210 SoC platform hardware.

View File

@ -9,4 +9,20 @@ long long __lshrti3(long long a, int b);
long long __ashrti3(long long a, int b);
long long __ashlti3(long long a, int b);
#define DECLARE_DO_ERROR_INFO(name) asmlinkage void name(struct pt_regs *regs)
DECLARE_DO_ERROR_INFO(do_trap_unknown);
DECLARE_DO_ERROR_INFO(do_trap_insn_misaligned);
DECLARE_DO_ERROR_INFO(do_trap_insn_fault);
DECLARE_DO_ERROR_INFO(do_trap_insn_illegal);
DECLARE_DO_ERROR_INFO(do_trap_load_fault);
DECLARE_DO_ERROR_INFO(do_trap_load_misaligned);
DECLARE_DO_ERROR_INFO(do_trap_store_misaligned);
DECLARE_DO_ERROR_INFO(do_trap_store_fault);
DECLARE_DO_ERROR_INFO(do_trap_ecall_u);
DECLARE_DO_ERROR_INFO(do_trap_ecall_s);
DECLARE_DO_ERROR_INFO(do_trap_ecall_m);
DECLARE_DO_ERROR_INFO(do_trap_break);
#endif /* _ASM_RISCV_PROTOTYPES_H */

View File

@ -12,4 +12,6 @@
#include <asm-generic/irq.h>
extern void __init init_IRQ(void);
#endif /* _ASM_RISCV_IRQ_H */

View File

@ -71,6 +71,7 @@ int riscv_of_processor_hartid(struct device_node *node);
int riscv_of_parent_hartid(struct device_node *node);
extern void riscv_fill_hwcap(void);
extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
#endif /* __ASSEMBLY__ */

View File

@ -119,6 +119,11 @@ extern int regs_query_register_offset(const char *name);
extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
unsigned int n);
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
unsigned long frame_pointer);
int do_syscall_trace_enter(struct pt_regs *regs);
void do_syscall_trace_exit(struct pt_regs *regs);
/**
* regs_get_register() - get register value from its offset
* @regs: pt_regs from which register value is gotten

View File

@ -51,10 +51,10 @@ enum sbi_ext_rfence_fid {
SBI_EXT_RFENCE_REMOTE_FENCE_I = 0,
SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID,
SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA,
SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID,
SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA,
};
enum sbi_ext_hsm_fid {

View File

@ -88,4 +88,6 @@ static inline int read_current_timer(unsigned long *timer_val)
return 0;
}
extern void time_init(void);
#endif /* _ASM_RISCV_TIMEX_H */

View File

@ -8,6 +8,7 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE)
endif
CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,)
extra-y += head.o
extra-y += vmlinux.lds

View File

@ -2,39 +2,41 @@
#include <linux/kprobes.h>
/* Ftrace callback handler for kprobes -- called under preepmt disabed */
/* Ftrace callback handler for kprobes -- called under preepmt disabled */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *regs)
struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
struct kprobe *p;
struct pt_regs *regs;
struct kprobe_ctlblk *kcb;
p = get_kprobe((kprobe_opcode_t *)ip);
if (unlikely(!p) || kprobe_disabled(p))
return;
regs = ftrace_get_regs(fregs);
kcb = get_kprobe_ctlblk();
if (kprobe_running()) {
kprobes_inc_nmissed_count(p);
} else {
unsigned long orig_ip = instruction_pointer(&(regs->regs));
unsigned long orig_ip = instruction_pointer(regs);
instruction_pointer_set(&(regs->regs), ip);
instruction_pointer_set(regs, ip);
__this_cpu_write(current_kprobe, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
if (!p->pre_handler || !p->pre_handler(p, &(regs->regs))) {
if (!p->pre_handler || !p->pre_handler(p, regs)) {
/*
* Emulate singlestep (and also recover regs->pc)
* as if there is a nop
*/
instruction_pointer_set(&(regs->regs),
instruction_pointer_set(regs,
(unsigned long)p->addr + MCOUNT_INSN_SIZE);
if (unlikely(p->post_handler)) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, &(regs->regs), 0);
p->post_handler(p, regs, 0);
}
instruction_pointer_set(&(regs->regs), orig_ip);
instruction_pointer_set(regs, orig_ip);
}
/*

View File

@ -256,8 +256,7 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
* normal page fault.
*/
regs->epc = (unsigned long) cur->addr;
if (!instruction_pointer(regs))
BUG();
BUG_ON(!instruction_pointer(regs));
if (kcb->kprobe_status == KPROBE_REENTER)
restore_previous_kprobe(kcb);

View File

@ -10,6 +10,7 @@
#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/sched/task_stack.h>
#include <linux/tick.h>
#include <linux/ptrace.h>

View File

@ -116,7 +116,7 @@ void sbi_clear_ipi(void)
EXPORT_SYMBOL(sbi_clear_ipi);
/**
* sbi_set_timer_v01() - Program the timer for next timer event.
* __sbi_set_timer_v01() - Program the timer for next timer event.
* @stime_value: The value after which next timer event should fire.
*
* Return: None

View File

@ -147,7 +147,8 @@ static void __init init_resources(void)
bss_res.end = __pa_symbol(__bss_stop) - 1;
bss_res.flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
mem_res_sz = (memblock.memory.cnt + memblock.reserved.cnt) * sizeof(*mem_res);
/* + 1 as memblock_alloc() might increase memblock.reserved.cnt */
mem_res_sz = (memblock.memory.cnt + memblock.reserved.cnt + 1) * sizeof(*mem_res);
mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES);
if (!mem_res)
panic("%s: Failed to allocate %zu bytes\n", __func__, mem_res_sz);

View File

@ -9,6 +9,7 @@
#include <linux/delay.h>
#include <asm/sbi.h>
#include <asm/processor.h>
#include <asm/timex.h>
unsigned long riscv_timebase;
EXPORT_SYMBOL_GPL(riscv_timebase);

View File

@ -17,6 +17,7 @@
#include <linux/module.h>
#include <linux/irq.h>
#include <asm/asm-prototypes.h>
#include <asm/bug.h>
#include <asm/processor.h>
#include <asm/ptrace.h>

View File

@ -155,7 +155,7 @@ static void __init kasan_populate(void *start, void *end)
memset(start, KASAN_SHADOW_INIT, end - start);
}
void __init kasan_shallow_populate(void *start, void *end)
static void __init kasan_shallow_populate(void *start, void *end)
{
unsigned long vaddr = (unsigned long)start & PAGE_MASK;
unsigned long vend = PAGE_ALIGN((unsigned long)end);
@ -187,6 +187,8 @@ void __init kasan_shallow_populate(void *start, void *end)
}
vaddr += PAGE_SIZE;
}
local_flush_tlb_all();
}
void __init kasan_init(void)

View File

@ -202,7 +202,7 @@ extern unsigned int s390_pci_no_rid;
----------------------------------------------------------------------------- */
/* Base stuff */
int zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
void zpci_remove_device(struct zpci_dev *zdev);
void zpci_remove_device(struct zpci_dev *zdev, bool set_error);
int zpci_enable_device(struct zpci_dev *);
int zpci_disable_device(struct zpci_dev *);
int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);

View File

@ -968,7 +968,7 @@ static int cf_diag_all_start(void)
*/
static size_t cf_diag_needspace(unsigned int sets)
{
struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events);
struct cpu_cf_events *cpuhw = get_cpu_ptr(&cpu_cf_events);
size_t bytes = 0;
int i;
@ -984,6 +984,7 @@ static size_t cf_diag_needspace(unsigned int sets)
sizeof(((struct s390_ctrset_cpudata *)0)->no_sets));
debug_sprintf_event(cf_diag_dbg, 5, "%s bytes %ld\n", __func__,
bytes);
put_cpu_ptr(&cpu_cf_events);
return bytes;
}
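The hunk above swaps this_cpu_ptr() for the get_cpu_ptr()/put_cpu_ptr() pair,
which keeps preemption disabled while the per-CPU data is in use so the task
cannot migrate to another CPU mid-access. The general pattern, as a sketch
with a hypothetical per-CPU variable:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(u64, foo_counter);        /* hypothetical */

    static u64 foo_read_local(void)
    {
            u64 *p = get_cpu_ptr(&foo_counter);     /* disables preemption */
            u64 val = *p;

            put_cpu_ptr(&foo_counter);              /* re-enables preemption */
            return val;
    }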

View File

@ -214,7 +214,7 @@ void vtime_flush(struct task_struct *tsk)
avg_steal = S390_lowcore.avg_steal_timer / 2;
if ((s64) steal > 0) {
S390_lowcore.steal_timer = 0;
account_steal_time(steal);
account_steal_time(cputime_to_nsecs(steal));
avg_steal += steal;
}
S390_lowcore.avg_steal_timer = avg_steal;

View File

@ -682,16 +682,36 @@ int zpci_disable_device(struct zpci_dev *zdev)
}
EXPORT_SYMBOL_GPL(zpci_disable_device);
void zpci_remove_device(struct zpci_dev *zdev)
/**
* zpci_remove_device - Removes the given zdev from the PCI core
* @zdev: the zdev to be removed from the PCI core
* @set_error: if true the device's error state is set to permanent failure
*
* Sets a zPCI device to a configured but offline state; the zPCI
* device is still accessible through its hotplug slot and the zPCI
* API but is removed from the common code PCI bus, making it
* no longer available to drivers.
*/
void zpci_remove_device(struct zpci_dev *zdev, bool set_error)
{
struct zpci_bus *zbus = zdev->zbus;
struct pci_dev *pdev;
if (!zdev->zbus->bus)
return;
pdev = pci_get_slot(zbus->bus, zdev->devfn);
if (pdev) {
if (pdev->is_virtfn)
return zpci_iov_remove_virtfn(pdev, zdev->vfn);
if (set_error)
pdev->error_state = pci_channel_io_perm_failure;
if (pdev->is_virtfn) {
zpci_iov_remove_virtfn(pdev, zdev->vfn);
/* balance pci_get_slot */
pci_dev_put(pdev);
return;
}
pci_stop_and_remove_bus_device_locked(pdev);
/* balance pci_get_slot */
pci_dev_put(pdev);
}
}
@ -765,7 +785,7 @@ void zpci_release_device(struct kref *kref)
struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
if (zdev->zbus->bus)
zpci_remove_device(zdev);
zpci_remove_device(zdev, false);
switch (zdev->state) {
case ZPCI_FN_STATE_ONLINE:

View File

@ -76,13 +76,10 @@ void zpci_event_error(void *data)
static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
{
struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
struct pci_dev *pdev = NULL;
enum zpci_state state;
struct pci_dev *pdev;
int ret;
if (zdev && zdev->zbus->bus)
pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
zpci_err("avail CCDF:\n");
zpci_err_hex(ccdf, sizeof(*ccdf));
@ -124,8 +121,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
case 0x0303: /* Deconfiguration requested */
if (!zdev)
break;
if (pdev)
zpci_remove_device(zdev);
zpci_remove_device(zdev, false);
ret = zpci_disable_device(zdev);
if (ret)
@ -140,12 +136,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
case 0x0304: /* Configured -> Standby|Reserved */
if (!zdev)
break;
if (pdev) {
/* Give the driver a hint that the function is
* already unusable. */
pdev->error_state = pci_channel_io_perm_failure;
zpci_remove_device(zdev);
}
/* Give the driver a hint that the function is
* already unusable.
*/
zpci_remove_device(zdev, true);
zdev->fh = ccdf->fh;
zpci_disable_device(zdev);

View File

@ -27,7 +27,7 @@ endif
REALMODE_CFLAGS := -m16 -g -Os -DDISABLE_BRANCH_PROFILING \
-Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
-fno-strict-aliasing -fomit-frame-pointer -fno-pic \
-mno-mmx -mno-sse
-mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
REALMODE_CFLAGS += -ffreestanding
REALMODE_CFLAGS += -fno-stack-protector

View File

@ -3659,6 +3659,9 @@ static int intel_pmu_hw_config(struct perf_event *event)
return ret;
if (event->attr.precise_ip) {
if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
return -EINVAL;
if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
if (!(event->attr.sample_type &

View File

@ -2009,7 +2009,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
*/
if (!pebs_status && cpuc->pebs_enabled &&
!(cpuc->pebs_enabled & (cpuc->pebs_enabled-1)))
pebs_status = cpuc->pebs_enabled;
pebs_status = p->status = cpuc->pebs_enabled;
bit = find_first_bit((unsigned long *)&pebs_status,
x86_pmu.max_pebs_events);

View File

@ -884,12 +884,29 @@ struct kvm_hv_syndbg {
u64 options;
};
/* Current state of Hyper-V TSC page clocksource */
enum hv_tsc_page_status {
/* TSC page was not set up or disabled */
HV_TSC_PAGE_UNSET = 0,
/* TSC page MSR was written by the guest, update pending */
HV_TSC_PAGE_GUEST_CHANGED,
/* TSC page MSR was written by KVM userspace, update pending */
HV_TSC_PAGE_HOST_CHANGED,
/* TSC page was properly set up and is currently active */
HV_TSC_PAGE_SET,
/* TSC page is currently being updated and therefore is inactive */
HV_TSC_PAGE_UPDATING,
/* TSC page was set up with an inaccessible GPA */
HV_TSC_PAGE_BROKEN,
};
/* Hyper-V emulation context */
struct kvm_hv {
struct mutex hv_lock;
u64 hv_guest_os_id;
u64 hv_hypercall;
u64 hv_tsc_page;
enum hv_tsc_page_status hv_tsc_page_status;
/* Hyper-v based guest crash (NT kernel bugcheck) parameters */
u64 hv_crash_param[HV_X64_MSR_CRASH_PARAMS];
@ -931,6 +948,12 @@ enum kvm_irqchip_mode {
KVM_IRQCHIP_SPLIT, /* created with KVM_CAP_SPLIT_IRQCHIP */
};
struct kvm_x86_msr_filter {
u8 count;
bool default_allow:1;
struct msr_bitmap_range ranges[16];
};
#define APICV_INHIBIT_REASON_DISABLE 0
#define APICV_INHIBIT_REASON_HYPERV 1
#define APICV_INHIBIT_REASON_NESTED 2
@ -1025,16 +1048,11 @@ struct kvm_arch {
bool guest_can_read_msr_platform_info;
bool exception_payload_enabled;
bool bus_lock_detection_enabled;
/* Deflect RDMSR and WRMSR to user space when they trigger a #GP */
u32 user_space_msr_mask;
struct {
u8 count;
bool default_allow:1;
struct msr_bitmap_range ranges[16];
} msr_filter;
bool bus_lock_detection_enabled;
struct kvm_x86_msr_filter __rcu *msr_filter;
struct kvm_pmu_event_filter __rcu *pmu_event_filter;
struct task_struct *nx_lpage_recovery_thread;
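The change above turns the embedded filter into an RCU-managed pointer, so a
new filter can be published wholesale while vCPUs are still reading the old
one. A sketch of the reader side this enables (filter_matches() is a
hypothetical helper; kvm->srcu and the srcu primitives are existing kernel
APIs):

    static bool msr_access_allowed(struct kvm *kvm, u32 msr)
    {
            struct kvm_x86_msr_filter *filter;
            bool allowed = true;    /* assumes default-allow when unset */
            int idx;

            idx = srcu_read_lock(&kvm->srcu);
            filter = srcu_dereference(kvm->arch.msr_filter, &kvm->srcu);
            if (filter)
                    allowed = filter_matches(filter, msr);
            srcu_read_unlock(&kvm->srcu, idx);

            return allowed;
    }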

View File

@ -544,15 +544,6 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset,
*size = fpu_kernel_xstate_size;
}
/*
* Thread-synchronous status.
*
* This is different from the flags in that nobody else
* ever touches our thread-synchronous status, so we don't
* have to worry about atomic accesses.
*/
#define TS_COMPAT 0x0002 /* 32bit syscall active (64BIT)*/
static inline void
native_load_sp0(unsigned long sp0)
{

View File

@ -205,10 +205,23 @@ static inline int arch_within_stack_frames(const void * const stack,
#endif
/*
* Thread-synchronous status.
*
* This is different from the flags in that nobody else
* ever touches our thread-synchronous status, so we don't
* have to worry about atomic accesses.
*/
#define TS_COMPAT 0x0002 /* 32bit syscall active (64BIT)*/
#ifndef __ASSEMBLY__
#ifdef CONFIG_COMPAT
#define TS_I386_REGS_POKED 0x0004 /* regs poked by 32-bit ptracer */
#define arch_set_restart_data(restart) \
do { restart->arch_data = current_thread_info()->status; } while (0)
#endif
#ifndef __ASSEMBLY__
#ifdef CONFIG_X86_32
#define in_ia32_syscall() true

View File

@ -86,18 +86,6 @@ clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
}
#endif
/*
* The maximum amount of extra memory compared to the base size. The
* main scaling factor is the size of struct page. At extreme ratios
* of base:extra, all the base memory can be filled with page
* structures for the extra memory, leaving no space for anything
* else.
*
* 10x seems like a reasonable balance between scaling flexibility and
* leaving a practically usable system.
*/
#define XEN_EXTRA_MEM_RATIO (10)
/*
* Helper functions to write or read unsigned long values to/from
* memory, when the access may fault.

View File

@ -2342,6 +2342,11 @@ static int cpuid_to_apicid[] = {
[0 ... NR_CPUS - 1] = -1,
};
bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
{
return phys_id == cpuid_to_apicid[cpu];
}
#ifdef CONFIG_SMP
/**
* apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread

View File

@ -1032,6 +1032,16 @@ static int mp_map_pin_to_irq(u32 gsi, int idx, int ioapic, int pin,
if (idx >= 0 && test_bit(mp_irqs[idx].srcbus, mp_bus_not_pci)) {
irq = mp_irqs[idx].srcbusirq;
legacy = mp_is_legacy_irq(irq);
/*
* IRQ2 is unusable for historical reasons on systems which
* have a legacy PIC. See the comment vs. IRQ2 further down.
*
* If this gets removed at some point then the related code
* in lapic_assign_system_vectors() needs to be adjusted as
* well.
*/
if (legacy && irq == PIC_CASCADE_IR)
return -EINVAL;
}
mutex_lock(&ioapic_mutex);

View File

@ -12,7 +12,7 @@
#include "common.h"
/* Ftrace callback handler for kprobes -- called under preepmt disabed */
/* Ftrace callback handler for kprobes -- called under preepmt disabled */
void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
{

View File

@ -836,28 +836,25 @@ static void kvm_kick_cpu(int cpu)
static void kvm_wait(u8 *ptr, u8 val)
{
unsigned long flags;
if (in_nmi())
return;
local_irq_save(flags);
if (READ_ONCE(*ptr) != val)
goto out;
/*
* halt until it's our turn and kicked. Note that we do safe halt
* for irq enabled case to avoid hang when lock info is overwritten
* in irq spinlock slowpath and no spurious interrupt occur to save us.
*/
if (arch_irqs_disabled_flags(flags))
halt();
else
safe_halt();
if (irqs_disabled()) {
if (READ_ONCE(*ptr) == val)
halt();
} else {
local_irq_disable();
out:
local_irq_restore(flags);
if (READ_ONCE(*ptr) == val)
safe_halt();
local_irq_enable();
}
}
#ifdef CONFIG_X86_32

View File

@ -766,30 +766,8 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs)
static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
{
/*
* This function is fundamentally broken as currently
* implemented.
*
* The idea is that we want to trigger a call to the
* restart_block() syscall and that we want in_ia32_syscall(),
* in_x32_syscall(), etc. to match whatever they were in the
* syscall being restarted. We assume that the syscall
* instruction at (regs->ip - 2) matches whatever syscall
* instruction we used to enter in the first place.
*
* The problem is that we can get here when ptrace pokes
* syscall-like values into regs even if we're not in a syscall
* at all.
*
* For now, we maintain historical behavior and guess based on
* stored state. We could do better by saving the actual
* syscall arch in restart_block or (with caveats on x32) by
* checking if regs->ip points to 'int $0x80'. The current
* behavior is incorrect if a tracer has a different bitness
* than the tracee.
*/
#ifdef CONFIG_IA32_EMULATION
if (current_thread_info()->status & (TS_COMPAT|TS_I386_REGS_POKED))
if (current->restart_block.arch_data & TS_COMPAT)
return __NR_ia32_restart_syscall;
#endif
#ifdef CONFIG_X86_X32_ABI

View File

@ -520,10 +520,10 @@ static u64 get_time_ref_counter(struct kvm *kvm)
u64 tsc;
/*
* The guest has not set up the TSC page or the clock isn't
* stable, fall back to get_kvmclock_ns.
* Fall back to get_kvmclock_ns() when TSC page hasn't been set up,
* is broken, disabled or being updated.
*/
if (!hv->tsc_ref.tsc_sequence)
if (hv->hv_tsc_page_status != HV_TSC_PAGE_SET)
return div_u64(get_kvmclock_ns(kvm), 100);
vcpu = kvm_get_vcpu(kvm, 0);
@ -1077,6 +1077,21 @@ static bool compute_tsc_page_parameters(struct pvclock_vcpu_time_info *hv_clock,
return true;
}
/*
* Don't touch TSC page values if the guest has opted for TSC emulation after
* migration. KVM doesn't fully support reenlightenment notifications and TSC
* access emulation and Hyper-V is known to expect the values in TSC page to
* stay constant before TSC access emulation is disabled from guest side
* (HV_X64_MSR_TSC_EMULATION_STATUS). KVM userspace is expected to preserve TSC
* frequency and guest visible TSC value across migration (and prevent it when
* TSC scaling is unsupported).
*/
static inline bool tsc_page_update_unsafe(struct kvm_hv *hv)
{
return (hv->hv_tsc_page_status != HV_TSC_PAGE_GUEST_CHANGED) &&
hv->hv_tsc_emulation_control;
}
void kvm_hv_setup_tsc_page(struct kvm *kvm,
struct pvclock_vcpu_time_info *hv_clock)
{
@ -1087,7 +1102,8 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
BUILD_BUG_ON(sizeof(tsc_seq) != sizeof(hv->tsc_ref.tsc_sequence));
BUILD_BUG_ON(offsetof(struct ms_hyperv_tsc_page, tsc_sequence) != 0);
if (!(hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE))
if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN ||
hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET)
return;
mutex_lock(&hv->hv_lock);
@ -1101,7 +1117,15 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
*/
if (unlikely(kvm_read_guest(kvm, gfn_to_gpa(gfn),
&tsc_seq, sizeof(tsc_seq))))
goto out_err;
if (tsc_seq && tsc_page_update_unsafe(hv)) {
if (kvm_read_guest(kvm, gfn_to_gpa(gfn), &hv->tsc_ref, sizeof(hv->tsc_ref)))
goto out_err;
hv->hv_tsc_page_status = HV_TSC_PAGE_SET;
goto out_unlock;
}
/*
* While we're computing and writing the parameters, force the
@ -1110,15 +1134,15 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
hv->tsc_ref.tsc_sequence = 0;
if (kvm_write_guest(kvm, gfn_to_gpa(gfn),
&hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence)))
goto out_unlock;
goto out_err;
if (!compute_tsc_page_parameters(hv_clock, &hv->tsc_ref))
goto out_unlock;
goto out_err;
/* Ensure sequence is zero before writing the rest of the struct. */
smp_wmb();
if (kvm_write_guest(kvm, gfn_to_gpa(gfn), &hv->tsc_ref, sizeof(hv->tsc_ref)))
goto out_unlock;
goto out_err;
/*
* Now switch to the TSC page mechanism by writing the sequence.
@ -1131,8 +1155,45 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
smp_wmb();
hv->tsc_ref.tsc_sequence = tsc_seq;
kvm_write_guest(kvm, gfn_to_gpa(gfn),
&hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence));
if (kvm_write_guest(kvm, gfn_to_gpa(gfn),
&hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence)))
goto out_err;
hv->hv_tsc_page_status = HV_TSC_PAGE_SET;
goto out_unlock;
out_err:
hv->hv_tsc_page_status = HV_TSC_PAGE_BROKEN;
out_unlock:
mutex_unlock(&hv->hv_lock);
}
void kvm_hv_invalidate_tsc_page(struct kvm *kvm)
{
struct kvm_hv *hv = to_kvm_hv(kvm);
u64 gfn;
if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN ||
hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET ||
tsc_page_update_unsafe(hv))
return;
mutex_lock(&hv->hv_lock);
if (!(hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE))
goto out_unlock;
/* Preserve HV_TSC_PAGE_GUEST_CHANGED/HV_TSC_PAGE_HOST_CHANGED states */
if (hv->hv_tsc_page_status == HV_TSC_PAGE_SET)
hv->hv_tsc_page_status = HV_TSC_PAGE_UPDATING;
gfn = hv->hv_tsc_page >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;
hv->tsc_ref.tsc_sequence = 0;
if (kvm_write_guest(kvm, gfn_to_gpa(gfn),
&hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence)))
hv->hv_tsc_page_status = HV_TSC_PAGE_BROKEN;
out_unlock:
mutex_unlock(&hv->hv_lock);
}
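
Both functions above follow the reference page's sequence protocol: zero the sequence, update the body, smp_wmb(), then publish a fresh sequence, while readers bail out on a zero or changed sequence. A compact C11 rendering of that protocol (field names are placeholders for the real ms_hyperv_tsc_page layout):

#include <stdatomic.h>
#include <stdint.h>

struct tsc_page {
	_Atomic uint32_t sequence;	/* 0 means: contents are invalid */
	_Atomic uint64_t scale;
	_Atomic int64_t offset;
};

/* Writer side, mirroring kvm_hv_setup_tsc_page(). */
static void tsc_page_publish(struct tsc_page *p, uint64_t scale,
			     int64_t offset, uint32_t new_seq)
{
	atomic_store_explicit(&p->sequence, 0, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* ~smp_wmb() */
	atomic_store_explicit(&p->scale, scale, memory_order_relaxed);
	atomic_store_explicit(&p->offset, offset, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* ~smp_wmb() */
	atomic_store_explicit(&p->sequence, new_seq, memory_order_relaxed);
}

/* Reader side: fail (and let the caller retry or fall back) when the
 * sequence is zero or changed across the body read. */
static int tsc_page_read(struct tsc_page *p, uint64_t *scale, int64_t *offset)
{
	uint32_t seq = atomic_load_explicit(&p->sequence, memory_order_acquire);
	uint64_t s;
	int64_t o;

	if (seq == 0)
		return -1;
	s = atomic_load_explicit(&p->scale, memory_order_relaxed);
	o = atomic_load_explicit(&p->offset, memory_order_relaxed);
	atomic_thread_fence(memory_order_acquire);	/* ~smp_rmb() */
	if (atomic_load_explicit(&p->sequence, memory_order_relaxed) != seq)
		return -1;
	*scale = s;
	*offset = o;
	return 0;
}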
@ -1193,8 +1254,15 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
}
case HV_X64_MSR_REFERENCE_TSC:
hv->hv_tsc_page = data;
if (hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE)
if (hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE) {
if (!host)
hv->hv_tsc_page_status = HV_TSC_PAGE_GUEST_CHANGED;
else
hv->hv_tsc_page_status = HV_TSC_PAGE_HOST_CHANGED;
kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
} else {
hv->hv_tsc_page_status = HV_TSC_PAGE_UNSET;
}
break;
case HV_X64_MSR_CRASH_P0 ... HV_X64_MSR_CRASH_P4:
return kvm_hv_msr_set_crash_data(kvm,
@ -1229,6 +1297,9 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
hv->hv_tsc_emulation_control = data;
break;
case HV_X64_MSR_TSC_EMULATION_STATUS:
if (data && !host)
return 1;
hv->hv_tsc_emulation_status = data;
break;
case HV_X64_MSR_TIME_REF_COUNT:

View File

@ -133,6 +133,7 @@ void kvm_hv_process_stimers(struct kvm_vcpu *vcpu);
void kvm_hv_setup_tsc_page(struct kvm *kvm,
struct pvclock_vcpu_time_info *hv_clock);
void kvm_hv_invalidate_tsc_page(struct kvm *kvm);
void kvm_hv_init_vm(struct kvm *kvm);
void kvm_hv_destroy_vm(struct kvm *kvm);

View File

@ -78,6 +78,11 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
return to_shadow_page(__pa(sptep));
}
static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
{
return sp->role.smm ? 1 : 0;
}
static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
{
/*

View File

@ -20,6 +20,21 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
return gfn & -KVM_PAGES_PER_HPAGE(level);
}
/*
* Return the TDP iterator to the root PT and allow it to continue its
* traversal over the paging structure from there.
*/
void tdp_iter_restart(struct tdp_iter *iter)
{
iter->yielded_gfn = iter->next_last_level_gfn;
iter->level = iter->root_level;
iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
tdp_iter_refresh_sptep(iter);
iter->valid = true;
}
/*
* Sets a TDP iterator to walk a pre-order traversal of the paging structure
* rooted at root_pt, starting with the walk to translate next_last_level_gfn.
@ -31,16 +46,12 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
iter->next_last_level_gfn = next_last_level_gfn;
iter->yielded_gfn = iter->next_last_level_gfn;
iter->root_level = root_level;
iter->min_level = min_level;
iter->level = root_level;
iter->pt_path[iter->level - 1] = (tdp_ptep_t)root_pt;
iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root_pt;
iter->as_id = kvm_mmu_page_as_id(sptep_to_sp(root_pt));
iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
tdp_iter_refresh_sptep(iter);
iter->valid = true;
tdp_iter_restart(iter);
}
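
The refactor makes tdp_iter_start() record the walk parameters once and delegate the rewind to tdp_iter_restart(), so tdp_mmu_iter_cond_resched() can rewind without re-supplying them. A toy fixed-depth iterator showing the same split (names and the 9-bits-per-level math are illustrative):

struct toy_iter {
	unsigned long next_gfn;		/* where the walk should resume */
	int root_level;			/* recorded once by start()     */
	int level;
	unsigned long *pt_path[5];
	unsigned long gfn;
	int valid;
};

/* Rewind to the root without touching the recorded parameters. */
static void toy_iter_restart(struct toy_iter *it)
{
	it->level = it->root_level;
	it->gfn = it->next_gfn & ~((1UL << (9 * (it->level - 1))) - 1);
	it->valid = 1;
}

/* Record the parameters, then reuse restart() for the actual rewind. */
static void toy_iter_start(struct toy_iter *it, unsigned long *root,
			   int root_level, unsigned long next_gfn)
{
	it->next_gfn = next_gfn;
	it->root_level = root_level;
	it->pt_path[root_level - 1] = root;
	toy_iter_restart(it);
}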
/*
@ -159,8 +170,3 @@ void tdp_iter_next(struct tdp_iter *iter)
iter->valid = false;
}
tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter)
{
return iter->pt_path[iter->root_level - 1];
}

View File

@ -36,6 +36,8 @@ struct tdp_iter {
int min_level;
/* The iterator's current level within the paging structure */
int level;
/* The address space ID, i.e. SMM vs. regular. */
int as_id;
/* A snapshot of the value at sptep */
u64 old_spte;
/*
@ -62,6 +64,6 @@ tdp_ptep_t spte_to_child_pt(u64 pte, int level);
void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
int min_level, gfn_t next_last_level_gfn);
void tdp_iter_next(struct tdp_iter *iter);
tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter);
void tdp_iter_restart(struct tdp_iter *iter);
#endif /* __KVM_X86_MMU_TDP_ITER_H */

View File

@ -203,11 +203,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
u64 old_spte, u64 new_spte, int level,
bool shared);
static int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
{
return sp->role.smm ? 1 : 0;
}
static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
{
bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
@ -301,11 +296,16 @@ static void tdp_mmu_unlink_page(struct kvm *kvm, struct kvm_mmu_page *sp,
*
* Given a page table that has been removed from the TDP paging structure,
* iterates through the page table to clear SPTEs and free child page tables.
*
* Note that pt is passed in as a tdp_ptep_t, but it does not need RCU
* protection. Since this thread removed it from the paging structure,
* this thread will be responsible for ensuring the page is freed. Hence the
* early rcu_dereferences in the function.
*/
static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
bool shared)
{
struct kvm_mmu_page *sp = sptep_to_sp(pt);
struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
int level = sp->role.level;
gfn_t base_gfn = sp->gfn;
u64 old_child_spte;
@ -318,7 +318,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
tdp_mmu_unlink_page(kvm, sp, shared);
for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
sptep = pt + i;
sptep = rcu_dereference(pt) + i;
gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
if (shared) {
@ -492,10 +492,6 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter,
u64 new_spte)
{
u64 *root_pt = tdp_iter_root_pt(iter);
struct kvm_mmu_page *root = sptep_to_sp(root_pt);
int as_id = kvm_mmu_page_as_id(root);
lockdep_assert_held_read(&kvm->mmu_lock);
/*
@ -509,8 +505,8 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
new_spte) != iter->old_spte)
return false;
handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
iter->level, true);
handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
new_spte, iter->level, true);
return true;
}
@ -538,7 +534,7 @@ static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm,
* here since the SPTE is going from non-present
* to non-present.
*/
WRITE_ONCE(*iter->sptep, 0);
WRITE_ONCE(*rcu_dereference(iter->sptep), 0);
return true;
}
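
The zap path stores 0 through WRITE_ONCE() on the rcu_dereference()d pointer: the RCU annotation satisfies sparse, while WRITE_ONCE() keeps the compiler from tearing, fusing or re-issuing the store. Simplified, sketch-level versions of what those accessors boil down to (the kernel versions add type checks and handling for sizes the hardware cannot access atomically):

#define READ_ONCE_(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE_(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))

/* Example of the racy-flag pattern these exist for: without the
 * volatile access the compiler may hoist the load out of the loop. */
static int flag;
static void wait_flag(void)
{
	while (!READ_ONCE_(flag))
		;
}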
@ -564,10 +560,6 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
u64 new_spte, bool record_acc_track,
bool record_dirty_log)
{
tdp_ptep_t root_pt = tdp_iter_root_pt(iter);
struct kvm_mmu_page *root = sptep_to_sp(root_pt);
int as_id = kvm_mmu_page_as_id(root);
lockdep_assert_held_write(&kvm->mmu_lock);
/*
@ -581,13 +573,13 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
WRITE_ONCE(*rcu_dereference(iter->sptep), new_spte);
__handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
iter->level, false);
__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
new_spte, iter->level, false);
if (record_acc_track)
handle_changed_spte_acc_track(iter->old_spte, new_spte,
iter->level);
if (record_dirty_log)
handle_changed_spte_dirty_log(kvm, as_id, iter->gfn,
handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn,
iter->old_spte, new_spte,
iter->level);
}
@ -659,9 +651,7 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
WARN_ON(iter->gfn > iter->next_last_level_gfn);
tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
iter->root_level, iter->min_level,
iter->next_last_level_gfn);
tdp_iter_restart(iter);
return true;
}

View File

@ -1526,35 +1526,44 @@ EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)
{
struct kvm_x86_msr_filter *msr_filter;
struct msr_bitmap_range *ranges;
struct kvm *kvm = vcpu->kvm;
struct msr_bitmap_range *ranges = kvm->arch.msr_filter.ranges;
u32 count = kvm->arch.msr_filter.count;
u32 i;
bool r = kvm->arch.msr_filter.default_allow;
bool allowed;
int idx;
u32 i;
/* MSR filtering not set up or x2APIC enabled, allow everything */
if (!count || (index >= 0x800 && index <= 0x8ff))
/* x2APIC MSRs do not support filtering. */
if (index >= 0x800 && index <= 0x8ff)
return true;
/* Prevent collision with set_msr_filter */
idx = srcu_read_lock(&kvm->srcu);
for (i = 0; i < count; i++) {
msr_filter = srcu_dereference(kvm->arch.msr_filter, &kvm->srcu);
if (!msr_filter) {
allowed = true;
goto out;
}
allowed = msr_filter->default_allow;
ranges = msr_filter->ranges;
for (i = 0; i < msr_filter->count; i++) {
u32 start = ranges[i].base;
u32 end = start + ranges[i].nmsrs;
u32 flags = ranges[i].flags;
unsigned long *bitmap = ranges[i].bitmap;
if ((index >= start) && (index < end) && (flags & type)) {
r = !!test_bit(index - start, bitmap);
allowed = !!test_bit(index - start, bitmap);
break;
}
}
out:
srcu_read_unlock(&kvm->srcu, idx);
return r;
return allowed;
}
EXPORT_SYMBOL_GPL(kvm_msr_allowed);
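
A stand-alone rendering of the lookup the rewritten kvm_msr_allowed() performs: walk the ranges, and the first one that covers the index and matches the access type answers via its per-MSR bitmap; otherwise the filter's default applies. Field names follow the diff; SRCU is elided (a sketch, not the kernel code):

#include <stdbool.h>
#include <stdint.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

struct msr_range {
	uint32_t base;		/* first MSR index covered       */
	uint32_t nmsrs;		/* number of MSRs in the range   */
	uint32_t flags;		/* read/write permission bits    */
	unsigned long *bitmap;	/* one bit per MSR: 1 == allowed */
};

static bool msr_allowed(const struct msr_range *ranges, uint32_t count,
			bool default_allow, uint32_t index, uint32_t type)
{
	for (uint32_t i = 0; i < count; i++) {
		uint32_t start = ranges[i].base;
		uint32_t end = start + ranges[i].nmsrs;

		if (index >= start && index < end && (ranges[i].flags & type)) {
			uint32_t bit = index - start;

			return (ranges[i].bitmap[bit / BITS_PER_LONG] >>
				(bit % BITS_PER_LONG)) & 1;
		}
	}
	return default_allow;	/* no range claimed this MSR */
}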
@ -2551,6 +2560,8 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
struct kvm_vcpu *vcpu;
struct kvm_arch *ka = &kvm->arch;
kvm_hv_invalidate_tsc_page(kvm);
spin_lock(&ka->pvclock_gtod_sync_lock);
kvm_make_mclock_inprogress_request(kvm);
/* no guest entries from this point */
@ -5352,25 +5363,34 @@ split_irqchip_unlock:
return r;
}
static void kvm_clear_msr_filter(struct kvm *kvm)
static struct kvm_x86_msr_filter *kvm_alloc_msr_filter(bool default_allow)
{
u32 i;
u32 count = kvm->arch.msr_filter.count;
struct msr_bitmap_range ranges[16];
struct kvm_x86_msr_filter *msr_filter;
mutex_lock(&kvm->lock);
kvm->arch.msr_filter.count = 0;
memcpy(ranges, kvm->arch.msr_filter.ranges, count * sizeof(ranges[0]));
mutex_unlock(&kvm->lock);
synchronize_srcu(&kvm->srcu);
msr_filter = kzalloc(sizeof(*msr_filter), GFP_KERNEL_ACCOUNT);
if (!msr_filter)
return NULL;
for (i = 0; i < count; i++)
kfree(ranges[i].bitmap);
msr_filter->default_allow = default_allow;
return msr_filter;
}
static int kvm_add_msr_filter(struct kvm *kvm, struct kvm_msr_filter_range *user_range)
static void kvm_free_msr_filter(struct kvm_x86_msr_filter *msr_filter)
{
u32 i;
if (!msr_filter)
return;
for (i = 0; i < msr_filter->count; i++)
kfree(msr_filter->ranges[i].bitmap);
kfree(msr_filter);
}
static int kvm_add_msr_filter(struct kvm_x86_msr_filter *msr_filter,
struct kvm_msr_filter_range *user_range)
{
struct msr_bitmap_range *ranges = kvm->arch.msr_filter.ranges;
struct msr_bitmap_range range;
unsigned long *bitmap = NULL;
size_t bitmap_size;
@ -5404,11 +5424,9 @@ static int kvm_add_msr_filter(struct kvm *kvm, struct kvm_msr_filter_range *user
goto err;
}
/* Everything ok, add this range identifier to our global pool */
ranges[kvm->arch.msr_filter.count] = range;
/* Make sure we filled the array before we tell anyone to walk it */
smp_wmb();
kvm->arch.msr_filter.count++;
/* Everything ok, add this range identifier. */
msr_filter->ranges[msr_filter->count] = range;
msr_filter->count++;
return 0;
err:
@ -5419,10 +5437,11 @@ err:
static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
{
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_x86_msr_filter *new_filter, *old_filter;
struct kvm_msr_filter filter;
bool default_allow;
int r = 0;
bool empty = true;
int r = 0;
u32 i;
if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
@ -5435,25 +5454,32 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
if (empty && !default_allow)
return -EINVAL;
kvm_clear_msr_filter(kvm);
new_filter = kvm_alloc_msr_filter(default_allow);
if (!new_filter)
return -ENOMEM;
kvm->arch.msr_filter.default_allow = default_allow;
/*
* Protect from concurrent calls to this function that could trigger
* a TOCTOU violation on kvm->arch.msr_filter.count.
*/
mutex_lock(&kvm->lock);
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
r = kvm_add_msr_filter(kvm, &filter.ranges[i]);
if (r)
break;
r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
if (r) {
kvm_free_msr_filter(new_filter);
return r;
}
}
mutex_lock(&kvm->lock);
/* The per-VM filter is protected by kvm->lock... */
old_filter = srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1);
rcu_assign_pointer(kvm->arch.msr_filter, new_filter);
synchronize_srcu(&kvm->srcu);
kvm_free_msr_filter(old_filter);
kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);
mutex_unlock(&kvm->lock);
return r;
return 0;
}
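
The ioctl now builds the whole filter off to the side and swaps it in with rcu_assign_pointer(), waiting out an SRCU grace period before freeing the old one, so readers never observe a half-built filter and nothing is freed under them. The same publish-then-reclaim shape in userspace, using liburcu as a stand-in for SRCU (a sketch: error handling and the writer-side lock are omitted, and each thread must have called rcu_register_thread()):

#include <urcu.h>	/* userspace RCU, link with -lurcu */
#include <stdlib.h>

struct filter {
	int default_allow;
	/* ... ranges ... */
};

static struct filter *cur_filter;	/* RCU-protected pointer */

/* Reader: a consistent snapshot, never blocked by writers. */
static int filter_default_allow(void)
{
	struct filter *f;
	int allow;

	rcu_read_lock();
	f = rcu_dereference(cur_filter);
	allow = f ? f->default_allow : 1;
	rcu_read_unlock();
	return allow;
}

/* Writer: publish the new filter, then free the old one only after
 * every reader that could still see it has finished. */
static void filter_replace(struct filter *newf)
{
	struct filter *old = cur_filter;	/* caller holds the writer lock */

	rcu_assign_pointer(cur_filter, newf);
	synchronize_rcu();	/* grace period, like synchronize_srcu() */
	free(old);
}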
long kvm_arch_vm_ioctl(struct file *filp,
@ -6603,7 +6629,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
int cpu = get_cpu();
cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
on_each_cpu_mask(vcpu->arch.wbinvd_dirty_mask,
wbinvd_ipi, NULL, 1);
put_cpu();
cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
@ -10634,8 +10660,6 @@ void kvm_arch_pre_destroy_vm(struct kvm *kvm)
void kvm_arch_destroy_vm(struct kvm *kvm)
{
u32 i;
if (current->mm == kvm->mm) {
/*
* Free memory regions allocated on behalf of userspace,
@ -10651,8 +10675,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
mutex_unlock(&kvm->slots_lock);
}
static_call_cond(kvm_x86_vm_destroy)(kvm);
for (i = 0; i < kvm->arch.msr_filter.count; i++)
kfree(kvm->arch.msr_filter.ranges[i].bitmap);
kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
kvm_pic_destroy(kvm);
kvm_ioapic_destroy(kvm);
kvm_free_vcpus(kvm);

View File

@ -263,7 +263,7 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
if (pgprot_val(old_prot) == pgprot_val(new_prot))
return;
pa = pfn << page_level_shift(level);
pa = pfn << PAGE_SHIFT;
size = page_level_size(level);
/*

View File

@ -1936,7 +1936,7 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
* add rsp, 8 // skip eth_type_trans's frame
* ret // return to its caller
*/
int arch_prepare_bpf_trampoline(void *image, void *image_end,
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
struct bpf_tramp_progs *tprogs,
void *orig_call)
@ -1975,6 +1975,15 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
save_regs(m, &prog, nr_args, stack_size);
if (flags & BPF_TRAMP_F_CALL_ORIG) {
/* arg1: mov rdi, im */
emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
if (emit_call(&prog, __bpf_tramp_enter, prog)) {
ret = -EINVAL;
goto cleanup;
}
}
if (fentry->nr_progs)
if (invoke_bpf(m, &prog, fentry, stack_size))
return -EINVAL;
@ -1993,8 +2002,7 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
}
if (flags & BPF_TRAMP_F_CALL_ORIG) {
if (fentry->nr_progs || fmod_ret->nr_progs)
restore_regs(m, &prog, nr_args, stack_size);
restore_regs(m, &prog, nr_args, stack_size);
/* call original function */
if (emit_call(&prog, orig_call, prog)) {
@ -2003,6 +2011,9 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
}
/* remember return value in a stack for bpf prog to access */
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
im->ip_after_call = prog;
memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
prog += X86_PATCH_SIZE;
}
if (fmod_ret->nr_progs) {
@ -2033,9 +2044,17 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
* the return value is only updated on the stack and still needs to be
* restored to R0.
*/
if (flags & BPF_TRAMP_F_CALL_ORIG)
if (flags & BPF_TRAMP_F_CALL_ORIG) {
im->ip_epilogue = prog;
/* arg1: mov rdi, im */
emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
if (emit_call(&prog, __bpf_tramp_exit, prog)) {
ret = -EINVAL;
goto cleanup;
}
/* restore original return value back into RAX */
emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
}
EMIT1(0x5B); /* pop rbx */
EMIT1(0xC9); /* leave */
@ -2225,7 +2244,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
padding = true;
goto skip_init_addrs;
}
addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
if (!addrs) {
prog = orig_prog;
goto out_addrs;
@ -2317,7 +2336,7 @@ out_image:
if (image)
bpf_prog_fill_jited_linfo(prog, addrs + 1);
out_addrs:
kfree(addrs);
kvfree(addrs);
kfree(jit_data);
prog->aux->jit_data = NULL;
}
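
The addrs array moves to kvmalloc_array()/kvfree(), so very large BPF programs can fall back to vmalloc instead of demanding physically contiguous pages. The *_array variants also carry a multiply-overflow check, which is the part that translates directly to a userspace sketch:

#include <stdint.h>
#include <stdlib.h>

/* What the *_array allocators add over a plain allocation: refuse
 * n * size when the multiplication would wrap (the kernel's
 * kvmalloc_array() additionally falls back from kmalloc to vmalloc). */
static void *alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return malloc(n * size);
}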

View File

@ -27,7 +27,6 @@
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Sébastien Hinderer <Sebastien.Hinderer@ens-lyon.org>");
MODULE_DESCRIPTION("A power_off handler for Iris devices from EuroBraille");
MODULE_SUPPORTED_DEVICE("Eurobraille/Iris");
static bool force;

View File

@ -98,8 +98,8 @@ EXPORT_SYMBOL_GPL(xen_p2m_size);
unsigned long xen_max_p2m_pfn __read_mostly;
EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
#define P2M_LIMIT CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
#ifdef CONFIG_XEN_MEMORY_HOTPLUG_LIMIT
#define P2M_LIMIT CONFIG_XEN_MEMORY_HOTPLUG_LIMIT
#else
#define P2M_LIMIT 0
#endif
@ -416,9 +416,6 @@ void __init xen_vmalloc_p2m_tree(void)
xen_p2m_last_pfn = xen_max_p2m_pfn;
p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE;
if (!p2m_limit && IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC))
p2m_limit = xen_start_info->nr_pages * XEN_EXTRA_MEM_RATIO;
vm.flags = VM_ALLOC;
vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit),
PMD_SIZE * PMDS_PER_MID_PAGE);

View File

@ -59,6 +59,18 @@ static struct {
} xen_remap_buf __initdata __aligned(PAGE_SIZE);
static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY;
/*
* The maximum amount of extra memory compared to the base size. The
* main scaling factor is the size of struct page. At extreme ratios
* of base:extra, all the base memory can be filled with page
* structures for the extra memory, leaving no space for anything
* else.
*
* 10x seems like a reasonable balance between scaling flexibility and
* leaving a practically usable system.
*/
#define EXTRA_MEM_RATIO (10)
static bool xen_512gb_limit __initdata = IS_ENABLED(CONFIG_XEN_512GB);
static void __init xen_parse_512gb(void)
@ -778,13 +790,13 @@ char * __init xen_memory_setup(void)
extra_pages += max_pages - max_pfn;
/*
* Clamp the amount of extra memory to a XEN_EXTRA_MEM_RATIO
* Clamp the amount of extra memory to an EXTRA_MEM_RATIO
* factor of the base size.
*
* Make sure we have no memory above max_pages, as this area
* isn't handled by the p2m management.
*/
extra_pages = min3(XEN_EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
extra_pages, max_pages - max_pfn);
i = 0;
addr = xen_e820_table.entries[0].addr;
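
To make the clamp concrete, a worked example with made-up numbers (4 GiB of base RAM as 4 KiB pages, a host offering 64 GiB; the PFN_DOWN(MAXMEM) term is dropped for brevity):

#include <stdio.h>

#define EXTRA_MEM_RATIO 10UL

static unsigned long min3(unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long m = a < b ? a : b;

	return m < c ? m : c;
}

int main(void)
{
	unsigned long max_pfn = 1UL << 20;	/* 4 GiB base, 4 KiB pages  */
	unsigned long max_pages = 16UL << 20;	/* host offers 64 GiB total */
	unsigned long extra_pages = max_pages - max_pfn;

	extra_pages = min3(EXTRA_MEM_RATIO * max_pfn,	/* 10x cap        */
			   extra_pages,			/* what was asked */
			   max_pages - max_pfn);	/* hotplug room   */
	printf("extra_pages = %lu (the 10x ratio wins here)\n", extra_pages);
	return 0;
}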

View File

@ -949,7 +949,7 @@ void bio_release_pages(struct bio *bio, bool mark_dirty)
}
EXPORT_SYMBOL_GPL(bio_release_pages);
static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
static void __bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
{
WARN_ON_ONCE(bio->bi_max_vecs);
@ -959,11 +959,26 @@ static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
bio->bi_iter.bi_size = iter->count;
bio_set_flag(bio, BIO_NO_PAGE_REF);
bio_set_flag(bio, BIO_CLONED);
}
static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
{
__bio_iov_bvec_set(bio, iter);
iov_iter_advance(iter, iter->count);
return 0;
}
static int bio_iov_bvec_set_append(struct bio *bio, struct iov_iter *iter)
{
struct request_queue *q = bio->bi_bdev->bd_disk->queue;
struct iov_iter i = *iter;
iov_iter_truncate(&i, queue_max_zone_append_sectors(q) << 9);
__bio_iov_bvec_set(bio, &i);
iov_iter_advance(iter, i.count);
return 0;
}
#define PAGE_PTRS_PER_BVEC (sizeof(struct bio_vec) / sizeof(struct page *))
/**
@ -1094,8 +1109,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
int ret = 0;
if (iov_iter_is_bvec(iter)) {
if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND))
return -EINVAL;
if (bio_op(bio) == REQ_OP_ZONE_APPEND)
return bio_iov_bvec_set_append(bio, iter);
return bio_iov_bvec_set(bio, iter);
}
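
The zone-append variant clamps a copy of the iterator to the queue's zone-append limit, then consumes only that much from the caller's iterator. The copy-truncate-advance idiom, reduced to a toy cursor (hypothetical type; the real one is iov_iter):

#include <stddef.h>

struct cursor { size_t count; size_t pos; };

static void cursor_truncate(struct cursor *c, size_t max)
{
	if (c->count > max)
		c->count = max;
}

static void cursor_advance(struct cursor *c, size_t n)
{
	c->pos += n;
	c->count -= n;
}

/* Take at most max_chunk from iter: clamp a copy, then consume from
 * the original exactly what the copy holds. */
static size_t take_chunk(struct cursor *iter, size_t max_chunk)
{
	struct cursor i = *iter;

	cursor_truncate(&i, max_chunk);
	cursor_advance(iter, i.count);
	return i.count;
}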

View File

@ -382,6 +382,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
switch (bio_op(rq->bio)) {
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
if (queue_max_discard_segments(rq->q) > 1) {
struct bio *bio = rq->bio;
for_each_bio(bio)
nr_phys_segs++;
return nr_phys_segs;
}
return 1;
case REQ_OP_WRITE_ZEROES:
return 0;
case REQ_OP_WRITE_SAME:
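
With multi-range discard merging, a request can carry several chained bios, each contributing one segment, and the new branch simply counts them. for_each_bio() is a plain walk of the bi_next chain; a reduced illustration (toy struct, not the kernel's):

struct toy_bio {
	struct toy_bio *bi_next;
};

static unsigned int count_discard_segments(struct toy_bio *bio)
{
	unsigned int nr = 0;

	for (; bio; bio = bio->bi_next)	/* this loop is for_each_bio() */
		nr++;
	return nr;
}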

View File

@ -322,6 +322,13 @@ static struct block_device *add_partition(struct gendisk *disk, int partno,
const char *dname;
int err;
/*
* disk_max_parts() won't be zero: either GENHD_FL_EXT_DEVT is set
* or 'minors' is passed to alloc_disk().
*/
if (partno >= disk_max_parts(disk))
return ERR_PTR(-EINVAL);
/*
* Partitions are not supported on zoned block devices that are used as
* such.

View File

@ -99,13 +99,12 @@ acpi_status acpi_ns_root_initialize(void)
* just create and link the new node(s) here.
*/
new_node =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_namespace_node));
acpi_ns_create_node(*ACPI_CAST_PTR(u32, init_val->name));
if (!new_node) {
status = AE_NO_MEMORY;
goto unlock_and_exit;
}
ACPI_COPY_NAMESEG(new_node->name.ascii, init_val->name);
new_node->descriptor_type = ACPI_DESC_TYPE_NAMED;
new_node->type = init_val->type;

Some files were not shown because too many files have changed in this diff.