Merge 5.17-rc6 into char-misc-next

We need the char-misc fixes in here.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2022-02-28 07:30:32 +01:00
commit 085686fb84
509 changed files with 4156 additions and 2588 deletions


@ -333,6 +333,9 @@ Rémi Denis-Courmont <rdenis@simphalempin.com>
Ricardo Ribalda <ribalda@kernel.org> <ricardo@ribalda.com>
Ricardo Ribalda <ribalda@kernel.org> Ricardo Ribalda Delgado <ribalda@kernel.org>
Ricardo Ribalda <ribalda@kernel.org> <ricardo.ribalda@gmail.com>
Roman Gushchin <roman.gushchin@linux.dev> <guro@fb.com>
Roman Gushchin <roman.gushchin@linux.dev> <guroan@gmail.com>
Roman Gushchin <roman.gushchin@linux.dev> <klamm@yandex-team.ru>
Ross Zwisler <zwisler@kernel.org> <ross.zwisler@linux.intel.com>
Rudolf Marek <R.Marek@sh.cvut.cz>
Rui Saraiva <rmps@joel.ist.utl.pt>


@ -468,6 +468,7 @@ Description:
auto: Charge normally, respect thresholds
inhibit-charge: Do not charge while AC is attached
force-discharge: Force discharge while AC is attached
================ ====================================
What: /sys/class/power_supply/<supply_name>/technology
Date: May 2007


@ -130,3 +130,11 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
subsystem that the buffer is fully accessible at the elevated privilege
level (and ideally inaccessible or at least read-only at the
lesser-privileged levels).
DMA_ATTR_OVERWRITE
------------------
This is a hint to the DMA-mapping subsystem that the device is expected to
overwrite the entire mapped size, thus the caller does not require any of the
previous buffer contents to be preserved. This allows bounce-buffering
implementations to optimise DMA_FROM_DEVICE transfers.
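As a hedged illustration (not part of this diff), a driver that knows the device will fill the whole receive buffer could pass the attribute when mapping it; the helper and names below are assumptions, a minimal sketch rather than an interface introduced by this change:

#include <linux/dma-mapping.h>

/* Hypothetical helper: the device overwrites the full buffer, so bounce
 * buffers do not need the stale contents copied in before the transfer. */
static dma_addr_t example_map_rx_buffer(struct device *dev, void *buf,
					size_t len)
{
	return dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
				    DMA_ATTR_OVERWRITE);
}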


@ -75,6 +75,9 @@ And optionally
.resume - A pointer to a per-policy resume function which is called
with interrupts disabled and _before_ the governor is started again.
.ready - A pointer to a per-policy ready function which is called after
the policy is fully initialized.
.attr - A pointer to a NULL-terminated list of "struct freq_attr" which
allow to export values to sysfs.
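As an illustrative sketch only (names invented here, not taken from this diff), a driver might build such a list as follows, assuming the usual cpufreq sysfs helpers from <linux/cpufreq.h>:

#include <linux/cpufreq.h>

/* Hypothetical read-only attribute exposing the policy's maximum frequency. */
static ssize_t show_example_max_freq(struct cpufreq_policy *policy, char *buf)
{
	return sprintf(buf, "%u\n", policy->cpuinfo.max_freq);
}
cpufreq_freq_attr_ro(example_max_freq);

/* NULL-terminated list a driver could assign to its .attr member. */
static struct freq_attr *example_driver_attrs[] = {
	&example_max_freq,
	NULL,
};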


@ -7,7 +7,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: SiFive GPIO controller
maintainers:
- Yash Shah <yash.shah@sifive.com>
- Paul Walmsley <paul.walmsley@sifive.com>
properties:


@ -20,7 +20,7 @@ description: |
maintainers:
- Kishon Vijay Abraham I <kishon@ti.com>
- Roger Quadros <rogerq@ti.com>
- Roger Quadros <rogerq@kernel.org>
properties:
compatible:


@ -8,7 +8,7 @@ title: OMAP USB2 PHY
maintainers:
- Kishon Vijay Abraham I <kishon@ti.com>
- Roger Quadros <rogerq@ti.com>
- Roger Quadros <rogerq@kernel.org>
properties:
compatible:


@ -8,7 +8,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: SiFive PWM controller
maintainers:
- Yash Shah <yash.shah@sifive.com>
- Sagar Kadam <sagar.kadam@sifive.com>
- Paul Walmsley <paul.walmsley@sifive.com>


@ -9,7 +9,6 @@ title: SiFive L2 Cache Controller
maintainers:
- Sagar Kadam <sagar.kadam@sifive.com>
- Yash Shah <yash.shah@sifive.com>
- Paul Walmsley <paul.walmsley@sifive.com>
description:


@ -8,6 +8,7 @@ title: Audio codec controlled by ChromeOS EC
maintainers:
- Cheng-Yi Chiang <cychiang@chromium.org>
- Tzung-Bi Shih <tzungbi@google.com>
description: |
Google's ChromeOS EC codec is a digital mic codec provided by the


@ -7,7 +7,7 @@ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Bindings for the TI wrapper module for the Cadence USBSS-DRD controller
maintainers:
- Roger Quadros <rogerq@ti.com>
- Roger Quadros <rogerq@kernel.org>
properties:
compatible:


@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI Keystone Soc USB Controller
maintainers:
- Roger Quadros <rogerq@ti.com>
- Roger Quadros <rogerq@kernel.org>
properties:
compatible:


@ -2,7 +2,7 @@
Set the histogram bucket size (default *1*).
**-e**, **--entries** *N*
**-E**, **--entries** *N*
Set the number of entries of the histogram (default 256).


@ -1,7 +1,7 @@
The **rtla osnoise** tool is an interface for the *osnoise* tracer. The
*osnoise* tracer dispatches a kernel thread per-cpu. These threads read the
time in a loop while with preemption, softirq and IRQs enabled, thus
allowing all the sources of operating systme noise during its execution.
allowing all the sources of operating system noise during its execution.
The *osnoise*'s tracer threads take note of the delta between each time
read, along with an interference counter of all sources of interference.
At the end of each period, the *osnoise* tracer displays a summary of


@ -36,7 +36,7 @@ default). The reason for reducing the runtime is to avoid starving the
**rtla** tool. The tool is also set to run for *one minute*. The output
histogram is set to group outputs in buckets of *10us* and *25* entries::
[root@f34 ~/]# rtla osnoise hist -P F:1 -c 0-11 -r 900000 -d 1M -b 10 -e 25
[root@f34 ~/]# rtla osnoise hist -P F:1 -c 0-11 -r 900000 -d 1M -b 10 -E 25
# RTLA osnoise histogram
# Time unit is microseconds (us)
# Duration: 0 00:01:00


@ -84,6 +84,8 @@ register a cpufreq_driver structure with the CPUfreq core.
.resume - A pointer to a per-policy resume function, which is called with
interrupts disabled and before the governor is started again.
.ready - A pointer to a per-policy ready function, which is called after the
policy is fully initialized.
.attr - A pointer to a NULL-terminated list of "struct freq_attr", which allows
values to be exported to sysfs.


@ -1394,7 +1394,7 @@ documentation when it pops into existence).
-------------------
:Capability: KVM_CAP_ENABLE_CAP
:Architectures: mips, ppc, s390
:Architectures: mips, ppc, s390, x86
:Type: vcpu ioctl
:Parameters: struct kvm_enable_cap (in)
:Returns: 0 on success; -1 on error
@ -6997,6 +6997,20 @@ indicated by the fd to the VM this is called on.
This is intended to support intra-host migration of VMs between userspace VMMs,
upgrading the VMM process without interrupting the guest.
7.30 KVM_CAP_PPC_AIL_MODE_3
-------------------------------
:Capability: KVM_CAP_PPC_AIL_MODE_3
:Architectures: ppc
:Type: vm
This capability indicates that the kernel supports the mode 3 setting for the
"Address Translation Mode on Interrupt" aka "Alternate Interrupt Location"
resource that is controlled with the H_SET_MODE hypercall.
This capability allows a guest kernel to use a better-performance mode for
handling interrupts and system calls.
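For orientation only (not part of this patch), userspace could probe the capability with KVM_CHECK_EXTENSION before relying on it; the helper name below is hypothetical and the constant requires headers new enough to define it:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns non-zero when the VM reports KVM_CAP_PPC_AIL_MODE_3. */
static int vm_has_ail_mode_3(int vm_fd)
{
	return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_AIL_MODE_3) > 0;
}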
8. Other capabilities.
======================


@ -3147,11 +3147,9 @@ W: https://wireless.wiki.kernel.org/en/users/Drivers/ath5k
F: drivers/net/wireless/ath/ath5k/
ATHEROS ATH6KL WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
L: linux-wireless@vger.kernel.org
S: Supported
S: Orphan
W: https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: drivers/net/wireless/ath/ath6kl/
ATI_REMOTE2 DRIVER
@ -4557,6 +4555,7 @@ F: drivers/platform/chrome/
CHROMEOS EC CODEC DRIVER
M: Cheng-Yi Chiang <cychiang@chromium.org>
M: Tzung-Bi Shih <tzungbi@google.com>
R: Guenter Roeck <groeck@chromium.org>
S: Maintained
F: Documentation/devicetree/bindings/sound/google,cros-ec-codec.yaml
@ -4922,7 +4921,8 @@ F: kernel/cgroup/cpuset.c
CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
M: Johannes Weiner <hannes@cmpxchg.org>
M: Michal Hocko <mhocko@kernel.org>
M: Vladimir Davydov <vdavydov.dev@gmail.com>
M: Roman Gushchin <roman.gushchin@linux.dev>
M: Shakeel Butt <shakeelb@google.com>
L: cgroups@vger.kernel.org
L: linux-mm@kvack.org
S: Maintained
@ -7029,12 +7029,6 @@ L: linux-edac@vger.kernel.org
S: Maintained
F: drivers/edac/sb_edac.c
EDAC-SIFIVE
M: Yash Shah <yash.shah@sifive.com>
L: linux-edac@vger.kernel.org
S: Supported
F: drivers/edac/sifive_edac.c
EDAC-SKYLAKE
M: Tony Luck <tony.luck@intel.com>
L: linux-edac@vger.kernel.org
@ -7205,7 +7199,7 @@ F: drivers/net/can/usb/etas_es58x/
ETHERNET BRIDGE
M: Roopa Prabhu <roopa@nvidia.com>
M: Nikolay Aleksandrov <nikolay@nvidia.com>
M: Nikolay Aleksandrov <razor@blackwall.org>
L: bridge@lists.linux-foundation.org (moderated for non-subscribers)
L: netdev@vger.kernel.org
S: Maintained
@ -9281,6 +9275,15 @@ S: Maintained
W: https://github.com/o2genum/ideapad-slidebar
F: drivers/input/misc/ideapad_slidebar.c
IDMAPPED MOUNTS
M: Christian Brauner <brauner@kernel.org>
L: linux-fsdevel@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git
F: Documentation/filesystems/idmappings.rst
F: tools/testing/selftests/mount_setattr/
F: include/linux/mnt_idmapping.h
IDT VersaClock 5 CLOCK DRIVER
M: Luca Ceresoli <luca@lucaceresoli.net>
S: Maintained
@ -15183,7 +15186,7 @@ M: Ingo Molnar <mingo@redhat.com>
M: Arnaldo Carvalho de Melo <acme@kernel.org>
R: Mark Rutland <mark.rutland@arm.com>
R: Alexander Shishkin <alexander.shishkin@linux.intel.com>
R: Jiri Olsa <jolsa@redhat.com>
R: Jiri Olsa <jolsa@kernel.org>
R: Namhyung Kim <namhyung@kernel.org>
L: linux-perf-users@vger.kernel.org
L: linux-kernel@vger.kernel.org
@ -15601,6 +15604,7 @@ M: Iurii Zaikin <yzaikin@google.com>
L: linux-kernel@vger.kernel.org
L: linux-fsdevel@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git sysctl-next
F: fs/proc/proc_sysctl.c
F: include/linux/sysctl.h
F: kernel/sysctl-test.c
@ -15948,6 +15952,7 @@ S: Supported
W: https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: drivers/net/wireless/ath/ath10k/
F: Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
@ -15955,11 +15960,12 @@ L: ath11k@lists.infradead.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: drivers/net/wireless/ath/ath11k/
F: Documentation/devicetree/bindings/net/wireless/qcom,ath11k.txt
QUALCOMM ATHEROS ATH9K WIRELESS DRIVER
M: ath9k-devel@qca.qualcomm.com
M: Toke Høiland-Jørgensen <toke@toke.dk>
L: linux-wireless@vger.kernel.org
S: Supported
S: Maintained
W: https://wireless.wiki.kernel.org/en/users/Drivers/ath9k
F: Documentation/devicetree/bindings/net/wireless/qca,ath9k.yaml
F: drivers/net/wireless/ath/ath9k/
@ -16034,14 +16040,6 @@ F: Documentation/devicetree/bindings/misc/qcom,fastrpc.txt
F: drivers/misc/fastrpc.c
F: include/uapi/misc/fastrpc.h
QUALCOMM GENERIC INTERFACE I2C DRIVER
M: Akash Asthana <akashast@codeaurora.org>
M: Mukesh Savaliya <msavaliy@codeaurora.org>
L: linux-i2c@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Supported
F: drivers/i2c/busses/i2c-qcom-geni.c
QUALCOMM HEXAGON ARCHITECTURE
M: Brian Cain <bcain@codeaurora.org>
L: linux-hexagon@vger.kernel.org
@ -16113,8 +16111,8 @@ F: Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
F: drivers/mtd/nand/raw/qcom_nandc.c
QUALCOMM RMNET DRIVER
M: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
M: Sean Tranchetti <stranche@codeaurora.org>
M: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
M: Sean Tranchetti <quic_stranche@quicinc.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
@ -16140,11 +16138,10 @@ F: Documentation/devicetree/bindings/media/*venus*
F: drivers/media/platform/qcom/venus/
QUALCOMM WCN36XX WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
M: Loic Poulain <loic.poulain@linaro.org>
L: wcn36xx@lists.infradead.org
S: Supported
W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
T: git git://github.com/KrasnikovEugene/wcn36xx.git
F: drivers/net/wireless/ath/wcn36xx/
QUANTENNA QTNFMAC WIRELESS DRIVER
@ -16407,6 +16404,7 @@ F: drivers/watchdog/realtek_otto_wdt.c
REALTEK RTL83xx SMI DSA ROUTER CHIPS
M: Linus Walleij <linus.walleij@linaro.org>
M: Alvin Šipraga <alsi@bang-olufsen.dk>
S: Maintained
F: Documentation/devicetree/bindings/net/dsa/realtek-smi.txt
F: drivers/net/dsa/realtek-smi*
@ -17800,8 +17798,10 @@ M: David Rientjes <rientjes@google.com>
M: Joonsoo Kim <iamjoonsoo.kim@lge.com>
M: Andrew Morton <akpm@linux-foundation.org>
M: Vlastimil Babka <vbabka@suse.cz>
R: Roman Gushchin <roman.gushchin@linux.dev>
L: linux-mm@kvack.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
F: include/linux/sl?b*.h
F: mm/sl?b*


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 17
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc6
NAME = Superb Owl
# *DOCUMENTATION*


@ -106,7 +106,7 @@
msr_s SYS_ICC_SRE_EL2, x0
isb // Make sure SRE is now set
mrs_s x0, SYS_ICC_SRE_EL2 // Read SRE back,
tbz x0, #0, 1f // and check that it sticks
tbz x0, #0, .Lskip_gicv3_\@ // and check that it sticks
msr_s SYS_ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults
.Lskip_gicv3_\@:
.endm


@ -248,6 +248,8 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
IRQCHIP_STATE_PENDING,
&val);
WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
} else if (vgic_irq_is_mapped_level(irq)) {
val = vgic_get_phys_line_level(irq);
} else {
val = irq_is_pending(irq);
}


@ -12,6 +12,14 @@
#include <asm/barrier.h>
#include <linux/atomic.h>
/* compiler build environment sanity checks: */
#if !defined(CONFIG_64BIT) && defined(__LP64__)
#error "Please use 'ARCH=parisc' to build the 32-bit kernel."
#endif
#if defined(CONFIG_64BIT) && !defined(__LP64__)
#error "Please use 'ARCH=parisc64' to build the 64-bit kernel."
#endif
/* See http://marc.theaimsgroup.com/?t=108826637900003 for discussion
* on use of volatile and __*_bit() (set/clear/change):
* *_bit() want use of volatile.


@ -89,8 +89,8 @@ struct exception_table_entry {
__asm__("1: " ldx " 0(" sr "%2),%0\n" \
"9:\n" \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
: "=r"(__gu_val), "=r"(__gu_err) \
: "r"(ptr), "1"(__gu_err)); \
: "=r"(__gu_val), "+r"(__gu_err) \
: "r"(ptr)); \
\
(val) = (__force __typeof__(*(ptr))) __gu_val; \
}
@ -123,8 +123,8 @@ struct exception_table_entry {
"9:\n" \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \
: "=&r"(__gu_tmp.l), "=r"(__gu_err) \
: "r"(ptr), "1"(__gu_err)); \
: "=&r"(__gu_tmp.l), "+r"(__gu_err) \
: "r"(ptr)); \
\
(val) = __gu_tmp.t; \
}
@ -135,13 +135,12 @@ struct exception_table_entry {
#define __put_user_internal(sr, x, ptr) \
({ \
ASM_EXCEPTIONTABLE_VAR(__pu_err); \
__typeof__(*(ptr)) __x = (__typeof__(*(ptr)))(x); \
\
switch (sizeof(*(ptr))) { \
case 1: __put_user_asm(sr, "stb", __x, ptr); break; \
case 2: __put_user_asm(sr, "sth", __x, ptr); break; \
case 4: __put_user_asm(sr, "stw", __x, ptr); break; \
case 8: STD_USER(sr, __x, ptr); break; \
case 1: __put_user_asm(sr, "stb", x, ptr); break; \
case 2: __put_user_asm(sr, "sth", x, ptr); break; \
case 4: __put_user_asm(sr, "stw", x, ptr); break; \
case 8: STD_USER(sr, x, ptr); break; \
default: BUILD_BUG(); \
} \
\
@ -150,7 +149,9 @@ struct exception_table_entry {
#define __put_user(x, ptr) \
({ \
__put_user_internal("%%sr3,", x, ptr); \
__typeof__(&*(ptr)) __ptr = ptr; \
__typeof__(*(__ptr)) __x = (__typeof__(*(__ptr)))(x); \
__put_user_internal("%%sr3,", __x, __ptr); \
})
#define __put_kernel_nofault(dst, src, type, err_label) \
@ -180,8 +181,8 @@ struct exception_table_entry {
"1: " stx " %2,0(" sr "%1)\n" \
"9:\n" \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
: "=r"(__pu_err) \
: "r"(ptr), "r"(x), "0"(__pu_err))
: "+r"(__pu_err) \
: "r"(ptr), "r"(x))
#if !defined(CONFIG_64BIT)
@ -193,8 +194,8 @@ struct exception_table_entry {
"9:\n" \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \
: "=r"(__pu_err) \
: "r"(ptr), "r"(__val), "0"(__pu_err)); \
: "+r"(__pu_err) \
: "r"(ptr), "r"(__val)); \
} while (0)
#endif /* !defined(CONFIG_64BIT) */


@ -340,7 +340,7 @@ static int emulate_stw(struct pt_regs *regs, int frreg, int flop)
: "r" (val), "r" (regs->ior), "r" (regs->isr)
: "r19", "r20", "r21", "r22", "r1", FIXUP_BRANCH_CLOBBER );
return 0;
return ret;
}
static int emulate_std(struct pt_regs *regs, int frreg, int flop)
{
@ -397,7 +397,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
__asm__ __volatile__ (
" mtsp %4, %%sr1\n"
" zdep %2, 29, 2, %%r19\n"
" dep %%r0, 31, 2, %2\n"
" dep %%r0, 31, 2, %3\n"
" mtsar %%r19\n"
" zvdepi -2, 32, %%r19\n"
"1: ldw 0(%%sr1,%3),%%r20\n"
@ -409,7 +409,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
" andcm %%r21, %%r19, %%r21\n"
" or %1, %%r20, %1\n"
" or %2, %%r21, %2\n"
"3: stw %1,0(%%sr1,%1)\n"
"3: stw %1,0(%%sr1,%3)\n"
"4: stw %%r1,4(%%sr1,%3)\n"
"5: stw %2,8(%%sr1,%3)\n"
" copy %%r0, %0\n"
@ -596,7 +596,6 @@ void handle_unaligned(struct pt_regs *regs)
ret = ERR_NOTHANDLED; /* "undefined", but lets kill them. */
break;
}
#ifdef CONFIG_PA20
switch (regs->iir & OPCODE2_MASK)
{
case OPCODE_FLDD_L:
@ -607,22 +606,23 @@ void handle_unaligned(struct pt_regs *regs)
flop=1;
ret = emulate_std(regs, R2(regs->iir),1);
break;
#ifdef CONFIG_PA20
case OPCODE_LDD_L:
ret = emulate_ldd(regs, R2(regs->iir),0);
break;
case OPCODE_STD_L:
ret = emulate_std(regs, R2(regs->iir),0);
break;
}
#endif
}
switch (regs->iir & OPCODE3_MASK)
{
case OPCODE_FLDW_L:
flop=1;
ret = emulate_ldw(regs, R2(regs->iir),0);
ret = emulate_ldw(regs, R2(regs->iir), 1);
break;
case OPCODE_LDW_M:
ret = emulate_ldw(regs, R2(regs->iir),1);
ret = emulate_ldw(regs, R2(regs->iir), 0);
break;
case OPCODE_FSTW_L:


@ -346,6 +346,16 @@ u64 ioread64be(const void __iomem *addr)
return *((u64 *)addr);
}
u64 ioread64_lo_hi(const void __iomem *addr)
{
u32 low, high;
low = ioread32(addr);
high = ioread32(addr + sizeof(u32));
return low + ((u64)high << 32);
}
u64 ioread64_hi_lo(const void __iomem *addr)
{
u32 low, high;
@ -419,6 +429,12 @@ void iowrite64be(u64 datum, void __iomem *addr)
}
}
void iowrite64_lo_hi(u64 val, void __iomem *addr)
{
iowrite32(val, addr);
iowrite32(val >> 32, addr + sizeof(u32));
}
void iowrite64_hi_lo(u64 val, void __iomem *addr)
{
iowrite32(val >> 32, addr + sizeof(u32));
@ -530,6 +546,7 @@ EXPORT_SYMBOL(ioread32);
EXPORT_SYMBOL(ioread32be);
EXPORT_SYMBOL(ioread64);
EXPORT_SYMBOL(ioread64be);
EXPORT_SYMBOL(ioread64_lo_hi);
EXPORT_SYMBOL(ioread64_hi_lo);
EXPORT_SYMBOL(iowrite8);
EXPORT_SYMBOL(iowrite16);
@ -538,6 +555,7 @@ EXPORT_SYMBOL(iowrite32);
EXPORT_SYMBOL(iowrite32be);
EXPORT_SYMBOL(iowrite64);
EXPORT_SYMBOL(iowrite64be);
EXPORT_SYMBOL(iowrite64_lo_hi);
EXPORT_SYMBOL(iowrite64_hi_lo);
EXPORT_SYMBOL(ioread8_rep);
EXPORT_SYMBOL(ioread16_rep);


@ -337,9 +337,9 @@ static void __init setup_bootmem(void)
static bool kernel_set_to_readonly;
static void __init map_pages(unsigned long start_vaddr,
unsigned long start_paddr, unsigned long size,
pgprot_t pgprot, int force)
static void __ref map_pages(unsigned long start_vaddr,
unsigned long start_paddr, unsigned long size,
pgprot_t pgprot, int force)
{
pmd_t *pmd;
pte_t *pg_table;
@ -449,7 +449,7 @@ void __init set_kernel_text_rw(int enable_read_write)
flush_tlb_all();
}
void __ref free_initmem(void)
void free_initmem(void)
{
unsigned long init_begin = (unsigned long)__init_begin;
unsigned long init_end = (unsigned long)__init_end;
@ -463,7 +463,6 @@ void __ref free_initmem(void)
/* The init text pages are marked R-X. We have to
* flush the icache and mark them RW-
*
* This is tricky, because map_pages is in the init section.
* Do a dummy remap of the data section first (the data
* section is already PAGE_KERNEL) to pull in the TLB entries
* for map_kernel */


@ -421,14 +421,14 @@ InstructionTLBMiss:
*/
/* Get PTE (linux-style) and check access */
mfspr r3,SPRN_IMISS
#ifdef CONFIG_MODULES
#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
lis r1, TASK_SIZE@h /* check if kernel address */
cmplw 0,r1,r3
#endif
mfspr r2, SPRN_SDR1
li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
rlwinm r2, r2, 28, 0xfffff000
#ifdef CONFIG_MODULES
#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC


@ -3264,12 +3264,14 @@ void emulate_update_regs(struct pt_regs *regs, struct instruction_op *op)
case BARRIER_EIEIO:
eieio();
break;
#ifdef CONFIG_PPC64
case BARRIER_LWSYNC:
asm volatile("lwsync" : : : "memory");
break;
case BARRIER_PTESYNC:
asm volatile("ptesync" : : : "memory");
break;
#endif
}
break;


@ -23,7 +23,7 @@ CONFIG_SLOB=y
CONFIG_SOC_CANAAN=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_CMDLINE="earlycon console=ttySIF0 rootdelay=2 root=/dev/mmcblk0p1 ro"
CONFIG_CMDLINE="earlycon console=ttySIF0 root=/dev/mmcblk0p1 rootwait ro"
CONFIG_CMDLINE_FORCE=y
# CONFIG_SECCOMP is not set
# CONFIG_STACKPROTECTOR is not set


@ -51,6 +51,8 @@ obj-$(CONFIG_MODULE_SECTIONS) += module-sections.o
obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o
obj-$(CONFIG_TRACE_IRQFLAGS) += trace_irq.o
obj-$(CONFIG_RISCV_BASE_PMU) += perf_event.o
obj-$(CONFIG_PERF_EVENTS) += perf_callchain.o
obj-$(CONFIG_HAVE_PERF_REGS) += perf_regs.o


@ -108,7 +108,7 @@ _save_context:
.option pop
#ifdef CONFIG_TRACE_IRQFLAGS
call trace_hardirqs_off
call __trace_hardirqs_off
#endif
#ifdef CONFIG_CONTEXT_TRACKING
@ -143,7 +143,7 @@ skip_context_tracking:
li t0, EXC_BREAKPOINT
beq s4, t0, 1f
#ifdef CONFIG_TRACE_IRQFLAGS
call trace_hardirqs_on
call __trace_hardirqs_on
#endif
csrs CSR_STATUS, SR_IE
@ -234,7 +234,7 @@ ret_from_exception:
REG_L s0, PT_STATUS(sp)
csrc CSR_STATUS, SR_IE
#ifdef CONFIG_TRACE_IRQFLAGS
call trace_hardirqs_off
call __trace_hardirqs_off
#endif
#ifdef CONFIG_RISCV_M_MODE
/* the MPP value is too large to be used as an immediate arg for addi */
@ -270,10 +270,10 @@ restore_all:
REG_L s1, PT_STATUS(sp)
andi t0, s1, SR_PIE
beqz t0, 1f
call trace_hardirqs_on
call __trace_hardirqs_on
j 2f
1:
call trace_hardirqs_off
call __trace_hardirqs_off
2:
#endif
REG_L a0, PT_STATUS(sp)


@ -5,6 +5,7 @@
* Copyright (c) 2020 Western Digital Corporation or its affiliates.
*/
#include <linux/bits.h>
#include <linux/init.h>
#include <linux/pm.h>
#include <linux/reboot.h>
@ -85,7 +86,7 @@ static unsigned long __sbi_v01_cpumask_to_hartmask(const struct cpumask *cpu_mas
pr_warn("Unable to send any request to hartid > BITS_PER_LONG for SBI v0.1\n");
break;
}
hmask |= 1 << hartid;
hmask |= BIT(hartid);
}
return hmask;
@ -160,7 +161,7 @@ static int __sbi_send_ipi_v01(const struct cpumask *cpu_mask)
{
unsigned long hart_mask;
if (!cpu_mask)
if (!cpu_mask || cpumask_empty(cpu_mask))
cpu_mask = cpu_online_mask;
hart_mask = __sbi_v01_cpumask_to_hartmask(cpu_mask);
@ -176,7 +177,7 @@ static int __sbi_rfence_v01(int fid, const struct cpumask *cpu_mask,
int result = 0;
unsigned long hart_mask;
if (!cpu_mask)
if (!cpu_mask || cpumask_empty(cpu_mask))
cpu_mask = cpu_online_mask;
hart_mask = __sbi_v01_cpumask_to_hartmask(cpu_mask);
@ -249,26 +250,37 @@ static void __sbi_set_timer_v02(uint64_t stime_value)
static int __sbi_send_ipi_v02(const struct cpumask *cpu_mask)
{
unsigned long hartid, cpuid, hmask = 0, hbase = 0;
unsigned long hartid, cpuid, hmask = 0, hbase = 0, htop = 0;
struct sbiret ret = {0};
int result;
if (!cpu_mask)
if (!cpu_mask || cpumask_empty(cpu_mask))
cpu_mask = cpu_online_mask;
for_each_cpu(cpuid, cpu_mask) {
hartid = cpuid_to_hartid_map(cpuid);
if (hmask && ((hbase + BITS_PER_LONG) <= hartid)) {
ret = sbi_ecall(SBI_EXT_IPI, SBI_EXT_IPI_SEND_IPI,
hmask, hbase, 0, 0, 0, 0);
if (ret.error)
goto ecall_failed;
hmask = 0;
hbase = 0;
if (hmask) {
if (hartid + BITS_PER_LONG <= htop ||
hbase + BITS_PER_LONG <= hartid) {
ret = sbi_ecall(SBI_EXT_IPI,
SBI_EXT_IPI_SEND_IPI, hmask,
hbase, 0, 0, 0, 0);
if (ret.error)
goto ecall_failed;
hmask = 0;
} else if (hartid < hbase) {
/* shift the mask to fit lower hartid */
hmask <<= hbase - hartid;
hbase = hartid;
}
}
if (!hmask)
if (!hmask) {
hbase = hartid;
hmask |= 1UL << (hartid - hbase);
htop = hartid;
} else if (hartid > htop) {
htop = hartid;
}
hmask |= BIT(hartid - hbase);
}
if (hmask) {
@ -344,25 +356,35 @@ static int __sbi_rfence_v02(int fid, const struct cpumask *cpu_mask,
unsigned long start, unsigned long size,
unsigned long arg4, unsigned long arg5)
{
unsigned long hartid, cpuid, hmask = 0, hbase = 0;
unsigned long hartid, cpuid, hmask = 0, hbase = 0, htop = 0;
int result;
if (!cpu_mask)
if (!cpu_mask || cpumask_empty(cpu_mask))
cpu_mask = cpu_online_mask;
for_each_cpu(cpuid, cpu_mask) {
hartid = cpuid_to_hartid_map(cpuid);
if (hmask && ((hbase + BITS_PER_LONG) <= hartid)) {
result = __sbi_rfence_v02_call(fid, hmask, hbase,
start, size, arg4, arg5);
if (result)
return result;
hmask = 0;
hbase = 0;
if (hmask) {
if (hartid + BITS_PER_LONG <= htop ||
hbase + BITS_PER_LONG <= hartid) {
result = __sbi_rfence_v02_call(fid, hmask,
hbase, start, size, arg4, arg5);
if (result)
return result;
hmask = 0;
} else if (hartid < hbase) {
/* shift the mask to fit lower hartid */
hmask <<= hbase - hartid;
hbase = hartid;
}
}
if (!hmask)
if (!hmask) {
hbase = hartid;
hmask |= 1UL << (hartid - hbase);
htop = hartid;
} else if (hartid > htop) {
htop = hartid;
}
hmask |= BIT(hartid - hbase);
}
if (hmask) {


@ -0,0 +1,27 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2022 Changbin Du <changbin.du@gmail.com>
*/
#include <linux/irqflags.h>
#include <linux/kprobes.h>
#include "trace_irq.h"
/*
* trace_hardirqs_on/off require the caller to set up the frame pointer properly.
* Otherwise, CALLER_ADDR1 might trigger a paging exception in the kernel.
* Here we add one extra level so they can be safely called by low-level
* entry code in which $fp is used for another purpose.
*/
void __trace_hardirqs_on(void)
{
trace_hardirqs_on();
}
NOKPROBE_SYMBOL(__trace_hardirqs_on);
void __trace_hardirqs_off(void)
{
trace_hardirqs_off();
}
NOKPROBE_SYMBOL(__trace_hardirqs_off);


@ -0,0 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2022 Changbin Du <changbin.du@gmail.com>
*/
#ifndef __TRACE_IRQ_H
#define __TRACE_IRQ_H
void __trace_hardirqs_on(void);
void __trace_hardirqs_off(void);
#endif /* __TRACE_IRQ_H */


@ -703,7 +703,6 @@ struct kvm_vcpu_arch {
struct fpu_guest guest_fpu;
u64 xcr0;
u64 guest_supported_xcr0;
struct kvm_pio_request pio;
void *pio_data;


@ -476,6 +476,7 @@
#define MSR_AMD64_ICIBSEXTDCTL 0xc001103c
#define MSR_AMD64_IBSOPDATA4 0xc001103d
#define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */
#define MSR_AMD64_SVM_AVIC_DOORBELL 0xc001011b
#define MSR_AMD64_VM_PAGE_FLUSH 0xc001011e
#define MSR_AMD64_SEV_ES_GHCB 0xc0010130
#define MSR_AMD64_SEV 0xc0010131


@ -220,6 +220,42 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_NESTED_CTL_SEV_ENABLE BIT(1)
#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT(2)
/* AVIC */
#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFF)
#define AVIC_LOGICAL_ID_ENTRY_VALID_BIT 31
#define AVIC_LOGICAL_ID_ENTRY_VALID_MASK (1 << 31)
#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK (0xFFULL)
#define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK (0xFFFFFFFFFFULL << 12)
#define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK (1ULL << 62)
#define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK (1ULL << 63)
#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK (0xFF)
#define AVIC_DOORBELL_PHYSICAL_ID_MASK (0xFF)
#define AVIC_UNACCEL_ACCESS_WRITE_MASK 1
#define AVIC_UNACCEL_ACCESS_OFFSET_MASK 0xFF0
#define AVIC_UNACCEL_ACCESS_VECTOR_MASK 0xFFFFFFFF
enum avic_ipi_failure_cause {
AVIC_IPI_FAILURE_INVALID_INT_TYPE,
AVIC_IPI_FAILURE_TARGET_NOT_RUNNING,
AVIC_IPI_FAILURE_INVALID_TARGET,
AVIC_IPI_FAILURE_INVALID_BACKING_PAGE,
};
/*
* 0xff is broadcast, so the max index allowed for physical APIC ID
* table is 0xfe. APIC IDs above 0xff are reserved.
*/
#define AVIC_MAX_PHYSICAL_ID_COUNT 0xff
#define AVIC_HPA_MASK ~((0xFFFULL << 52) | 0xFFF)
#define VMCB_AVIC_APIC_BAR_MASK 0xFFFFFFFFFF000ULL
struct vmcb_seg {
u16 selector;
u16 attrib;


@ -344,10 +344,8 @@ static void sgx_reclaim_pages(void)
{
struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
struct sgx_backing backing[SGX_NR_TO_SCAN];
struct sgx_epc_section *section;
struct sgx_encl_page *encl_page;
struct sgx_epc_page *epc_page;
struct sgx_numa_node *node;
pgoff_t page_index;
int cnt = 0;
int ret;
@ -418,13 +416,7 @@ skip:
kref_put(&encl_page->encl->refcount, sgx_encl_release);
epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
section = &sgx_epc_sections[epc_page->section];
node = section->node;
spin_lock(&node->lock);
list_add_tail(&epc_page->list, &node->free_page_list);
spin_unlock(&node->lock);
atomic_long_inc(&sgx_nr_free_pages);
sgx_free_epc_page(epc_page);
}
}


@ -91,11 +91,9 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
const void *kbuf, const void __user *ubuf)
{
struct fpu *fpu = &target->thread.fpu;
struct user32_fxsr_struct newstate;
struct fxregs_state newstate;
int ret;
BUILD_BUG_ON(sizeof(newstate) != sizeof(struct fxregs_state));
if (!cpu_feature_enabled(X86_FEATURE_FXSR))
return -ENODEV;
@ -116,9 +114,10 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
/* Copy the state */
memcpy(&fpu->fpstate->regs.fxsave, &newstate, sizeof(newstate));
/* Clear xmm8..15 */
/* Clear xmm8..15 for 32-bit callers */
BUILD_BUG_ON(sizeof(fpu->__fpstate.regs.fxsave.xmm_space) != 16 * 16);
memset(&fpu->fpstate->regs.fxsave.xmm_space[8], 0, 8 * 16);
if (in_ia32_syscall())
memset(&fpu->fpstate->regs.fxsave.xmm_space[8*4], 0, 8 * 16);
/* Mark FP and SSE as in use when XSAVE is enabled */
if (use_xsave())


@ -1558,7 +1558,10 @@ static int fpstate_realloc(u64 xfeatures, unsigned int ksize,
fpregs_restore_userregs();
newfps->xfeatures = curfps->xfeatures | xfeatures;
newfps->user_xfeatures = curfps->user_xfeatures | xfeatures;
if (!guest_fpu)
newfps->user_xfeatures = curfps->user_xfeatures | xfeatures;
newfps->xfd = curfps->xfd & ~xfeatures;
/* Do the final updates within the locked region */


@ -462,19 +462,22 @@ static bool pv_tlb_flush_supported(void)
{
return (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME));
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) &&
(num_possible_cpus() != 1));
}
static bool pv_ipi_supported(void)
{
return kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI);
return (kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI) &&
(num_possible_cpus() != 1));
}
static bool pv_sched_yield_supported(void)
{
return (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME));
kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) &&
(num_possible_cpus() != 1));
}
#define KVM_IPI_CLUSTER_SIZE (2 * BITS_PER_LONG)


@ -1224,7 +1224,7 @@ static struct user_regset x86_64_regsets[] __ro_after_init = {
},
[REGSET_FP] = {
.core_note_type = NT_PRFPREG,
.n = sizeof(struct user_i387_struct) / sizeof(long),
.n = sizeof(struct fxregs_state) / sizeof(long),
.size = sizeof(long), .align = sizeof(long),
.active = regset_xregset_fpregs_active, .regset_get = xfpregs_get, .set = xfpregs_set
},
@ -1271,7 +1271,7 @@ static struct user_regset x86_32_regsets[] __ro_after_init = {
},
[REGSET_XFP] = {
.core_note_type = NT_PRXFPREG,
.n = sizeof(struct user32_fxsr_struct) / sizeof(u32),
.n = sizeof(struct fxregs_state) / sizeof(u32),
.size = sizeof(u32), .align = sizeof(u32),
.active = regset_xregset_fpregs_active, .regset_get = xfpregs_get, .set = xfpregs_set
},


@ -282,6 +282,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
{
struct kvm_lapic *apic = vcpu->arch.apic;
struct kvm_cpuid_entry2 *best;
u64 guest_supported_xcr0;
best = kvm_find_cpuid_entry(vcpu, 1, 0);
if (best && apic) {
@ -293,9 +294,11 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
kvm_apic_set_version(vcpu);
}
vcpu->arch.guest_supported_xcr0 =
guest_supported_xcr0 =
cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);
vcpu->arch.guest_fpu.fpstate->user_xfeatures = guest_supported_xcr0;
kvm_update_pv_runtime(vcpu);
vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);


@ -2306,7 +2306,12 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
apic->irr_pending = true;
apic->isr_count = 1;
} else {
apic->irr_pending = (apic_search_irr(apic) != -1);
/*
* Don't clear irr_pending, searching the IRR can race with
* updates from the CPU as APICv is still active from hardware's
* perspective. The flag will be cleared as appropriate when
* KVM injects the interrupt.
*/
apic->isr_count = count_vectors(apic->regs + APIC_ISR);
}
}


@ -3889,12 +3889,23 @@ static void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
walk_shadow_page_lockless_end(vcpu);
}
static u32 alloc_apf_token(struct kvm_vcpu *vcpu)
{
/* make sure the token value is not 0 */
u32 id = vcpu->arch.apf.id;
if (id << 12 == 0)
vcpu->arch.apf.id = 1;
return (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
}
static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
gfn_t gfn)
{
struct kvm_arch_async_pf arch;
arch.token = (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
arch.token = alloc_apf_token(vcpu);
arch.gfn = gfn;
arch.direct_map = vcpu->arch.mmu->direct_map;
arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);


@ -95,7 +95,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
}
static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
unsigned config, bool exclude_user,
u64 config, bool exclude_user,
bool exclude_kernel, bool intr,
bool in_tx, bool in_tx_cp)
{
@ -181,7 +181,8 @@ static int cmp_u64(const void *a, const void *b)
void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
{
unsigned config, type = PERF_TYPE_RAW;
u64 config;
u32 type = PERF_TYPE_RAW;
struct kvm *kvm = pmc->vcpu->kvm;
struct kvm_pmu_event_filter *filter;
bool allow_event = true;
@ -220,7 +221,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
}
if (type == PERF_TYPE_RAW)
config = eventsel & X86_RAW_EVENT_MASK;
config = eventsel & AMD64_RAW_EVENT_MASK;
if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
return;


@ -27,20 +27,6 @@
#include "irq.h"
#include "svm.h"
#define SVM_AVIC_DOORBELL 0xc001011b
#define AVIC_HPA_MASK ~((0xFFFULL << 52) | 0xFFF)
/*
* 0xff is broadcast, so the max index allowed for physical APIC ID
* table is 0xfe. APIC IDs above 0xff are reserved.
*/
#define AVIC_MAX_PHYSICAL_ID_COUNT 255
#define AVIC_UNACCEL_ACCESS_WRITE_MASK 1
#define AVIC_UNACCEL_ACCESS_OFFSET_MASK 0xFF0
#define AVIC_UNACCEL_ACCESS_VECTOR_MASK 0xFFFFFFFF
/* AVIC GATAG is encoded using VM and VCPU IDs */
#define AVIC_VCPU_ID_BITS 8
#define AVIC_VCPU_ID_MASK ((1 << AVIC_VCPU_ID_BITS) - 1)
@ -73,12 +59,6 @@ struct amd_svm_iommu_ir {
void *data; /* Storing pointer to struct amd_ir_data */
};
enum avic_ipi_failure_cause {
AVIC_IPI_FAILURE_INVALID_INT_TYPE,
AVIC_IPI_FAILURE_TARGET_NOT_RUNNING,
AVIC_IPI_FAILURE_INVALID_TARGET,
AVIC_IPI_FAILURE_INVALID_BACKING_PAGE,
};
/* Note:
* This function is called from IOMMU driver to notify
@ -289,6 +269,22 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
return 0;
}
void avic_ring_doorbell(struct kvm_vcpu *vcpu)
{
/*
* Note, the vCPU could get migrated to a different pCPU at any point,
* which could result in signalling the wrong/previous pCPU. But if
* that happens the vCPU is guaranteed to do a VMRUN (after being
* migrated) and thus will process pending interrupts, i.e. a doorbell
* is not needed (and the spurious one is harmless).
*/
int cpu = READ_ONCE(vcpu->cpu);
if (cpu != get_cpu())
wrmsrl(MSR_AMD64_SVM_AVIC_DOORBELL, kvm_cpu_get_apicid(cpu));
put_cpu();
}
static void avic_kick_target_vcpus(struct kvm *kvm, struct kvm_lapic *source,
u32 icrl, u32 icrh)
{
@ -304,8 +300,13 @@ static void avic_kick_target_vcpus(struct kvm *kvm, struct kvm_lapic *source,
kvm_for_each_vcpu(i, vcpu, kvm) {
if (kvm_apic_match_dest(vcpu, source, icrl & APIC_SHORT_MASK,
GET_APIC_DEST_FIELD(icrh),
icrl & APIC_DEST_MASK))
kvm_vcpu_wake_up(vcpu);
icrl & APIC_DEST_MASK)) {
vcpu->arch.apic->irr_pending = true;
svm_complete_interrupt_delivery(vcpu,
icrl & APIC_MODE_MASK,
icrl & APIC_INT_LEVELTRIG,
icrl & APIC_VECTOR_MASK);
}
}
}
@ -345,8 +346,6 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
avic_kick_target_vcpus(vcpu->kvm, apic, icrl, icrh);
break;
case AVIC_IPI_FAILURE_INVALID_TARGET:
WARN_ONCE(1, "Invalid IPI target: index=%u, vcpu=%d, icr=%#0x:%#0x\n",
index, vcpu->vcpu_id, icrh, icrl);
break;
case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
WARN_ONCE(1, "Invalid backing page\n");
@ -669,52 +668,6 @@ void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
return;
}
int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
{
if (!vcpu->arch.apicv_active)
return -1;
kvm_lapic_set_irr(vec, vcpu->arch.apic);
/*
* Pairs with the smp_mb_*() after setting vcpu->guest_mode in
* vcpu_enter_guest() to ensure the write to the vIRR is ordered before
* the read of guest_mode, which guarantees that either VMRUN will see
* and process the new vIRR entry, or that the below code will signal
* the doorbell if the vCPU is already running in the guest.
*/
smp_mb__after_atomic();
/*
* Signal the doorbell to tell hardware to inject the IRQ if the vCPU
* is in the guest. If the vCPU is not in the guest, hardware will
* automatically process AVIC interrupts at VMRUN.
*/
if (vcpu->mode == IN_GUEST_MODE) {
int cpu = READ_ONCE(vcpu->cpu);
/*
* Note, the vCPU could get migrated to a different pCPU at any
* point, which could result in signalling the wrong/previous
* pCPU. But if that happens the vCPU is guaranteed to do a
* VMRUN (after being migrated) and thus will process pending
* interrupts, i.e. a doorbell is not needed (and the spurious
* one is harmless).
*/
if (cpu != get_cpu())
wrmsrl(SVM_AVIC_DOORBELL, kvm_cpu_get_apicid(cpu));
put_cpu();
} else {
/*
* Wake the vCPU if it was blocking. KVM will then detect the
* pending IRQ when checking if the vCPU has a wake event.
*/
kvm_vcpu_wake_up(vcpu);
}
return 0;
}
bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
{
return false;


@ -1457,18 +1457,6 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
!__nested_vmcb_check_save(vcpu, &save_cached))
goto out_free;
/*
* While the nested guest CR3 is already checked and set by
* KVM_SET_SREGS, it was set when nested state was yet loaded,
* thus MMU might not be initialized correctly.
* Set it again to fix this.
*/
ret = nested_svm_load_cr3(&svm->vcpu, vcpu->arch.cr3,
nested_npt_enabled(svm), false);
if (WARN_ON_ONCE(ret))
goto out_free;
/*
* All checks done, we can enter guest mode. Userspace provides
@ -1494,6 +1482,20 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
svm_switch_vmcb(svm, &svm->nested.vmcb02);
nested_vmcb02_prepare_control(svm);
/*
* While the nested guest CR3 is already checked and set by
* KVM_SET_SREGS, it was set when nested state was yet loaded,
* thus MMU might not be initialized correctly.
* Set it again to fix this.
*/
ret = nested_svm_load_cr3(&svm->vcpu, vcpu->arch.cr3,
nested_npt_enabled(svm), false);
if (WARN_ON_ONCE(ret))
goto out_free;
kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
ret = 0;
out_free:


@ -1585,6 +1585,7 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
{
struct vcpu_svm *svm = to_svm(vcpu);
u64 hcr0 = cr0;
bool old_paging = is_paging(vcpu);
#ifdef CONFIG_X86_64
if (vcpu->arch.efer & EFER_LME && !vcpu->arch.guest_state_protected) {
@ -1601,8 +1602,11 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
#endif
vcpu->arch.cr0 = cr0;
if (!npt_enabled)
if (!npt_enabled) {
hcr0 |= X86_CR0_PG | X86_CR0_WP;
if (old_paging != is_paging(vcpu))
svm_set_cr4(vcpu, kvm_read_cr4(vcpu));
}
/*
* re-enable caching here because the QEMU bios
@ -1646,8 +1650,12 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
svm_flush_tlb(vcpu);
vcpu->arch.cr4 = cr4;
if (!npt_enabled)
if (!npt_enabled) {
cr4 |= X86_CR4_PAE;
if (!is_paging(vcpu))
cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE);
}
cr4 |= host_cr4_mce;
to_svm(vcpu)->vmcb->save.cr4 = cr4;
vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
@ -2685,8 +2693,23 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
u64 data = msr->data;
switch (ecx) {
case MSR_AMD64_TSC_RATIO:
if (!msr->host_initiated && !svm->tsc_scaling_enabled)
return 1;
if (!svm->tsc_scaling_enabled) {
if (!msr->host_initiated)
return 1;
/*
* In case TSC scaling is not enabled, always
* leave this MSR at the default value.
*
* Due to a bug in qemu 6.2.0, it would try to set
* this msr to 0 if tsc scaling is not enabled.
* Ignore this value as well.
*/
if (data != 0 && data != svm->tsc_ratio_msr)
return 1;
break;
}
if (data & TSC_RATIO_RSVD)
return 1;
@ -3291,19 +3314,53 @@ static void svm_set_irq(struct kvm_vcpu *vcpu)
SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
}
static void svm_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
int trig_mode, int vector)
void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
int trig_mode, int vector)
{
struct kvm_vcpu *vcpu = apic->vcpu;
/*
* vcpu->arch.apicv_active must be read after vcpu->mode.
* Pairs with smp_store_release in vcpu_enter_guest.
*/
bool in_guest_mode = (smp_load_acquire(&vcpu->mode) == IN_GUEST_MODE);
if (svm_deliver_avic_intr(vcpu, vector)) {
kvm_lapic_set_irr(vector, apic);
if (!READ_ONCE(vcpu->arch.apicv_active)) {
/* Process the interrupt via inject_pending_event */
kvm_make_request(KVM_REQ_EVENT, vcpu);
kvm_vcpu_kick(vcpu);
} else {
trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
trig_mode, vector);
return;
}
trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode, trig_mode, vector);
if (in_guest_mode) {
/*
* Signal the doorbell to tell hardware to inject the IRQ. If
* the vCPU exits the guest before the doorbell chimes, hardware
* will automatically process AVIC interrupts at the next VMRUN.
*/
avic_ring_doorbell(vcpu);
} else {
/*
* Wake the vCPU if it was blocking. KVM will then detect the
* pending IRQ when checking if the vCPU has a wake event.
*/
kvm_vcpu_wake_up(vcpu);
}
}
static void svm_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
int trig_mode, int vector)
{
kvm_lapic_set_irr(vector, apic);
/*
* Pairs with the smp_mb_*() after setting vcpu->guest_mode in
* vcpu_enter_guest() to ensure the write to the vIRR is ordered before
* the read of guest_mode. This guarantees that either VMRUN will see
* and process the new vIRR entry, or that svm_complete_interrupt_delivery
* will signal the doorbell if the CPU has already entered the guest.
*/
smp_mb__after_atomic();
svm_complete_interrupt_delivery(apic->vcpu, delivery_mode, trig_mode, vector);
}
static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
@ -3353,11 +3410,13 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
if (svm->nested.nested_run_pending)
return -EBUSY;
if (svm_nmi_blocked(vcpu))
return 0;
/* An NMI must not be injected into L2 if it's supposed to VM-Exit. */
if (for_injection && is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
return -EBUSY;
return !svm_nmi_blocked(vcpu);
return 1;
}
static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
@ -3409,9 +3468,13 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
static int svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
{
struct vcpu_svm *svm = to_svm(vcpu);
if (svm->nested.nested_run_pending)
return -EBUSY;
if (svm_interrupt_blocked(vcpu))
return 0;
/*
* An IRQ must not be injected into L2 if it's supposed to VM-Exit,
* e.g. if the IRQ arrived asynchronously after checking nested events.
@ -3419,7 +3482,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
if (for_injection && is_guest_mode(vcpu) && nested_exit_on_intr(svm))
return -EBUSY;
return !svm_interrupt_blocked(vcpu);
return 1;
}
static void svm_enable_irq_window(struct kvm_vcpu *vcpu)
@ -4150,11 +4213,14 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
if (svm->nested.nested_run_pending)
return -EBUSY;
if (svm_smi_blocked(vcpu))
return 0;
/* An SMI must not be injected into L2 if it's supposed to VM-Exit. */
if (for_injection && is_guest_mode(vcpu) && nested_exit_on_smi(svm))
return -EBUSY;
return !svm_smi_blocked(vcpu);
return 1;
}
static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
@ -4248,11 +4314,18 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
* Enter the nested guest now
*/
vmcb_mark_all_dirty(svm->vmcb01.ptr);
vmcb12 = map.hva;
nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
if (ret)
goto unmap_save;
svm->nested.nested_run_pending = 1;
unmap_save:
kvm_vcpu_unmap(vcpu, &map_save, true);
unmap_map:
@ -4637,6 +4710,7 @@ static __init void svm_set_cpu_caps(void)
/* CPUID 0x80000001 and 0x8000000A (SVM features) */
if (nested) {
kvm_cpu_cap_set(X86_FEATURE_SVM);
kvm_cpu_cap_set(X86_FEATURE_VMCBCLEAN);
if (nrips)
kvm_cpu_cap_set(X86_FEATURE_NRIPS);


@ -489,6 +489,8 @@ void svm_set_gif(struct vcpu_svm *svm, bool value);
int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
int read, int write);
void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
int trig_mode, int vec);
/* nested.c */
@ -556,17 +558,6 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
/* avic.c */
#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFF)
#define AVIC_LOGICAL_ID_ENTRY_VALID_BIT 31
#define AVIC_LOGICAL_ID_ENTRY_VALID_MASK (1 << 31)
#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK (0xFFULL)
#define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK (0xFFFFFFFFFFULL << 12)
#define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK (1ULL << 62)
#define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK (1ULL << 63)
#define VMCB_AVIC_APIC_BAR_MASK 0xFFFFFFFFFF000ULL
int avic_ga_log_notifier(u32 ga_tag);
void avic_vm_destroy(struct kvm *kvm);
int avic_vm_init(struct kvm *kvm);
@ -583,12 +574,12 @@ bool svm_check_apicv_inhibit_reasons(ulong bit);
void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
void svm_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec);
bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
uint32_t guest_irq, bool set);
void avic_vcpu_blocking(struct kvm_vcpu *vcpu);
void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
void avic_ring_doorbell(struct kvm_vcpu *vcpu);
/* sev.c */


@ -7659,6 +7659,7 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
if (ret)
return ret;
vmx->nested.nested_run_pending = 1;
vmx->nested.smm.guest_mode = false;
}
return 0;


@ -984,6 +984,18 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);
static inline u64 kvm_guest_supported_xcr0(struct kvm_vcpu *vcpu)
{
return vcpu->arch.guest_fpu.fpstate->user_xfeatures;
}
#ifdef CONFIG_X86_64
static inline u64 kvm_guest_supported_xfd(struct kvm_vcpu *vcpu)
{
return kvm_guest_supported_xcr0(vcpu) & XFEATURE_MASK_USER_DYNAMIC;
}
#endif
static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
{
u64 xcr0 = xcr;
@ -1003,7 +1015,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
* saving. However, xcr0 bit 0 is always set, even if the
* emulated CPU does not support XSAVE (see kvm_vcpu_reset()).
*/
valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
valid_bits = kvm_guest_supported_xcr0(vcpu) | XFEATURE_MASK_FP;
if (xcr0 & ~valid_bits)
return 1;
@ -2351,10 +2363,12 @@ static u64 compute_guest_tsc(struct kvm_vcpu *vcpu, s64 kernel_ns)
return tsc;
}
#ifdef CONFIG_X86_64
static inline int gtod_is_based_on_tsc(int mode)
{
return mode == VDSO_CLOCKMODE_TSC || mode == VDSO_CLOCKMODE_HVCLOCK;
}
#endif
static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu)
{
@ -3706,8 +3720,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
!guest_cpuid_has(vcpu, X86_FEATURE_XFD))
return 1;
if (data & ~(XFEATURE_MASK_USER_DYNAMIC &
vcpu->arch.guest_supported_xcr0))
if (data & ~kvm_guest_supported_xfd(vcpu))
return 1;
fpu_update_guest_xfd(&vcpu->arch.guest_fpu, data);
@ -3717,8 +3730,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
!guest_cpuid_has(vcpu, X86_FEATURE_XFD))
return 1;
if (data & ~(XFEATURE_MASK_USER_DYNAMIC &
vcpu->arch.guest_supported_xcr0))
if (data & ~kvm_guest_supported_xfd(vcpu))
return 1;
vcpu->arch.guest_fpu.xfd_err = data;
@ -4233,6 +4245,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_EXIT_ON_EMULATION_FAILURE:
case KVM_CAP_VCPU_ATTRIBUTES:
case KVM_CAP_SYS_ATTRIBUTES:
case KVM_CAP_ENABLE_CAP:
r = 1;
break;
case KVM_CAP_EXIT_HYPERCALL:
@ -8942,6 +8955,13 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
if (clock_type != KVM_CLOCK_PAIRING_WALLCLOCK)
return -KVM_EOPNOTSUPP;
/*
* When tsc is in permanent catchup mode guests won't be able to use
* pvclock_read_retry loop to get consistent view of pvclock
*/
if (vcpu->arch.tsc_always_catchup)
return -KVM_EOPNOTSUPP;
if (!kvm_get_walltime_and_clockread(&ts, &cycle))
return -KVM_EOPNOTSUPP;
@ -9983,7 +10003,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
* result in virtual interrupt delivery.
*/
local_irq_disable();
vcpu->mode = IN_GUEST_MODE;
/* Store vcpu->apicv_active before vcpu->mode. */
smp_store_release(&vcpu->mode, IN_GUEST_MODE);
srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);


@ -133,32 +133,57 @@ static void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)
void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
{
struct kvm_vcpu_xen *vx = &v->arch.xen;
struct gfn_to_hva_cache *ghc = &vx->runstate_cache;
struct kvm_memslots *slots = kvm_memslots(v->kvm);
bool atomic = (state == RUNSTATE_runnable);
uint64_t state_entry_time;
unsigned int offset;
int __user *user_state;
uint64_t __user *user_times;
kvm_xen_update_runstate(v, state);
if (!vx->runstate_set)
return;
if (unlikely(slots->generation != ghc->generation || kvm_is_error_hva(ghc->hva)) &&
kvm_gfn_to_hva_cache_init(v->kvm, ghc, ghc->gpa, ghc->len))
return;
/* We made sure it fits in a single page */
BUG_ON(!ghc->memslot);
if (atomic)
pagefault_disable();
/*
* The only difference between 32-bit and 64-bit versions of the
* runstate struct is the alignment of uint64_t in 32-bit, which
* means that the 64-bit version has an additional 4 bytes of
* padding after the first field 'state'.
*
* So we use 'int __user *user_state' to point to the state field,
* and 'uint64_t __user *user_times' for runstate_entry_time. So
* the actual array of time[] in each state starts at user_times[1].
*/
BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) != 0);
BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state) != 0);
user_state = (int __user *)ghc->hva;
BUILD_BUG_ON(sizeof(struct compat_vcpu_runstate_info) != 0x2c);
offset = offsetof(struct compat_vcpu_runstate_info, state_entry_time);
user_times = (uint64_t __user *)(ghc->hva +
offsetof(struct compat_vcpu_runstate_info,
state_entry_time));
#ifdef CONFIG_X86_64
/*
* The only difference is alignment of uint64_t in 32-bit.
* So the first field 'state' is accessed directly using
* offsetof() (where its offset happens to be zero), while the
* remaining fields which are all uint64_t, start at 'offset'
* which we tweak here by adding 4.
*/
BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state_entry_time) !=
offsetof(struct compat_vcpu_runstate_info, state_entry_time) + 4);
BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, time) !=
offsetof(struct compat_vcpu_runstate_info, time) + 4);
if (v->kvm->arch.xen.long_mode)
offset = offsetof(struct vcpu_runstate_info, state_entry_time);
user_times = (uint64_t __user *)(ghc->hva +
offsetof(struct vcpu_runstate_info,
state_entry_time));
#endif
/*
* First write the updated state_entry_time at the appropriate
@ -172,10 +197,8 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state_entry_time) !=
sizeof(state_entry_time));
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
&state_entry_time, offset,
sizeof(state_entry_time)))
return;
if (__put_user(state_entry_time, user_times))
goto out;
smp_wmb();
/*
@ -189,11 +212,8 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state) !=
sizeof(vx->current_runstate));
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
&vx->current_runstate,
offsetof(struct vcpu_runstate_info, state),
sizeof(vx->current_runstate)))
return;
if (__put_user(vx->current_runstate, user_state))
goto out;
/*
* Write the actual runstate times immediately after the
@ -208,24 +228,23 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
sizeof(vx->runstate_times));
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
&vx->runstate_times[0],
offset + sizeof(u64),
sizeof(vx->runstate_times)))
return;
if (__copy_to_user(user_times + 1, vx->runstate_times, sizeof(vx->runstate_times)))
goto out;
smp_wmb();
/*
* Finally, clear the XEN_RUNSTATE_UPDATE bit in the guest's
* runstate_entry_time field.
*/
state_entry_time &= ~XEN_RUNSTATE_UPDATE;
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
&state_entry_time, offset,
sizeof(state_entry_time)))
return;
__put_user(state_entry_time, user_times);
smp_wmb();
out:
mark_page_dirty_in_slot(v->kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
if (atomic)
pagefault_enable();
}
int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
@ -443,6 +462,12 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
break;
}
/* It must fit within a single page */
if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct vcpu_info) > PAGE_SIZE) {
r = -EINVAL;
break;
}
r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
&vcpu->arch.xen.vcpu_info_cache,
data->u.gpa,
@ -460,6 +485,12 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
break;
}
/* It must fit within a single page */
if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct pvclock_vcpu_time_info) > PAGE_SIZE) {
r = -EINVAL;
break;
}
r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
&vcpu->arch.xen.vcpu_time_info_cache,
data->u.gpa,
@ -481,6 +512,12 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
break;
}
/* It must fit within a single page */
if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct vcpu_runstate_info) > PAGE_SIZE) {
r = -EINVAL;
break;
}
r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
&vcpu->arch.xen.runstate_cache,
data->u.gpa,


@ -7018,6 +7018,8 @@ static void bfq_exit_queue(struct elevator_queue *e)
spin_unlock_irq(&bfqd->lock);
#endif
wbt_enable_default(bfqd->queue);
kfree(bfqd);
}


@ -284,13 +284,6 @@ void blk_queue_start_drain(struct request_queue *q)
wake_up_all(&q->mq_freeze_wq);
}
void blk_set_queue_dying(struct request_queue *q)
{
blk_queue_flag_set(QUEUE_FLAG_DYING, q);
blk_queue_start_drain(q);
}
EXPORT_SYMBOL_GPL(blk_set_queue_dying);
/**
* blk_cleanup_queue - shutdown a request queue
* @q: request queue to shutdown
@ -308,7 +301,8 @@ void blk_cleanup_queue(struct request_queue *q)
WARN_ON_ONCE(blk_queue_registered(q));
/* mark @q DYING, no new request or merges will be allowed afterwards */
blk_set_queue_dying(q);
blk_queue_flag_set(QUEUE_FLAG_DYING, q);
blk_queue_start_drain(q);
blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q);
blk_queue_flag_set(QUEUE_FLAG_NOXMERGES, q);


@ -446,7 +446,7 @@ static struct bio *bio_copy_kern(struct request_queue *q, void *data,
if (bytes > len)
bytes = len;
page = alloc_page(GFP_NOIO | gfp_mask);
page = alloc_page(GFP_NOIO | __GFP_ZERO | gfp_mask);
if (!page)
goto cleanup;


@ -736,6 +736,10 @@ static void blk_complete_request(struct request *req)
/* Completion has already been traced */
bio_clear_flag(bio, BIO_TRACE_COMPLETION);
if (req_op(req) == REQ_OP_ZONE_APPEND)
bio->bi_iter.bi_sector = req->__sector;
if (!is_flush)
bio_endio(bio);
bio = next;


@ -525,8 +525,6 @@ void elv_unregister_queue(struct request_queue *q)
kobject_del(&e->kobj);
e->registered = 0;
/* Re-enable throttling in case elevator disabled it */
wbt_enable_default(q);
}
}


@ -289,6 +289,8 @@ static void blkdev_bio_end_io_async(struct bio *bio)
struct kiocb *iocb = dio->iocb;
ssize_t ret;
WRITE_ONCE(iocb->private, NULL);
if (likely(!bio->bi_status)) {
ret = dio->size;
iocb->ki_pos += ret;


@ -548,6 +548,20 @@ out_free_ext_minor:
}
EXPORT_SYMBOL(device_add_disk);
/**
* blk_mark_disk_dead - mark a disk as dead
* @disk: disk to mark as dead
*
* Mark the disk as dead (e.g. surprise removed) and don't accept any new I/O
* to this disk.
*/
void blk_mark_disk_dead(struct gendisk *disk)
{
set_bit(GD_DEAD, &disk->state);
blk_queue_start_drain(disk->queue);
}
EXPORT_SYMBOL_GPL(blk_mark_disk_dead);
/**
* del_gendisk - remove the gendisk
* @disk: the struct gendisk to remove
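For context, a minimal sketch of how a driver's surprise-removal path might use the blk_mark_disk_dead() helper added above; my_driver_remove() and the drvdata layout are hypothetical and only illustrate the call order, they are not part of this diff:

#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/pci.h>

/* Hypothetical PCI removal path: refuse new I/O before tearing the disk down. */
static void my_driver_remove(struct pci_dev *pdev)
{
	struct gendisk *disk = pci_get_drvdata(pdev);

	blk_mark_disk_dead(disk);	/* device is gone; no new I/O is accepted */
	del_gendisk(disk);
	blk_cleanup_queue(disk->queue);
}

The mtip32xx, rbd and xen-blkfront hunks further down make exactly this substitution, replacing blk_set_queue_dying() with blk_mark_disk_dead().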

View File

@ -25,12 +25,9 @@ struct alg_type_list {
struct list_head list;
};
static atomic_long_t alg_memory_allocated;
static struct proto alg_proto = {
.name = "ALG",
.owner = THIS_MODULE,
.memory_allocated = &alg_memory_allocated,
.obj_size = sizeof(struct alg_sock),
};

View File

@ -96,6 +96,11 @@ static const struct dmi_system_id processor_power_dmi_table[] = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
DMI_MATCH(DMI_PRODUCT_NAME,"L8400B series Notebook PC")},
(void *)1},
/* T40 can not handle C3 idle state */
{ set_max_cstate, "IBM ThinkPad T40", {
DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
DMI_MATCH(DMI_PRODUCT_NAME, "23737CU")},
(void *)2},
{},
};

View File

@ -400,7 +400,7 @@ int __init_or_acpilib acpi_table_parse_entries_array(
acpi_get_table(id, instance, &table_header);
if (!table_header) {
pr_warn("%4.4s not present\n", id);
pr_debug("%4.4s not present\n", id);
return -ENODEV;
}

View File

@ -919,6 +919,20 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
irqmask &= ~0x10;
pci_write_config_byte(dev, 0x5a, irqmask);
/*
* HPT371 chips physically have only one channel, the secondary one,
* but the primary channel registers do exist! Go figure...
* So, we manually disable the non-existing channel here
* (if the BIOS hasn't done this already).
*/
if (dev->device == PCI_DEVICE_ID_TTI_HPT371) {
u8 mcr1;
pci_read_config_byte(dev, 0x50, &mcr1);
mcr1 &= ~0x04;
pci_write_config_byte(dev, 0x50, mcr1);
}
/*
* default to pci clock. make sure MA15/16 are set to output
* to prevent drives having problems with 40-pin cables. Needed
@ -950,14 +964,14 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
if ((freq >> 12) != 0xABCDE) {
int i;
u8 sr;
u16 sr;
u32 total = 0;
dev_warn(&dev->dev, "BIOS has not set timing clocks\n");
/* This is the process the HPT371 BIOS is reported to use */
for (i = 0; i < 128; i++) {
pci_read_config_byte(dev, 0x78, &sr);
pci_read_config_word(dev, 0x78, &sr);
total += sr & 0x1FF;
udelay(15);
}

View File

@ -629,6 +629,9 @@ re_probe:
drv->remove(dev);
devres_release_all(dev);
arch_teardown_dma_ops(dev);
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;
driver_sysfs_remove(dev);
dev->driver = NULL;
dev_set_drvdata(dev, NULL);
@ -1209,6 +1212,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
devres_release_all(dev);
arch_teardown_dma_ops(dev);
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;
dev->driver = NULL;
dev_set_drvdata(dev, NULL);
if (dev->pm_domain && dev->pm_domain->dismiss)

View File

@ -189,11 +189,9 @@ static void regmap_irq_sync_unlock(struct irq_data *data)
ret = regmap_write(map, reg, d->mask_buf[i]);
if (d->chip->clear_ack) {
if (d->chip->ack_invert && !ret)
ret = regmap_write(map, reg,
d->mask_buf[i]);
ret = regmap_write(map, reg, UINT_MAX);
else if (!ret)
ret = regmap_write(map, reg,
~d->mask_buf[i]);
ret = regmap_write(map, reg, 0);
}
if (ret != 0)
dev_err(d->map->dev, "Failed to ack 0x%x: %d\n",
@ -556,11 +554,9 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
data->status_buf[i]);
if (chip->clear_ack) {
if (chip->ack_invert && !ret)
ret = regmap_write(map, reg,
data->status_buf[i]);
ret = regmap_write(map, reg, UINT_MAX);
else if (!ret)
ret = regmap_write(map, reg,
~data->status_buf[i]);
ret = regmap_write(map, reg, 0);
}
if (ret != 0)
dev_err(map->dev, "Failed to ack 0x%x: %d\n",
@ -817,13 +813,9 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
d->status_buf[i] & d->mask_buf[i]);
if (chip->clear_ack) {
if (chip->ack_invert && !ret)
ret = regmap_write(map, reg,
(d->status_buf[i] &
d->mask_buf[i]));
ret = regmap_write(map, reg, UINT_MAX);
else if (!ret)
ret = regmap_write(map, reg,
~(d->status_buf[i] &
d->mask_buf[i]));
ret = regmap_write(map, reg, 0);
}
if (ret != 0) {
dev_err(map->dev, "Failed to ack 0x%x: %d\n",

View File

@ -79,6 +79,7 @@
#include <linux/ioprio.h>
#include <linux/blk-cgroup.h>
#include <linux/sched/mm.h>
#include <linux/statfs.h>
#include "loop.h"
@ -774,8 +775,13 @@ static void loop_config_discard(struct loop_device *lo)
granularity = 0;
} else {
struct kstatfs sbuf;
max_discard_sectors = UINT_MAX >> 9;
granularity = inode->i_sb->s_blocksize;
if (!vfs_statfs(&file->f_path, &sbuf))
granularity = sbuf.f_bsize;
else
max_discard_sectors = 0;
}
if (max_discard_sectors) {

View File

@ -4112,7 +4112,7 @@ static void mtip_pci_remove(struct pci_dev *pdev)
"Completion workers still active!\n");
}
blk_set_queue_dying(dd->queue);
blk_mark_disk_dead(dd->disk);
set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag);
/* Clean up the block layer. */

View File

@ -7185,7 +7185,7 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
* IO to complete/fail.
*/
blk_mq_freeze_queue(rbd_dev->disk->queue);
blk_set_queue_dying(rbd_dev->disk->queue);
blk_mark_disk_dead(rbd_dev->disk);
}
del_gendisk(rbd_dev->disk);

View File

@ -2126,7 +2126,7 @@ static void blkfront_closing(struct blkfront_info *info)
/* No more blkif_request(). */
blk_mq_stop_hw_queues(info->rq);
blk_set_queue_dying(info->rq);
blk_mark_disk_dead(info->gd);
set_capacity(info->gd, 0);
for_each_rinfo(info, rinfo, i) {

View File

@ -139,11 +139,10 @@ static const struct ingenic_cgu_clk_info jz4725b_cgu_clocks[] = {
},
[JZ4725B_CLK_I2S] = {
"i2s", CGU_CLK_MUX | CGU_CLK_DIV | CGU_CLK_GATE,
"i2s", CGU_CLK_MUX | CGU_CLK_DIV,
.parents = { JZ4725B_CLK_EXT, JZ4725B_CLK_PLL_HALF, -1, -1 },
.mux = { CGU_REG_CPCCR, 31, 1 },
.div = { CGU_REG_I2SCDR, 0, 1, 9, -1, -1, -1 },
.gate = { CGU_REG_CLKGR, 6 },
},
[JZ4725B_CLK_SPI] = {

View File

@ -108,42 +108,6 @@ static const struct clk_parent_data gcc_xo_gpll0_gpll4[] = {
{ .hw = &gpll4.clkr.hw },
};
static struct clk_rcg2 system_noc_clk_src = {
.cmd_rcgr = 0x0120,
.hid_width = 5,
.parent_map = gcc_xo_gpll0_map,
.clkr.hw.init = &(struct clk_init_data){
.name = "system_noc_clk_src",
.parent_data = gcc_xo_gpll0,
.num_parents = ARRAY_SIZE(gcc_xo_gpll0),
.ops = &clk_rcg2_ops,
},
};
static struct clk_rcg2 config_noc_clk_src = {
.cmd_rcgr = 0x0150,
.hid_width = 5,
.parent_map = gcc_xo_gpll0_map,
.clkr.hw.init = &(struct clk_init_data){
.name = "config_noc_clk_src",
.parent_data = gcc_xo_gpll0,
.num_parents = ARRAY_SIZE(gcc_xo_gpll0),
.ops = &clk_rcg2_ops,
},
};
static struct clk_rcg2 periph_noc_clk_src = {
.cmd_rcgr = 0x0190,
.hid_width = 5,
.parent_map = gcc_xo_gpll0_map,
.clkr.hw.init = &(struct clk_init_data){
.name = "periph_noc_clk_src",
.parent_data = gcc_xo_gpll0,
.num_parents = ARRAY_SIZE(gcc_xo_gpll0),
.ops = &clk_rcg2_ops,
},
};
static struct freq_tbl ftbl_ufs_axi_clk_src[] = {
F(50000000, P_GPLL0, 12, 0, 0),
F(100000000, P_GPLL0, 6, 0, 0),
@ -1150,8 +1114,6 @@ static struct clk_branch gcc_blsp1_ahb_clk = {
.enable_mask = BIT(17),
.hw.init = &(struct clk_init_data){
.name = "gcc_blsp1_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -1435,8 +1397,6 @@ static struct clk_branch gcc_blsp2_ahb_clk = {
.enable_mask = BIT(15),
.hw.init = &(struct clk_init_data){
.name = "gcc_blsp2_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -1764,8 +1724,6 @@ static struct clk_branch gcc_lpass_q6_axi_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_lpass_q6_axi_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -1778,8 +1736,6 @@ static struct clk_branch gcc_mss_q6_bimc_axi_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_mss_q6_bimc_axi_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -1807,9 +1763,6 @@ static struct clk_branch gcc_pcie_0_cfg_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pcie_0_cfg_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -1822,9 +1775,6 @@ static struct clk_branch gcc_pcie_0_mstr_axi_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pcie_0_mstr_axi_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -1854,9 +1804,6 @@ static struct clk_branch gcc_pcie_0_slv_axi_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pcie_0_slv_axi_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -1884,9 +1831,6 @@ static struct clk_branch gcc_pcie_1_cfg_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pcie_1_cfg_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -1899,9 +1843,6 @@ static struct clk_branch gcc_pcie_1_mstr_axi_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pcie_1_mstr_axi_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -1930,9 +1871,6 @@ static struct clk_branch gcc_pcie_1_slv_axi_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pcie_1_slv_axi_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -1960,8 +1898,6 @@ static struct clk_branch gcc_pdm_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_pdm_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -1989,9 +1925,6 @@ static struct clk_branch gcc_sdcc1_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_sdcc1_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -2004,9 +1937,6 @@ static struct clk_branch gcc_sdcc2_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_sdcc2_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -2034,9 +1964,6 @@ static struct clk_branch gcc_sdcc3_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_sdcc3_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -2064,9 +1991,6 @@ static struct clk_branch gcc_sdcc4_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_sdcc4_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@ -2124,8 +2048,6 @@ static struct clk_branch gcc_tsif_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_tsif_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2153,8 +2075,6 @@ static struct clk_branch gcc_ufs_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_ufs_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2198,8 +2118,6 @@ static struct clk_branch gcc_ufs_rx_symbol_0_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_ufs_rx_symbol_0_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2213,8 +2131,6 @@ static struct clk_branch gcc_ufs_rx_symbol_1_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_ufs_rx_symbol_1_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2243,8 +2159,6 @@ static struct clk_branch gcc_ufs_tx_symbol_0_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_ufs_tx_symbol_0_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2258,8 +2172,6 @@ static struct clk_branch gcc_ufs_tx_symbol_1_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_ufs_tx_symbol_1_clk",
.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2364,8 +2276,6 @@ static struct clk_branch gcc_usb_hs_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gcc_usb_hs_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2488,8 +2398,6 @@ static struct clk_branch gcc_boot_rom_ahb_clk = {
.enable_mask = BIT(10),
.hw.init = &(struct clk_init_data){
.name = "gcc_boot_rom_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2503,8 +2411,6 @@ static struct clk_branch gcc_prng_ahb_clk = {
.enable_mask = BIT(13),
.hw.init = &(struct clk_init_data){
.name = "gcc_prng_ahb_clk",
.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
.num_parents = 1,
.ops = &clk_branch2_ops,
},
},
@ -2547,9 +2453,6 @@ static struct clk_regmap *gcc_msm8994_clocks[] = {
[GPLL0] = &gpll0.clkr,
[GPLL4_EARLY] = &gpll4_early.clkr,
[GPLL4] = &gpll4.clkr,
[CONFIG_NOC_CLK_SRC] = &config_noc_clk_src.clkr,
[PERIPH_NOC_CLK_SRC] = &periph_noc_clk_src.clkr,
[SYSTEM_NOC_CLK_SRC] = &system_noc_clk_src.clkr,
[UFS_AXI_CLK_SRC] = &ufs_axi_clk_src.clkr,
[USB30_MASTER_CLK_SRC] = &usb30_master_clk_src.clkr,
[BLSP1_QUP1_I2C_APPS_CLK_SRC] = &blsp1_qup1_i2c_apps_clk_src.clkr,
@ -2696,6 +2599,15 @@ static struct clk_regmap *gcc_msm8994_clocks[] = {
[USB_SS_PHY_LDO] = &usb_ss_phy_ldo.clkr,
[GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr,
[GCC_PRNG_AHB_CLK] = &gcc_prng_ahb_clk.clkr,
/*
* The following clocks should NOT be managed by this driver, but they once were
* mistakenly added. Now they are only here to indicate that they are not defined
* on purpose, even though the names will stay in the header file (for ABI sanity).
*/
[CONFIG_NOC_CLK_SRC] = NULL,
[PERIPH_NOC_CLK_SRC] = NULL,
[SYSTEM_NOC_CLK_SRC] = NULL,
};
static struct gdsc *gcc_msm8994_gdscs[] = {

View File

@ -1518,6 +1518,10 @@ static int cpufreq_online(unsigned int cpu)
kobject_uevent(&policy->kobj, KOBJ_ADD);
/* Callback for handling stuff after policy is ready */
if (cpufreq_driver->ready)
cpufreq_driver->ready(policy);
if (cpufreq_thermal_control_enabled(cpufreq_driver))
policy->cdev = of_cpufreq_cooling_register(policy);

View File

@ -388,7 +388,7 @@ static int qcom_cpufreq_hw_lmh_init(struct cpufreq_policy *policy, int index)
snprintf(data->irq_name, sizeof(data->irq_name), "dcvsh-irq-%u", policy->cpu);
ret = request_threaded_irq(data->throttle_irq, NULL, qcom_lmh_dcvs_handle_irq,
IRQF_ONESHOT, data->irq_name, data);
IRQF_ONESHOT | IRQF_NO_AUTOEN, data->irq_name, data);
if (ret) {
dev_err(&pdev->dev, "Error registering %s: %d\n", data->irq_name, ret);
return 0;
@ -542,6 +542,14 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
return 0;
}
static void qcom_cpufreq_ready(struct cpufreq_policy *policy)
{
struct qcom_cpufreq_data *data = policy->driver_data;
if (data->throttle_irq >= 0)
enable_irq(data->throttle_irq);
}
static struct freq_attr *qcom_cpufreq_hw_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
&cpufreq_freq_attr_scaling_boost_freqs,
@ -561,6 +569,7 @@ static struct cpufreq_driver cpufreq_qcom_hw_driver = {
.fast_switch = qcom_cpufreq_hw_fast_switch,
.name = "qcom-cpufreq-hw",
.attr = qcom_cpufreq_hw_attr,
.ready = qcom_cpufreq_ready,
};
static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
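To make the intent of this hunk explicit: the throttle interrupt is now requested with IRQF_NO_AUTOEN, so it stays disabled until the new .ready callback runs once the policy is fully initialized, at which point enable_irq() arms it. A hedged sketch of that pattern follows; my_data, my_irq_handler and the "my-throttle" name are assumptions for illustration, not code from this driver:

#include <linux/cpufreq.h>
#include <linux/interrupt.h>

/* Sketch only: request the IRQ disabled, enable it once the policy is ready. */
static int my_cpufreq_init(struct cpufreq_policy *policy)
{
	struct my_data *data = policy->driver_data;

	return request_threaded_irq(data->irq, NULL, my_irq_handler,
				    IRQF_ONESHOT | IRQF_NO_AUTOEN,
				    "my-throttle", data);
}

static void my_cpufreq_ready(struct cpufreq_policy *policy)
{
	struct my_data *data = policy->driver_data;

	enable_irq(data->irq);	/* policy fully set up; safe to handle throttling now */
}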

View File

@ -1681,8 +1681,10 @@ static void at_xdmac_tasklet(struct tasklet_struct *t)
__func__, atchan->irq_status);
if (!(atchan->irq_status & AT_XDMAC_CIS_LIS) &&
!(atchan->irq_status & error_mask))
!(atchan->irq_status & error_mask)) {
spin_unlock_irq(&atchan->lock);
return;
}
if (atchan->irq_status & error_mask)
at_xdmac_handle_error(atchan);

View File

@ -207,7 +207,7 @@ int pt_core_init(struct pt_device *pt)
if (!cmd_q->qbase) {
dev_err(dev, "unable to allocate command queue\n");
ret = -ENOMEM;
goto e_dma_alloc;
goto e_destroy_pool;
}
cmd_q->qidx = 0;
@ -229,8 +229,10 @@ int pt_core_init(struct pt_device *pt)
/* Request an irq */
ret = request_irq(pt->pt_irq, pt_core_irq_handler, 0, dev_name(pt->dev), pt);
if (ret)
goto e_pool;
if (ret) {
dev_err(dev, "unable to allocate an IRQ\n");
goto e_free_dma;
}
/* Update the device registers with queue information. */
cmd_q->qcontrol &= ~CMD_Q_SIZE;
@ -250,21 +252,20 @@ int pt_core_init(struct pt_device *pt)
/* Register the DMA engine support */
ret = pt_dmaengine_register(pt);
if (ret)
goto e_dmaengine;
goto e_free_irq;
/* Set up debugfs entries */
ptdma_debugfs_setup(pt);
return 0;
e_dmaengine:
e_free_irq:
free_irq(pt->pt_irq, pt);
e_dma_alloc:
e_free_dma:
dma_free_coherent(dev, cmd_q->qsize, cmd_q->qbase, cmd_q->qbase_dma);
e_pool:
dev_err(dev, "unable to allocate an IRQ\n");
e_destroy_pool:
dma_pool_destroy(pt->cmd_q.dma_pool);
return ret;

View File

@ -1868,8 +1868,13 @@ static int rcar_dmac_probe(struct platform_device *pdev)
dmac->dev = &pdev->dev;
platform_set_drvdata(pdev, dmac);
dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
ret = dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
if (ret)
return ret;
ret = dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
if (ret)
return ret;
ret = rcar_dmac_parse_of(&pdev->dev, dmac);
if (ret < 0)

View File

@ -115,8 +115,10 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
ret = pm_runtime_get(schan->dev);
spin_unlock_irq(&schan->chan_lock);
if (ret < 0)
if (ret < 0) {
dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
pm_runtime_put(schan->dev);
}
pm_runtime_barrier(schan->dev);

View File

@ -292,10 +292,12 @@ static int stm32_dmamux_probe(struct platform_device *pdev)
ret = of_dma_router_register(node, stm32_dmamux_route_allocate,
&stm32_dmamux->dmarouter);
if (ret)
goto err_clk;
goto pm_disable;
return 0;
pm_disable:
pm_runtime_disable(&pdev->dev);
err_clk:
clk_disable_unprepare(stm32_dmamux->clk);

View File

@ -215,7 +215,7 @@ void *edac_align_ptr(void **p, unsigned int size, int n_elems)
else
return (char *)ptr;
r = (unsigned long)p % align;
r = (unsigned long)ptr % align;
if (r == 0)
return (char *)ptr;

View File

@ -410,10 +410,8 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type)
level = rockchip_gpio_readl(bank, bank->gpio_regs->int_type);
polarity = rockchip_gpio_readl(bank, bank->gpio_regs->int_polarity);
switch (type) {
case IRQ_TYPE_EDGE_BOTH:
if (type == IRQ_TYPE_EDGE_BOTH) {
if (bank->gpio_type == GPIO_TYPE_V2) {
bank->toggle_edge_mode &= ~mask;
rockchip_gpio_writel_bit(bank, d->hwirq, 1,
bank->gpio_regs->int_bothedge);
goto out;
@ -431,30 +429,34 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type)
else
polarity |= mask;
}
break;
case IRQ_TYPE_EDGE_RISING:
bank->toggle_edge_mode &= ~mask;
level |= mask;
polarity |= mask;
break;
case IRQ_TYPE_EDGE_FALLING:
bank->toggle_edge_mode &= ~mask;
level |= mask;
polarity &= ~mask;
break;
case IRQ_TYPE_LEVEL_HIGH:
bank->toggle_edge_mode &= ~mask;
level &= ~mask;
polarity |= mask;
break;
case IRQ_TYPE_LEVEL_LOW:
bank->toggle_edge_mode &= ~mask;
level &= ~mask;
polarity &= ~mask;
break;
default:
ret = -EINVAL;
goto out;
} else {
if (bank->gpio_type == GPIO_TYPE_V2) {
rockchip_gpio_writel_bit(bank, d->hwirq, 0,
bank->gpio_regs->int_bothedge);
} else {
bank->toggle_edge_mode &= ~mask;
}
switch (type) {
case IRQ_TYPE_EDGE_RISING:
level |= mask;
polarity |= mask;
break;
case IRQ_TYPE_EDGE_FALLING:
level |= mask;
polarity &= ~mask;
break;
case IRQ_TYPE_LEVEL_HIGH:
level &= ~mask;
polarity |= mask;
break;
case IRQ_TYPE_LEVEL_LOW:
level &= ~mask;
polarity &= ~mask;
break;
default:
ret = -EINVAL;
goto out;
}
}
rockchip_gpio_writel(bank, level, bank->gpio_regs->int_type);

View File

@ -343,9 +343,12 @@ static int tegra186_gpio_of_xlate(struct gpio_chip *chip,
return offset + pin;
}
#define to_tegra_gpio(x) container_of((x), struct tegra_gpio, gpio)
static void tegra186_irq_ack(struct irq_data *data)
{
struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
struct tegra_gpio *gpio = to_tegra_gpio(gc);
void __iomem *base;
base = tegra186_gpio_get_base(gpio, data->hwirq);
@ -357,7 +360,8 @@ static void tegra186_irq_ack(struct irq_data *data)
static void tegra186_irq_mask(struct irq_data *data)
{
struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
struct tegra_gpio *gpio = to_tegra_gpio(gc);
void __iomem *base;
u32 value;
@ -372,7 +376,8 @@ static void tegra186_irq_mask(struct irq_data *data)
static void tegra186_irq_unmask(struct irq_data *data)
{
struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
struct tegra_gpio *gpio = to_tegra_gpio(gc);
void __iomem *base;
u32 value;
@ -387,7 +392,8 @@ static void tegra186_irq_unmask(struct irq_data *data)
static int tegra186_irq_set_type(struct irq_data *data, unsigned int type)
{
struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
struct tegra_gpio *gpio = to_tegra_gpio(gc);
void __iomem *base;
u32 value;

View File

@ -3147,6 +3147,16 @@ int gpiod_to_irq(const struct gpio_desc *desc)
return retirq;
}
#ifdef CONFIG_GPIOLIB_IRQCHIP
if (gc->irq.chip) {
/*
* Avoid race condition with other code, which tries to lookup
* an IRQ before the irqchip has been properly registered,
* i.e. while gpiochip is still being brought up.
*/
return -EPROBE_DEFER;
}
#endif
return -ENXIO;
}
EXPORT_SYMBOL_GPL(gpiod_to_irq);
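With this change, callers of gpiod_to_irq() can see -EPROBE_DEFER while the gpiochip is still being brought up, and should simply propagate it so the probe is retried. A minimal, hypothetical consumer sketch (the "irq" con_id, my_handler and my_probe are illustrative assumptions):

#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
	struct gpio_desc *gpiod;
	int irq;

	gpiod = devm_gpiod_get(&pdev->dev, "irq", GPIOD_IN);
	if (IS_ERR(gpiod))
		return PTR_ERR(gpiod);

	irq = gpiod_to_irq(gpiod);
	if (irq < 0)
		return irq;	/* includes -EPROBE_DEFER: the probe is retried later */

	return devm_request_irq(&pdev->dev, irq, my_handler, 0,
				dev_name(&pdev->dev), pdev);
}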

View File

@ -1141,7 +1141,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
if (ret)
return ret;
if (!dev->mode_config.allow_fb_modifiers) {
if (!dev->mode_config.allow_fb_modifiers && !adev->enable_virtual_display) {
drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
"GFX9+ requires FB check based on format modifier\n");
ret = check_tiling_flags_gfx6(rfb);

View File

@ -2011,6 +2011,9 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
return -ENODEV;
}
if (amdgpu_aspm == -1 && !pcie_aspm_enabled(pdev))
amdgpu_aspm = 0;
if (amdgpu_virtual_display ||
amdgpu_device_asic_has_dc_support(flags & AMD_ASIC_MASK))
supports_atomic = true;

View File

@ -391,7 +391,6 @@ static struct drm_plane *amdgpu_vkms_plane_init(struct drm_device *dev,
int index)
{
struct drm_plane *plane;
uint64_t modifiers[] = {DRM_FORMAT_MOD_LINEAR, DRM_FORMAT_MOD_INVALID};
int ret;
plane = kzalloc(sizeof(*plane), GFP_KERNEL);
@ -402,7 +401,7 @@ static struct drm_plane *amdgpu_vkms_plane_init(struct drm_device *dev,
&amdgpu_vkms_plane_funcs,
amdgpu_vkms_formats,
ARRAY_SIZE(amdgpu_vkms_formats),
modifiers, type, NULL);
NULL, type, NULL);
if (ret) {
kfree(plane);
return ERR_PTR(ret);

View File

@ -768,11 +768,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
* Check if all VM PDs/PTs are ready for updates
*
* Returns:
* True if eviction list is empty.
* True if VM is not evicting.
*/
bool amdgpu_vm_ready(struct amdgpu_vm *vm)
{
return list_empty(&vm->evicted);
bool ret;
amdgpu_vm_eviction_lock(vm);
ret = !vm->evicting;
amdgpu_vm_eviction_unlock(vm);
return ret;
}
/**

View File

@ -2057,6 +2057,10 @@ static int sdma_v4_0_suspend(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
/* SMU saves SDMA state for us */
if (adev->in_s0ix)
return 0;
return sdma_v4_0_hw_fini(adev);
}
@ -2064,6 +2068,10 @@ static int sdma_v4_0_resume(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
/* SMU restores SDMA state for us */
if (adev->in_s0ix)
return 0;
return sdma_v4_0_hw_init(adev);
}

View File

@ -619,8 +619,8 @@ soc15_asic_reset_method(struct amdgpu_device *adev)
static int soc15_asic_reset(struct amdgpu_device *adev)
{
/* original raven doesn't have full asic reset */
if ((adev->apu_flags & AMD_APU_IS_RAVEN) &&
!(adev->apu_flags & AMD_APU_IS_RAVEN2))
if ((adev->apu_flags & AMD_APU_IS_RAVEN) ||
(adev->apu_flags & AMD_APU_IS_RAVEN2))
return 0;
switch (soc15_asic_reset_method(adev)) {
@ -1114,8 +1114,11 @@ static int soc15_common_early_init(void *handle)
AMD_CG_SUPPORT_SDMA_LS |
AMD_CG_SUPPORT_VCN_MGCG;
/*
* MMHUB PG needs to be disabled for Picasso for
* stability reasons.
*/
adev->pg_flags = AMD_PG_SUPPORT_SDMA |
AMD_PG_SUPPORT_MMHUB |
AMD_PG_SUPPORT_VCN;
} else {
adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |

View File

@ -4256,6 +4256,9 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
}
#endif
/* Disable vblank IRQs aggressively for power-saving. */
adev_to_drm(adev)->vblank_disable_immediate = true;
/* loops over all connectors on the board */
for (i = 0; i < link_cnt; i++) {
struct dc_link *link = NULL;
@ -4301,19 +4304,17 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
update_connector_ext_caps(aconnector);
if (psr_feature_enabled)
amdgpu_dm_set_psr_caps(link);
/* TODO: Fix vblank control helpers to delay PSR entry to allow this when
* PSR is also supported.
*/
if (link->psr_settings.psr_feature_enabled)
adev_to_drm(adev)->vblank_disable_immediate = false;
}
}
/*
* Disable vblank IRQs aggressively for power-saving.
*
* TODO: Fix vblank control helpers to delay PSR entry to allow this when PSR
* is also supported.
*/
adev_to_drm(adev)->vblank_disable_immediate = !psr_feature_enabled;
/* Software is initialized. Now we can register interrupt handlers. */
switch (adev->asic_type) {
#if defined(CONFIG_DRM_AMD_DC_SI)

View File

@ -473,8 +473,10 @@ static void dcn3_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
clk_mgr_base->bw_params->dc_mode_softmax_memclk = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_UCLK);
/* Refresh bounding box */
DC_FP_START();
clk_mgr_base->ctx->dc->res_pool->funcs->update_bw_bounding_box(
clk_mgr->base.ctx->dc, clk_mgr_base->bw_params);
DC_FP_END();
}
static bool dcn3_is_smu_present(struct clk_mgr *clk_mgr_base)

View File

@ -985,10 +985,13 @@ static bool dc_construct(struct dc *dc,
goto fail;
#ifdef CONFIG_DRM_AMD_DC_DCN
dc->clk_mgr->force_smu_not_present = init_params->force_smu_not_present;
#endif
if (dc->res_pool->funcs->update_bw_bounding_box)
if (dc->res_pool->funcs->update_bw_bounding_box) {
DC_FP_START();
dc->res_pool->funcs->update_bw_bounding_box(dc, dc->clk_mgr->bw_params);
DC_FP_END();
}
#endif
/* Creation of current_state must occur after dc->dml
* is initialized in dc_create_resource_pool because

View File

@ -1964,10 +1964,6 @@ enum dc_status dc_remove_stream_from_ctx(
dc->res_pool,
del_pipe->stream_res.stream_enc,
false);
/* Release link encoder from stream in new dc_state. */
if (dc->res_pool->funcs->link_enc_unassign)
dc->res_pool->funcs->link_enc_unassign(new_ctx, del_pipe->stream);
#if defined(CONFIG_DRM_AMD_DC_DCN)
if (is_dp_128b_132b_signal(del_pipe)) {
update_hpo_dp_stream_engine_usage(

View File

@ -421,6 +421,36 @@ static int sienna_cichlid_store_powerplay_table(struct smu_context *smu)
return 0;
}
static int sienna_cichlid_patch_pptable_quirk(struct smu_context *smu)
{
struct amdgpu_device *adev = smu->adev;
uint32_t *board_reserved;
uint16_t *freq_table_gfx;
uint32_t i;
/* Fix some OEM SKU specific stability issues */
GET_PPTABLE_MEMBER(BoardReserved, &board_reserved);
if ((adev->pdev->device == 0x73DF) &&
(adev->pdev->revision == 0XC3) &&
(adev->pdev->subsystem_device == 0x16C2) &&
(adev->pdev->subsystem_vendor == 0x1043))
board_reserved[0] = 1387;
GET_PPTABLE_MEMBER(FreqTableGfx, &freq_table_gfx);
if ((adev->pdev->device == 0x73DF) &&
(adev->pdev->revision == 0XC3) &&
((adev->pdev->subsystem_device == 0x16C2) ||
(adev->pdev->subsystem_device == 0x133C)) &&
(adev->pdev->subsystem_vendor == 0x1043)) {
for (i = 0; i < NUM_GFXCLK_DPM_LEVELS; i++) {
if (freq_table_gfx[i] > 2500)
freq_table_gfx[i] = 2500;
}
}
return 0;
}
static int sienna_cichlid_setup_pptable(struct smu_context *smu)
{
int ret = 0;
@ -441,7 +471,7 @@ static int sienna_cichlid_setup_pptable(struct smu_context *smu)
if (ret)
return ret;
return ret;
return sienna_cichlid_patch_pptable_quirk(smu);
}
static int sienna_cichlid_tables_init(struct smu_context *smu)
@ -1238,21 +1268,37 @@ static int sienna_cichlid_populate_umd_state_clk(struct smu_context *smu)
&dpm_context->dpm_tables.soc_table;
struct smu_umd_pstate_table *pstate_table =
&smu->pstate_table;
struct amdgpu_device *adev = smu->adev;
pstate_table->gfxclk_pstate.min = gfx_table->min;
pstate_table->gfxclk_pstate.peak = gfx_table->max;
if (gfx_table->max >= SIENNA_CICHLID_UMD_PSTATE_PROFILING_GFXCLK)
pstate_table->gfxclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_GFXCLK;
pstate_table->uclk_pstate.min = mem_table->min;
pstate_table->uclk_pstate.peak = mem_table->max;
if (mem_table->max >= SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK)
pstate_table->uclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK;
pstate_table->socclk_pstate.min = soc_table->min;
pstate_table->socclk_pstate.peak = soc_table->max;
if (soc_table->max >= SIENNA_CICHLID_UMD_PSTATE_PROFILING_SOCCLK)
switch (adev->asic_type) {
case CHIP_SIENNA_CICHLID:
case CHIP_NAVY_FLOUNDER:
pstate_table->gfxclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_GFXCLK;
pstate_table->uclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK;
pstate_table->socclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_SOCCLK;
break;
case CHIP_DIMGREY_CAVEFISH:
pstate_table->gfxclk_pstate.standard = DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_GFXCLK;
pstate_table->uclk_pstate.standard = DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_MEMCLK;
pstate_table->socclk_pstate.standard = DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_SOCCLK;
break;
case CHIP_BEIGE_GOBY:
pstate_table->gfxclk_pstate.standard = BEIGE_GOBY_UMD_PSTATE_PROFILING_GFXCLK;
pstate_table->uclk_pstate.standard = BEIGE_GOBY_UMD_PSTATE_PROFILING_MEMCLK;
pstate_table->socclk_pstate.standard = BEIGE_GOBY_UMD_PSTATE_PROFILING_SOCCLK;
break;
default:
break;
}
return 0;
}

View File

@ -33,6 +33,14 @@ typedef enum {
#define SIENNA_CICHLID_UMD_PSTATE_PROFILING_SOCCLK 960
#define SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK 1000
#define DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_GFXCLK 1950
#define DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_SOCCLK 960
#define DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_MEMCLK 676
#define BEIGE_GOBY_UMD_PSTATE_PROFILING_GFXCLK 2200
#define BEIGE_GOBY_UMD_PSTATE_PROFILING_SOCCLK 960
#define BEIGE_GOBY_UMD_PSTATE_PROFILING_MEMCLK 1000
extern void sienna_cichlid_set_ppt_funcs(struct smu_context *smu);
#endif

View File

@ -282,14 +282,9 @@ static int yellow_carp_post_smu_init(struct smu_context *smu)
static int yellow_carp_mode_reset(struct smu_context *smu, int type)
{
int ret = 0, index = 0;
int ret = 0;
index = smu_cmn_to_asic_specific_index(smu, CMN2ASIC_MAPPING_MSG,
SMU_MSG_GfxDeviceDriverReset);
if (index < 0)
return index == -EACCES ? 0 : index;
ret = smu_cmn_send_smc_msg_with_param(smu, (uint16_t)index, type, NULL);
ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, type, NULL);
if (ret)
dev_err(smu->adev->dev, "Failed to mode reset!\n");

View File

@ -76,15 +76,17 @@ int drm_atomic_set_mode_for_crtc(struct drm_crtc_state *state,
state->mode_blob = NULL;
if (mode) {
struct drm_property_blob *blob;
drm_mode_convert_to_umode(&umode, mode);
state->mode_blob =
drm_property_create_blob(state->crtc->dev,
sizeof(umode),
&umode);
if (IS_ERR(state->mode_blob))
return PTR_ERR(state->mode_blob);
blob = drm_property_create_blob(crtc->dev,
sizeof(umode), &umode);
if (IS_ERR(blob))
return PTR_ERR(blob);
drm_mode_copy(&state->mode, mode);
state->mode_blob = blob;
state->enable = true;
drm_dbg_atomic(crtc->dev,
"Set [MODE:%s] for [CRTC:%d:%s] state %p\n",

View File

@ -5345,6 +5345,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
return quirks;
info->color_formats |= DRM_COLOR_FORMAT_RGB444;
drm_parse_cea_ext(connector, edid);
/*
@ -5393,7 +5394,6 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
DRM_DEBUG("%s: Assigning EDID-1.4 digital sink color depth as %d bpc.\n",
connector->name, info->bpc);
info->color_formats |= DRM_COLOR_FORMAT_RGB444;
if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB444)
info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)

View File

@ -512,6 +512,7 @@ int drm_gem_cma_mmap(struct drm_gem_cma_object *cma_obj, struct vm_area_struct *
*/
vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
vma->vm_flags &= ~VM_PFNMAP;
vma->vm_flags |= VM_DONTEXPAND;
if (cma_obj->map_noncoherent) {
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

View File

@ -101,6 +101,7 @@ config DRM_I915_USERPTR
config DRM_I915_GVT
bool "Enable Intel GVT-g graphics virtualization host support"
depends on DRM_I915
depends on X86
depends on 64BIT
default n
help

Some files were not shown because too many files have changed in this diff.