

Merge tag 'kvmarm-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 updates for 6.4

- Numerous fixes for the pathological lock inversion issue that
  plagued KVM/arm64 since... forever.

- New framework allowing SMCCC-compliant hypercalls to be forwarded
  to userspace, hopefully paving the way for some more features
  being moved to VMMs rather than being implemented in the kernel.

- Large rework of the timer code to allow a VM-wide offset to be
  applied to both virtual and physical counters as well as a
  per-timer, per-vcpu offset that complements the global one.
  This last part allows the NV timer code to be implemented on
  top.

- A small set of fixes to make sure that we don't change anything
  affecting the EL1&0 translation regime just after having taken an
  exception to EL2 until we have executed a DSB. This
  ensures that speculative walks started in EL1&0 have completed.

- The usual selftest fixes and improvements.
Commit 4f382a79a6 by Paolo Bonzini, 2023-04-26 15:46:52 -04:00
802 changed files with 10095 additions and 5702 deletions

.gitignore
View File

@ -78,6 +78,7 @@ modules.order
# RPM spec file (make rpm-pkg)
#
/*.spec
/rpmbuild/
#
# Debian directory (make deb-pkg)

View File

@ -28,6 +28,7 @@ Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
Alexandre Belloni <alexandre.belloni@bootlin.com> <alexandre.belloni@free-electrons.com>
Alexandre Ghiti <alex@ghiti.fr> <alexandre.ghiti@canonical.com>
Alexei Starovoitov <ast@kernel.org> <alexei.starovoitov@gmail.com>
Alexei Starovoitov <ast@kernel.org> <ast@fb.com>
Alexei Starovoitov <ast@kernel.org> <ast@plumgrid.com>
@ -121,7 +122,7 @@ Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@gmail.com>
Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@imgtec.com>
Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@mips.com>
<dev.kurt@vandijck-laurijssen.be> <kurt.van.dijck@eia.be>
Dikshita Agarwal <dikshita@qti.qualcomm.com> <dikshita@codeaurora.org>
Dikshita Agarwal <quic_dikshita@quicinc.com> <dikshita@codeaurora.org>
Dmitry Baryshkov <dbaryshkov@gmail.com>
Dmitry Baryshkov <dbaryshkov@gmail.com> <[dbaryshkov@gmail.com]>
Dmitry Baryshkov <dbaryshkov@gmail.com> <dmitry_baryshkov@mentor.com>
@ -132,6 +133,8 @@ Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
Domen Puncer <domen@coderock.org>
Douglas Gilbert <dougg@torque.net>
Ed L. Cashin <ecashin@coraid.com>
Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
Enric Balletbo i Serra <eballetbo@kernel.org> <eballetbo@iseebcn.com>
Erik Kaneda <erik.kaneda@intel.com> <erik.schmauss@intel.com>
Eugen Hristev <eugen.hristev@collabora.com> <eugen.hristev@microchip.com>
Evgeniy Polyakov <johnpol@2ka.mipt.ru>
@ -194,6 +197,7 @@ Jan Glauber <jan.glauber@gmail.com> <jang@linux.vnet.ibm.com>
Jan Glauber <jan.glauber@gmail.com> <jglauber@cavium.com>
Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@linux.intel.com>
Jarkko Sakkinen <jarkko@kernel.org> <jarkko@profian.com>
Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@tuni.fi>
Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
@ -213,6 +217,9 @@ Jens Axboe <axboe@suse.de>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
Jiri Pirko <jiri@resnulli.us> <jiri@nvidia.com>
Jiri Pirko <jiri@resnulli.us> <jiri@mellanox.com>
Jiri Pirko <jiri@resnulli.us> <jpirko@redhat.com>
Jiri Slaby <jirislaby@kernel.org> <jirislaby@gmail.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@novell.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com>
@ -374,6 +381,7 @@ Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
Rajeev Nandan <quic_rajeevny@quicinc.com> <rajeevny@codeaurora.org>
Rajendra Nayak <quic_rjendra@quicinc.com> <rnayak@codeaurora.org>
Rajesh Shah <rajesh.shah@intel.com>
Ralf Baechle <ralf@linux-mips.org>
Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
@ -382,6 +390,9 @@ Rémi Denis-Courmont <rdenis@simphalempin.com>
Ricardo Ribalda <ribalda@kernel.org> <ricardo@ribalda.com>
Ricardo Ribalda <ribalda@kernel.org> Ricardo Ribalda Delgado <ribalda@kernel.org>
Ricardo Ribalda <ribalda@kernel.org> <ricardo.ribalda@gmail.com>
Richard Leitner <richard.leitner@linux.dev> <dev@g0hl1n.net>
Richard Leitner <richard.leitner@linux.dev> <me@g0hl1n.net>
Richard Leitner <richard.leitner@linux.dev> <richard.leitner@skidata.com>
Robert Foss <rfoss@kernel.org> <robert.foss@linaro.org>
Roman Gushchin <roman.gushchin@linux.dev> <guro@fb.com>
Roman Gushchin <roman.gushchin@linux.dev> <guroan@gmail.com>
@ -392,6 +403,7 @@ Ross Zwisler <zwisler@kernel.org> <ross.zwisler@linux.intel.com>
Rudolf Marek <R.Marek@sh.cvut.cz>
Rui Saraiva <rmps@joel.ist.utl.pt>
Sachin P Sant <ssant@in.ibm.com>
Sai Prakash Ranjan <quic_saipraka@quicinc.com> <saiprakash.ranjan@codeaurora.org>
Sakari Ailus <sakari.ailus@linux.intel.com> <sakari.ailus@iki.fi>
Sam Ravnborg <sam@mars.ravnborg.org>
Sankeerth Billakanti <quic_sbillaka@quicinc.com> <sbillaka@codeaurora.org>
@ -432,6 +444,10 @@ Thomas Graf <tgraf@suug.ch>
Thomas Körper <socketcan@esd.eu> <thomas.koerper@esd.eu>
Thomas Pedersen <twp@codeaurora.org>
Tiezhu Yang <yangtiezhu@loongson.cn> <kernelpatch@126.com>
Tobias Klauser <tklauser@distanz.ch> <tobias.klauser@gmail.com>
Tobias Klauser <tklauser@distanz.ch> <klto@zhaw.ch>
Tobias Klauser <tklauser@distanz.ch> <tklauser@nuerscht.ch>
Tobias Klauser <tklauser@distanz.ch> <tklauser@xenon.tklauser.home>
Todor Tomov <todor.too@gmail.com> <todor.tomov@linaro.org>
Tony Luck <tony.luck@intel.com>
TripleX Chung <xxx.phy@gmail.com> <triplex@zh-kernel.org>

View File

@ -242,7 +242,7 @@ group and can access them as follows::
VFIO User API
-------------------------------------------------------------------------------
Please see include/linux/vfio.h for complete API documentation.
Please see include/uapi/linux/vfio.h for complete API documentation.
VFIO bus driver API
-------------------------------------------------------------------------------

View File

@ -1222,7 +1222,7 @@ defined:
return
-ECHILD and it will be called again in ref-walk mode.
``_weak_revalidate``
``d_weak_revalidate``
called when the VFS needs to revalidate a "jumped" dentry. This
is called when a path-walk ends at dentry that was not acquired
by doing a lookup in the parent directory. This includes "/",

View File

@ -19,7 +19,7 @@ possible we decided to do following:
platform devices.
- Devices behind real busses where there is a connector resource
are represented as struct spi_device or struct i2c_device. Note
are represented as struct spi_device or struct i2c_client. Note
that standard UARTs are not busses so there is no struct uart_device,
although some of them may be represented by struct serdev_device.

View File

@ -213,11 +213,7 @@ point rather than some random spot. If your upstream-bound branch has
emptied entirely into the mainline during the merge window, you can pull it
forward with a command like::
git merge v5.2-rc1^0
The "^0" will cause Git to do a fast-forward merge (which should be
possible in this situation), thus avoiding the addition of a spurious merge
commit.
git merge --ff-only v5.2-rc1
The guidelines laid out above are just that: guidelines. There will always
be situations that call out for a different solution, and these guidelines

View File

@ -5,10 +5,10 @@ Hugetlbfs Reservation
Overview
========
Huge pages as described at Documentation/mm/hugetlbpage.rst are typically
preallocated for application use. These huge pages are instantiated in a
task's address space at page fault time if the VMA indicates huge pages are
to be used. If no huge page exists at page fault time, the task is sent
Huge pages as described at Documentation/admin-guide/mm/hugetlbpage.rst are
typically preallocated for application use. These huge pages are instantiated
in a task's address space at page fault time if the VMA indicates huge pages
are to be used. If no huge page exists at page fault time, the task is sent
a SIGBUS and often dies an unhappy death. Shortly after huge page support
was added, it was determined that it would be better to detect a shortage
of huge pages at mmap() time. The idea is that if there were not enough

View File

@ -66,7 +66,7 @@ one of the types described below.
also populated on boot using one of ``kernelcore``, ``movablecore`` and
``movable_node`` kernel command line parameters. See
Documentation/mm/page_migration.rst and
Documentation/admin-guide/mm/memory_hotplug.rst for additional details.
Documentation/admin-guide/mm/memory-hotplug.rst for additional details.
* ``ZONE_DEVICE`` represents memory residing on devices such as PMEM and GPU.
It has different characteristics than RAM zone types and it exists to provide

View File

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
%YAML 1.2
---
$id: http://kernel.org/schemas/netlink/genetlink-c.yaml#

View File

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
%YAML 1.2
---
$id: http://kernel.org/schemas/netlink/genetlink-legacy.yaml#

View File

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
%YAML 1.2
---
$id: http://kernel.org/schemas/netlink/genetlink-legacy.yaml#

View File

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
name: ethtool

View File

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
name: fou

View File

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
name: netdev
@ -9,6 +9,7 @@ definitions:
-
type: flags
name: xdp-act
render-max: true
entries:
-
name: basic

View File

@ -23,10 +23,13 @@ metadata is supported, this set will grow:
An XDP program can use these kfuncs to read the metadata into stack
variables for its own consumption. Or, to pass the metadata on to other
consumers, an XDP program can store it into the metadata area carried
ahead of the packet.
ahead of the packet. Not all packets will necessarily have the requested
metadata available, in which case the driver returns ``-ENODATA``.
Not all kfuncs have to be implemented by the device driver; when not
implemented, the default ones that return ``-EOPNOTSUPP`` will be used.
implemented, the default ones that return ``-EOPNOTSUPP`` will be used
to indicate that the device driver has not implemented this kfunc.
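As a rough, hedged sketch (not part of the patch text), an XDP program could
probe one of these kfuncs and fall back gracefully when it is unimplemented or
no hint is present; the ``rx_meta`` program name is illustrative and the exact
kfunc set is driver- and kernel-version dependent:

::

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Kfunc declaration; whether it returns useful data depends on
     * the underlying driver. */
    extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
                                             __u64 *timestamp) __ksym;

    SEC("xdp")
    int rx_meta(struct xdp_md *ctx)
    {
            __u64 ts;
            int err = bpf_xdp_metadata_rx_timestamp(ctx, &ts);

            /* -EOPNOTSUPP: kfunc not implemented by this driver.
             * -ENODATA:    no timestamp available for this packet.
             * Either way, keep processing the packet as usual. */
            if (!err)
                    bpf_printk("rx hw timestamp: %llu", ts);

            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";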
Within an XDP frame, the metadata layout (accessed via ``xdp_buff``) is
as follows::

View File

@ -12,10 +12,6 @@ under ``-std=gnu11`` [gcc-c-dialect-options]_: the GNU dialect of ISO C11.
This dialect contains many extensions to the language [gnu-extensions]_,
and many of them are used within the kernel as a matter of course.
There is some support for compiling the kernel with ``icc`` [icc]_ for several
of the architectures, although at the time of writing it is not completed,
requiring third-party patches.
Attributes
----------
@ -35,12 +31,28 @@ in order to feature detect which ones can be used and/or to shorten the code.
Please refer to ``include/linux/compiler_attributes.h`` for more information.
Rust
----
The kernel has experimental support for the Rust programming language
[rust-language]_ under ``CONFIG_RUST``. It is compiled with ``rustc`` [rustc]_
under ``--edition=2021`` [rust-editions]_. Editions are a way to introduce
small changes to the language that are not backwards compatible.
On top of that, some unstable features [rust-unstable-features]_ are used in
the kernel. Unstable features may change in the future, thus it is an important
goal to reach a point where only stable features are used.
Please refer to Documentation/rust/index.rst for more information.
.. [c-language] http://www.open-std.org/jtc1/sc22/wg14/www/standards
.. [gcc] https://gcc.gnu.org
.. [clang] https://clang.llvm.org
.. [icc] https://software.intel.com/en-us/c-compilers
.. [gcc-c-dialect-options] https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
.. [gnu-extensions] https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html
.. [gcc-attribute-syntax] https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html
.. [n2049] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2049.pdf
.. [rust-language] https://www.rust-lang.org
.. [rustc] https://doc.rust-lang.org/rustc/
.. [rust-editions] https://doc.rust-lang.org/edition-guide/editions/
.. [rust-unstable-features] https://github.com/Rust-for-Linux/linux/issues/2

View File

@ -320,7 +320,7 @@ for their time. Code review is a tiring and time-consuming process, and
reviewers sometimes get grumpy. Even in that case, though, respond
politely and address the problems they have pointed out. When sending a next
version, add a ``patch changelog`` to the cover letter or to individual patches
explaining difference aganst previous submission (see
explaining difference against previous submission (see
:ref:`the_canonical_patch_format`).
See Documentation/process/email-clients.rst for recommendations on email

View File

@ -258,7 +258,7 @@ Linux cannot currently figure out CPU capacity on its own, this information thus
needs to be handed to it. Architectures must define arch_scale_cpu_capacity()
for that purpose.
The arm and arm64 architectures directly map this to the arch_topology driver
The arm, arm64, and RISC-V architectures directly map this to the arch_topology driver
CPU scaling data, which is derived from the capacity-dmips-mhz CPU binding; see
Documentation/devicetree/bindings/cpu/cpu-capacity.txt.

View File

@ -15,7 +15,8 @@ Hugetlbfs Reservation
Overview
========
Huge pages described in Documentation/mm/hugetlbpage.rst are typically preallocated for application use. If the VMA
Huge pages described in Documentation/admin-guide/mm/hugetlbpage.rst
are typically preallocated for application use. If the VMA
indicates huge pages are to be used, these huge pages are instantiated in the task's address space at page fault time.
If no huge page exists at page fault time, the task is sent a SIGBUS and often dies an unhappy death. Shortly after
huge page support was added, it was decided that it would be better to detect a shortage of huge pages at mmap()
time. The idea is that if

View File

@ -231,7 +231,7 @@ The CFS scheduling class is based on the Per-Entity Load Tracking (PELT) mechanism
Currently, Linux cannot work out CPU capacity on its own, so this information must be handed to it. Each architecture
must define the arch_scale_cpu_capacity() function for this purpose.
The arm and arm64 architectures directly map this to the arch_topology driver's CPU scaling data (translator's note: see
The arm, arm64, and RISC-V architectures directly map this to the arch_topology driver's CPU scaling data (translator's note: see
the percpu variable cpu_scale in arch_topology.h), which is derived from the capacity-dmips-mhz CPU binding.
See Documentation/devicetree/bindings/cpu/cpu-capacity.txt.

View File

@ -0,0 +1,352 @@
=======================
Linux UVC Gadget Driver
=======================
Overview
--------
The UVC Gadget driver is a driver for hardware on the *device* side of a USB
connection. It is intended to run on a Linux system that has USB device-side
hardware such as boards with an OTG port.
On the device system, once the driver is bound it appears as a V4L2 device with
the output capability.
On the host side (once connected via USB cable), a device running the UVC Gadget
driver *and controlled by an appropriate userspace program* should appear as a UVC
specification compliant camera, and function appropriately with any program
designed to handle them. The userspace program running on the device system can
queue image buffers from a variety of sources to be transmitted via the USB
connection. Typically this would mean forwarding the buffers from a camera sensor
peripheral, but the source of the buffer is entirely dependent on the userspace
companion program.
Configuring the device kernel
-----------------------------
The Kconfig options USB_CONFIGFS, USB_LIBCOMPOSITE, USB_CONFIGFS_F_UVC and
USB_F_UVC must be selected to enable support for the UVC gadget.
Configuring the gadget through configfs
---------------------------------------
The UVC Gadget expects to be configured through configfs using the UVC function.
This allows a significant degree of flexibility, as many of a UVC device's
settings can be controlled this way.
Not all of the available attributes are described here. For a complete enumeration
see Documentation/ABI/testing/configfs-usb-gadget-uvc.
Assumptions
~~~~~~~~~~~
This section assumes that you have mounted configfs at `/sys/kernel/config` and
created a gadget as `/sys/kernel/config/usb_gadget/g1`.
The UVC Function
~~~~~~~~~~~~~~~~
The first step is to create the UVC function:
.. code-block:: bash
# These variables will be assumed throughout the rest of the document
CONFIGFS="/sys/kernel/config"
GADGET="$CONFIGFS/usb_gadget/g1"
FUNCTION="$GADGET/functions/uvc.0"
mkdir -p $FUNCTION
Formats and Frames
~~~~~~~~~~~~~~~~~~
You must configure the gadget by telling it which formats you support, as well
as the frame sizes and frame intervals that are supported for each format. In
the current implementation there is no way for the gadget to refuse to set a
format that the host instructs it to set, so it is important that this step is
completed *accurately* to ensure that the host never asks for a format that
can't be provided.
Formats are created under the streaming/uncompressed and streaming/mjpeg configfs
groups, with the framesizes created under the formats in the following
structure:
::
uvc.0 +
|
+ streaming +
|
+ mjpeg +
| |
| + mjpeg +
| |
| + 720p
| |
| + 1080p
|
+ uncompressed +
|
+ yuyv +
|
+ 720p
|
+ 1080p
Each frame can then be configured with a width and height, plus the maximum
buffer size required to store a single frame, and finally with the supported
frame intervals for that format and framesize. Width and height are enumerated
in units of pixels, frame intervals in units of 100 ns (so an interval value is
10^7 divided by the frame rate). To create the structure above with 2, 15 and
100 fps frame intervals for each framesize, for example, you might do:
.. code-block:: bash
create_frame() {
# Example usage:
# create_frame <width> <height> <group> <format name>
WIDTH=$1
HEIGHT=$2
FORMAT=$3
NAME=$4
wdir=$FUNCTION/streaming/$FORMAT/$NAME/${HEIGHT}p
mkdir -p $wdir
echo $WIDTH > $wdir/wWidth
echo $HEIGHT > $wdir/wHeight
echo $(( $WIDTH * $HEIGHT * 2 )) > $wdir/dwMaxVideoFrameBufferSize
cat <<EOF > $wdir/dwFrameInterval
666666
100000
5000000
EOF
}
create_frame 1280 720 mjpeg mjpeg
create_frame 1920 1080 mjpeg mjpeg
create_frame 1280 720 uncompressed yuyv
create_frame 1920 1080 uncompressed yuyv
The only uncompressed format currently supported is YUYV, which is detailed at
Documentation/userspace-api/media/v4l/pixfmt-packed.yuv.rst.
Color Matching Descriptors
~~~~~~~~~~~~~~~~~~~~~~~~~~
It's possible to specify some colorimetry information for each format you create.
This step is optional, and default information will be included if this step is
skipped; those default values follow those defined in the Color Matching Descriptor
section of the UVC specification.
To create a Color Matching Descriptor, create a configfs item and set its three
attributes to your desired settings and then link to it from the format you wish
it to be associated with:
.. code-block:: bash
# Create a new Color Matching Descriptor
mkdir $FUNCTION/streaming/color_matching/yuyv
pushd $FUNCTION/streaming/color_matching/yuyv
echo 1 > bColorPrimaries
echo 1 > bTransferCharacteristics
echo 4 > bMatrixCoefficients
popd
# Create a symlink to the Color Matching Descriptor from the format's config item
ln -s $FUNCTION/streaming/color_matching/yuyv $FUNCTION/streaming/uncompressed/yuyv
For details about the valid values, consult the UVC specification. Note that a
default color matching descriptor exists and is used by any format which does
not have a link to a different Color Matching Descriptor. It's possible to
change the attribute settings for the default descriptor, so bear in mind that if
you do that you are altering the defaults for any format that does not link to
a different one.
Header linking
~~~~~~~~~~~~~~
The UVC specification requires that Format and Frame descriptors be preceded by
Headers detailing things such as the number and cumulative size of the different
Format descriptors that follow. This and similar operations are achieved in
configfs by linking between the configfs item representing the header and the
config items representing those other descriptors, in this manner:
.. code-block:: bash
mkdir $FUNCTION/streaming/header/h
# This section links the format descriptors and their associated frames
# to the header
cd $FUNCTION/streaming/header/h
ln -s ../../uncompressed/yuyv
ln -s ../../mjpeg/mjpeg
# This section ensures that the header will be transmitted for each
# speed's set of descriptors. If support for a particular speed is not
# needed then it can be skipped here.
cd ../../class/fs
ln -s ../../header/h
cd ../../class/hs
ln -s ../../header/h
cd ../../class/ss
ln -s ../../header/h
cd ../../../control
mkdir header/h
ln -s header/h class/fs
ln -s header/h class/ss
Extension Unit Support
~~~~~~~~~~~~~~~~~~~~~~
A UVC Extension Unit (XU) provides a distinct unit to which control set
and get requests can be addressed. The meaning of those control requests is
entirely implementation dependent, but may be used to control settings outside
of the UVC specification (for example enabling or disabling video effects). An
XU can be inserted into the UVC unit chain or left free-hanging.
Configuring an extension unit involves creating an entry in the appropriate
directory and setting its attributes appropriately, like so:
.. code-block:: bash
mkdir $FUNCTION/control/extensions/xu.0
pushd $FUNCTION/control/extensions/xu.0
# Set the bUnitID of the Processing Unit as the source for this
# Extension Unit
echo 2 > baSourceID
# Set this XU as the source of the default output terminal. This inserts
# the XU into the UVC chain between the PU and OT such that the final
# chain is IT > PU > XU.0 > OT
cat bUnitID > ../../terminal/output/default/baSourceID
# Flag some controls as being available for use. The bmControls field is
# a bitmap with each bit denoting the availability of a particular
# control. For example, to flag the 0th, 2nd and 3rd controls as available:
echo 0x0d > bmControls
# Set the GUID; this is a vendor-specific code identifying the XU.
echo -e -n "\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10" > guidExtensionCode
popd
The bmControls attribute and the baSourceID attribute are multi-value attributes.
This means that you may write multiple newline-separated values to them. For
example, to flag the 1st, 2nd, 9th and 10th controls as available you would
need to write two values to bmControls, like so:
.. code-block:: bash
cat << EOF > bmControls
0x03
0x03
EOF
The multi-value nature of the baSourceID attribute reflects the fact that XUs
can be multiple-input, though note that this currently has no significant effect.
The bControlSize attribute reflects the size of the bmControls attribute, and
similarly bNrInPins reflects the size of the baSourceID attribute. Both
attributes are automatically increased / decreased as you set bmControls and
baSourceID. It is also possible to manually increase or decrease bControlSize
which has the effect of truncating entries to the new size, or padding entries
out with 0x00, for example:
::
$ cat bmControls
0x03
0x05
$ cat bControlSize
2
$ echo 1 > bControlSize
$ cat bmControls
0x03
$ echo 2 > bControlSize
$ cat bmControls
0x03
0x00
bNrInPins and baSourceID function in the same way.
Custom Strings Support
~~~~~~~~~~~~~~~~~~~~~~
String descriptors that provide a textual description for various parts of a
USB device can be defined in the usual place within USB configfs, and may then
be linked to from the UVC function root or from Extension Unit directories to
assign those strings as descriptors:
.. code-block:: bash
# Create a string descriptor in en-US (0x409) and link to it from the
# function root. The name of the link is significant here, as it declares
# this descriptor to be intended for the Interface Association Descriptor.
# Other significant link names at function root are vs0_desc and vs1_desc
# for the VideoStreaming Interface 0/1 Descriptors.
mkdir -p $GADGET/strings/0x409/iad_desc
echo -n "Interface Associaton Descriptor" > $GADGET/strings/0x409/iad_desc/s
ln -s $GADGET/strings/0x409/iad_desc $FUNCTION/iad_desc
# Because the link to a String Descriptor from an Extension Unit clearly
# associates the two, the name of this link is not significant and may
# be set freely.
mkdir -p $GADGET/strings/0x409/xu.0
echo -n "A Very Useful Extension Unit" > $GADGET/strings/0x409/xu.0/s
ln -s $GADGET/strings/0x409/xu.0 $FUNCTION/control/extensions/xu.0
The interrupt endpoint
~~~~~~~~~~~~~~~~~~~~~~
The VideoControl interface has an optional interrupt endpoint which is by default
disabled. This is intended to support delayed response control set requests for
UVC (which should respond through the interrupt endpoint rather than tying up
endpoint 0). At present support for sending data through this endpoint is missing
and so it is left disabled to avoid confusion. If you wish to enable it you can
do so through the configfs attribute:
.. code-block:: bash
echo 1 > $FUNCTION/control/enable_interrupt_ep
Bandwidth configuration
~~~~~~~~~~~~~~~~~~~~~~~
There are three attributes which control the bandwidth of the USB connection.
These live in the function root and can be set within limits:
.. code-block:: bash
# streaming_interval sets bInterval. Values range from 1..255
echo 1 > $FUNCTION/streaming_interval
# streaming_maxpacket sets wMaxPacketSize. Valid values are 1024/2048/3072
echo 3072 > $FUNCTION/streaming_maxpacket
# streaming_maxburst sets bMaxBurst. Valid values are 1..15
echo 1 > $FUNCTION/streaming_maxburst
The values passed here will be clamped to valid values according to the UVC
specification (which depend on the speed of the USB connection). To understand
how the settings influence bandwidth you should consult the UVC specifications,
but a rule of thumb is that increasing the streaming_maxpacket setting will
improve bandwidth (and thus the maximum possible framerate), whilst the same is
true for streaming_maxburst provided the USB connection is running at SuperSpeed.
Increasing streaming_interval will reduce bandwidth and framerate.
The userspace application
-------------------------
By itself, the UVC Gadget driver cannot do anything particularly interesting. It
must be paired with a userspace program that responds to UVC control requests and
fills buffers to be queued to the V4L2 device that the driver creates. How those
things are achieved is implementation dependent and beyond the scope of this
document, but a reference application can be found at https://gitlab.freedesktop.org/camera/uvc-gadget
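As a minimal sketch under stated assumptions (the ``queue_frame()`` helper and
the device/buffer setup are hypothetical scaffolding, not the reference
implementation), the core of such a program queues filled buffers to the
gadget's V4L2 output device:

.. code-block:: c

    #include <linux/videodev2.h>
    #include <sys/ioctl.h>

    /* Queue one previously mmap'ed and filled buffer to the V4L2
     * output device created by the UVC gadget. fd comes from
     * open("/dev/videoN", O_RDWR); buffer setup (VIDIOC_REQBUFS,
     * VIDIOC_QUERYBUF, mmap) is assumed to have happened already. */
    static int queue_frame(int fd, unsigned int index)
    {
            struct v4l2_buffer buf = {
                    .type = V4L2_BUF_TYPE_VIDEO_OUTPUT,
                    .memory = V4L2_MEMORY_MMAP,
                    .index = index,
            };

            return ioctl(fd, VIDIOC_QBUF, &buf);
    }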

View File

@ -16,6 +16,7 @@ USB support
gadget_multi
gadget_printer
gadget_serial
gadget_uvc
gadget-testing
iuu_phoenix
mass-storage

View File

@ -24,7 +24,8 @@ YAML specifications can be found under ``Documentation/netlink/specs/``
This document describes details of the schema.
See :doc:`intro-specs` for a practical starting guide.
All specs must be licensed under ``GPL-2.0-only OR BSD-3-Clause``
All specs must be licensed under
``((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)``
to allow for easy adoption in user space code.
Compatibility levels

View File

@ -6030,6 +6030,44 @@ delivery must be provided via the "reg_aen" struct.
The "pad" and "reserved" fields may be used for future extensions and should be
set to 0s by userspace.
4.138 KVM_ARM_SET_COUNTER_OFFSET
--------------------------------
:Capability: KVM_CAP_COUNTER_OFFSET
:Architectures: arm64
:Type: vm ioctl
:Parameters: struct kvm_arm_counter_offset (in)
:Returns: 0 on success, < 0 on error
This capability indicates that userspace is able to apply a single VM-wide
offset to both the virtual and physical counters as viewed by the guest,
using the KVM_ARM_SET_COUNTER_OFFSET ioctl and the following data structure:
::
struct kvm_arm_counter_offset {
__u64 counter_offset;
__u64 reserved;
};
The offset describes a number of counter cycles that are subtracted from
both virtual and physical counter views (similar to the effects of the
CNTVOFF_EL2 and CNTPOFF_EL2 system registers, but only global). The offset
always applies to all vcpus (already created or created after this ioctl)
for this VM.
It is userspace's responsibility to compute the offset based, for example,
on previous values of the guest counters.
Any value other than 0 for the "reserved" field may result in an error
(-EINVAL) being returned. This ioctl can also return -EBUSY if any vcpu
ioctl is issued concurrently.
Note that using this ioctl results in KVM ignoring subsequent userspace
writes to the CNTVCT_EL0 and CNTPCT_EL0 registers using the SET_ONE_REG
interface. No error will be returned, but the resulting offset will not be
applied.
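As a minimal userspace sketch (not code from this patch; ``vm_fd`` is an
assumed VM file descriptor and ``host_counter`` an assumed, previously
sampled counter value), the ioctl might be used as follows:

::

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* host_counter would typically be a recent CNTVCT_EL0 reading so
     * the guest sees both counters as starting near zero. */
    static int set_vm_counter_offset(int vm_fd, __u64 host_counter)
    {
            struct kvm_arm_counter_offset off = {
                    .counter_offset = host_counter, /* subtracted from guest views */
                    .reserved = 0,                  /* must be zero */
            };

            return ioctl(vm_fd, KVM_ARM_SET_COUNTER_OFFSET, &off);
    }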
5. The kvm_run structure
========================
@ -6219,15 +6257,40 @@ to the byte array.
__u64 nr;
__u64 args[6];
__u64 ret;
__u32 longmode;
__u32 pad;
__u64 flags;
} hypercall;
Unused. This was once used for 'hypercall to userspace'. To implement
such functionality, use KVM_EXIT_IO (x86) or KVM_EXIT_MMIO (all except s390).
It is strongly recommended that userspace use ``KVM_EXIT_IO`` (x86) or
``KVM_EXIT_MMIO`` (all except s390) to implement functionality that
requires a guest to interact with host userspace.
.. note:: KVM_EXIT_IO is significantly faster than KVM_EXIT_MMIO.
For arm64:
----------
SMCCC exits can be enabled depending on the configuration of the SMCCC
filter. See ``KVM_ARM_SMCCC_FILTER`` in Documentation/virt/kvm/devices/vm.rst
for more details.
``nr`` contains the function ID of the guest's SMCCC call. Userspace is
expected to use the ``KVM_GET_ONE_REG`` ioctl to retrieve the call
parameters from the vCPU's GPRs.
Definition of ``flags``:
- ``KVM_HYPERCALL_EXIT_SMC``: Indicates that the guest used the SMC
conduit to initiate the SMCCC call. If this bit is 0 then the guest
used the HVC conduit for the SMCCC call.
- ``KVM_HYPERCALL_EXIT_16BIT``: Indicates that the guest used a 16-bit
instruction to initiate the SMCCC call. If this bit is 0 then the
guest used a 32-bit instruction. An AArch64 guest always has this
bit set to 0.
At the point of exit, PC points to the instruction immediately following
the trapping instruction.
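A hedged sketch of the userspace side (``handle_smccc()`` is a hypothetical
helper; the surrounding VMM run loop and ``struct kvm_run`` mapping are
assumed):

::

    #include <linux/kvm.h>
    #include <stdbool.h>

    extern void handle_smccc(int vcpu_fd, __u64 func_id, bool used_smc);

    /* Called from the run loop when kvm_run->exit_reason is
     * KVM_EXIT_HYPERCALL. handle_smccc() would read the argument GPRs
     * with KVM_GET_ONE_REG, emulate the call, and write the SMCCC
     * return value back to x0 with KVM_SET_ONE_REG. */
    static void handle_hypercall_exit(int vcpu_fd, struct kvm_run *run)
    {
            __u64 func_id = run->hypercall.nr;
            bool used_smc = run->hypercall.flags & KVM_HYPERCALL_EXIT_SMC;

            handle_smccc(vcpu_fd, func_id, used_smc);
    }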
::
/* KVM_EXIT_TPR_ACCESS */

View File

@ -321,3 +321,82 @@ Allows userspace to query the status of migration mode.
if it is enabled
:Returns: -EFAULT if the given address is not accessible from kernel space;
0 in case of success.
6. GROUP: KVM_ARM_VM_SMCCC_CTRL
===============================
:Architectures: arm64
6.1. ATTRIBUTE: KVM_ARM_VM_SMCCC_FILTER (w/o)
---------------------------------------------
:Parameters: Pointer to a ``struct kvm_smccc_filter``
:Returns:
====== ===========================================
EEXIST Range intersects with a previously inserted
or reserved range
EBUSY A vCPU in the VM has already run
EINVAL Invalid filter configuration
ENOMEM Failed to allocate memory for the in-kernel
representation of the SMCCC filter
====== ===========================================
Requests the installation of an SMCCC call filter described as follows::
enum kvm_smccc_filter_action {
KVM_SMCCC_FILTER_HANDLE = 0,
KVM_SMCCC_FILTER_DENY,
KVM_SMCCC_FILTER_FWD_TO_USER,
};
struct kvm_smccc_filter {
__u32 base;
__u32 nr_functions;
__u8 action;
__u8 pad[15];
};
The filter is defined as a set of non-overlapping ranges. Each
range defines an action to be applied to SMCCC calls within the range.
Userspace can insert multiple ranges into the filter by using
successive calls to this attribute.
The default configuration of KVM is such that all implemented SMCCC
calls are allowed. Thus, the SMCCC filter can be defined sparsely
by userspace, only describing ranges that modify the default behavior.
The range expressed by ``struct kvm_smccc_filter`` is
[``base``, ``base + nr_functions``). The range is not allowed to wrap,
i.e. userspace cannot rely on ``base + nr_functions`` overflowing.
The SMCCC filter applies to both SMC and HVC calls initiated by the
guest. The SMCCC filter gates the in-kernel emulation of SMCCC calls
and as such takes effect before other interfaces that interact with
SMCCC calls (e.g. hypercall bitmap registers).
Actions:
- ``KVM_SMCCC_FILTER_HANDLE``: Allows the guest SMCCC call to be
handled in-kernel. It is strongly recommended that userspace *not*
explicitly describe the allowed SMCCC call ranges.
- ``KVM_SMCCC_FILTER_DENY``: Rejects the guest SMCCC call in-kernel
and returns to the guest.
- ``KVM_SMCCC_FILTER_FWD_TO_USER``: The guest SMCCC call is forwarded
to userspace with an exit reason of ``KVM_EXIT_HYPERCALL``.
The ``pad`` field is reserved for future use and must be zero. KVM may
return ``-EINVAL`` if the field is nonzero.
KVM reserves the 'Arm Architecture Calls' ranges of function IDs and
will reject attempts to define a filter for any portion of these ranges:
=========== ===============
Start End (inclusive)
=========== ===============
0x8000_0000 0x8000_FFFF
0xC000_0000 0xC000_FFFF
=========== ===============
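As an illustrative sketch (the chosen range is an arbitrary example, and
``vm_fd`` is an assumed VM file descriptor), the attribute can be set with
the ``KVM_SET_DEVICE_ATTR`` ioctl on the VM fd:

::

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Forward an example function-ID range to userspace. Must run
     * before any vCPU has run, or KVM returns -EBUSY. */
    static int install_smccc_filter(int vm_fd)
    {
            struct kvm_smccc_filter filter = {
                    .base = 0xc3000000,       /* example range only */
                    .nr_functions = 0x10000,
                    .action = KVM_SMCCC_FILTER_FWD_TO_USER,
            };
            struct kvm_device_attr attr = {
                    .group = KVM_ARM_VM_SMCCC_CTRL,
                    .attr = KVM_ARM_VM_SMCCC_FILTER,
                    .addr = (__u64)(unsigned long)&filter,
            };

            return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
    }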

View File

@ -5971,7 +5971,7 @@ F: include/linux/dm-*.h
F: include/uapi/linux/dm-*.h
DEVLINK
M: Jiri Pirko <jiri@nvidia.com>
M: Jiri Pirko <jiri@resnulli.us>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/devlink
@ -14872,12 +14872,12 @@ M: Sagi Grimberg <sagi@grimberg.me>
L: linux-nvme@lists.infradead.org
S: Supported
W: http://git.infradead.org/nvme.git
T: git://git.infradead.org/nvme.git
T: git git://git.infradead.org/nvme.git
F: Documentation/nvme/
F: drivers/nvme/host/
F: drivers/nvme/common/
F: include/linux/nvme.h
F: drivers/nvme/host/
F: include/linux/nvme-*.h
F: include/linux/nvme.h
F: include/uapi/linux/nvme_ioctl.h
NVM EXPRESS FABRICS AUTHENTICATION
@ -14912,7 +14912,7 @@ M: Chaitanya Kulkarni <kch@nvidia.com>
L: linux-nvme@lists.infradead.org
S: Supported
W: http://git.infradead.org/nvme.git
T: git://git.infradead.org/nvme.git
T: git git://git.infradead.org/nvme.git
F: drivers/nvme/target/
NVMEM FRAMEWORK
@ -15079,7 +15079,7 @@ F: Documentation/hwmon/nzxt-smart2.rst
F: drivers/hwmon/nzxt-smart2.c
OBJAGG
M: Jiri Pirko <jiri@nvidia.com>
M: Jiri Pirko <jiri@resnulli.us>
L: netdev@vger.kernel.org
S: Supported
F: include/linux/objagg.h
@ -15853,7 +15853,7 @@ F: drivers/video/logo/logo_parisc*
F: include/linux/hp_sdc.h
PARMAN
M: Jiri Pirko <jiri@nvidia.com>
M: Jiri Pirko <jiri@resnulli.us>
L: netdev@vger.kernel.org
S: Supported
F: include/linux/parman.h
@ -17990,7 +17990,7 @@ F: Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
F: Documentation/devicetree/bindings/usb/microchip,mpfs-musb.yaml
F: arch/riscv/boot/dts/microchip/
F: drivers/char/hw_random/mpfs-rng.c
F: drivers/clk/microchip/clk-mpfs.c
F: drivers/clk/microchip/clk-mpfs*.c
F: drivers/i2c/busses/i2c-microchip-corei2c.c
F: drivers/mailbox/mailbox-mpfs.c
F: drivers/pci/controller/pcie-microchip-host.c
@ -19150,9 +19150,7 @@ W: http://www.brownhat.org/sis900.html
F: drivers/net/ethernet/sis/sis900.*
SIS FRAMEBUFFER DRIVER
M: Thomas Winischhofer <thomas@winischhofer.net>
S: Maintained
W: http://www.winischhofer.net/linuxsisvga.shtml
S: Orphan
F: Documentation/fb/sisfb.rst
F: drivers/video/fbdev/sis/
F: include/video/sisfb.h
@ -21645,6 +21643,7 @@ USB OVER IP DRIVER
M: Valentina Manea <valentina.manea.m@gmail.com>
M: Shuah Khan <shuah@kernel.org>
M: Shuah Khan <skhan@linuxfoundation.org>
R: Hongren Zheng <i@zenithal.me>
L: linux-usb@vger.kernel.org
S: Maintained
F: Documentation/usb/usbip_protocol.rst

View File

@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 3
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc4
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*
@ -274,8 +274,7 @@ no-dot-config-targets := $(clean-targets) \
cscope gtags TAGS tags help% %docs check% coccicheck \
$(version_h) headers headers_% archheaders archscripts \
%asm-generic kernelversion %src-pkg dt_binding_check \
outputmakefile rustavailable rustfmt rustfmtcheck \
scripts_package
outputmakefile rustavailable rustfmt rustfmtcheck
# Installation targets should not require compiler. Unfortunately, vdso_install
# is an exception where build artifacts may be updated. This must be fixed.
no-compiler-targets := $(no-dot-config-targets) install dtbs_install \
@ -1605,7 +1604,7 @@ MRPROPER_FILES += include/config include/generated \
certs/signing_key.pem \
certs/x509.genkey \
vmlinux-gdb.py \
*.spec \
*.spec rpmbuild \
rust/libmacros.so
# clean - Delete most, but leave enough to build external modules
@ -1656,10 +1655,6 @@ distclean: mrproper
%pkg: include/config/kernel.release FORCE
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.package $@
PHONY += scripts_package
scripts_package: scripts_basic
$(Q)$(MAKE) $(build)=scripts scripts/list-gitignored
# Brief documentation of the typical targets used
# ---------------------------------------------------------------------------
@ -1886,6 +1881,8 @@ endif
else # KBUILD_EXTMOD
filechk_kernel.release = echo $(KERNELRELEASE)
###
# External module support.
# When building external modules the kernel used as basis is considered

View File

@ -311,6 +311,7 @@
&usbotg1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usbotg1>;
disable-over-current;
srp-disable;
hnp-disable;

View File

@ -321,6 +321,7 @@
&usbotg1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usbotg1>;
disable-over-current;
srp-disable;
hnp-disable;

View File

@ -625,6 +625,7 @@
&usbotg1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usbotg1>;
disable-over-current;
srp-disable;
hnp-disable;

View File

@ -27,6 +27,16 @@
};
reserved-memory {
sbl_region: sbl@2f00000 {
reg = <0x02f00000 0x100000>;
no-map;
};
external_image_region: external-image@3100000 {
reg = <0x03100000 0x200000>;
no-map;
};
adsp_region: adsp@3300000 {
reg = <0x03300000 0x1400000>;
no-map;

View File

@ -116,7 +116,7 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
tocopy = n;
ua_flags = uaccess_save_and_enable();
memcpy((void *)to, from, tocopy);
__memcpy((void *)to, from, tocopy);
uaccess_restore(ua_flags);
to += tocopy;
from += tocopy;
@ -178,7 +178,7 @@ __clear_user_memset(void __user *addr, unsigned long n)
tocopy = n;
ua_flags = uaccess_save_and_enable();
memset((void *)addr, 0, tocopy);
__memset((void *)addr, 0, tocopy);
uaccess_restore(ua_flags);
addr += tocopy;
n -= tocopy;

View File

@ -56,14 +56,10 @@
};
&enetc_port2 {
nvmem-cells = <&base_mac_address 2>;
nvmem-cell-names = "mac-address";
status = "okay";
};
&enetc_port3 {
nvmem-cells = <&base_mac_address 3>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -84,8 +80,6 @@
managed = "in-band-status";
phy-handle = <&qsgmii_phy0>;
phy-mode = "qsgmii";
nvmem-cells = <&base_mac_address 4>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -94,8 +88,6 @@
managed = "in-band-status";
phy-handle = <&qsgmii_phy1>;
phy-mode = "qsgmii";
nvmem-cells = <&base_mac_address 5>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -104,8 +96,6 @@
managed = "in-band-status";
phy-handle = <&qsgmii_phy2>;
phy-mode = "qsgmii";
nvmem-cells = <&base_mac_address 6>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -114,8 +104,6 @@
managed = "in-band-status";
phy-handle = <&qsgmii_phy3>;
phy-mode = "qsgmii";
nvmem-cells = <&base_mac_address 7>;
nvmem-cell-names = "mac-address";
status = "okay";
};

View File

@ -55,7 +55,5 @@
&enetc_port1 {
phy-handle = <&phy0>;
phy-mode = "rgmii-id";
nvmem-cells = <&base_mac_address 0>;
nvmem-cell-names = "mac-address";
status = "okay";
};

View File

@ -36,14 +36,10 @@
};
&enetc_port2 {
nvmem-cells = <&base_mac_address 2>;
nvmem-cell-names = "mac-address";
status = "okay";
};
&enetc_port3 {
nvmem-cells = <&base_mac_address 3>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -56,8 +52,6 @@
managed = "in-band-status";
phy-handle = <&phy0>;
phy-mode = "sgmii";
nvmem-cells = <&base_mac_address 0>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -66,8 +60,6 @@
managed = "in-band-status";
phy-handle = <&phy1>;
phy-mode = "sgmii";
nvmem-cells = <&base_mac_address 1>;
nvmem-cell-names = "mac-address";
status = "okay";
};

View File

@ -43,7 +43,5 @@
&enetc_port1 {
phy-handle = <&phy1>;
phy-mode = "rgmii-id";
nvmem-cells = <&base_mac_address 1>;
nvmem-cell-names = "mac-address";
status = "okay";
};

View File

@ -92,8 +92,6 @@
phy-handle = <&phy0>;
phy-mode = "sgmii";
managed = "in-band-status";
nvmem-cells = <&base_mac_address 0>;
nvmem-cell-names = "mac-address";
status = "okay";
};
@ -156,21 +154,6 @@
label = "bootloader environment";
};
};
otp-1 {
compatible = "user-otp";
nvmem-layout {
compatible = "kontron,sl28-vpd";
serial_number: serial-number {
};
base_mac_address: base-mac-address {
#nvmem-cell-cells = <1>;
};
};
};
};
};

View File

@ -117,7 +117,7 @@ lsio_subsys: bus@5d000000 {
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX_SC_R_FSPI_0 IMX_SC_PM_CLK_PER>,
<&clk IMX_SC_R_FSPI_0 IMX_SC_PM_CLK_PER>;
clock-names = "fspi", "fspi_en";
clock-names = "fspi_en", "fspi";
power-domains = <&pd IMX_SC_R_FSPI_0>;
status = "disabled";
};

View File

@ -121,8 +121,6 @@
phy-handle = <&ethphy0>;
nvmem-cells = <&fec_mac1>;
nvmem-cell-names = "mac-address";
snps,reset-gpios = <&pca6416_1 2 GPIO_ACTIVE_LOW>;
snps,reset-delays-us = <10 20 200000>;
status = "okay";
mdio {
@ -136,6 +134,9 @@
eee-broken-1000t;
qca,disable-smarteee;
qca,disable-hibernation-mode;
reset-gpios = <&pca6416_1 2 GPIO_ACTIVE_LOW>;
reset-assert-us = <20>;
reset-deassert-us = <200000>;
vddio-supply = <&vddio0>;
vddio0: vddio-regulator {

View File

@ -247,7 +247,7 @@
compatible = "wlf,wm8960";
reg = <0x1a>;
clocks = <&clk IMX8MM_CLK_SAI1_ROOT>;
clock-names = "mclk1";
clock-names = "mclk";
wlf,shared-lrclk;
#sound-dai-cells = <0>;
};

View File

@ -296,6 +296,7 @@
sai2: sai@30020000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30020000 0x10000>;
#sound-dai-cells = <0>;
interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI2_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@ -310,6 +311,7 @@
sai3: sai@30030000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30030000 0x10000>;
#sound-dai-cells = <0>;
interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI3_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@ -324,6 +326,7 @@
sai5: sai@30050000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30050000 0x10000>;
#sound-dai-cells = <0>;
interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI5_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@ -340,6 +343,7 @@
sai6: sai@30060000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30060000 0x10000>;
#sound-dai-cells = <0>;
interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI6_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@ -397,6 +401,7 @@
sai7: sai@300b0000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x300b0000 0x10000>;
#sound-dai-cells = <0>;
interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI7_IPG>,
<&clk IMX8MN_CLK_DUMMY>,

View File

@ -1131,8 +1131,8 @@
reg = <0x32e90000 0x238>;
interrupts = <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MP_CLK_MEDIA_DISP2_PIX_ROOT>,
<&clk IMX8MP_CLK_MEDIA_AXI_ROOT>,
<&clk IMX8MP_CLK_MEDIA_APB_ROOT>;
<&clk IMX8MP_CLK_MEDIA_APB_ROOT>,
<&clk IMX8MP_CLK_MEDIA_AXI_ROOT>;
clock-names = "pix", "axi", "disp_axi";
assigned-clocks = <&clk IMX8MP_CLK_MEDIA_DISP2_PIX>,
<&clk IMX8MP_VIDEO_PLL1>;

View File

@ -164,6 +164,8 @@
lpi2c1: i2c@44340000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x44340000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C1_GATE>,
<&clk IMX93_CLK_BUS_AON>;
@ -174,6 +176,8 @@
lpi2c2: i2c@44350000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x44350000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C2_GATE>,
<&clk IMX93_CLK_BUS_AON>;
@ -343,6 +347,8 @@
lpi2c3: i2c@42530000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x42530000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C3_GATE>,
<&clk IMX93_CLK_BUS_WAKEUP>;
@ -353,6 +359,8 @@
lpi2c4: i2c@42540000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x42540000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C4_GATE>,
<&clk IMX93_CLK_BUS_WAKEUP>;
@ -455,6 +463,8 @@
lpi2c5: i2c@426b0000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x426b0000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 195 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C5_GATE>,
<&clk IMX93_CLK_BUS_WAKEUP>;
@ -465,6 +475,8 @@
lpi2c6: i2c@426c0000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x426c0000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C6_GATE>,
<&clk IMX93_CLK_BUS_WAKEUP>;
@ -475,6 +487,8 @@
lpi2c7: i2c@426d0000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x426d0000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 197 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C7_GATE>,
<&clk IMX93_CLK_BUS_WAKEUP>;
@ -485,6 +499,8 @@
lpi2c8: i2c@426e0000 {
compatible = "fsl,imx93-lpi2c", "fsl,imx7ulp-lpi2c";
reg = <0x426e0000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 198 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX93_CLK_LPI2C8_GATE>,
<&clk IMX93_CLK_BUS_WAKEUP>;
@ -580,9 +596,9 @@
eqos: ethernet@428a0000 {
compatible = "nxp,imx93-dwmac-eqos", "snps,dwmac-5.10a";
reg = <0x428a0000 0x10000>;
interrupts = <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "eth_wake_irq", "macirq";
interrupts = <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "macirq", "eth_wake_irq";
clocks = <&clk IMX93_CLK_ENET_QOS_GATE>,
<&clk IMX93_CLK_ENET_QOS_GATE>,
<&clk IMX93_CLK_ENET_TIMER2>,
@ -595,7 +611,7 @@
<&clk IMX93_CLK_SYS_PLL_PFD0_DIV2>;
assigned-clock-rates = <100000000>, <250000000>;
intf_mode = <&wakeupmix_gpr 0x28>;
clk_csr = <0>;
snps,clk-csr = <0>;
status = "disabled";
};

View File

@ -22,7 +22,7 @@
#address-cells = <2>;
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0x0 0x0 0x40000000>;
ranges = <0x0 0x0 0x0 0x0 0x100 0x0>;
apbmisc: misc@100000 {
compatible = "nvidia,tegra194-misc";

View File

@ -20,7 +20,7 @@
#address-cells = <2>;
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0x0 0x0 0x40000000>;
ranges = <0x0 0x0 0x0 0x0 0x100 0x0>;
misc@100000 {
compatible = "nvidia,tegra234-misc";

View File

@ -33,7 +33,3 @@
&gpio_leds_default {
pins = "gpio81", "gpio82", "gpio83";
};
&sim_ctrl_default {
pins = "gpio1", "gpio2";
};

View File

@ -25,6 +25,11 @@
gpios = <&msmgpio 20 GPIO_ACTIVE_HIGH>;
};
&mpss {
pinctrl-0 = <&sim_ctrl_default>;
pinctrl-names = "default";
};
&button_default {
pins = "gpio37";
bias-pull-down;
@ -34,6 +39,25 @@
pins = "gpio20", "gpio21", "gpio22";
};
&sim_ctrl_default {
pins = "gpio1", "gpio2";
/* This selects the external SIM card slot by default */
&msmgpio {
sim_ctrl_default: sim-ctrl-default-state {
esim-sel-pins {
pins = "gpio0", "gpio3";
bias-disable;
output-low;
};
sim-en-pins {
pins = "gpio1";
bias-disable;
output-low;
};
sim-sel-pins {
pins = "gpio2";
bias-disable;
output-high;
};
};
};

View File

@ -92,9 +92,6 @@
};
&mpss {
pinctrl-0 = <&sim_ctrl_default>;
pinctrl-names = "default";
status = "okay";
};
@ -240,11 +237,4 @@
drive-strength = <2>;
bias-disable;
};
sim_ctrl_default: sim-ctrl-default-state {
function = "gpio";
drive-strength = <2>;
bias-disable;
output-low;
};
};

View File

@ -241,7 +241,7 @@
};
&remoteproc_nsp0 {
firmware-name = "qcom/sa8540p/cdsp.mbn";
firmware-name = "qcom/sa8540p/cdsp0.mbn";
status = "okay";
};

View File

@ -2131,6 +2131,8 @@
pinctrl-names = "default";
pinctrl-0 = <&pcie1_clkreq_n>;
dma-coherent;
iommus = <&apps_smmu 0x1c80 0x1>;
iommu-map = <0x0 &apps_smmu 0x1c80 0x1>,

View File

@ -370,6 +370,7 @@
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
regulator-always-on;
};
vreg_s11b: smps11 {
@ -377,6 +378,7 @@
regulator-min-microvolt = <1272000>;
regulator-max-microvolt = <1272000>;
regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
regulator-always-on;
};
vreg_s12b: smps12 {
@ -384,6 +386,7 @@
regulator-min-microvolt = <984000>;
regulator-max-microvolt = <984000>;
regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
regulator-always-on;
};
vreg_l3b: ldo3 {
@ -441,6 +444,7 @@
regulator-min-microvolt = <3008000>;
regulator-max-microvolt = <3960000>;
regulator-initial-mode = <RPMH_REGULATOR_MODE_AUTO>;
regulator-always-on;
};
};
@ -772,75 +776,88 @@
pmic-die-temp@3 {
reg = <PMK8350_ADC7_DIE_TEMP>;
qcom,pre-scaling = <1 1>;
label = "pmk8350_die_temp";
};
xo-therm@44 {
reg = <PMK8350_ADC7_AMUX_THM1_100K_PU>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "pmk8350_xo_therm";
};
pmic-die-temp@103 {
reg = <PM8350_ADC7_DIE_TEMP(1)>;
qcom,pre-scaling = <1 1>;
label = "pmc8280_1_die_temp";
};
sys-therm@144 {
reg = <PM8350_ADC7_AMUX_THM1_100K_PU(1)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm1";
};
sys-therm@145 {
reg = <PM8350_ADC7_AMUX_THM2_100K_PU(1)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm2";
};
sys-therm@146 {
reg = <PM8350_ADC7_AMUX_THM3_100K_PU(1)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm3";
};
sys-therm@147 {
reg = <PM8350_ADC7_AMUX_THM4_100K_PU(1)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm4";
};
pmic-die-temp@303 {
reg = <PM8350_ADC7_DIE_TEMP(3)>;
qcom,pre-scaling = <1 1>;
label = "pmc8280_2_die_temp";
};
sys-therm@344 {
reg = <PM8350_ADC7_AMUX_THM1_100K_PU(3)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm5";
};
sys-therm@345 {
reg = <PM8350_ADC7_AMUX_THM2_100K_PU(3)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm6";
};
sys-therm@346 {
reg = <PM8350_ADC7_AMUX_THM3_100K_PU(3)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm7";
};
sys-therm@347 {
reg = <PM8350_ADC7_AMUX_THM4_100K_PU(3)>;
qcom,hw-settle-time = <200>;
qcom,ratiometric;
label = "sys_therm8";
};
pmic-die-temp@403 {
reg = <PMR735A_ADC7_DIE_TEMP>;
qcom,pre-scaling = <1 1>;
label = "pmr735a_die_temp";
};
};
@ -884,9 +901,9 @@
"VA DMIC0", "MIC BIAS1",
"VA DMIC1", "MIC BIAS1",
"VA DMIC2", "MIC BIAS3",
"TX DMIC0", "MIC BIAS1",
"TX DMIC1", "MIC BIAS2",
"TX DMIC2", "MIC BIAS3",
"VA DMIC0", "VA MIC BIAS1",
"VA DMIC1", "VA MIC BIAS1",
"VA DMIC2", "VA MIC BIAS3",
"TX SWR_ADC1", "ADC2_OUTPUT";
wcd-playback-dai-link {
@ -937,7 +954,7 @@
va-dai-link {
link-name = "VA Capture";
cpu {
sound-dai = <&q6apmbedai TX_CODEC_DMA_TX_3>;
sound-dai = <&q6apmbedai VA_CODEC_DMA_TX_0>;
};
platform {
@ -1062,7 +1079,7 @@
vdd-micb-supply = <&vreg_s10b>;
qcom,dmic-sample-rate = <600000>;
qcom,dmic-sample-rate = <4800000>;
status = "okay";
};

View File

@ -2504,12 +2504,12 @@
qcom,ports-sinterval-low = /bits/ 8 <0x03 0x1f 0x1f 0x07 0x00>;
qcom,ports-offset1 = /bits/ 8 <0x00 0x00 0x0B 0x01 0x00>;
qcom,ports-offset2 = /bits/ 8 <0x00 0x00 0x0B 0x00 0x00>;
qcom,ports-hstart = /bits/ 8 <0xff 0x03 0xff 0xff 0xff>;
qcom,ports-hstop = /bits/ 8 <0xff 0x06 0xff 0xff 0xff>;
qcom,ports-hstart = /bits/ 8 <0xff 0x03 0x00 0xff 0xff>;
qcom,ports-hstop = /bits/ 8 <0xff 0x06 0x0f 0xff 0xff>;
qcom,ports-word-length = /bits/ 8 <0x01 0x07 0x04 0xff 0xff>;
qcom,ports-block-pack-mode = /bits/ 8 <0xff 0x00 0x01 0xff 0xff>;
qcom,ports-block-pack-mode = /bits/ 8 <0xff 0xff 0x01 0xff 0xff>;
qcom,ports-lane-control = /bits/ 8 <0x01 0x00 0x00 0x00 0x00>;
qcom,ports-block-group-count = /bits/ 8 <0xff 0xff 0xff 0xff 0x00>;
qcom,ports-block-group-count = /bits/ 8 <0xff 0xff 0xff 0xff 0xff>;
#sound-dai-cells = <1>;
#address-cells = <2>;
@ -2600,7 +2600,7 @@
<&intc GIC_SPI 520 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "core", "wake";
clocks = <&vamacro>;
clocks = <&txmacro>;
clock-names = "iface";
label = "TX";
#sound-dai-cells = <1>;
@ -2609,15 +2609,15 @@
qcom,din-ports = <4>;
qcom,dout-ports = <0>;
qcom,ports-sinterval-low = /bits/ 8 <0x01 0x03 0x03 0x03>;
qcom,ports-offset1 = /bits/ 8 <0x01 0x00 0x02 0x01>;
qcom,ports-sinterval-low = /bits/ 8 <0x01 0x01 0x03 0x03>;
qcom,ports-offset1 = /bits/ 8 <0x01 0x00 0x02 0x00>;
qcom,ports-offset2 = /bits/ 8 <0x00 0x00 0x00 0x00>;
qcom,ports-block-pack-mode = /bits/ 8 <0xff 0xff 0xff 0xff>;
qcom,ports-hstart = /bits/ 8 <0xff 0xff 0xff 0xff>;
qcom,ports-hstop = /bits/ 8 <0xff 0xff 0xff 0xff>;
qcom,ports-word-length = /bits/ 8 <0xff 0x00 0xff 0xff>;
qcom,ports-word-length = /bits/ 8 <0xff 0xff 0xff 0xff>;
qcom,ports-block-group-count = /bits/ 8 <0xff 0xff 0xff 0xff>;
qcom,ports-lane-control = /bits/ 8 <0x00 0x01 0x00 0x00>;
qcom,ports-lane-control = /bits/ 8 <0x00 0x01 0x00 0x01>;
status = "disabled";
};

View File

@ -1078,6 +1078,7 @@
dma-names = "tx", "rx";
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
};
};

View File

@ -1209,6 +1209,7 @@
clock-names = "xo";
power-domains = <&rpmpd SM6375_VDDCX>;
power-domain-names = "cx";
memory-region = <&pil_cdsp_mem>;

View File

@ -1826,7 +1826,7 @@
"slave_q2a",
"tbu";
iommus = <&apps_smmu 0x1d80 0x7f>;
iommus = <&apps_smmu 0x1d80 0x3f>;
iommu-map = <0x0 &apps_smmu 0x1d80 0x1>,
<0x100 &apps_smmu 0x1d81 0x1>;
@ -1925,7 +1925,7 @@
assigned-clocks = <&gcc GCC_PCIE_1_AUX_CLK>;
assigned-clock-rates = <19200000>;
iommus = <&apps_smmu 0x1e00 0x7f>;
iommus = <&apps_smmu 0x1e00 0x3f>;
iommu-map = <0x0 &apps_smmu 0x1e00 0x1>,
<0x100 &apps_smmu 0x1e01 0x1>;

View File

@ -625,6 +625,6 @@
};
&venus {
firmware-name = "qcom/sm8250/elish/venus.mbn";
firmware-name = "qcom/sm8250/xiaomi/elish/venus.mbn";
status = "okay";
};

View File

@ -1664,6 +1664,7 @@
power-domains = <&gcc UFS_PHY_GDSC>;
iommus = <&apps_smmu 0xe0 0x0>;
dma-coherent;
clock-names =
"core_clk",

View File

@ -2143,8 +2143,8 @@
<&q6prmcc LPASS_HW_DCODEC_VOTE LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
<&vamacro>;
clock-names = "mclk", "npl", "macro", "dcodec", "fsgen";
assigned-clocks = <&q6prmcc LPASS_CLK_ID_WSA_CORE_TX_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
<&q6prmcc LPASS_CLK_ID_WSA_CORE_TX_2X_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>;
assigned-clocks = <&q6prmcc LPASS_CLK_ID_WSA2_CORE_TX_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
<&q6prmcc LPASS_CLK_ID_WSA2_CORE_TX_2X_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>;
assigned-clock-rates = <19200000>, <19200000>;
#clock-cells = <0>;
@ -4003,6 +4003,7 @@
power-domains = <&gcc UFS_PHY_GDSC>;
iommus = <&apps_smmu 0xe0 0x0>;
dma-coherent;
interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;


@ -66,7 +66,7 @@
CPU0: cpu@0 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a510";
reg = <0 0>;
enable-method = "psci";
next-level-cache = <&L2_0>;
@ -89,7 +89,7 @@
CPU1: cpu@100 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a510";
reg = <0 0x100>;
enable-method = "psci";
next-level-cache = <&L2_100>;
@ -108,7 +108,7 @@
CPU2: cpu@200 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a510";
reg = <0 0x200>;
enable-method = "psci";
next-level-cache = <&L2_200>;
@ -127,7 +127,7 @@
CPU3: cpu@300 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a715";
reg = <0 0x300>;
enable-method = "psci";
next-level-cache = <&L2_300>;
@ -146,7 +146,7 @@
CPU4: cpu@400 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a715";
reg = <0 0x400>;
enable-method = "psci";
next-level-cache = <&L2_400>;
@ -165,7 +165,7 @@
CPU5: cpu@500 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a710";
reg = <0 0x500>;
enable-method = "psci";
next-level-cache = <&L2_500>;
@ -184,7 +184,7 @@
CPU6: cpu@600 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-a710";
reg = <0 0x600>;
enable-method = "psci";
next-level-cache = <&L2_600>;
@ -203,7 +203,7 @@
CPU7: cpu@700 {
device_type = "cpu";
compatible = "qcom,kryo";
compatible = "arm,cortex-x3";
reg = <0 0x700>;
enable-method = "psci";
next-level-cache = <&L2_700>;
@ -1905,6 +1905,7 @@
required-opps = <&rpmhpd_opp_nom>;
iommus = <&apps_smmu 0x60 0x0>;
dma-coherent;
interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
@ -1997,7 +1998,7 @@
lpass_tlmm: pinctrl@6e80000 {
compatible = "qcom,sm8550-lpass-lpi-pinctrl";
reg = <0 0x06e80000 0 0x20000>,
<0 0x0725a000 0 0x10000>;
<0 0x07250000 0 0x10000>;
gpio-controller;
#gpio-cells = <2>;
gpio-ranges = <&lpass_tlmm 0 0 23>;
@ -2691,7 +2692,7 @@
pins = "gpio28", "gpio29";
function = "qup1_se0";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c1_data_clk: qup-i2c1-data-clk-state {
@ -2699,7 +2700,7 @@
pins = "gpio32", "gpio33";
function = "qup1_se1";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c2_data_clk: qup-i2c2-data-clk-state {
@ -2707,7 +2708,7 @@
pins = "gpio36", "gpio37";
function = "qup1_se2";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c3_data_clk: qup-i2c3-data-clk-state {
@ -2715,7 +2716,7 @@
pins = "gpio40", "gpio41";
function = "qup1_se3";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c4_data_clk: qup-i2c4-data-clk-state {
@ -2723,7 +2724,7 @@
pins = "gpio44", "gpio45";
function = "qup1_se4";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c5_data_clk: qup-i2c5-data-clk-state {
@ -2731,7 +2732,7 @@
pins = "gpio52", "gpio53";
function = "qup1_se5";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c6_data_clk: qup-i2c6-data-clk-state {
@ -2739,7 +2740,7 @@
pins = "gpio48", "gpio49";
function = "qup1_se6";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c8_data_clk: qup-i2c8-data-clk-state {
@ -2747,14 +2748,14 @@
pins = "gpio57";
function = "qup2_se0_l1_mira";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
sda-pins {
pins = "gpio56";
function = "qup2_se0_l0_mira";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
};
@ -2763,7 +2764,7 @@
pins = "gpio60", "gpio61";
function = "qup2_se1";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c10_data_clk: qup-i2c10-data-clk-state {
@ -2771,7 +2772,7 @@
pins = "gpio64", "gpio65";
function = "qup2_se2";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c11_data_clk: qup-i2c11-data-clk-state {
@ -2779,7 +2780,7 @@
pins = "gpio68", "gpio69";
function = "qup2_se3";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c12_data_clk: qup-i2c12-data-clk-state {
@ -2787,7 +2788,7 @@
pins = "gpio2", "gpio3";
function = "qup2_se4";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c13_data_clk: qup-i2c13-data-clk-state {
@ -2795,7 +2796,7 @@
pins = "gpio80", "gpio81";
function = "qup2_se5";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_i2c15_data_clk: qup-i2c15-data-clk-state {
@ -2803,7 +2804,7 @@
pins = "gpio72", "gpio106";
function = "qup2_se7";
drive-strength = <2>;
bias-pull-up;
bias-pull-up = <2200>;
};
qup_spi0_cs: qup-spi0-cs-state {


@ -16,6 +16,7 @@
#include <linux/types.h>
#include <linux/jump_label.h>
#include <linux/kvm_types.h>
#include <linux/maple_tree.h>
#include <linux/percpu.h>
#include <linux/psci.h>
#include <asm/arch_gicv3.h>
@ -199,6 +200,9 @@ struct kvm_arch {
/* Mandated version of PSCI */
u32 psci_version;
/* Protects VM-scoped configuration data */
struct mutex config_lock;
/*
* If we encounter a data abort without valid instruction syndrome
* information, report this to user space. User space can (and
@ -221,7 +225,12 @@ struct kvm_arch {
#define KVM_ARCH_FLAG_EL1_32BIT 4
/* PSCI SYSTEM_SUSPEND enabled for the guest */
#define KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED 5
/* VM counter offset */
#define KVM_ARCH_FLAG_VM_COUNTER_OFFSET 6
/* Timer PPIs made immutable */
#define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE 7
/* SMCCC filter initialized for the VM */
#define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED 8
unsigned long flags;
/*
@ -242,6 +251,7 @@ struct kvm_arch {
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
/*
* For an untrusted host VM, 'pkvm.handle' is used to lookup
@ -365,6 +375,10 @@ enum vcpu_sysreg {
TPIDR_EL2, /* EL2 Software Thread ID Register */
CNTHCTL_EL2, /* Counter-timer Hypervisor Control register */
SP_EL2, /* EL2 Stack Pointer */
CNTHP_CTL_EL2,
CNTHP_CVAL_EL2,
CNTHV_CTL_EL2,
CNTHV_CVAL_EL2,
NR_SYS_REGS /* Nothing after this line! */
};
@ -522,6 +536,7 @@ struct kvm_vcpu_arch {
/* vcpu power state */
struct kvm_mp_state mp_state;
spinlock_t mp_state_lock;
/* Cache some mmu pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
@ -922,6 +937,9 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
int __init kvm_sys_reg_table_init(void);
bool lock_all_vcpus(struct kvm *kvm);
void unlock_all_vcpus(struct kvm *kvm);
/* MMIO helpers */
void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
@ -1007,6 +1025,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
struct kvm_arm_copy_mte_tags *copy_tags);
int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
struct kvm_arm_counter_offset *offset);
/* Guest/host FPSIMD coordination helpers */
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
@ -1061,6 +1081,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
(system_supports_32bit_el0() && \
!static_branch_unlikely(&arm64_mismatched_32bit_el0))
#define kvm_vm_has_ran_once(kvm) \
(test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &(kvm)->arch.flags))
int kvm_trng_call(struct kvm_vcpu *vcpu);
#ifdef CONFIG_KVM
extern phys_addr_t hyp_mem_base;


@ -63,6 +63,7 @@
* specific registers encoded in the instructions).
*/
.macro kern_hyp_va reg
#ifndef __KVM_VHE_HYPERVISOR__
alternative_cb ARM64_ALWAYS_SYSTEM, kvm_update_va_mask
and \reg, \reg, #1 /* mask with va_mask */
ror \reg, \reg, #1 /* rotate to the first tag bit */
@ -70,6 +71,7 @@ alternative_cb ARM64_ALWAYS_SYSTEM, kvm_update_va_mask
add \reg, \reg, #0, lsl 12 /* insert the top 12 bits of the tag */
ror \reg, \reg, #63 /* rotate back */
alternative_cb_end
#endif
.endm
/*
@ -127,6 +129,7 @@ void kvm_apply_hyp_relocations(void);
static __always_inline unsigned long __kern_hyp_va(unsigned long v)
{
#ifndef __KVM_VHE_HYPERVISOR__
asm volatile(ALTERNATIVE_CB("and %0, %0, #1\n"
"ror %0, %0, #1\n"
"add %0, %0, #0\n"
@ -135,6 +138,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
ARM64_ALWAYS_SYSTEM,
kvm_update_va_mask)
: "+r" (v));
#endif
return v;
}


@ -388,6 +388,7 @@
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CNTPCT_EL0 sys_reg(3, 3, 14, 0, 1)
#define SYS_CNTPCTSS_EL0 sys_reg(3, 3, 14, 0, 5)
#define SYS_CNTVCTSS_EL0 sys_reg(3, 3, 14, 0, 6)
@ -400,7 +401,9 @@
#define SYS_AARCH32_CNTP_TVAL sys_reg(0, 0, 14, 2, 0)
#define SYS_AARCH32_CNTP_CTL sys_reg(0, 0, 14, 2, 1)
#define SYS_AARCH32_CNTPCT sys_reg(0, 0, 0, 14, 0)
#define SYS_AARCH32_CNTP_CVAL sys_reg(0, 2, 0, 14, 0)
#define SYS_AARCH32_CNTPCTSS sys_reg(0, 8, 0, 14, 0)
#define __PMEV_op2(n) ((n) & 0x7)
#define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3))


@ -198,6 +198,15 @@ struct kvm_arm_copy_mte_tags {
__u64 reserved[2];
};
/*
* Counter/Timer offset structure. Describes the virtual/physical offset.
* To be used with KVM_ARM_SET_COUNTER_OFFSET.
*/
struct kvm_arm_counter_offset {
__u64 counter_offset;
__u64 reserved;
};
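For illustration, a VMM drives this with a plain ioctl on the VM file descriptor. A minimal userspace sketch, assuming an open vm_fd and this uapi header (set_vm_counter_offset() is a made-up helper name):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: apply a VM-wide counter offset, ideally before any vcpu runs. */
static int set_vm_counter_offset(int vm_fd, __u64 offset)
{
	struct kvm_arm_counter_offset off = {
		.counter_offset = offset,	/* applied to both virtual and physical views */
		.reserved	= 0,		/* must be zero, or the ioctl fails with EINVAL */
	};

	return ioctl(vm_fd, KVM_ARM_SET_COUNTER_OFFSET, &off);
}

Since the implementation grabs every vcpu mutex, the call fails with EBUSY if any vcpu is inside an ioctl at that moment (see lock_all_vcpus() further down).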
#define KVM_ARM_TAGS_TO_GUEST 0
#define KVM_ARM_TAGS_FROM_GUEST 1
@ -372,6 +381,10 @@ enum {
#endif
};
/* Device Control API on vm fd */
#define KVM_ARM_VM_SMCCC_CTRL 0
#define KVM_ARM_VM_SMCCC_FILTER 0
/* Device Control API: ARM VGIC */
#define KVM_DEV_ARM_VGIC_GRP_ADDR 0
#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
@ -411,6 +424,8 @@ enum {
#define KVM_ARM_VCPU_TIMER_CTRL 1
#define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0
#define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
#define KVM_ARM_VCPU_TIMER_IRQ_HVTIMER 2
#define KVM_ARM_VCPU_TIMER_IRQ_HPTIMER 3
#define KVM_ARM_VCPU_PVTIME_CTRL 2
#define KVM_ARM_VCPU_PVTIME_IPA 0
@ -469,6 +484,27 @@ enum {
/* run->fail_entry.hardware_entry_failure_reason codes. */
#define KVM_EXIT_FAIL_ENTRY_CPU_UNSUPPORTED (1ULL << 0)
enum kvm_smccc_filter_action {
KVM_SMCCC_FILTER_HANDLE = 0,
KVM_SMCCC_FILTER_DENY,
KVM_SMCCC_FILTER_FWD_TO_USER,
#ifdef __KERNEL__
NR_SMCCC_FILTER_ACTIONS
#endif
};
struct kvm_smccc_filter {
__u32 base;
__u32 nr_functions;
__u8 action;
__u8 pad[15];
};
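Installing a filter range from userspace goes through the device-attr plumbing declared above. A sketch, where forward_hvc_range() and the chosen range are made up for illustration:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: ask KVM to forward a window of SMCCC calls to userspace. */
static int forward_hvc_range(int vm_fd, __u32 base, __u32 nr_functions)
{
	struct kvm_smccc_filter filter = {
		.base		= base,
		.nr_functions	= nr_functions,
		.action		= KVM_SMCCC_FILTER_FWD_TO_USER,
		/* .pad is implicitly zeroed, as the kernel requires */
	};
	struct kvm_device_attr attr = {
		.group	= KVM_ARM_VM_SMCCC_CTRL,
		.attr	= KVM_ARM_VM_SMCCC_FILTER,
		.addr	= (__u64)(unsigned long)&filter,
	};

	return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
}

Ranges cannot overlap an already-inserted one (the standard Arm architecture ranges are pre-reserved at VM creation), and the filter becomes immutable once the VM has run, as the hypercalls.c hunks below show.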
/* arm64-specific KVM_EXIT_HYPERCALL flags */
#define KVM_HYPERCALL_EXIT_SMC (1U << 0)
#define KVM_HYPERCALL_EXIT_16BIT (1U << 1)
#endif
#endif /* __ARM_KVM_H__ */


@ -2223,6 +2223,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED,
.min_field_value = 1,
},
{
.desc = "Enhanced Counter Virtualization (CNTPOFF)",
.capability = ARM64_HAS_ECV_CNTPOFF,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64MMFR0_EL1,
.field_pos = ID_AA64MMFR0_EL1_ECV_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = ID_AA64MMFR0_EL1_ECV_CNTPOFF,
},
#ifdef CONFIG_ARM64_PAN
{
.desc = "Privileged Access Never",


@ -66,7 +66,7 @@
.long .Lefi_header_end - .L_head // SizeOfHeaders
.long 0 // CheckSum
.short IMAGE_SUBSYSTEM_EFI_APPLICATION // Subsystem
.short 0 // DllCharacteristics
.short IMAGE_DLL_CHARACTERISTICS_NX_COMPAT // DllCharacteristics
.quad 0 // SizeOfStackReserve
.quad 0 // SizeOfStackCommit
.quad 0 // SizeOfHeapReserve


@ -16,6 +16,7 @@
#include <asm/arch_timer.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_nested.h>
#include <kvm/arm_vgic.h>
#include <kvm/arm_arch_timer.h>
@ -30,14 +31,11 @@ static u32 host_ptimer_irq_flags;
static DEFINE_STATIC_KEY_FALSE(has_gic_active_state);
static const struct kvm_irq_level default_ptimer_irq = {
.irq = 30,
.level = 1,
};
static const struct kvm_irq_level default_vtimer_irq = {
.irq = 27,
.level = 1,
static const u8 default_ppi[] = {
[TIMER_PTIMER] = 30,
[TIMER_VTIMER] = 27,
[TIMER_HPTIMER] = 26,
[TIMER_HVTIMER] = 28,
};
static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx);
@ -51,6 +49,24 @@ static void kvm_arm_timer_write(struct kvm_vcpu *vcpu,
static u64 kvm_arm_timer_read(struct kvm_vcpu *vcpu,
struct arch_timer_context *timer,
enum kvm_arch_timer_regs treg);
static bool kvm_arch_timer_get_input_level(int vintid);
static struct irq_ops arch_timer_irq_ops = {
.get_input_level = kvm_arch_timer_get_input_level,
};
static bool has_cntpoff(void)
{
return (has_vhe() && cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF));
}
static int nr_timers(struct kvm_vcpu *vcpu)
{
if (!vcpu_has_nv(vcpu))
return NR_KVM_EL0_TIMERS;
return NR_KVM_TIMERS;
}
u32 timer_get_ctl(struct arch_timer_context *ctxt)
{
@ -61,6 +77,10 @@ u32 timer_get_ctl(struct arch_timer_context *ctxt)
return __vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
case TIMER_PTIMER:
return __vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
case TIMER_HVTIMER:
return __vcpu_sys_reg(vcpu, CNTHV_CTL_EL2);
case TIMER_HPTIMER:
return __vcpu_sys_reg(vcpu, CNTHP_CTL_EL2);
default:
WARN_ON(1);
return 0;
@ -76,6 +96,10 @@ u64 timer_get_cval(struct arch_timer_context *ctxt)
return __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
case TIMER_PTIMER:
return __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
case TIMER_HVTIMER:
return __vcpu_sys_reg(vcpu, CNTHV_CVAL_EL2);
case TIMER_HPTIMER:
return __vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2);
default:
WARN_ON(1);
return 0;
@ -84,10 +108,17 @@ u64 timer_get_cval(struct arch_timer_context *ctxt)
static u64 timer_get_offset(struct arch_timer_context *ctxt)
{
if (ctxt->offset.vm_offset)
return *ctxt->offset.vm_offset;
u64 offset = 0;
return 0;
if (!ctxt)
return 0;
if (ctxt->offset.vm_offset)
offset += *ctxt->offset.vm_offset;
if (ctxt->offset.vcpu_offset)
offset += *ctxt->offset.vcpu_offset;
return offset;
}
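Put differently, a guest-visible count is always the hardware count minus the composed offset. A one-line sketch of that relationship, mirroring the TIMER_REG_CNT read path later in this file (guest_counter_read() is a made-up name):

/* Sketch: derive the guest-visible counter value for a context. */
static u64 guest_counter_read(struct arch_timer_context *ctxt)
{
	return kvm_phys_timer_read() - timer_get_offset(ctxt);
}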
static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl)
@ -101,6 +132,12 @@ static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl)
case TIMER_PTIMER:
__vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl;
break;
case TIMER_HVTIMER:
__vcpu_sys_reg(vcpu, CNTHV_CTL_EL2) = ctl;
break;
case TIMER_HPTIMER:
__vcpu_sys_reg(vcpu, CNTHP_CTL_EL2) = ctl;
break;
default:
WARN_ON(1);
}
@ -117,6 +154,12 @@ static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval)
case TIMER_PTIMER:
__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval;
break;
case TIMER_HVTIMER:
__vcpu_sys_reg(vcpu, CNTHV_CVAL_EL2) = cval;
break;
case TIMER_HPTIMER:
__vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = cval;
break;
default:
WARN_ON(1);
}
@ -139,13 +182,27 @@ u64 kvm_phys_timer_read(void)
static void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
{
if (has_vhe()) {
if (vcpu_has_nv(vcpu)) {
if (is_hyp_ctxt(vcpu)) {
map->direct_vtimer = vcpu_hvtimer(vcpu);
map->direct_ptimer = vcpu_hptimer(vcpu);
map->emul_vtimer = vcpu_vtimer(vcpu);
map->emul_ptimer = vcpu_ptimer(vcpu);
} else {
map->direct_vtimer = vcpu_vtimer(vcpu);
map->direct_ptimer = vcpu_ptimer(vcpu);
map->emul_vtimer = vcpu_hvtimer(vcpu);
map->emul_ptimer = vcpu_hptimer(vcpu);
}
} else if (has_vhe()) {
map->direct_vtimer = vcpu_vtimer(vcpu);
map->direct_ptimer = vcpu_ptimer(vcpu);
map->emul_vtimer = NULL;
map->emul_ptimer = NULL;
} else {
map->direct_vtimer = vcpu_vtimer(vcpu);
map->direct_ptimer = NULL;
map->emul_vtimer = NULL;
map->emul_ptimer = vcpu_ptimer(vcpu);
}
@ -212,7 +269,7 @@ static u64 kvm_counter_compute_delta(struct arch_timer_context *timer_ctx,
ns = cyclecounter_cyc2ns(timecounter->cc,
val - now,
timecounter->mask,
&timecounter->frac);
&timer_ctx->ns_frac);
return ns;
}
@ -240,8 +297,11 @@ static bool vcpu_has_wfit_active(struct kvm_vcpu *vcpu)
static u64 wfit_delay_ns(struct kvm_vcpu *vcpu)
{
struct arch_timer_context *ctx = vcpu_vtimer(vcpu);
u64 val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
struct arch_timer_context *ctx;
ctx = (vcpu_has_nv(vcpu) && is_hyp_ctxt(vcpu)) ? vcpu_hvtimer(vcpu)
: vcpu_vtimer(vcpu);
return kvm_counter_compute_delta(ctx, val);
}
@ -255,7 +315,7 @@ static u64 kvm_timer_earliest_exp(struct kvm_vcpu *vcpu)
u64 min_delta = ULLONG_MAX;
int i;
for (i = 0; i < NR_KVM_TIMERS; i++) {
for (i = 0; i < nr_timers(vcpu); i++) {
struct arch_timer_context *ctx = &vcpu->arch.timer_cpu.timers[i];
WARN(ctx->loaded, "timer %d loaded\n", i);
@ -338,9 +398,11 @@ static bool kvm_timer_should_fire(struct arch_timer_context *timer_ctx)
switch (index) {
case TIMER_VTIMER:
case TIMER_HVTIMER:
cnt_ctl = read_sysreg_el0(SYS_CNTV_CTL);
break;
case TIMER_PTIMER:
case TIMER_HPTIMER:
cnt_ctl = read_sysreg_el0(SYS_CNTP_CTL);
break;
case NR_KVM_TIMERS:
@ -392,12 +454,12 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
int ret;
timer_ctx->irq.level = new_level;
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_irq(timer_ctx),
timer_ctx->irq.level);
if (!userspace_irqchip(vcpu->kvm)) {
ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
timer_ctx->irq.irq,
timer_irq(timer_ctx),
timer_ctx->irq.level,
timer_ctx);
WARN_ON(ret);
@ -432,6 +494,12 @@ static void set_cntvoff(u64 cntvoff)
kvm_call_hyp(__kvm_timer_set_cntvoff, cntvoff);
}
static void set_cntpoff(u64 cntpoff)
{
if (has_cntpoff())
write_sysreg_s(cntpoff, SYS_CNTPOFF_EL2);
}
static void timer_save_state(struct arch_timer_context *ctx)
{
struct arch_timer_cpu *timer = vcpu_timer(ctx->vcpu);
@ -447,7 +515,10 @@ static void timer_save_state(struct arch_timer_context *ctx)
goto out;
switch (index) {
u64 cval;
case TIMER_VTIMER:
case TIMER_HVTIMER:
timer_set_ctl(ctx, read_sysreg_el0(SYS_CNTV_CTL));
timer_set_cval(ctx, read_sysreg_el0(SYS_CNTV_CVAL));
@ -473,13 +544,20 @@ static void timer_save_state(struct arch_timer_context *ctx)
set_cntvoff(0);
break;
case TIMER_PTIMER:
case TIMER_HPTIMER:
timer_set_ctl(ctx, read_sysreg_el0(SYS_CNTP_CTL));
timer_set_cval(ctx, read_sysreg_el0(SYS_CNTP_CVAL));
cval = read_sysreg_el0(SYS_CNTP_CVAL);
if (!has_cntpoff())
cval -= timer_get_offset(ctx);
timer_set_cval(ctx, cval);
/* Disable the timer */
write_sysreg_el0(0, SYS_CNTP_CTL);
isb();
set_cntpoff(0);
break;
case NR_KVM_TIMERS:
BUG();
@ -510,6 +588,7 @@ static void kvm_timer_blocking(struct kvm_vcpu *vcpu)
*/
if (!kvm_timer_irq_can_fire(map.direct_vtimer) &&
!kvm_timer_irq_can_fire(map.direct_ptimer) &&
!kvm_timer_irq_can_fire(map.emul_vtimer) &&
!kvm_timer_irq_can_fire(map.emul_ptimer) &&
!vcpu_has_wfit_active(vcpu))
return;
@ -543,14 +622,23 @@ static void timer_restore_state(struct arch_timer_context *ctx)
goto out;
switch (index) {
u64 cval, offset;
case TIMER_VTIMER:
case TIMER_HVTIMER:
set_cntvoff(timer_get_offset(ctx));
write_sysreg_el0(timer_get_cval(ctx), SYS_CNTV_CVAL);
isb();
write_sysreg_el0(timer_get_ctl(ctx), SYS_CNTV_CTL);
break;
case TIMER_PTIMER:
write_sysreg_el0(timer_get_cval(ctx), SYS_CNTP_CVAL);
case TIMER_HPTIMER:
cval = timer_get_cval(ctx);
offset = timer_get_offset(ctx);
set_cntpoff(offset);
if (!has_cntpoff())
cval += offset;
write_sysreg_el0(cval, SYS_CNTP_CVAL);
isb();
write_sysreg_el0(timer_get_ctl(ctx), SYS_CNTP_CTL);
break;
@ -586,7 +674,7 @@ static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx)
kvm_timer_update_irq(ctx->vcpu, kvm_timer_should_fire(ctx), ctx);
if (irqchip_in_kernel(vcpu->kvm))
phys_active = kvm_vgic_map_is_active(vcpu, ctx->irq.irq);
phys_active = kvm_vgic_map_is_active(vcpu, timer_irq(ctx));
phys_active |= ctx->irq.level;
@ -621,6 +709,128 @@ static void kvm_timer_vcpu_load_nogic(struct kvm_vcpu *vcpu)
enable_percpu_irq(host_vtimer_irq, host_vtimer_irq_flags);
}
/* If _pred is true, set bit in _set, otherwise set it in _clr */
#define assign_clear_set_bit(_pred, _bit, _clr, _set) \
do { \
if (_pred) \
(_set) |= (_bit); \
else \
(_clr) |= (_bit); \
} while (0)
static void kvm_timer_vcpu_load_nested_switch(struct kvm_vcpu *vcpu,
struct timer_map *map)
{
int hw, ret;
if (!irqchip_in_kernel(vcpu->kvm))
return;
/*
* We only ever unmap the vtimer irq on a VHE system that runs nested
* virtualization, in which case we have valid emul_vtimer,
* emul_ptimer, direct_vtimer, and direct_ptimer pointers.
*
* Since this is called from kvm_timer_vcpu_load(), a change between
* vEL2 and vEL1/0 will have just happened, and the timer_map will
* reflect this; we therefore switch the emul/direct mappings below.
*/
hw = kvm_vgic_get_map(vcpu, timer_irq(map->direct_vtimer));
if (hw < 0) {
kvm_vgic_unmap_phys_irq(vcpu, timer_irq(map->emul_vtimer));
kvm_vgic_unmap_phys_irq(vcpu, timer_irq(map->emul_ptimer));
ret = kvm_vgic_map_phys_irq(vcpu,
map->direct_vtimer->host_timer_irq,
timer_irq(map->direct_vtimer),
&arch_timer_irq_ops);
WARN_ON_ONCE(ret);
ret = kvm_vgic_map_phys_irq(vcpu,
map->direct_ptimer->host_timer_irq,
timer_irq(map->direct_ptimer),
&arch_timer_irq_ops);
WARN_ON_ONCE(ret);
/*
* The virtual offset behaviour is "interesting", as it
* always applies when HCR_EL2.E2H==0, but only when
* accessed from EL1 when HCR_EL2.E2H==1. So make sure we
* track E2H when putting the HV timer in "direct" mode.
*/
if (map->direct_vtimer == vcpu_hvtimer(vcpu)) {
struct arch_timer_offset *offs = &map->direct_vtimer->offset;
if (vcpu_el2_e2h_is_set(vcpu))
offs->vcpu_offset = NULL;
else
offs->vcpu_offset = &__vcpu_sys_reg(vcpu, CNTVOFF_EL2);
}
}
}
static void timer_set_traps(struct kvm_vcpu *vcpu, struct timer_map *map)
{
bool tpt, tpc;
u64 clr, set;
/*
* No trapping gets configured here with nVHE. See
* __timer_enable_traps(), which is where the nVHE trap configuration happens.
*/
if (!has_vhe())
return;
/*
* Our default policy is not to trap anything. As we progress
* within this function, reality kicks in and we start adding
* traps based on emulation requirements.
*/
tpt = tpc = false;
/*
* We have two possibilities for dealing with a physical offset:
*
* - Either we have CNTPOFF (yay!) or the offset is 0:
* we let the guest freely access the HW
*
* - or neither of these conditions applies:
* we trap accesses to the HW, but still use it
* after correcting the physical offset
*/
if (!has_cntpoff() && timer_get_offset(map->direct_ptimer))
tpt = tpc = true;
/*
* Apply the enable bits that the guest hypervisor has requested for
* its own guest. We can only add traps that wouldn't have been set
* above.
*/
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
u64 val = __vcpu_sys_reg(vcpu, CNTHCTL_EL2);
/* Use the VHE format for mental sanity */
if (!vcpu_el2_e2h_is_set(vcpu))
val = (val & (CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN)) << 10;
tpt |= !(val & (CNTHCTL_EL1PCEN << 10));
tpc |= !(val & (CNTHCTL_EL1PCTEN << 10));
}
/*
* Now that we have collected our requirements, compute the
* trap and enable bits.
*/
set = 0;
clr = 0;
assign_clear_set_bit(tpt, CNTHCTL_EL1PCEN << 10, set, clr);
assign_clear_set_bit(tpc, CNTHCTL_EL1PCTEN << 10, set, clr);
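	/*
	 * Careful with the argument order: 'set' is passed as the macro's
	 * _clr parameter, so a true trap predicate ends up clearing the
	 * corresponding EL1PCxEN enable bit (a cleared enable bit is what
	 * arms the trap), and a false predicate sets it.
	 */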
/* This only happens on VHE, so use the CNTKCTL_EL1 accessor */
sysreg_clear_set(cntkctl_el1, clr, set);
}
void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = vcpu_timer(vcpu);
@ -632,6 +842,9 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
get_timer_map(vcpu, &map);
if (static_branch_likely(&has_gic_active_state)) {
if (vcpu_has_nv(vcpu))
kvm_timer_vcpu_load_nested_switch(vcpu, &map);
kvm_timer_vcpu_load_gic(map.direct_vtimer);
if (map.direct_ptimer)
kvm_timer_vcpu_load_gic(map.direct_ptimer);
@ -644,9 +857,12 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
timer_restore_state(map.direct_vtimer);
if (map.direct_ptimer)
timer_restore_state(map.direct_ptimer);
if (map.emul_vtimer)
timer_emulate(map.emul_vtimer);
if (map.emul_ptimer)
timer_emulate(map.emul_ptimer);
timer_set_traps(vcpu, &map);
}
bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
@ -689,6 +905,8 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
* In any case, we re-schedule the hrtimer for the physical timer when
* coming back to the VCPU thread in kvm_timer_vcpu_load().
*/
if (map.emul_vtimer)
soft_timer_cancel(&map.emul_vtimer->hrtimer);
if (map.emul_ptimer)
soft_timer_cancel(&map.emul_ptimer->hrtimer);
@ -738,56 +956,89 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
* resets the timer to be disabled and unmasked and is compliant with
* the ARMv7 architecture.
*/
timer_set_ctl(vcpu_vtimer(vcpu), 0);
timer_set_ctl(vcpu_ptimer(vcpu), 0);
for (int i = 0; i < nr_timers(vcpu); i++)
timer_set_ctl(vcpu_get_timer(vcpu, i), 0);
/*
* A vcpu running at EL2 is in charge of the offset applied to
* the virtual timer, so use the physical VM offset, and point
* the vcpu offset to CNTVOFF_EL2.
*/
if (vcpu_has_nv(vcpu)) {
struct arch_timer_offset *offs = &vcpu_vtimer(vcpu)->offset;
offs->vcpu_offset = &__vcpu_sys_reg(vcpu, CNTVOFF_EL2);
offs->vm_offset = &vcpu->kvm->arch.timer_data.poffset;
}
if (timer->enabled) {
kvm_timer_update_irq(vcpu, false, vcpu_vtimer(vcpu));
kvm_timer_update_irq(vcpu, false, vcpu_ptimer(vcpu));
for (int i = 0; i < nr_timers(vcpu); i++)
kvm_timer_update_irq(vcpu, false,
vcpu_get_timer(vcpu, i));
if (irqchip_in_kernel(vcpu->kvm)) {
kvm_vgic_reset_mapped_irq(vcpu, map.direct_vtimer->irq.irq);
kvm_vgic_reset_mapped_irq(vcpu, timer_irq(map.direct_vtimer));
if (map.direct_ptimer)
kvm_vgic_reset_mapped_irq(vcpu, map.direct_ptimer->irq.irq);
kvm_vgic_reset_mapped_irq(vcpu, timer_irq(map.direct_ptimer));
}
}
if (map.emul_vtimer)
soft_timer_cancel(&map.emul_vtimer->hrtimer);
if (map.emul_ptimer)
soft_timer_cancel(&map.emul_ptimer->hrtimer);
return 0;
}
static void timer_context_init(struct kvm_vcpu *vcpu, int timerid)
{
struct arch_timer_context *ctxt = vcpu_get_timer(vcpu, timerid);
struct kvm *kvm = vcpu->kvm;
ctxt->vcpu = vcpu;
if (timerid == TIMER_VTIMER)
ctxt->offset.vm_offset = &kvm->arch.timer_data.voffset;
else
ctxt->offset.vm_offset = &kvm->arch.timer_data.poffset;
hrtimer_init(&ctxt->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
ctxt->hrtimer.function = kvm_hrtimer_expire;
switch (timerid) {
case TIMER_PTIMER:
case TIMER_HPTIMER:
ctxt->host_timer_irq = host_ptimer_irq;
break;
case TIMER_VTIMER:
case TIMER_HVTIMER:
ctxt->host_timer_irq = host_vtimer_irq;
break;
}
}
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = vcpu_timer(vcpu);
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
vtimer->vcpu = vcpu;
vtimer->offset.vm_offset = &vcpu->kvm->arch.timer_data.voffset;
ptimer->vcpu = vcpu;
for (int i = 0; i < NR_KVM_TIMERS; i++)
timer_context_init(vcpu, i);
/* Synchronize cntvoff across all vtimers of a VM. */
timer_set_offset(vtimer, kvm_phys_timer_read());
timer_set_offset(ptimer, 0);
/* Synchronize offsets across timers of a VM if not already provided */
if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) {
timer_set_offset(vcpu_vtimer(vcpu), kvm_phys_timer_read());
timer_set_offset(vcpu_ptimer(vcpu), 0);
}
hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
timer->bg_timer.function = kvm_bg_timer_expire;
}
hrtimer_init(&vtimer->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
hrtimer_init(&ptimer->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
vtimer->hrtimer.function = kvm_hrtimer_expire;
ptimer->hrtimer.function = kvm_hrtimer_expire;
vtimer->irq.irq = default_vtimer_irq.irq;
ptimer->irq.irq = default_ptimer_irq.irq;
vtimer->host_timer_irq = host_vtimer_irq;
ptimer->host_timer_irq = host_ptimer_irq;
vtimer->host_timer_irq_flags = host_vtimer_irq_flags;
ptimer->host_timer_irq_flags = host_ptimer_irq_flags;
void kvm_timer_init_vm(struct kvm *kvm)
{
for (int i = 0; i < NR_KVM_TIMERS; i++)
kvm->arch.timer_data.ppi[i] = default_ppi[i];
}
void kvm_timer_cpu_up(void)
@ -814,8 +1065,11 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value);
break;
case KVM_REG_ARM_TIMER_CNT:
timer = vcpu_vtimer(vcpu);
timer_set_offset(timer, kvm_phys_timer_read() - value);
if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET,
&vcpu->kvm->arch.flags)) {
timer = vcpu_vtimer(vcpu);
timer_set_offset(timer, kvm_phys_timer_read() - value);
}
break;
case KVM_REG_ARM_TIMER_CVAL:
timer = vcpu_vtimer(vcpu);
@ -825,6 +1079,13 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
timer = vcpu_ptimer(vcpu);
kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value);
break;
case KVM_REG_ARM_PTIMER_CNT:
if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET,
&vcpu->kvm->arch.flags)) {
timer = vcpu_ptimer(vcpu);
timer_set_offset(timer, kvm_phys_timer_read() - value);
}
break;
case KVM_REG_ARM_PTIMER_CVAL:
timer = vcpu_ptimer(vcpu);
kvm_arm_timer_write(vcpu, timer, TIMER_REG_CVAL, value);
@ -902,6 +1163,10 @@ static u64 kvm_arm_timer_read(struct kvm_vcpu *vcpu,
val = kvm_phys_timer_read() - timer_get_offset(timer);
break;
case TIMER_REG_VOFF:
val = *timer->offset.vcpu_offset;
break;
default:
BUG();
}
@ -920,7 +1185,7 @@ u64 kvm_arm_timer_read_sysreg(struct kvm_vcpu *vcpu,
get_timer_map(vcpu, &map);
timer = vcpu_get_timer(vcpu, tmr);
if (timer == map.emul_ptimer)
if (timer == map.emul_vtimer || timer == map.emul_ptimer)
return kvm_arm_timer_read(vcpu, timer, treg);
preempt_disable();
@ -952,6 +1217,10 @@ static void kvm_arm_timer_write(struct kvm_vcpu *vcpu,
timer_set_cval(timer, val);
break;
case TIMER_REG_VOFF:
*timer->offset.vcpu_offset = val;
break;
default:
BUG();
}
@ -967,7 +1236,7 @@ void kvm_arm_timer_write_sysreg(struct kvm_vcpu *vcpu,
get_timer_map(vcpu, &map);
timer = vcpu_get_timer(vcpu, tmr);
if (timer == map.emul_ptimer) {
if (timer == map.emul_vtimer || timer == map.emul_ptimer) {
soft_timer_cancel(&timer->hrtimer);
kvm_arm_timer_write(vcpu, timer, treg, val);
timer_emulate(timer);
@ -1047,10 +1316,6 @@ static const struct irq_domain_ops timer_domain_ops = {
.free = timer_irq_domain_free,
};
static struct irq_ops arch_timer_irq_ops = {
.get_input_level = kvm_arch_timer_get_input_level,
};
static void kvm_irq_fixup_flags(unsigned int virq, u32 *flags)
{
*flags = irq_get_trigger_type(virq);
@ -1192,44 +1457,56 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
static bool timer_irqs_are_valid(struct kvm_vcpu *vcpu)
{
int vtimer_irq, ptimer_irq, ret;
unsigned long i;
u32 ppis = 0;
bool valid;
vtimer_irq = vcpu_vtimer(vcpu)->irq.irq;
ret = kvm_vgic_set_owner(vcpu, vtimer_irq, vcpu_vtimer(vcpu));
if (ret)
return false;
mutex_lock(&vcpu->kvm->arch.config_lock);
ptimer_irq = vcpu_ptimer(vcpu)->irq.irq;
ret = kvm_vgic_set_owner(vcpu, ptimer_irq, vcpu_ptimer(vcpu));
if (ret)
return false;
for (int i = 0; i < nr_timers(vcpu); i++) {
struct arch_timer_context *ctx;
int irq;
kvm_for_each_vcpu(i, vcpu, vcpu->kvm) {
if (vcpu_vtimer(vcpu)->irq.irq != vtimer_irq ||
vcpu_ptimer(vcpu)->irq.irq != ptimer_irq)
return false;
ctx = vcpu_get_timer(vcpu, i);
irq = timer_irq(ctx);
if (kvm_vgic_set_owner(vcpu, irq, ctx))
break;
/*
* We know by construction that we only have PPIs, so
* all values are less than 32.
*/
ppis |= BIT(irq);
}
return true;
valid = hweight32(ppis) == nr_timers(vcpu);
if (valid)
set_bit(KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE, &vcpu->kvm->arch.flags);
mutex_unlock(&vcpu->kvm->arch.config_lock);
return valid;
}
bool kvm_arch_timer_get_input_level(int vintid)
static bool kvm_arch_timer_get_input_level(int vintid)
{
struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
struct arch_timer_context *timer;
if (WARN(!vcpu, "No vcpu context!\n"))
return false;
if (vintid == vcpu_vtimer(vcpu)->irq.irq)
timer = vcpu_vtimer(vcpu);
else if (vintid == vcpu_ptimer(vcpu)->irq.irq)
timer = vcpu_ptimer(vcpu);
else
BUG();
for (int i = 0; i < nr_timers(vcpu); i++) {
struct arch_timer_context *ctx;
return kvm_timer_should_fire(timer);
ctx = vcpu_get_timer(vcpu, i);
if (timer_irq(ctx) == vintid)
return kvm_timer_should_fire(ctx);
}
/* A timer IRQ has fired, but no matching timer was found? */
WARN_RATELIMIT(1, "timer INTID%d unknown\n", vintid);
return false;
}
int kvm_timer_enable(struct kvm_vcpu *vcpu)
@ -1258,7 +1535,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
ret = kvm_vgic_map_phys_irq(vcpu,
map.direct_vtimer->host_timer_irq,
map.direct_vtimer->irq.irq,
timer_irq(map.direct_vtimer),
&arch_timer_irq_ops);
if (ret)
return ret;
@ -1266,7 +1543,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
if (map.direct_ptimer) {
ret = kvm_vgic_map_phys_irq(vcpu,
map.direct_ptimer->host_timer_irq,
map.direct_ptimer->irq.irq,
timer_irq(map.direct_ptimer),
&arch_timer_irq_ops);
}
@ -1278,45 +1555,17 @@ no_vgic:
return 0;
}
/*
* On VHE system, we only need to configure the EL2 timer trap register once,
* not for every world switch.
* The host kernel runs at EL2 with HCR_EL2.TGE == 1,
* and this makes those bits have no effect for the host kernel execution.
*/
/* If we have CNTPOFF, permanently set ECV to enable it */
void kvm_timer_init_vhe(void)
{
/* When HCR_EL2.E2H ==1, EL1PCEN and EL1PCTEN are shifted by 10 */
u32 cnthctl_shift = 10;
u64 val;
/*
* VHE systems allow the guest direct access to the EL1 physical
* timer/counter.
*/
val = read_sysreg(cnthctl_el2);
val |= (CNTHCTL_EL1PCEN << cnthctl_shift);
val |= (CNTHCTL_EL1PCTEN << cnthctl_shift);
write_sysreg(val, cnthctl_el2);
}
static void set_timer_irqs(struct kvm *kvm, int vtimer_irq, int ptimer_irq)
{
struct kvm_vcpu *vcpu;
unsigned long i;
kvm_for_each_vcpu(i, vcpu, kvm) {
vcpu_vtimer(vcpu)->irq.irq = vtimer_irq;
vcpu_ptimer(vcpu)->irq.irq = ptimer_irq;
}
if (cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF))
sysreg_clear_set(cntkctl_el1, 0, CNTHCTL_ECV);
}
int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
{
int __user *uaddr = (int __user *)(long)attr->addr;
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
int irq;
int irq, idx, ret = 0;
if (!irqchip_in_kernel(vcpu->kvm))
return -EINVAL;
@ -1327,21 +1576,42 @@ int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
if (!(irq_is_ppi(irq)))
return -EINVAL;
if (vcpu->arch.timer_cpu.enabled)
return -EBUSY;
mutex_lock(&vcpu->kvm->arch.config_lock);
if (test_bit(KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE,
&vcpu->kvm->arch.flags)) {
ret = -EBUSY;
goto out;
}
switch (attr->attr) {
case KVM_ARM_VCPU_TIMER_IRQ_VTIMER:
set_timer_irqs(vcpu->kvm, irq, ptimer->irq.irq);
idx = TIMER_VTIMER;
break;
case KVM_ARM_VCPU_TIMER_IRQ_PTIMER:
set_timer_irqs(vcpu->kvm, vtimer->irq.irq, irq);
idx = TIMER_PTIMER;
break;
case KVM_ARM_VCPU_TIMER_IRQ_HVTIMER:
idx = TIMER_HVTIMER;
break;
case KVM_ARM_VCPU_TIMER_IRQ_HPTIMER:
idx = TIMER_HPTIMER;
break;
default:
return -ENXIO;
ret = -ENXIO;
goto out;
}
return 0;
/*
* We cannot validate the IRQ uniqueness before we run, so take it at
* face value. The verdict will be given on the first vcpu run, for each
* vcpu. Yes, this is late. Blame it on the stupid API.
*/
vcpu->kvm->arch.timer_data.ppi[idx] = irq;
out:
mutex_unlock(&vcpu->kvm->arch.config_lock);
return ret;
}
int kvm_arm_timer_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
@ -1357,11 +1627,17 @@ int kvm_arm_timer_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
case KVM_ARM_VCPU_TIMER_IRQ_PTIMER:
timer = vcpu_ptimer(vcpu);
break;
case KVM_ARM_VCPU_TIMER_IRQ_HVTIMER:
timer = vcpu_hvtimer(vcpu);
break;
case KVM_ARM_VCPU_TIMER_IRQ_HPTIMER:
timer = vcpu_hptimer(vcpu);
break;
default:
return -ENXIO;
}
irq = timer->irq.irq;
irq = timer_irq(timer);
return put_user(irq, uaddr);
}
@ -1370,8 +1646,42 @@ int kvm_arm_timer_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
switch (attr->attr) {
case KVM_ARM_VCPU_TIMER_IRQ_VTIMER:
case KVM_ARM_VCPU_TIMER_IRQ_PTIMER:
case KVM_ARM_VCPU_TIMER_IRQ_HVTIMER:
case KVM_ARM_VCPU_TIMER_IRQ_HPTIMER:
return 0;
}
return -ENXIO;
}
int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
struct kvm_arm_counter_offset *offset)
{
int ret = 0;
if (offset->reserved)
return -EINVAL;
mutex_lock(&kvm->lock);
if (lock_all_vcpus(kvm)) {
set_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &kvm->arch.flags);
/*
* If userspace decides to set the offset using this
* API rather than merely restoring the counter
* values, the offset applies to both the virtual and
* physical views.
*/
kvm->arch.timer_data.voffset = offset->counter_offset;
kvm->arch.timer_data.poffset = offset->counter_offset;
unlock_all_vcpus(kvm);
} else {
ret = -EBUSY;
}
mutex_unlock(&kvm->lock);
return ret;
}


@ -128,6 +128,16 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
{
int ret;
mutex_init(&kvm->arch.config_lock);
#ifdef CONFIG_LOCKDEP
/* Clue in lockdep that the config_lock must be taken inside kvm->lock */
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
mutex_unlock(&kvm->arch.config_lock);
mutex_unlock(&kvm->lock);
#endif
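	/*
	 * The lock/unlock dance above is enough for lockdep to record
	 * "kvm->lock, then config_lock" as the only valid ordering, so a
	 * later inversion is reported even if it never deadlocks.
	 */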
ret = kvm_share_hyp(kvm, kvm + 1);
if (ret)
return ret;
@ -148,6 +158,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
kvm_vgic_early_init(kvm);
kvm_timer_init_vm(kvm);
/* The maximum number of VCPUs is limited by the host's GIC model */
kvm->max_vcpus = kvm_arm_default_max_vcpus();
@ -192,6 +204,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kvm_destroy_vcpus(kvm);
kvm_unshare_hyp(kvm, kvm + 1);
kvm_arm_teardown_hypercalls(kvm);
}
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
@ -220,6 +234,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_VCPU_ATTRIBUTES:
case KVM_CAP_PTP_KVM:
case KVM_CAP_ARM_SYSTEM_SUSPEND:
case KVM_CAP_COUNTER_OFFSET:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:
@ -326,6 +341,16 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
{
int err;
spin_lock_init(&vcpu->arch.mp_state_lock);
#ifdef CONFIG_LOCKDEP
/* Inform lockdep that the config_lock is acquired after vcpu->mutex */
mutex_lock(&vcpu->mutex);
mutex_lock(&vcpu->kvm->arch.config_lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
mutex_unlock(&vcpu->mutex);
#endif
/* Force users to call KVM_ARM_VCPU_INIT */
vcpu->arch.target = -1;
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
@ -443,34 +468,41 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
vcpu->cpu = -1;
}
void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
{
vcpu->arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
kvm_make_request(KVM_REQ_SLEEP, vcpu);
kvm_vcpu_kick(vcpu);
}
void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
{
spin_lock(&vcpu->arch.mp_state_lock);
__kvm_arm_vcpu_power_off(vcpu);
spin_unlock(&vcpu->arch.mp_state_lock);
}
bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu)
{
return vcpu->arch.mp_state.mp_state == KVM_MP_STATE_STOPPED;
return READ_ONCE(vcpu->arch.mp_state.mp_state) == KVM_MP_STATE_STOPPED;
}
static void kvm_arm_vcpu_suspend(struct kvm_vcpu *vcpu)
{
vcpu->arch.mp_state.mp_state = KVM_MP_STATE_SUSPENDED;
WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_SUSPENDED);
kvm_make_request(KVM_REQ_SUSPEND, vcpu);
kvm_vcpu_kick(vcpu);
}
static bool kvm_arm_vcpu_suspended(struct kvm_vcpu *vcpu)
{
return vcpu->arch.mp_state.mp_state == KVM_MP_STATE_SUSPENDED;
return READ_ONCE(vcpu->arch.mp_state.mp_state) == KVM_MP_STATE_SUSPENDED;
}
int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
struct kvm_mp_state *mp_state)
{
*mp_state = vcpu->arch.mp_state;
*mp_state = READ_ONCE(vcpu->arch.mp_state);
return 0;
}
@ -480,12 +512,14 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
{
int ret = 0;
spin_lock(&vcpu->arch.mp_state_lock);
switch (mp_state->mp_state) {
case KVM_MP_STATE_RUNNABLE:
vcpu->arch.mp_state = *mp_state;
WRITE_ONCE(vcpu->arch.mp_state, *mp_state);
break;
case KVM_MP_STATE_STOPPED:
kvm_arm_vcpu_power_off(vcpu);
__kvm_arm_vcpu_power_off(vcpu);
break;
case KVM_MP_STATE_SUSPENDED:
kvm_arm_vcpu_suspend(vcpu);
@ -494,6 +528,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
ret = -EINVAL;
}
spin_unlock(&vcpu->arch.mp_state_lock);
return ret;
}
@ -593,9 +629,9 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
if (kvm_vm_is_protected(kvm))
kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
return ret;
}
@ -1210,10 +1246,14 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
/*
* Handle the "start in power-off" case.
*/
spin_lock(&vcpu->arch.mp_state_lock);
if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
kvm_arm_vcpu_power_off(vcpu);
__kvm_arm_vcpu_power_off(vcpu);
else
vcpu->arch.mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
spin_unlock(&vcpu->arch.mp_state_lock);
return 0;
}
@ -1439,10 +1479,31 @@ static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
}
}
static int kvm_vm_has_attr(struct kvm *kvm, struct kvm_device_attr *attr)
{
switch (attr->group) {
case KVM_ARM_VM_SMCCC_CTRL:
return kvm_vm_smccc_has_attr(kvm, attr);
default:
return -ENXIO;
}
}
static int kvm_vm_set_attr(struct kvm *kvm, struct kvm_device_attr *attr)
{
switch (attr->group) {
case KVM_ARM_VM_SMCCC_CTRL:
return kvm_vm_smccc_set_attr(kvm, attr);
default:
return -ENXIO;
}
}
int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
{
struct kvm *kvm = filp->private_data;
void __user *argp = (void __user *)arg;
struct kvm_device_attr attr;
switch (ioctl) {
case KVM_CREATE_IRQCHIP: {
@ -1478,11 +1539,73 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
return -EFAULT;
return kvm_vm_ioctl_mte_copy_tags(kvm, &copy_tags);
}
case KVM_ARM_SET_COUNTER_OFFSET: {
struct kvm_arm_counter_offset offset;
if (copy_from_user(&offset, argp, sizeof(offset)))
return -EFAULT;
return kvm_vm_ioctl_set_counter_offset(kvm, &offset);
}
case KVM_HAS_DEVICE_ATTR: {
if (copy_from_user(&attr, argp, sizeof(attr)))
return -EFAULT;
return kvm_vm_has_attr(kvm, &attr);
}
case KVM_SET_DEVICE_ATTR: {
if (copy_from_user(&attr, argp, sizeof(attr)))
return -EFAULT;
return kvm_vm_set_attr(kvm, &attr);
}
default:
return -EINVAL;
}
}
/* unlocks vcpus from @vcpu_lock_idx and smaller */
static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
{
struct kvm_vcpu *tmp_vcpu;
for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
mutex_unlock(&tmp_vcpu->mutex);
}
}
void unlock_all_vcpus(struct kvm *kvm)
{
lockdep_assert_held(&kvm->lock);
unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
}
/* Returns true if all vcpus were locked, false otherwise */
bool lock_all_vcpus(struct kvm *kvm)
{
struct kvm_vcpu *tmp_vcpu;
unsigned long c;
lockdep_assert_held(&kvm->lock);
/*
* Any time a vcpu is in an ioctl (including running), the
* core KVM code tries to grab the vcpu->mutex.
*
* By grabbing the vcpu->mutex of all VCPUs we ensure that no
* other VCPUs can fiddle with the state while we access it.
*/
kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
if (!mutex_trylock(&tmp_vcpu->mutex)) {
unlock_vcpus(kvm, c - 1);
return false;
}
}
return true;
}
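The intended calling pattern, as used by kvm_vm_ioctl_set_counter_offset() in the arch_timer.c hunks above, looks roughly like this sketch (do_vm_wide_update() is a made-up name):

static int do_vm_wide_update(struct kvm *kvm)
{
	int ret = 0;

	mutex_lock(&kvm->lock);
	if (lock_all_vcpus(kvm)) {
		/* ... safely mutate VM-wide state here ... */
		unlock_all_vcpus(kvm);
	} else {
		ret = -EBUSY;	/* some vcpu is busy in an ioctl */
	}
	mutex_unlock(&kvm->lock);

	return ret;
}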
static unsigned long nvhe_percpu_size(void)
{
return (unsigned long)CHOOSE_NVHE_SYM(__per_cpu_end) -


@ -590,11 +590,16 @@ static unsigned long num_core_regs(const struct kvm_vcpu *vcpu)
return copy_core_reg_indices(vcpu, NULL);
}
/*
* ARM64 versions of the TIMER registers, always available on arm64
*/
static const u64 timer_reg_list[] = {
KVM_REG_ARM_TIMER_CTL,
KVM_REG_ARM_TIMER_CNT,
KVM_REG_ARM_TIMER_CVAL,
KVM_REG_ARM_PTIMER_CTL,
KVM_REG_ARM_PTIMER_CNT,
KVM_REG_ARM_PTIMER_CVAL,
};
#define NUM_TIMER_REGS 3
#define NUM_TIMER_REGS ARRAY_SIZE(timer_reg_list)
static bool is_timer_reg(u64 index)
{
@ -602,6 +607,9 @@ static bool is_timer_reg(u64 index)
case KVM_REG_ARM_TIMER_CTL:
case KVM_REG_ARM_TIMER_CNT:
case KVM_REG_ARM_TIMER_CVAL:
case KVM_REG_ARM_PTIMER_CTL:
case KVM_REG_ARM_PTIMER_CNT:
case KVM_REG_ARM_PTIMER_CVAL:
return true;
}
return false;
@ -609,14 +617,11 @@ static bool is_timer_reg(u64 index)
static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
if (put_user(KVM_REG_ARM_TIMER_CTL, uindices))
return -EFAULT;
uindices++;
if (put_user(KVM_REG_ARM_TIMER_CNT, uindices))
return -EFAULT;
uindices++;
if (put_user(KVM_REG_ARM_TIMER_CVAL, uindices))
return -EFAULT;
for (int i = 0; i < NUM_TIMER_REGS; i++) {
if (put_user(timer_reg_list[i], uindices))
return -EFAULT;
uindices++;
}
return 0;
}
@ -957,7 +962,9 @@ int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
switch (attr->group) {
case KVM_ARM_VCPU_PMU_V3_CTRL:
mutex_lock(&vcpu->kvm->arch.config_lock);
ret = kvm_arm_pmu_v3_set_attr(vcpu, attr);
mutex_unlock(&vcpu->kvm->arch.config_lock);
break;
case KVM_ARM_VCPU_TIMER_CTRL:
ret = kvm_arm_timer_set_attr(vcpu, attr);


@ -36,8 +36,6 @@ static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u64 esr)
static int handle_hvc(struct kvm_vcpu *vcpu)
{
int ret;
trace_kvm_hvc_arm64(*vcpu_pc(vcpu), vcpu_get_reg(vcpu, 0),
kvm_vcpu_hvc_get_imm(vcpu));
vcpu->stat.hvc_exit_stat++;
@ -52,33 +50,29 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
return 1;
}
ret = kvm_hvc_call_handler(vcpu);
if (ret < 0) {
vcpu_set_reg(vcpu, 0, ~0UL);
return 1;
}
return ret;
return kvm_smccc_call_handler(vcpu);
}
static int handle_smc(struct kvm_vcpu *vcpu)
{
int ret;
/*
* "If an SMC instruction executed at Non-secure EL1 is
* trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
* Trap exception, not a Secure Monitor Call exception [...]"
*
* We need to advance the PC after the trap, as it would
* otherwise return to the same address...
*
* Only handle SMCs from the virtual EL2 with an immediate of zero and
* skip it otherwise.
* otherwise return to the same address. Furthermore, pre-incrementing
* the PC before potentially exiting to userspace maintains the same
* abstraction for both SMCs and HVCs.
*/
if (!vcpu_is_el2(vcpu) || kvm_vcpu_hvc_get_imm(vcpu)) {
kvm_incr_pc(vcpu);
/*
* SMCs with a nonzero immediate are reserved according to DEN0028E 2.9
* "SMC and HVC immediate value".
*/
if (kvm_vcpu_hvc_get_imm(vcpu)) {
vcpu_set_reg(vcpu, 0, ~0UL);
kvm_incr_pc(vcpu);
return 1;
}
@ -89,13 +83,7 @@ static int handle_smc(struct kvm_vcpu *vcpu)
* at Non-secure EL1 is trapped to EL2 if HCR_EL2.TSC==1, rather than
* being treated as UNDEFINED.
*/
ret = kvm_hvc_call_handler(vcpu);
if (ret < 0)
vcpu_set_reg(vcpu, 0, ~0UL);
kvm_incr_pc(vcpu);
return ret;
return kvm_smccc_call_handler(vcpu);
}
/*

View File

@ -26,6 +26,7 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_nested.h>
#include <asm/fpsimd.h>
#include <asm/debug-monitors.h>
#include <asm/processor.h>
@ -326,6 +327,55 @@ static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
return true;
}
static bool kvm_hyp_handle_cntpct(struct kvm_vcpu *vcpu)
{
struct arch_timer_context *ctxt;
u32 sysreg;
u64 val;
/*
* We only get here for 64bit guests; 32bit guests will hit
* the long and winding road all the way to the standard
* handling. Yes, it sucks to be irrelevant.
*/
sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
switch (sysreg) {
case SYS_CNTPCT_EL0:
case SYS_CNTPCTSS_EL0:
if (vcpu_has_nv(vcpu)) {
if (is_hyp_ctxt(vcpu)) {
ctxt = vcpu_hptimer(vcpu);
break;
}
/* Check for guest hypervisor trapping */
val = __vcpu_sys_reg(vcpu, CNTHCTL_EL2);
if (!vcpu_el2_e2h_is_set(vcpu))
val = (val & CNTHCTL_EL1PCTEN) << 10;
if (!(val & (CNTHCTL_EL1PCTEN << 10)))
return false;
}
ctxt = vcpu_ptimer(vcpu);
break;
default:
return false;
}
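	/*
	 * The offset pointers are kernel VAs; kern_hyp_va() below
	 * translates them so the dereference also works from the nVHE
	 * EL2 mapping (it degenerates to a no-op on VHE).
	 */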
val = arch_timer_read_cntpct_el0();
if (ctxt->offset.vm_offset)
val -= *kern_hyp_va(ctxt->offset.vm_offset);
if (ctxt->offset.vcpu_offset)
val -= *kern_hyp_va(ctxt->offset.vcpu_offset);
vcpu_set_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu), val);
__kvm_skip_instr(vcpu);
return true;
}
static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
{
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
@ -339,6 +389,9 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
if (esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu)))
return kvm_hyp_handle_ptrauth(vcpu, exit_code);
if (kvm_hyp_handle_cntpct(vcpu))
return true;
return false;
}


@ -37,7 +37,6 @@ static void __debug_save_spe(u64 *pmscr_el1)
/* Now drain all buffered data to memory */
psb_csync();
dsb(nsh);
}
static void __debug_restore_spe(u64 pmscr_el1)
@ -69,7 +68,6 @@ static void __debug_save_trace(u64 *trfcr_el1)
isb();
/* Drain the trace buffer to memory */
tsb_csync();
dsb(nsh);
}
static void __debug_restore_trace(u64 trfcr_el1)


@ -297,6 +297,13 @@ int __pkvm_prot_finalize(void)
params->vttbr = kvm_get_vttbr(mmu);
params->vtcr = host_mmu.arch.vtcr;
params->hcr_el2 |= HCR_VM;
/*
* The CMO below not only cleans the updated params to the
* PoC, but also provides the DSB that ensures ongoing
* page-table walks that have started before we trapped to EL2
* have completed.
*/
kvm_flush_dcache_to_poc(params, sizeof(*params));
write_sysreg(params->hcr_el2, hcr_el2);


@ -272,6 +272,17 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
*/
__debug_save_host_buffers_nvhe(vcpu);
/*
* We're about to restore some new MMU state. Make sure
* ongoing page-table walks that have started before we
* trapped to EL2 have completed. This also synchronises the
* above disabling of SPE and TRBE.
*
* See DDI0487I.a D8.1.5 "Out-of-context translation regimes",
* rule R_LFHQG and subsequent information statements.
*/
dsb(nsh);
__kvm_adjust_pc(vcpu);
/*
@ -306,6 +317,13 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
__timer_disable_traps(vcpu);
__hyp_vgic_save_state(vcpu);
/*
* Same thing as before the guest run: we're about to switch
* the MMU context, so let's make sure we don't have any
* ongoing EL1&0 translations.
*/
dsb(nsh);
__deactivate_traps(vcpu);
__load_host_stage2();


@ -9,6 +9,7 @@
#include <linux/kvm_host.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
void __kvm_timer_set_cntvoff(u64 cntvoff)
{
@ -35,14 +36,19 @@ void __timer_disable_traps(struct kvm_vcpu *vcpu)
*/
void __timer_enable_traps(struct kvm_vcpu *vcpu)
{
u64 val;
u64 clr = 0, set = 0;
/*
* Disallow physical timer access for the guest
* Physical counter access is allowed
* Physical counter access is allowed if no offset is enforced
* or running protected (we don't offset anything in this case).
*/
val = read_sysreg(cnthctl_el2);
val &= ~CNTHCTL_EL1PCEN;
val |= CNTHCTL_EL1PCTEN;
write_sysreg(val, cnthctl_el2);
clr = CNTHCTL_EL1PCEN;
if (is_protected_kvm_enabled() ||
!kern_hyp_va(vcpu->kvm)->arch.timer_data.poffset)
set |= CNTHCTL_EL1PCTEN;
else
clr |= CNTHCTL_EL1PCTEN;
sysreg_clear_set(cnthctl_el2, clr, set);
}


@ -15,8 +15,31 @@ struct tlb_inv_context {
};
static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
struct tlb_inv_context *cxt)
struct tlb_inv_context *cxt,
bool nsh)
{
/*
* We have two requirements:
*
* - ensure that the page table updates are visible to all
* CPUs, for which a dsb(DOMAIN-st) is what we need, DOMAIN
* being either ish or nsh, depending on the invalidation
* type.
*
* - complete any speculative page table walk started before
* we trapped to EL2 so that we can mess with the MM
* registers out of context, for which dsb(nsh) is enough
*
* The composition of these two barriers is a dsb(DOMAIN), and
* the 'nsh' parameter tracks the distinction between
* Inner-Shareable and Non-Shareable, as specified by the
* callers.
*/
if (nsh)
dsb(nsh);
else
dsb(ish);
if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
u64 val;
@ -60,10 +83,8 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
{
struct tlb_inv_context cxt;
dsb(ishst);
/* Switch to requested VMID */
__tlb_switch_to_guest(mmu, &cxt);
__tlb_switch_to_guest(mmu, &cxt, false);
/*
* We could do so much better if we had the VA as well.
@ -113,10 +134,8 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
{
struct tlb_inv_context cxt;
dsb(ishst);
/* Switch to requested VMID */
__tlb_switch_to_guest(mmu, &cxt);
__tlb_switch_to_guest(mmu, &cxt, false);
__tlbi(vmalls12e1is);
dsb(ish);
@ -130,7 +149,7 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
struct tlb_inv_context cxt;
/* Switch to requested VMID */
__tlb_switch_to_guest(mmu, &cxt);
__tlb_switch_to_guest(mmu, &cxt, false);
__tlbi(vmalle1);
asm volatile("ic iallu");
@ -142,7 +161,8 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
void __kvm_flush_vm_context(void)
{
dsb(ishst);
/* Same remark as in __tlb_switch_to_guest() */
dsb(ish);
__tlbi(alle1is);
/*


@ -227,11 +227,10 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
/*
* When we exit from the guest we change a number of CPU configuration
* parameters, such as traps. Make sure these changes take effect
* before running the host or additional guests.
* parameters, such as traps. We rely on the isb() in kvm_call_hyp*()
* to make sure these changes take effect before running the host or
* additional guests.
*/
isb();
return ret;
}


@ -13,6 +13,7 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_nested.h>
/*
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and
@ -69,6 +70,17 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
__sysreg_save_user_state(host_ctxt);
/*
* When running a normal EL1 guest, we only load a new vcpu
* after a context switch, which involves a DSB, so all
* speculative EL1&0 walks will have already completed.
* If running NV, the vcpu may transition between vEL1 and
* vEL2 without a context switch, so make sure we complete
* those walks before loading a new context.
*/
if (vcpu_has_nv(vcpu))
dsb(nsh);
/*
* Load guest EL1 and user state
*


@ -47,7 +47,7 @@ static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
cycles = systime_snapshot.cycles - vcpu->kvm->arch.timer_data.voffset;
break;
case KVM_PTP_PHYS_COUNTER:
cycles = systime_snapshot.cycles;
cycles = systime_snapshot.cycles - vcpu->kvm->arch.timer_data.poffset;
break;
default:
return;
@ -65,7 +65,7 @@ static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
val[3] = lower_32_bits(cycles);
}
static bool kvm_hvc_call_default_allowed(u32 func_id)
static bool kvm_smccc_default_allowed(u32 func_id)
{
switch (func_id) {
/*
@ -93,7 +93,7 @@ static bool kvm_hvc_call_default_allowed(u32 func_id)
}
}
static bool kvm_hvc_call_allowed(struct kvm_vcpu *vcpu, u32 func_id)
static bool kvm_smccc_test_fw_bmap(struct kvm_vcpu *vcpu, u32 func_id)
{
struct kvm_smccc_features *smccc_feat = &vcpu->kvm->arch.smccc_feat;
@ -117,20 +117,161 @@ static bool kvm_hvc_call_allowed(struct kvm_vcpu *vcpu, u32 func_id)
return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_PTP,
&smccc_feat->vendor_hyp_bmap);
default:
return kvm_hvc_call_default_allowed(func_id);
return false;
}
}
int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
#define SMC32_ARCH_RANGE_BEGIN ARM_SMCCC_VERSION_FUNC_ID
#define SMC32_ARCH_RANGE_END ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, ARM_SMCCC_FUNC_MASK)
#define SMC64_ARCH_RANGE_BEGIN ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_64, \
0, 0)
#define SMC64_ARCH_RANGE_END ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_64, \
0, ARM_SMCCC_FUNC_MASK)
static void init_smccc_filter(struct kvm *kvm)
{
int r;
mt_init(&kvm->arch.smccc_filter);
/*
* Prevent userspace from handling any SMCCC calls in the architecture
* range, avoiding the risk of misrepresenting Spectre mitigation status
* to the guest.
*/
r = mtree_insert_range(&kvm->arch.smccc_filter,
SMC32_ARCH_RANGE_BEGIN, SMC32_ARCH_RANGE_END,
xa_mk_value(KVM_SMCCC_FILTER_HANDLE),
GFP_KERNEL_ACCOUNT);
WARN_ON_ONCE(r);
r = mtree_insert_range(&kvm->arch.smccc_filter,
SMC64_ARCH_RANGE_BEGIN, SMC64_ARCH_RANGE_END,
xa_mk_value(KVM_SMCCC_FILTER_HANDLE),
GFP_KERNEL_ACCOUNT);
WARN_ON_ONCE(r);
}
static int kvm_smccc_set_filter(struct kvm *kvm, struct kvm_smccc_filter __user *uaddr)
{
const void *zero_page = page_to_virt(ZERO_PAGE(0));
struct kvm_smccc_filter filter;
u32 start, end;
int r;
if (copy_from_user(&filter, uaddr, sizeof(filter)))
return -EFAULT;
if (memcmp(filter.pad, zero_page, sizeof(filter.pad)))
return -EINVAL;
start = filter.base;
end = start + filter.nr_functions - 1;
if (end < start || filter.action >= NR_SMCCC_FILTER_ACTIONS)
return -EINVAL;
mutex_lock(&kvm->arch.config_lock);
if (kvm_vm_has_ran_once(kvm)) {
r = -EBUSY;
goto out_unlock;
}
r = mtree_insert_range(&kvm->arch.smccc_filter, start, end,
xa_mk_value(filter.action), GFP_KERNEL_ACCOUNT);
if (r)
goto out_unlock;
set_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags);
out_unlock:
mutex_unlock(&kvm->arch.config_lock);
return r;
}
static u8 kvm_smccc_filter_get_action(struct kvm *kvm, u32 func_id)
{
unsigned long idx = func_id;
void *val;
if (!test_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags))
return KVM_SMCCC_FILTER_HANDLE;
/*
* But where's the error handling, you say?
*
* mt_find() returns NULL if no entry was found, which just so happens
* to match KVM_SMCCC_FILTER_HANDLE.
*/
val = mt_find(&kvm->arch.smccc_filter, &idx, idx);
return xa_to_value(val);
}
static u8 kvm_smccc_get_action(struct kvm_vcpu *vcpu, u32 func_id)
{
/*
* Intervening actions in the SMCCC filter take precedence over the
* pseudo-firmware register bitmaps.
*/
u8 action = kvm_smccc_filter_get_action(vcpu->kvm, func_id);
if (action != KVM_SMCCC_FILTER_HANDLE)
return action;
if (kvm_smccc_test_fw_bmap(vcpu, func_id) ||
kvm_smccc_default_allowed(func_id))
return KVM_SMCCC_FILTER_HANDLE;
return KVM_SMCCC_FILTER_DENY;
}
static void kvm_prepare_hypercall_exit(struct kvm_vcpu *vcpu, u32 func_id)
{
u8 ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
struct kvm_run *run = vcpu->run;
u64 flags = 0;
if (ec == ESR_ELx_EC_SMC32 || ec == ESR_ELx_EC_SMC64)
flags |= KVM_HYPERCALL_EXIT_SMC;
if (!kvm_vcpu_trap_il_is32bit(vcpu))
flags |= KVM_HYPERCALL_EXIT_16BIT;
run->exit_reason = KVM_EXIT_HYPERCALL;
run->hypercall = (typeof(run->hypercall)) {
.nr = func_id,
.flags = flags,
};
}
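/*
 * Illustrative sketch, not part of this diff: how a VMM might consume the
 * KVM_EXIT_HYPERCALL exit populated above. Only the uapi names that appear
 * in the hunk are taken from the source; the rest is assumed for the
 * example.
 */
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>

static void handle_forwarded_smccc(struct kvm_run *run)
{
	/* SMCCC function ID the guest invoked, as written by the kernel. */
	uint64_t func_id = run->hypercall.nr;

	/* Conduit and insn-length hints set in kvm_prepare_hypercall_exit(). */
	int via_smc = !!(run->hypercall.flags & KVM_HYPERCALL_EXIT_SMC);
	int insn16 = !!(run->hypercall.flags & KVM_HYPERCALL_EXIT_16BIT);

	printf("SMCCC call %#llx via %s%s\n", (unsigned long long)func_id,
	       via_smc ? "SMC" : "HVC", insn16 ? " (16-bit insn)" : "");

	/* A real VMM would now emulate the call or report an error in x0. */
}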
int kvm_smccc_call_handler(struct kvm_vcpu *vcpu)
{
struct kvm_smccc_features *smccc_feat = &vcpu->kvm->arch.smccc_feat;
u32 func_id = smccc_get_function(vcpu);
u64 val[4] = {SMCCC_RET_NOT_SUPPORTED};
u32 feature;
u8 action;
gpa_t gpa;
if (!kvm_hvc_call_allowed(vcpu, func_id))
action = kvm_smccc_get_action(vcpu, func_id);
switch (action) {
case KVM_SMCCC_FILTER_HANDLE:
break;
case KVM_SMCCC_FILTER_DENY:
goto out;
case KVM_SMCCC_FILTER_FWD_TO_USER:
kvm_prepare_hypercall_exit(vcpu, func_id);
return 0;
default:
WARN_RATELIMIT(1, "Unhandled SMCCC filter action: %d\n", action);
goto out;
}
switch (func_id) {
case ARM_SMCCC_VERSION_FUNC_ID:
@@ -245,6 +386,13 @@ void kvm_arm_init_hypercalls(struct kvm *kvm)
smccc_feat->std_bmap = KVM_ARM_SMCCC_STD_FEATURES;
smccc_feat->std_hyp_bmap = KVM_ARM_SMCCC_STD_HYP_FEATURES;
smccc_feat->vendor_hyp_bmap = KVM_ARM_SMCCC_VENDOR_HYP_FEATURES;
init_smccc_filter(kvm);
}
void kvm_arm_teardown_hypercalls(struct kvm *kvm)
{
mtree_destroy(&kvm->arch.smccc_filter);
}
int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
@@ -377,17 +525,16 @@ static int kvm_arm_set_fw_reg_bmap(struct kvm_vcpu *vcpu, u64 reg_id, u64 val)
if (val & ~fw_reg_features)
return -EINVAL;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) &&
val != *fw_reg_bmap) {
if (kvm_vm_has_ran_once(kvm) && val != *fw_reg_bmap) {
ret = -EBUSY;
goto out;
}
WRITE_ONCE(*fw_reg_bmap, val);
out:
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
return ret;
}
@@ -479,3 +626,25 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
return -EINVAL;
}
int kvm_vm_smccc_has_attr(struct kvm *kvm, struct kvm_device_attr *attr)
{
switch (attr->attr) {
case KVM_ARM_VM_SMCCC_FILTER:
return 0;
default:
return -ENXIO;
}
}
int kvm_vm_smccc_set_attr(struct kvm *kvm, struct kvm_device_attr *attr)
{
void __user *uaddr = (void __user *)attr->addr;
switch (attr->attr) {
case KVM_ARM_VM_SMCCC_FILTER:
return kvm_smccc_set_filter(kvm, uaddr);
default:
return -ENXIO;
}
}
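/*
 * Illustrative sketch, not part of this diff: installing a filter range
 * from userspace with KVM_SET_DEVICE_ATTR on the VM fd. The field and
 * constant names match the uapi added by this series; the chosen range
 * and the lack of error handling are assumptions of the example.
 */
#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

static int forward_std_range_to_vmm(int vm_fd)
{
	struct kvm_smccc_filter filter;
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VM_SMCCC_CTRL,
		.attr  = KVM_ARM_VM_SMCCC_FILTER,
		.addr  = (uintptr_t)&filter,
	};

	memset(&filter, 0, sizeof(filter));	/* pad[] must be zero */
	filter.base         = 0x84000000;	/* SMC32 standard service range (example) */
	filter.nr_functions = 0x40;
	filter.action       = KVM_SMCCC_FILTER_FWD_TO_USER;

	/* Must happen before the first KVM_RUN; the kernel returns -EBUSY after. */
	return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
}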


@@ -874,13 +874,13 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
struct arm_pmu *arm_pmu;
int ret = -ENXIO;
mutex_lock(&kvm->lock);
lockdep_assert_held(&kvm->arch.config_lock);
mutex_lock(&arm_pmus_lock);
list_for_each_entry(entry, &arm_pmus, entry) {
arm_pmu = entry->arm_pmu;
if (arm_pmu->pmu.type == pmu_id) {
if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) ||
if (kvm_vm_has_ran_once(kvm) ||
(kvm->arch.pmu_filter && kvm->arch.arm_pmu != arm_pmu)) {
ret = -EBUSY;
break;
@@ -894,7 +894,6 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
}
mutex_unlock(&arm_pmus_lock);
mutex_unlock(&kvm->lock);
return ret;
}
@@ -902,22 +901,20 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
{
struct kvm *kvm = vcpu->kvm;
lockdep_assert_held(&kvm->arch.config_lock);
if (!kvm_vcpu_has_pmu(vcpu))
return -ENODEV;
if (vcpu->arch.pmu.created)
return -EBUSY;
mutex_lock(&kvm->lock);
if (!kvm->arch.arm_pmu) {
/* No PMU set, get the default one */
kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
if (!kvm->arch.arm_pmu) {
mutex_unlock(&kvm->lock);
if (!kvm->arch.arm_pmu)
return -ENODEV;
}
}
mutex_unlock(&kvm->lock);
switch (attr->attr) {
case KVM_ARM_VCPU_PMU_V3_IRQ: {
@@ -961,19 +958,13 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
filter.action != KVM_PMU_EVENT_DENY))
return -EINVAL;
mutex_lock(&kvm->lock);
if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags)) {
mutex_unlock(&kvm->lock);
if (kvm_vm_has_ran_once(kvm))
return -EBUSY;
}
if (!kvm->arch.pmu_filter) {
kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
if (!kvm->arch.pmu_filter) {
mutex_unlock(&kvm->lock);
if (!kvm->arch.pmu_filter)
return -ENOMEM;
}
/*
* The default depends on the first applied filter.
@@ -992,8 +983,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
else
bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
mutex_unlock(&kvm->lock);
return 0;
}
case KVM_ARM_VCPU_PMU_V3_SET_PMU: {


@@ -62,6 +62,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
struct vcpu_reset_state *reset_state;
struct kvm *kvm = source_vcpu->kvm;
struct kvm_vcpu *vcpu = NULL;
int ret = PSCI_RET_SUCCESS;
unsigned long cpu_id;
cpu_id = smccc_get_arg1(source_vcpu);
@@ -76,11 +77,15 @@
*/
if (!vcpu)
return PSCI_RET_INVALID_PARAMS;
spin_lock(&vcpu->arch.mp_state_lock);
if (!kvm_arm_vcpu_stopped(vcpu)) {
if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
return PSCI_RET_ALREADY_ON;
ret = PSCI_RET_ALREADY_ON;
else
return PSCI_RET_INVALID_PARAMS;
ret = PSCI_RET_INVALID_PARAMS;
goto out_unlock;
}
reset_state = &vcpu->arch.reset_state;
@@ -96,7 +101,7 @@
*/
reset_state->r0 = smccc_get_arg3(source_vcpu);
WRITE_ONCE(reset_state->reset, true);
reset_state->reset = true;
kvm_make_request(KVM_REQ_VCPU_RESET, vcpu);
/*
@@ -105,10 +110,12 @@
*/
smp_wmb();
vcpu->arch.mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
kvm_vcpu_wake_up(vcpu);
return PSCI_RET_SUCCESS;
out_unlock:
spin_unlock(&vcpu->arch.mp_state_lock);
return ret;
}
static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
@@ -168,8 +175,11 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type, u64 flags)
* after this call is handled and before the VCPUs have been
* re-initialized.
*/
kvm_for_each_vcpu(i, tmp, vcpu->kvm)
tmp->arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
spin_lock(&tmp->arch.mp_state_lock);
WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
spin_unlock(&tmp->arch.mp_state_lock);
}
kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
@@ -229,7 +239,6 @@ static unsigned long kvm_psci_check_allowed_function(struct kvm_vcpu *vcpu, u32
static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
u32 psci_fn = smccc_get_function(vcpu);
unsigned long val;
int ret = 1;
@@ -254,9 +263,7 @@
kvm_psci_narrow_to_32bit(vcpu);
fallthrough;
case PSCI_0_2_FN64_CPU_ON:
mutex_lock(&kvm->lock);
val = kvm_psci_vcpu_on(vcpu);
mutex_unlock(&kvm->lock);
break;
case PSCI_0_2_FN_AFFINITY_INFO:
kvm_psci_narrow_to_32bit(vcpu);
@@ -395,7 +402,6 @@ static int kvm_psci_1_x_call(struct kvm_vcpu *vcpu, u32 minor)
static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
u32 psci_fn = smccc_get_function(vcpu);
unsigned long val;
@@ -405,9 +411,7 @@
val = PSCI_RET_SUCCESS;
break;
case KVM_PSCI_FN_CPU_ON:
mutex_lock(&kvm->lock);
val = kvm_psci_vcpu_on(vcpu);
mutex_unlock(&kvm->lock);
break;
default:
val = PSCI_RET_NOT_SUPPORTED;
@@ -435,6 +439,7 @@
int kvm_psci_call(struct kvm_vcpu *vcpu)
{
u32 psci_fn = smccc_get_function(vcpu);
int version = kvm_psci_version(vcpu);
unsigned long val;
val = kvm_psci_check_allowed_function(vcpu, psci_fn);
@@ -443,7 +448,7 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
return 1;
}
switch (kvm_psci_version(vcpu)) {
switch (version) {
case KVM_ARM_PSCI_1_1:
return kvm_psci_1_x_call(vcpu, 1);
case KVM_ARM_PSCI_1_0:
@@ -453,6 +458,8 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
case KVM_ARM_PSCI_0_1:
return kvm_psci_0_1_call(vcpu);
default:
return -EINVAL;
WARN_ONCE(1, "Unknown PSCI version %d", version);
smccc_set_retval(vcpu, SMCCC_RET_NOT_SUPPORTED, 0, 0, 0);
return 1;
}
}


@@ -205,7 +205,7 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
lockdep_assert_held(&kvm->lock);
lockdep_assert_held(&kvm->arch.config_lock);
if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
/*
@@ -262,17 +262,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
bool loaded;
u32 pstate;
mutex_lock(&vcpu->kvm->lock);
mutex_lock(&vcpu->kvm->arch.config_lock);
ret = kvm_set_vm_width(vcpu);
if (!ret) {
reset_state = vcpu->arch.reset_state;
WRITE_ONCE(vcpu->arch.reset_state.reset, false);
}
mutex_unlock(&vcpu->kvm->lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
if (ret)
return ret;
spin_lock(&vcpu->arch.mp_state_lock);
reset_state = vcpu->arch.reset_state;
vcpu->arch.reset_state.reset = false;
spin_unlock(&vcpu->arch.mp_state_lock);
/* Reset PMU outside of the non-preemptible section */
kvm_pmu_vcpu_reset(vcpu);


@@ -1139,6 +1139,12 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
tmr = TIMER_PTIMER;
treg = TIMER_REG_CVAL;
break;
case SYS_CNTPCT_EL0:
case SYS_CNTPCTSS_EL0:
case SYS_AARCH32_CNTPCT:
tmr = TIMER_PTIMER;
treg = TIMER_REG_CNT;
break;
default:
print_sys_reg_msg(p, "%s", "Unhandled trapped timer register");
kvm_inject_undefined(vcpu);
@@ -2075,6 +2081,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
AMU_AMEVTYPER1_EL0(14),
AMU_AMEVTYPER1_EL0(15),
{ SYS_DESC(SYS_CNTPCT_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTPCTSS_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTP_TVAL_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer },
@@ -2525,10 +2533,12 @@ static const struct sys_reg_desc cp15_64_regs[] = {
{ Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, TTBR0_EL1 },
{ CP15_PMU_SYS_REG(DIRECT, 0, 0, 9, 0), .access = access_pmu_evcntr },
{ Op1( 0), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, /* ICC_SGI1R */
{ SYS_DESC(SYS_AARCH32_CNTPCT), access_arch_timer },
{ Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, TTBR1_EL1 },
{ Op1( 1), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, /* ICC_ASGI1R */
{ Op1( 2), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, /* ICC_SGI0R */
{ SYS_DESC(SYS_AARCH32_CNTP_CVAL), access_arch_timer },
{ SYS_DESC(SYS_AARCH32_CNTPCTSS), access_arch_timer },
};
static bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,


@@ -206,6 +206,7 @@ TRACE_EVENT(kvm_get_timer_map,
__field( unsigned long, vcpu_id )
__field( int, direct_vtimer )
__field( int, direct_ptimer )
__field( int, emul_vtimer )
__field( int, emul_ptimer )
),
@@ -214,14 +215,17 @@
__entry->direct_vtimer = arch_timer_ctx_index(map->direct_vtimer);
__entry->direct_ptimer =
(map->direct_ptimer) ? arch_timer_ctx_index(map->direct_ptimer) : -1;
__entry->emul_vtimer =
(map->emul_vtimer) ? arch_timer_ctx_index(map->emul_vtimer) : -1;
__entry->emul_ptimer =
(map->emul_ptimer) ? arch_timer_ctx_index(map->emul_ptimer) : -1;
),
TP_printk("VCPU: %ld, dv: %d, dp: %d, ep: %d",
TP_printk("VCPU: %ld, dv: %d, dp: %d, ev: %d, ep: %d",
__entry->vcpu_id,
__entry->direct_vtimer,
__entry->direct_ptimer,
__entry->emul_vtimer,
__entry->emul_ptimer)
);


@@ -85,7 +85,7 @@ static void *vgic_debug_start(struct seq_file *s, loff_t *pos)
struct kvm *kvm = s->private;
struct vgic_state_iter *iter;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
iter = kvm->arch.vgic.iter;
if (iter) {
iter = ERR_PTR(-EBUSY);
@@ -104,7 +104,7 @@ static void *vgic_debug_start(struct seq_file *s, loff_t *pos)
if (end_of_vgic(iter))
iter = NULL;
out:
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
return iter;
}
@@ -132,12 +132,12 @@ static void vgic_debug_stop(struct seq_file *s, void *v)
if (IS_ERR(v))
return;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
iter = kvm->arch.vgic.iter;
kfree(iter->lpi_array);
kfree(iter);
kvm->arch.vgic.iter = NULL;
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
}
static void print_dist_state(struct seq_file *s, struct vgic_dist *dist)


@@ -74,9 +74,6 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
unsigned long i;
int ret;
if (irqchip_in_kernel(kvm))
return -EEXIST;
/*
* This function is also called by the KVM_CREATE_IRQCHIP handler,
* which had no chance yet to check the availability of the GICv2
@@ -87,10 +84,20 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
!kvm_vgic_global_state.can_emulate_gicv2)
return -ENODEV;
/* Must be held to avoid race with vCPU creation */
lockdep_assert_held(&kvm->lock);
ret = -EBUSY;
if (!lock_all_vcpus(kvm))
return ret;
mutex_lock(&kvm->arch.config_lock);
if (irqchip_in_kernel(kvm)) {
ret = -EEXIST;
goto out_unlock;
}
kvm_for_each_vcpu(i, vcpu, kvm) {
if (vcpu_has_run_once(vcpu))
goto out_unlock;
@@ -118,6 +125,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
INIT_LIST_HEAD(&kvm->arch.vgic.rd_regions);
out_unlock:
mutex_unlock(&kvm->arch.config_lock);
unlock_all_vcpus(kvm);
return ret;
}
@@ -227,9 +235,9 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
* KVM io device for the redistributor that belongs to this VCPU.
*/
if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
mutex_lock(&vcpu->kvm->lock);
mutex_lock(&vcpu->kvm->arch.config_lock);
ret = vgic_register_redist_iodev(vcpu);
mutex_unlock(&vcpu->kvm->lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
}
return ret;
}
@@ -250,7 +258,6 @@ static void kvm_vgic_vcpu_enable(struct kvm_vcpu *vcpu)
* The function is generally called when nr_spis has been explicitly set
* by the guest through the KVM DEVICE API. If not, nr_spis is set to 256.
* vgic_initialized() returns true when this function has succeeded.
* Must be called with kvm->lock held!
*/
int vgic_init(struct kvm *kvm)
{
@@ -259,6 +266,8 @@ int vgic_init(struct kvm *kvm)
int ret = 0, i;
unsigned long idx;
lockdep_assert_held(&kvm->arch.config_lock);
if (vgic_initialized(kvm))
return 0;
@@ -373,12 +382,13 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
}
/* To be called with kvm->lock held */
static void __kvm_vgic_destroy(struct kvm *kvm)
{
struct kvm_vcpu *vcpu;
unsigned long i;
lockdep_assert_held(&kvm->arch.config_lock);
vgic_debug_destroy(kvm);
kvm_for_each_vcpu(i, vcpu, kvm)
@@ -389,9 +399,9 @@ static void __kvm_vgic_destroy(struct kvm *kvm)
void kvm_vgic_destroy(struct kvm *kvm)
{
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
__kvm_vgic_destroy(kvm);
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
}
/**
@@ -414,9 +424,9 @@ int vgic_lazy_init(struct kvm *kvm)
if (kvm->arch.vgic.vgic_model != KVM_DEV_TYPE_ARM_VGIC_V2)
return -EBUSY;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
ret = vgic_init(kvm);
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
}
return ret;
@@ -441,7 +451,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
if (likely(vgic_ready(kvm)))
return 0;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
if (vgic_ready(kvm))
goto out;
@@ -459,7 +469,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
dist->ready = true;
out:
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
return ret;
}


@@ -1958,6 +1958,16 @@ static int vgic_its_create(struct kvm_device *dev, u32 type)
mutex_init(&its->its_lock);
mutex_init(&its->cmd_lock);
/* Yep, even more trickery for lock ordering... */
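/*
 * (Explanatory note: the one-shot lock/unlock sequence below exists purely
 * to record the config_lock -> cmd_lock -> its_lock ordering with lockdep
 * at device creation, so that any later inversion is flagged immediately.)
 */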
#ifdef CONFIG_LOCKDEP
mutex_lock(&dev->kvm->arch.config_lock);
mutex_lock(&its->cmd_lock);
mutex_lock(&its->its_lock);
mutex_unlock(&its->its_lock);
mutex_unlock(&its->cmd_lock);
mutex_unlock(&dev->kvm->arch.config_lock);
#endif
its->vgic_its_base = VGIC_ADDR_UNDEF;
INIT_LIST_HEAD(&its->device_list);
@@ -2045,6 +2055,13 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
mutex_lock(&dev->kvm->lock);
if (!lock_all_vcpus(dev->kvm)) {
mutex_unlock(&dev->kvm->lock);
return -EBUSY;
}
mutex_lock(&dev->kvm->arch.config_lock);
if (IS_VGIC_ADDR_UNDEF(its->vgic_its_base)) {
ret = -ENXIO;
goto out;
@@ -2058,11 +2075,6 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
goto out;
}
if (!lock_all_vcpus(dev->kvm)) {
ret = -EBUSY;
goto out;
}
addr = its->vgic_its_base + offset;
len = region->access_flags & VGIC_ACCESS_64bit ? 8 : 4;
@@ -2076,8 +2088,9 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
} else {
*reg = region->its_read(dev->kvm, its, addr, len);
}
unlock_all_vcpus(dev->kvm);
out:
mutex_unlock(&dev->kvm->arch.config_lock);
unlock_all_vcpus(dev->kvm);
mutex_unlock(&dev->kvm->lock);
return ret;
}
@@ -2749,14 +2762,15 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
return 0;
mutex_lock(&kvm->lock);
mutex_lock(&its->its_lock);
if (!lock_all_vcpus(kvm)) {
mutex_unlock(&its->its_lock);
mutex_unlock(&kvm->lock);
return -EBUSY;
}
mutex_lock(&kvm->arch.config_lock);
mutex_lock(&its->its_lock);
switch (attr) {
case KVM_DEV_ARM_ITS_CTRL_RESET:
vgic_its_reset(kvm, its);
@@ -2769,8 +2783,9 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
break;
}
unlock_all_vcpus(kvm);
mutex_unlock(&its->its_lock);
mutex_unlock(&kvm->arch.config_lock);
unlock_all_vcpus(kvm);
mutex_unlock(&kvm->lock);
return ret;
}


@@ -46,7 +46,7 @@ int kvm_set_legacy_vgic_v2_addr(struct kvm *kvm, struct kvm_arm_device_addr *dev
struct vgic_dist *vgic = &kvm->arch.vgic;
int r;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
switch (FIELD_GET(KVM_ARM_DEVICE_TYPE_MASK, dev_addr->id)) {
case KVM_VGIC_V2_ADDR_TYPE_DIST:
r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
@@ -68,7 +68,7 @@ int kvm_set_legacy_vgic_v2_addr(struct kvm *kvm, struct kvm_arm_device_addr *dev
r = -ENODEV;
}
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
return r;
}
@@ -102,7 +102,7 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
if (get_user(addr, uaddr))
return -EFAULT;
mutex_lock(&kvm->lock);
mutex_lock(&kvm->arch.config_lock);
switch (attr->attr) {
case KVM_VGIC_V2_ADDR_TYPE_DIST:
r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
@@ -191,7 +191,7 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
}
out:
mutex_unlock(&kvm->lock);
mutex_unlock(&kvm->arch.config_lock);
if (!r && !write)
r = put_user(addr, uaddr);
@@ -227,7 +227,7 @@ static int vgic_set_common_attr(struct kvm_device *dev,
(val & 31))
return -EINVAL;
mutex_lock(&dev->kvm->lock);
mutex_lock(&dev->kvm->arch.config_lock);
if (vgic_ready(dev->kvm) || dev->kvm->arch.vgic.nr_spis)
ret = -EBUSY;
@@ -235,16 +235,16 @@ static int vgic_set_common_attr(struct kvm_device *dev,
dev->kvm->arch.vgic.nr_spis =
val - VGIC_NR_PRIVATE_IRQS;
mutex_unlock(&dev->kvm->lock);
mutex_unlock(&dev->kvm->arch.config_lock);
return ret;
}
case KVM_DEV_ARM_VGIC_GRP_CTRL: {
switch (attr->attr) {
case KVM_DEV_ARM_VGIC_CTRL_INIT:
mutex_lock(&dev->kvm->lock);
mutex_lock(&dev->kvm->arch.config_lock);
r = vgic_init(dev->kvm);
mutex_unlock(&dev->kvm->lock);
mutex_unlock(&dev->kvm->arch.config_lock);
return r;
case KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES:
/*
@@ -260,7 +260,10 @@ static int vgic_set_common_attr(struct kvm_device *dev,
mutex_unlock(&dev->kvm->lock);
return -EBUSY;
}
mutex_lock(&dev->kvm->arch.config_lock);
r = vgic_v3_save_pending_tables(dev->kvm);
mutex_unlock(&dev->kvm->arch.config_lock);
unlock_all_vcpus(dev->kvm);
mutex_unlock(&dev->kvm->lock);
return r;
@@ -342,44 +345,6 @@ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
return 0;
}
/* unlocks vcpus from @vcpu_lock_idx and smaller */
static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
{
struct kvm_vcpu *tmp_vcpu;
for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
mutex_unlock(&tmp_vcpu->mutex);
}
}
void unlock_all_vcpus(struct kvm *kvm)
{
unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
}
/* Returns true if all vcpus were locked, false otherwise */
bool lock_all_vcpus(struct kvm *kvm)
{
struct kvm_vcpu *tmp_vcpu;
unsigned long c;
/*
* Any time a vcpu is run, vcpu_load is called which tries to grab the
* vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure
* that no other VCPUs are run and fiddle with the vgic state while we
* access it.
*/
kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
if (!mutex_trylock(&tmp_vcpu->mutex)) {
unlock_vcpus(kvm, c - 1);
return false;
}
}
return true;
}
/**
* vgic_v2_attr_regs_access - allows user space to access VGIC v2 state
*
@@ -411,15 +376,17 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
mutex_lock(&dev->kvm->lock);
if (!lock_all_vcpus(dev->kvm)) {
mutex_unlock(&dev->kvm->lock);
return -EBUSY;
}
mutex_lock(&dev->kvm->arch.config_lock);
ret = vgic_init(dev->kvm);
if (ret)
goto out;
if (!lock_all_vcpus(dev->kvm)) {
ret = -EBUSY;
goto out;
}
switch (attr->group) {
case KVM_DEV_ARM_VGIC_GRP_CPU_REGS:
ret = vgic_v2_cpuif_uaccess(vcpu, is_write, addr, &val);
@@ -432,8 +399,9 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
break;
}
unlock_all_vcpus(dev->kvm);
out:
mutex_unlock(&dev->kvm->arch.config_lock);
unlock_all_vcpus(dev->kvm);
mutex_unlock(&dev->kvm->lock);
if (!ret && !is_write)
@@ -569,12 +537,14 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
mutex_lock(&dev->kvm->lock);
if (unlikely(!vgic_initialized(dev->kvm))) {
ret = -EBUSY;
goto out;
if (!lock_all_vcpus(dev->kvm)) {
mutex_unlock(&dev->kvm->lock);
return -EBUSY;
}
if (!lock_all_vcpus(dev->kvm)) {
mutex_lock(&dev->kvm->arch.config_lock);
if (unlikely(!vgic_initialized(dev->kvm))) {
ret = -EBUSY;
goto out;
}
@@ -609,8 +579,9 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
break;
}
unlock_all_vcpus(dev->kvm);
out:
mutex_unlock(&dev->kvm->arch.config_lock);
unlock_all_vcpus(dev->kvm);
mutex_unlock(&dev->kvm->lock);
if (!ret && uaccess && !is_write) {


@@ -111,7 +111,7 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
case GICD_CTLR: {
bool was_enabled, is_hwsgi;
mutex_lock(&vcpu->kvm->lock);
mutex_lock(&vcpu->kvm->arch.config_lock);
was_enabled = dist->enabled;
is_hwsgi = dist->nassgireq;
@@ -139,7 +139,7 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
else if (!was_enabled && dist->enabled)
vgic_kick_vcpus(vcpu->kvm);
mutex_unlock(&vcpu->kvm->lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
break;
}
case GICD_TYPER:


@@ -530,13 +530,13 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
u32 val;
mutex_lock(&vcpu->kvm->lock);
mutex_lock(&vcpu->kvm->arch.config_lock);
vgic_access_active_prepare(vcpu, intid);
val = __vgic_mmio_read_active(vcpu, addr, len);
vgic_access_active_finish(vcpu, intid);
mutex_unlock(&vcpu->kvm->lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
return val;
}
@@ -625,13 +625,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
{
u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
mutex_lock(&vcpu->kvm->lock);
mutex_lock(&vcpu->kvm->arch.config_lock);
vgic_access_active_prepare(vcpu, intid);
__vgic_mmio_write_cactive(vcpu, addr, len, val);
vgic_access_active_finish(vcpu, intid);
mutex_unlock(&vcpu->kvm->lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
}
int vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
@@ -662,13 +662,13 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
{
u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
mutex_lock(&vcpu->kvm->lock);
mutex_lock(&vcpu->kvm->arch.config_lock);
vgic_access_active_prepare(vcpu, intid);
__vgic_mmio_write_sactive(vcpu, addr, len, val);
vgic_access_active_finish(vcpu, intid);
mutex_unlock(&vcpu->kvm->lock);
mutex_unlock(&vcpu->kvm->arch.config_lock);
}
int vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu,


@@ -232,9 +232,8 @@ int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq)
* @kvm: Pointer to the VM being initialized
*
* We may be called each time a vITS is created, or when the
* vgic is initialized. This relies on kvm->lock to be
* held. In both cases, the number of vcpus should now be
* fixed.
* vgic is initialized. In both cases, the number of vcpus
* should now be fixed.
*/
int vgic_v4_init(struct kvm *kvm)
{
@@ -243,6 +242,8 @@ int vgic_v4_init(struct kvm *kvm)
int nr_vcpus, ret;
unsigned long i;
lockdep_assert_held(&kvm->arch.config_lock);
if (!kvm_vgic_global_state.has_gicv4)
return 0; /* Nothing to see here... move along. */
@@ -309,14 +310,14 @@ int vgic_v4_init(struct kvm *kvm)
/**
* vgic_v4_teardown - Free the GICv4 data structures
* @kvm: Pointer to the VM being destroyed
*
* Relies on kvm->lock to be held.
*/
void vgic_v4_teardown(struct kvm *kvm)
{
struct its_vm *its_vm = &kvm->arch.vgic.its_vm;
int i;
lockdep_assert_held(&kvm->arch.config_lock);
if (!its_vm->vpes)
return;


@@ -24,11 +24,13 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
/*
* Locking order is always:
* kvm->lock (mutex)
* its->cmd_lock (mutex)
* its->its_lock (mutex)
* vgic_cpu->ap_list_lock must be taken with IRQs disabled
* kvm->lpi_list_lock must be taken with IRQs disabled
* vgic_irq->irq_lock must be taken with IRQs disabled
* vcpu->mutex (mutex)
* kvm->arch.config_lock (mutex)
* its->cmd_lock (mutex)
* its->its_lock (mutex)
* vgic_cpu->ap_list_lock must be taken with IRQs disabled
* kvm->lpi_list_lock must be taken with IRQs disabled
* vgic_irq->irq_lock must be taken with IRQs disabled
*
* As the ap_list_lock might be taken from the timer interrupt handler,
* we have to disable IRQs before taking this lock and everything lower
@@ -573,6 +575,21 @@ int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int vintid)
return 0;
}
int kvm_vgic_get_map(struct kvm_vcpu *vcpu, unsigned int vintid)
{
struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, vintid);
unsigned long flags;
int ret = -1;
raw_spin_lock_irqsave(&irq->irq_lock, flags);
if (irq->hw)
ret = irq->hwintid;
raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
vgic_put_irq(vcpu->kvm, irq);
return ret;
}
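/*
 * Illustrative sketch, not part of this diff: the acquisition order
 * documented in the lock-ordering comment above. The helper is
 * hypothetical; real callers typically take the vcpu mutexes via
 * lock_all_vcpus(), which trylocks rather than blocking.
 */
static void example_locking_order(struct kvm *kvm, struct kvm_vcpu *vcpu,
				  struct vgic_its *its)
{
	mutex_lock(&kvm->lock);			/* outermost */
	mutex_lock(&vcpu->mutex);
	mutex_lock(&kvm->arch.config_lock);
	mutex_lock(&its->cmd_lock);
	mutex_lock(&its->its_lock);

	/* ... mutate ITS/vgic configuration here ... */

	mutex_unlock(&its->its_lock);
	mutex_unlock(&its->cmd_lock);
	mutex_unlock(&kvm->arch.config_lock);
	mutex_unlock(&vcpu->mutex);
	mutex_unlock(&kvm->lock);
}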
/**
* kvm_vgic_set_owner - Set the owner of an interrupt for a VM
*


@@ -273,9 +273,6 @@ int vgic_init(struct kvm *kvm);
void vgic_debug_init(struct kvm *kvm);
void vgic_debug_destroy(struct kvm *kvm);
bool lock_all_vcpus(struct kvm *kvm);
void unlock_all_vcpus(struct kvm *kvm);
static inline int vgic_v3_max_apr_idx(struct kvm_vcpu *vcpu)
{
struct vgic_cpu *cpu_if = &vcpu->arch.vgic_cpu;


@@ -23,6 +23,7 @@ HAS_DCPOP
HAS_DIT
HAS_E0PD
HAS_ECV
HAS_ECV_CNTPOFF
HAS_EPAN
HAS_GENERIC_AUTH
HAS_GENERIC_AUTH_ARCH_QARMA3


@@ -1952,6 +1952,10 @@ Sysreg CONTEXTIDR_EL2 3 4 13 0 1
Fields CONTEXTIDR_ELx
EndSysreg
Sysreg CNTPOFF_EL2 3 4 14 0 6
Field 63:0 PhysicalOffset
EndSysreg
Sysreg CPACR_EL12 3 5 1 0 2
Fields CPACR_ELx
EndSysreg


@@ -2,7 +2,7 @@
#ifndef __ASM_KASAN_H
#define __ASM_KASAN_H
#ifdef CONFIG_KASAN
#if defined(CONFIG_KASAN) && !defined(CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX)
#define _GLOBAL_KASAN(fn) _GLOBAL(__##fn)
#define _GLOBAL_TOC_KASAN(fn) _GLOBAL_TOC(__##fn)
#define EXPORT_SYMBOL_KASAN(fn) EXPORT_SYMBOL(__##fn)


@@ -30,11 +30,17 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
extern void * memchr(const void *,int,__kernel_size_t);
void memcpy_flushcache(void *dest, const void *src, size_t size);
#ifdef CONFIG_KASAN
/* __mem variants are used by KASAN to implement instrumented memintrinsics. */
#ifdef CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX
#define __memset memset
#define __memcpy memcpy
#define __memmove memmove
#else /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
void *__memset(void *s, int c, __kernel_size_t count);
void *__memcpy(void *to, const void *from, __kernel_size_t n);
void *__memmove(void *to, const void *from, __kernel_size_t n);
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
#ifndef __SANITIZE_ADDRESS__
/*
* For files that are not instrumented (e.g. mm/slub.c) we
* should use the non-instrumented versions of the mem* functions.
@@ -46,8 +52,9 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
#ifndef __NO_FORTIFY
#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
#endif
#endif
#endif /* !__SANITIZE_ADDRESS__ */
#endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
#endif /* CONFIG_KASAN */
#ifdef CONFIG_PPC64
#ifndef CONFIG_KASAN


@@ -13,8 +13,13 @@
# If you really need to reference something from prom_init.o add
# it to the list below:
grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
if [ $? -eq 0 ]
has_renamed_memintrinsics()
{
grep -q "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} && \
! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" ${KCONFIG_CONFIG}
}
if has_renamed_memintrinsics
then
MEM_FUNCS="__memcpy __memset"
else


@@ -271,11 +271,16 @@ static bool access_error(bool is_write, bool is_exec, struct vm_area_struct *vma
}
/*
* Check for a read fault. This could be caused by a read on an
* inaccessible page (i.e. PROT_NONE), or a Radix MMU execute-only page.
* VM_READ, VM_WRITE and VM_EXEC all imply read permissions, as
* defined in protection_map[]. Read faults can only be caused by
* a PROT_NONE mapping, or with a PROT_EXEC-only mapping on Radix.
*/
if (unlikely(!(vma->vm_flags & VM_READ)))
if (unlikely(!vma_is_accessible(vma)))
return true;
if (unlikely(radix_enabled() && ((vma->vm_flags & VM_ACCESS_FLAGS) == VM_EXEC)))
return true;
/*
* We should ideally do the vma pkey access check here. But in the
* fault path, handle_mm_fault() also does the same check. To avoid


@@ -7,6 +7,7 @@ config PPC_PSERIES
select OF_DYNAMIC
select FORCE_PCI
select PCI_MSI
select GENERIC_ALLOCATOR
select PPC_XICS
select PPC_XIVE_SPAPR
select PPC_ICP_NATIVE


@@ -464,6 +464,28 @@ config TOOLCHAIN_HAS_ZIHINTPAUSE
depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zihintpause)
depends on LLD_VERSION >= 150000 || LD_VERSION >= 23600
config TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
def_bool y
# https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=aed44286efa8ae8717a77d94b51ac3614e2ca6dc
depends on AS_IS_GNU && AS_VERSION >= 23800
help
Newer binutils versions default to ISA spec version 20191213 which
moves some instructions from the I extension to the Zicsr and Zifencei
extensions.
config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
def_bool y
depends on TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
# https://github.com/llvm/llvm-project/commit/22e199e6afb1263c943c0c0d4498694e15bf8a16
depends on CC_IS_CLANG && CLANG_VERSION < 170000
help
Certain versions of clang do not support zicsr and zifencei via -march
but newer versions of binutils require it for the reasons noted in the
help text of CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI. This
option causes an older ISA spec compatible with these older versions
of clang to be passed to GAS, which has the same result as passing zicsr
and zifencei to -march.
config FPU
bool "FPU support"
default y

Some files were not shown because too many files have changed in this diff.