linux/arch/arm64/mm
Jean-Philippe Brucker 48118151d8 arm64: mm: Pin down ASIDs for sharing mm with devices
To enable address space sharing with the IOMMU, introduce
arm64_mm_context_get() and arm64_mm_context_put(), which pin down a
context and ensure that it keeps its ASID after a rollover. Export
the symbols to let the modular SMMUv3 driver use them.
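
To illustrate the intended flow (not part of this patch: only the two
exported functions above are real; the driver-side helpers and device
ops below are hypothetical), a driver sharing an mm would pin the ASID
before programming it into the device and release it once the device
stops using the address space:

    /* Hypothetical driver-side sketch. Assumes arm64_mm_context_get()
     * returns the pinned ASID on success and 0 when no shareable ASID
     * is left; program_device_asid()/clear_device_asid() stand in for
     * the device-specific configuration. */
    static int share_mm_with_device(struct device *dev, struct mm_struct *mm)
    {
            unsigned long asid;

            asid = arm64_mm_context_get(mm);        /* pin across rollovers */
            if (!asid)
                    return -ENOSPC;                 /* out of shareable ASIDs */

            program_device_asid(dev, asid);
            return 0;
    }

    static void unshare_mm_with_device(struct device *dev, struct mm_struct *mm)
    {
            clear_device_asid(dev);
            arm64_mm_context_put(mm);               /* drop the pin */
    }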

Pinning is necessary because a device constantly needs a valid ASID,
unlike tasks that only require one when running. Without pinning, we would
need to notify the IOMMU when we're about to use a new ASID for a task,
and it would get complicated when a new task is assigned a shared ASID.
Consider the following scenario with no ASID pinned:

1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1)
2. Task t2 is scheduled on CPUx, gets ASID (1, 2)
3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1)
   We would now have to immediately generate a new ASID for t1, notify
   the IOMMU, and finally enable task tn. We are holding the lock during
   all that time, since we can't afford to have another CPU trigger a
   rollover. The IOMMU issues invalidation commands that can take tens of
   milliseconds.

It gets needlessly complicated. All we wanted to do was schedule task tn,
which has no business with the IOMMU. By letting the IOMMU pin down
contexts when needed, we avoid stalling the slow path, and let the pinning
fail when we're out of shareable ASIDs.
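
Internally, the pinning is mostly bookkeeping on the allocator's slow-path
state. A simplified sketch of the get side (names illustrative; locking
and kernel-unmapped-at-EL0 handling omitted): take a per-mm refcount if
the context is already pinned, otherwise reserve the current ASID in a
dedicated bitmap so rollovers preserve it, generating a fresh ASID first
if the current one predates the current generation:

    /* Simplified sketch; field and variable names are illustrative. */
    unsigned long pin_asid(struct mm_struct *mm)
    {
            u64 asid = atomic64_read(&mm->context.id);

            if (refcount_inc_not_zero(&mm->context.pinned))
                    return asid & ~ASID_MASK;       /* already pinned */

            if (nr_pinned_asids >= max_pinned_asids)
                    return 0;                       /* out of shareable ASIDs */

            if (!asid_gen_match(asid)) {
                    /* A rollover happened since this ASID was allocated. */
                    asid = new_context(mm);
                    atomic64_set(&mm->context.id, asid);
            }

            nr_pinned_asids++;
            __set_bit(asid2idx(asid), pinned_asid_map);
            refcount_set(&mm->context.pinned, 1);
            return asid & ~ASID_MASK;
    }

The put side would then decrement the refcount and clear the bitmap bit
once it drops to zero.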

After a rollover, the allocator expects at least one ASID to be available
in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS -
1) is the maximum number of ASIDs that can be shared with the IOMMU.
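
Expressed with the names used above (illustrative, not the exact
variables in the allocator):

    /* One ASID must stay free after a rollover, on top of the NR_CPUS
     * reserved ones. */
    max_shareable_asids = NR_ASIDS - NR_CPUS - 1;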

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200918101852.582559-5-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28 22:15:38 +01:00
cache.S arm64: mm: Use modern annotations for assembly functions 2020-01-08 12:23:38 +00:00
context.c arm64: mm: Pin down ASIDs for sharing mm with devices 2020-09-28 22:15:38 +01:00
copypage.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
dma-mapping.c dma-mapping: drop the dev argument to arch_sync_dma_for_* 2019-11-20 20:31:38 +01:00
dump.c mm: don't include asm/pgtable.h if linux/mm.h is already included 2020-06-09 09:39:13 -07:00
extable.c bpf, arm64: Add BPF exception tables 2020-07-31 00:43:40 +02:00
fault.c mm/arm64: use general page fault accounting 2020-08-12 10:58:03 -07:00
flush.c mm: introduce page_size() 2019-09-24 15:54:08 -07:00
hugetlbpage.c mm: remove unneeded includes of <asm/pgalloc.h> 2020-08-07 11:33:26 -07:00
init.c mm/sparse: cleanup the code surrounding memory_present() 2020-08-07 11:33:27 -07:00
ioremap.c mm: remove unneeded includes of <asm/pgalloc.h> 2020-08-07 11:33:26 -07:00
kasan_init.c mm: consolidate pte_index() and pte_offset_*() definitions 2020-06-09 09:39:14 -07:00
Makefile arm64: mm: convert mm/dump.c to use walk_page_range() 2020-02-04 03:05:25 +00:00
mmap.c arm64, mm: move generic mmap layout functions to mm 2019-09-24 15:54:11 -07:00
mmu.c arm64/mm: enable vmem_altmap support for vmemmap mappings 2020-08-07 11:33:27 -07:00
numa.c mm/memory_hotplug: introduce default dummy memory_add_physaddr_to_nid() 2020-08-12 10:57:57 -07:00
pageattr.c mm: don't include asm/pgtable.h if linux/mm.h is already included 2020-06-09 09:39:13 -07:00
pgd.c mm: consolidate pgtable_cache_init() and pgd_cache_init() 2019-09-24 15:54:09 -07:00
physaddr.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
proc.S mm: reorder includes after introduction of linux/pgtable.h 2020-06-09 09:39:13 -07:00
ptdump_debugfs.c arm64/mm: Hold memory hotplug lock while walking for kernel page table dump 2020-03-04 15:35:22 +00:00