linux-next/arch/arm64/mm
Mark Rutland fb59d007a0 arm64: mm: dump: don't skip final region
If the final page table entry we walk is a valid mapping, the page table
dumping code will not log the region this entry is part of, as the final
note_page call in ptdump_show will trigger an early return. Luckily this
isn't seen on contemporary systems as they typically don't have enough
RAM to extend the linear mapping right to the end of the address space.

In note_page, we log a region when we reach its end (i.e. we hit an
entry immediately afterwards which has different prot bits or is
invalid). The final entry has no subsequent entry, so we will not log
this immediately. We try to cater for this with a subsequent call to
note_page in ptdump_show, but this returns early, as 0 < LOWEST_ADDR, and
hence we will skip a valid mapping if it spans to the final entry we
note.
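
For illustration, here is a minimal user-space model of the walk described
above. The names (LOWEST_ADDR, note_page, pg_state) mirror dump.c, but the
values and logic are simplified assumptions, not the kernel source. Compiled
and run, it prints nothing, because the final flush call bails out before the
last region is logged:

#include <stdio.h>

#define PAGE_SIZE	0x1000UL
/* Toy value: the real LOWEST_ADDR is the bottom of the kernel VA range;
 * here the "kernel mapping" is just the last 8 pages of the address space
 * so that the walk terminates quickly. */
#define LOWEST_ADDR	(0UL - 8 * PAGE_SIZE)

struct pg_state {
	unsigned long start_address;	/* start of the region being accumulated */
	unsigned long current_prot;	/* prot bits of that region, 0 = none yet */
};

static void note_page(struct pg_state *st, unsigned long addr, unsigned long prot)
{
	/* The check this patch removes: ptdump_show's final flush passes
	 * addr == 0, so we return here and the last region is never logged. */
	if (addr < LOWEST_ADDR)
		return;

	if (!st->current_prot) {
		/* first valid entry: open a new region */
		st->start_address = addr;
		st->current_prot = prot;
	} else if (prot != st->current_prot) {
		/* prot changed: the previous region has ended, log it */
		printf("0x%016lx-0x%016lx prot 0x%lx\n",
		       st->start_address, addr, st->current_prot);
		st->start_address = addr;
		st->current_prot = prot;
	}
}

int main(void)
{
	struct pg_state st = { 0, 0 };
	unsigned long addr;

	/* A valid mapping that spans right to the end of the address space. */
	for (addr = LOWEST_ADDR; addr != 0; addr += PAGE_SIZE)
		note_page(&st, addr, 0x3);

	/* The final flush, as in ptdump_show: without the early return above,
	 * the prot mismatch (0 != 0x3) would log the region; with it, the
	 * region is silently dropped. */
	note_page(&st, 0, 0);
	return 0;
}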

Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with
user mappings, so we do not need the check to ensure we don't log user
page tables. Due to the way addr is constructed in the walk_* functions,
it can never be less than LOWEST_ADDR when walking the page tables, so
it is not necessary to avoid dereferencing invalid table addresses. The
existing checks for st->current_prot and st->marker[1].start_address are
sufficient to ensure we will not print and/or dereference garbage when
trying to log information.

This patch removes the unnecessary check against LOWEST_ADDR, ensuring
we log all regions in the kernel page table, including those which span
right to the end of the address space.
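
Applied to the toy model above, the change amounts to deleting that early
return, so the final note_page call from ptdump_show falls through to the
prot comparison and the region running up to the last entry is printed
(again a sketch of the idea, not the actual dump.c diff):

static void note_page(struct pg_state *st, unsigned long addr, unsigned long prot)
{
	/* "if (addr < LOWEST_ADDR) return;" is gone: the final flush call
	 * now reaches the prot comparison and logs the last region. */
	if (!st->current_prot) {
		st->start_address = addr;
		st->current_prot = prot;
	} else if (prot != st->current_prot) {
		printf("0x%016lx-0x%016lx prot 0x%lx\n",
		       st->start_address, addr, st->current_prot);
		st->start_address = addr;
		st->current_prot = prot;
	}
}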

Cc: Kees Cook <keescook@chromium.org>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-12-11 12:08:07 +00:00
cache.S arm64: compat: align cacheflush syscall with arch/arm 2014-12-01 13:31:12 +00:00
context.c arm64: Process management 2012-09-17 13:41:58 +01:00
copypage.c arm64: export __cpu_{clear,copy}_user_page functions 2014-07-08 17:30:51 +01:00
dma-mapping.c arm64: add atomic pool for non-coherent and CMA allocations 2014-10-09 22:25:52 -04:00
dump.c arm64: mm: dump: don't skip final region 2014-12-11 12:08:07 +00:00
extable.c arm64: MMU fault handling and page table management 2012-09-17 13:41:57 +01:00
fault.c arm64: mm: report unhandled level-0 translation faults correctly 2014-11-21 14:22:22 +00:00
flush.c arm64: mm: enable RCU fast_gup 2014-10-09 22:26:01 -04:00
hugetlbpage.c hugetlb: restrict hugepage_migration_support() to x86_64 2014-06-04 16:53:51 -07:00
init.c arm64: add alternative runtime patching 2014-11-25 13:46:36 +00:00
ioremap.c arm64: Factor out fixmap initialization from ioremap 2014-11-25 15:56:45 +00:00
Makefile arm64: add support to dump the kernel page tables 2014-11-26 17:19:18 +00:00
mm.h arm64: remove the unnecessary arm64_swiotlb_init() 2014-12-05 12:19:52 +00:00
mmap.c arm64/mm: Remove hack in mmap randomize layout 2014-11-18 16:58:15 +00:00
mmu.c arm64: Factor out fixmap initialization from ioremap 2014-11-25 15:56:45 +00:00
pageattr.c arm64: pageattr: Correctly adjust unaligned start addresses 2014-09-12 16:34:50 +01:00
pgd.c arm64: pgalloc: consistently use PGALLOC_GFP 2014-11-20 12:05:18 +00:00
proc-macros.S arm64: mm: use ubfm for dcache_line_size 2014-01-22 16:23:58 +00:00
proc.S arm64: convert part of soft_restart() to assembly 2014-09-08 14:39:18 +01:00