From 5784d9fcfd43bd853654bb80c87ef293b9e8e80a Mon Sep 17 00:00:00 2001
From: Julian Sun
Date: Mon, 2 Sep 2024 11:08:44 +0800
Subject: [PATCH 01/12] ocfs2: fix null-ptr-deref when journal load failed.

During the mounting process, if journal_reset() fails because the
journal is too short, jbd2_journal_load() fails and the journal is left
with a NULL j_sb_buffer. Subsequently, ocfs2_journal_shutdown() calls
jbd2_journal_flush() -> jbd2_cleanup_journal_tail() ->
__jbd2_update_log_tail() -> jbd2_journal_update_sb_log_tail() ->
lock_buffer(journal->j_sb_buffer), resulting in a null-pointer
dereference.

To resolve this issue, check the JBD2_LOADED flag to ensure the journal
was properly loaded. Additionally, use journal instead of osb->journal
directly to simplify the code.

Link: https://syzkaller.appspot.com/bug?extid=05b9b39d8bdfe1a0861f
Link: https://lkml.kernel.org/r/20240902030844.422725-1-sunjunchao2870@gmail.com
Fixes: f6f50e28f0cb ("jbd2: Fail to load a journal if it is too short")
Signed-off-by: Julian Sun
Reported-by: syzbot+05b9b39d8bdfe1a0861f@syzkaller.appspotmail.com
Suggested-by: Joseph Qi
Reviewed-by: Joseph Qi
Cc: Mark Fasheh
Cc: Joel Becker
Cc: Junxiao Bi
Cc: Changwei Ge
Cc: Gang He
Cc: Jun Piao
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 fs/ocfs2/journal.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 530fba34f6d3..1bf188b6866a 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -1055,7 +1055,7 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
 	if (!igrab(inode))
 		BUG();
 
-	num_running_trans = atomic_read(&(osb->journal->j_num_trans));
+	num_running_trans = atomic_read(&(journal->j_num_trans));
 	trace_ocfs2_journal_shutdown(num_running_trans);
 
 	/* Do a commit_cache here. It will flush our journal, *and*
@@ -1074,9 +1074,10 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
 		osb->commit_task = NULL;
 	}
 
-	BUG_ON(atomic_read(&(osb->journal->j_num_trans)) != 0);
+	BUG_ON(atomic_read(&(journal->j_num_trans)) != 0);
 
-	if (ocfs2_mount_local(osb)) {
+	if (ocfs2_mount_local(osb) &&
+	    (journal->j_journal->j_flags & JBD2_LOADED)) {
 		jbd2_journal_lock_updates(journal->j_journal);
 		status = jbd2_journal_flush(journal->j_journal, 0);
 		jbd2_journal_unlock_updates(journal->j_journal);
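
The bug above reduces to a simple rule: teardown must not touch state
that setup never initialized. Below is a minimal userspace sketch of
the guarded-shutdown pattern; every struct, flag, and function name is
an illustrative stand-in, not a kernel API:

#include <stdio.h>
#include <stdlib.h>

struct journal {
	int flags;		/* bit 0: LOADED */
	char *sb_buffer;	/* stays NULL until load succeeds */
};
#define JOURNAL_LOADED 0x1

static int journal_load(struct journal *j, int too_short)
{
	if (too_short)
		return -1;	/* load fails, sb_buffer stays NULL */
	j->sb_buffer = malloc(16);
	j->flags |= JOURNAL_LOADED;
	return 0;
}

static void journal_shutdown(struct journal *j)
{
	/* Without this check, flushing would dereference NULL sb_buffer. */
	if (j->flags & JOURNAL_LOADED) {
		printf("flushing %p\n", (void *)j->sb_buffer);
		free(j->sb_buffer);
	}
}

int main(void)
{
	struct journal j = { 0 };

	journal_load(&j, 1);	/* simulate "journal too short" */
	journal_shutdown(&j);	/* safe: the flag check skips the flush */
	return 0;
}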
From c03a82b4a0c935774afa01fd6d128b444fd930a1 Mon Sep 17 00:00:00 2001
From: Lizhi Xu
Date: Mon, 2 Sep 2024 10:36:35 +0800
Subject: [PATCH 02/12] ocfs2: remove unreasonable unlock in ocfs2_read_blocks

Patch series "Misc fixes for ocfs2_read_blocks", v5.

This series contains 2 fixes for ocfs2_read_blocks(). The first patch
fixes the issue reported by syzbot, which detected a bad unlock balance
in ocfs2_read_blocks(). The second patch fixes an issue reported by
Heming Zhao while reviewing the above fix.

This patch (of 2):

The lock is released again on the way out of ocfs2_read_blocks(), so
remove this unreasonable unlock from the error path.

Link: https://lkml.kernel.org/r/20240902023636.1843422-1-joseph.qi@linux.alibaba.com
Link: https://lkml.kernel.org/r/20240902023636.1843422-2-joseph.qi@linux.alibaba.com
Fixes: cf76c78595ca ("ocfs2: don't put and assigning null to bh allocated outside")
Signed-off-by: Lizhi Xu
Signed-off-by: Joseph Qi
Reviewed-by: Heming Zhao
Reviewed-by: Joseph Qi
Reported-by: syzbot+ab134185af9ef88dfed5@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ab134185af9ef88dfed5
Tested-by: syzbot+ab134185af9ef88dfed5@syzkaller.appspotmail.com
Cc: Mark Fasheh
Cc: Joel Becker
Cc: Junxiao Bi
Cc: Changwei Ge
Cc: Gang He
Cc: Jun Piao
Cc: <stable@vger.kernel.org> [4.20+]
Signed-off-by: Andrew Morton
---
 fs/ocfs2/buffer_head_io.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
index cdb9b9bdea1f..e62c7e1de4eb 100644
--- a/fs/ocfs2/buffer_head_io.c
+++ b/fs/ocfs2/buffer_head_io.c
@@ -235,7 +235,6 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
 		if (bhs[i] == NULL) {
 			bhs[i] = sb_getblk(sb, block++);
 			if (bhs[i] == NULL) {
-				ocfs2_metadata_cache_io_unlock(ci);
 				status = -ENOMEM;
 				mlog_errno(status);
 				/* Don't forget to put previous bh! */

From 33b525cef4cff49e216e4133cc48452e11c0391e Mon Sep 17 00:00:00 2001
From: Lizhi Xu
Date: Mon, 2 Sep 2024 10:36:36 +0800
Subject: [PATCH 03/12] ocfs2: fix possible null-ptr-deref in ocfs2_set_buffer_uptodate

During cleanup, if flags does not contain OCFS2_BH_READAHEAD, the
following call to ocfs2_set_buffer_uptodate() may trigger a NULL
pointer dereference when bh is NULL.

Link: https://lkml.kernel.org/r/20240902023636.1843422-3-joseph.qi@linux.alibaba.com
Fixes: cf76c78595ca ("ocfs2: don't put and assigning null to bh allocated outside")
Signed-off-by: Lizhi Xu
Signed-off-by: Joseph Qi
Reviewed-by: Joseph Qi
Reported-by: Heming Zhao
Suggested-by: Heming Zhao
Cc: <stable@vger.kernel.org> [4.20+]
Cc: Changwei Ge
Cc: Gang He
Cc: Joel Becker
Cc: Jun Piao
Cc: Junxiao Bi
Cc: Mark Fasheh
Signed-off-by: Andrew Morton
---
 fs/ocfs2/buffer_head_io.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
index e62c7e1de4eb..8f714406528d 100644
--- a/fs/ocfs2/buffer_head_io.c
+++ b/fs/ocfs2/buffer_head_io.c
@@ -388,7 +388,8 @@ read_failure:
 		/* Always set the buffer in the cache, even if it was
 		 * a forced read, or read-ahead which hasn't yet
 		 * completed. */
-		ocfs2_set_buffer_uptodate(ci, bh);
+		if (bh)
+			ocfs2_set_buffer_uptodate(ci, bh);
 	}
 
 	ocfs2_metadata_cache_io_unlock(ci);
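
The two fixes above share one cleanup idiom: take the lock once, funnel
all exits through a single unlock, and guard every use of an element
that may have been left NULL. A compilable userspace sketch under those
assumptions (all names invented for the example; pthread mutexes stand
in for the metadata cache lock):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t io_lock = PTHREAD_MUTEX_INITIALIZER;

static int read_blocks(char **bhs, int nr, int fail_at)
{
	int status = 0;

	pthread_mutex_lock(&io_lock);
	for (int i = 0; i < nr; i++) {
		bhs[i] = (i == fail_at) ? NULL : malloc(16);
		if (bhs[i] == NULL) {
			/* an extra unlock here would unbalance the
			 * single unlock at the exit below (patch 02) */
			status = -1;
			goto bail;
		}
	}
bail:
	for (int i = 0; i < nr; i++)
		if (bhs[i])		/* NULL guard, as in patch 03 */
			free(bhs[i]);
	pthread_mutex_unlock(&io_lock);	/* the one and only unlock */
	return status;
}

int main(void)
{
	char *bhs[4] = { 0 };

	printf("status=%d\n", read_blocks(bhs, 4, 2));
	return 0;
}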
From 35fccce29feb3706f649726d410122dd81b92c18 Mon Sep 17 00:00:00 2001
From: Joseph Qi
Date: Wed, 4 Sep 2024 15:10:03 +0800
Subject: [PATCH 04/12] ocfs2: cancel dqi_sync_work before freeing oinfo

ocfs2_global_read_info() initializes and schedules dqi_sync_work at the
end. If an error occurs after the global quota has been read
successfully, it triggers the following warning with
CONFIG_DEBUG_OBJECTS_* enabled:

  ODEBUG: free active (active state 0) object: 00000000d8b0ce28
  object type: timer_list hint: qsync_work_fn+0x0/0x16c

This reports that there is still an active delayed work when freeing
oinfo in the error handling path, so cancel dqi_sync_work first.

Also, return status instead of -1 when .read_file_info fails.

Link: https://syzkaller.appspot.com/bug?extid=f7af59df5d6b25f0febd
Link: https://lkml.kernel.org/r/20240904071004.2067695-1-joseph.qi@linux.alibaba.com
Fixes: 171bf93ce11f ("ocfs2: Periodic quota syncing")
Signed-off-by: Joseph Qi
Reviewed-by: Heming Zhao
Reported-by: syzbot+f7af59df5d6b25f0febd@syzkaller.appspotmail.com
Tested-by: syzbot+f7af59df5d6b25f0febd@syzkaller.appspotmail.com
Cc: Mark Fasheh
Cc: Joel Becker
Cc: Junxiao Bi
Cc: Changwei Ge
Cc: Gang He
Cc: Jun Piao
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 fs/ocfs2/quota_local.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index 8ce462c64c51..73d3367c533b 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -692,7 +692,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
 	int status;
 	struct buffer_head *bh = NULL;
 	struct ocfs2_quota_recovery *rec;
-	int locked = 0;
+	int locked = 0, global_read = 0;
 
 	info->dqi_max_spc_limit = 0x7fffffffffffffffLL;
 	info->dqi_max_ino_limit = 0x7fffffffffffffffLL;
@@ -700,6 +700,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
 	if (!oinfo) {
 		mlog(ML_ERROR, "failed to allocate memory for ocfs2 quota"
 		     " info.");
+		status = -ENOMEM;
 		goto out_err;
 	}
 	info->dqi_priv = oinfo;
@@ -712,6 +713,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
 	status = ocfs2_global_read_info(sb, type);
 	if (status < 0)
 		goto out_err;
+	global_read = 1;
 
 	status = ocfs2_inode_lock(lqinode, &oinfo->dqi_lqi_bh, 1);
 	if (status < 0) {
@@ -782,10 +784,12 @@ out_err:
 		if (locked)
 			ocfs2_inode_unlock(lqinode, 1);
 		ocfs2_release_local_quota_bitmaps(&oinfo->dqi_chunk);
+		if (global_read)
+			cancel_delayed_work_sync(&oinfo->dqi_sync_work);
 		kfree(oinfo);
 	}
 	brelse(bh);
-	return -1;
+	return status;
 }
 
 /* Write local info to quota file */
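
The key point of this fix is unwind ordering: anything that armed
asynchronous work must cancel that work before freeing the object the
work dereferences. A minimal userspace analogue, with cancel_sync_work()
standing in for cancel_delayed_work_sync() and plain malloc/free for
the kernel allocations (all names invented):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct oinfo { int dummy; };	/* stand-in for the quota info */

static void cancel_sync_work(struct oinfo *o)
{
	(void)o;		/* would wait for the worker here */
}

static int read_info(int fail_late)
{
	struct oinfo *o;
	int status, global_read = 0;

	o = malloc(sizeof(*o));
	if (!o) {
		status = -ENOMEM;	/* a real errno, not a bare -1 */
		goto out_err;
	}
	/* ... read global quota and arm the periodic sync work ... */
	global_read = 1;
	if (fail_late) {
		status = -EIO;		/* some later failure */
		goto out_err;
	}
	return 0;
out_err:
	if (o) {
		if (global_read)
			cancel_sync_work(o);	/* must precede free() */
		free(o);
	}
	return status;			/* propagate the real error */
}

int main(void)
{
	printf("status=%d\n", read_info(1));
	return 0;
}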
From 0885ef4705607936fc36a38fd74356e1c465b023 Mon Sep 17 00:00:00 2001
From: Chris Li
Date: Thu, 5 Sep 2024 01:08:17 -0700
Subject: [PATCH 05/12] mm: vmscan.c: fix OOM on swap stress test

I found a regression on mm-unstable during my swap stress test, using
tmpfs to compile linux. The test OOMs very soon after make spawns many
cc processes.

It bisects down to this change: 33dfe9204f29 ("mm/gup: clear the LRU
flag of a page before adding to LRU batch").

Yu Zhao proposed the fix: "I think this is one of the potential side
effects -- Hugh mentioned earlier about isolate_lru_folios():"

I tested it, and with it applied the swap stress test no longer OOMs.

Link: https://lore.kernel.org/r/CAOUHufYi9h0kz5uW3LHHS3ZrVwEq-kKp8S6N-MZUmErNAXoXmw@mail.gmail.com/
Link: https://lkml.kernel.org/r/20240905-lru-flag-v2-1-8a2d9046c594@kernel.org
Fixes: 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding to LRU batch")
Signed-off-by: Chris Li
Suggested-by: Yu Zhao
Suggested-by: Hugh Dickins
Closes: https://lore.kernel.org/all/CAF8kJuNP5iTj2p07QgHSGOJsiUfYpJ2f4R1Q5-3BN9JiD9W_KA@mail.gmail.com/
Signed-off-by: Andrew Morton
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd489c1af228..a8d61a8b6894 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4300,7 +4300,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* ineligible */
-	if (zone > sc->reclaim_idx) {
+	if (!folio_test_lru(folio) || zone > sc->reclaim_idx) {
 		gen = folio_inc_gen(lruvec, folio, false);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;

From fb497d6db7c19c797cbd694b52d1af87c4eebcc6 Mon Sep 17 00:00:00 2001
From: "Liam R. Howlett"
Date: Wed, 4 Sep 2024 17:12:04 -0700
Subject: [PATCH 06/12] mm/damon/vaddr: protect vma traversal in __damon_va_three_regions() with rcu read lock

Traversing VMAs of a given maple tree should be protected by the RCU
read lock. However, __damon_va_three_regions() does not hold it. Hold
the lock around the traversal.

Link: https://lkml.kernel.org/r/20240905001204.1481-1-sj@kernel.org
Fixes: d0cf3dd47f0d ("damon: convert __damon_va_three_regions to use the VMA iterator")
Signed-off-by: Liam R. Howlett
Signed-off-by: SeongJae Park
Reported-by: Guenter Roeck
Closes: https://lore.kernel.org/b83651a0-5b24-4206-b860-cb54ffdf209b@roeck-us.net
Tested-by: Guenter Roeck
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 mm/damon/vaddr.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 58829baf8b5d..a0036dc78a3b 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -126,6 +126,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
 	 * If this is too slow, it can be optimised to examine the maple
 	 * tree gaps.
 	 */
+	rcu_read_lock();
 	for_each_vma(vmi, vma) {
 		unsigned long gap;
 
@@ -146,6 +147,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
 next:
 		prev = vma;
 	}
+	rcu_read_unlock();
 
 	if (!sz_range(&second_gap) || !sz_range(&first_gap))
 		return -EINVAL;
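
For the DAMON fix above, the underlying rule is that the whole walk
must sit inside the read-side critical section. As a rough userspace
analogue (a pthread rwlock standing in for rcu_read_lock(); RCU itself
works differently, and the array below is an invented stand-in for the
maple tree), the shape is:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;
static int vmas[3] = { 10, 20, 30 };	/* stand-in for the VMA tree */

static void walk_regions(void)
{
	pthread_rwlock_rdlock(&tree_lock);	/* ~ rcu_read_lock() */
	for (int i = 0; i < 3; i++)
		printf("vma %d\n", vmas[i]);	/* entire walk is protected */
	pthread_rwlock_unlock(&tree_lock);	/* ~ rcu_read_unlock() */
}

int main(void)
{
	walk_regions();
	return 0;
}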
From 6040f650c56862a4ac40b00c37ef6ab1ddfcebb5 Mon Sep 17 00:00:00 2001
From: Sergey Senozhatsky
Date: Fri, 6 Sep 2024 12:45:44 +0900
Subject: [PATCH 07/12] zsmalloc: use unique zsmalloc caches names

Each zsmalloc pool maintains several named kmem-caches for zs_handle-s
and zspage-s. On a system with multiple zsmalloc pools and
CONFIG_DEBUG_VM this triggers kmem_cache_sanity_check():

  kmem_cache of name 'zspage' already exists
  WARNING: at mm/slab_common.c:108 do_kmem_cache_create_usercopy+0xb5/0x310
  ...
  kmem_cache of name 'zs_handle' already exists
  WARNING: at mm/slab_common.c:108 do_kmem_cache_create_usercopy+0xb5/0x310
  ...

We provide the zram device name when initializing its zsmalloc pool, so
we can use that same name for the zsmalloc caches and, hence, create
unique names that can easily be linked to the zram device that created
them.

So instead of having this

  cat /proc/slabinfo
  slabinfo - version: 2.1
  zspage         46     46 ...
  zs_handle     128    128 ...
  zspage      34270  34270 ...
  zs_handle   34816  34816 ...
  zspage          0      0 ...
  zs_handle       0      0 ...

we now have this

  cat /proc/slabinfo
  slabinfo - version: 2.1
  zspage-zram2       46     46 ...
  zs_handle-zram2   128    128 ...
  zspage-zram0    34270  34270 ...
  zs_handle-zram0 34816  34816 ...
  zspage-zram1        0      0 ...
  zs_handle-zram1     0      0 ...

Link: https://lkml.kernel.org/r/20240906035103.2435557-1-senozhatsky@chromium.org
Fixes: 2e40e163a25a ("zsmalloc: decouple handle and object")
Signed-off-by: Sergey Senozhatsky
Cc: Minchan Kim
Signed-off-by: Andrew Morton
---
 mm/zsmalloc.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2d3163e4da96..b572aa84823c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -293,17 +294,27 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
 
 static int create_cache(struct zs_pool *pool)
 {
-	pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
-					0, 0, NULL);
-	if (!pool->handle_cachep)
-		return 1;
+	char *name;
 
-	pool->zspage_cachep = kmem_cache_create("zspage", sizeof(struct zspage),
-					0, 0, NULL);
+	name = kasprintf(GFP_KERNEL, "zs_handle-%s", pool->name);
+	if (!name)
+		return -ENOMEM;
+	pool->handle_cachep = kmem_cache_create(name, ZS_HANDLE_SIZE,
+						0, 0, NULL);
+	kfree(name);
+	if (!pool->handle_cachep)
+		return -EINVAL;
+
+	name = kasprintf(GFP_KERNEL, "zspage-%s", pool->name);
+	if (!name)
+		return -ENOMEM;
+	pool->zspage_cachep = kmem_cache_create(name, sizeof(struct zspage),
+						0, 0, NULL);
+	kfree(name);
 	if (!pool->zspage_cachep) {
 		kmem_cache_destroy(pool->handle_cachep);
 		pool->handle_cachep = NULL;
-		return 1;
+		return -EINVAL;
 	}
 
 	return 0;
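
The naming scheme can be tried out in plain userspace, with asprintf()
playing the role of kasprintf(); the printout stands in for
kmem_cache_create(). Note the patch frees the name right after
registering, which implies the slab core keeps its own copy; this
sketch assumes the same:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

static int create_cache(const char *pool_name)
{
	char *name;

	/* derive a per-pool cache name from the pool (device) name */
	if (asprintf(&name, "zs_handle-%s", pool_name) < 0)
		return -1;			/* ~ -ENOMEM */
	printf("registering kmem cache '%s'\n", name);
	free(name);				/* registry copied the name */
	return 0;
}

int main(void)
{
	create_cache("zram0");
	create_cache("zram1");	/* distinct name, no collision warning */
	return 0;
}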
From b4afe4183ec77f230851ea139d91e5cf2644c68b Mon Sep 17 00:00:00 2001
From: Huang Ying
Date: Fri, 6 Sep 2024 11:07:11 +0800
Subject: [PATCH 08/12] resource: fix region_intersects() vs add_memory_driver_managed()

On a system with CXL memory, the resource tree (/proc/iomem) related to
CXL memory may look like the following.

  490000000-50fffffff : CXL Window 0
    490000000-50fffffff : region0
      490000000-50fffffff : dax0.0
        490000000-50fffffff : System RAM (kmem)

This is because drivers/dax/kmem.c calls add_memory_driver_managed()
when onlining CXL memory, which makes "System RAM (kmem)" a descendant
of "CXL Window X". This can lead to bugs.

For example, when the following command line is executed to write some
memory in the CXL memory range via /dev/mem,

  $ dd if=data of=/dev/mem bs=$((1 << 10)) seek=$((0x490000000 >> 10)) count=1
  dd: error writing '/dev/mem': Bad address
  1+0 records in
  0+0 records out
  0 bytes copied, 0.0283507 s, 0.0 kB/s

the command fails as expected. However, the error code is wrong. It
should be "Operation not permitted" instead of "Bad address". More
seriously, the /dev/mem permission check in devmem_is_allowed() passes
incorrectly. Although the access is prevented later because ioremap()
isn't allowed to map system RAM, it is a potential security issue.
During command execution, the following warning is reported in the
kernel log for calling ioremap() on system RAM.

  ioremap on RAM at 0x0000000490000000 - 0x0000000490000fff
  WARNING: CPU: 2 PID: 416 at arch/x86/mm/ioremap.c:216 __ioremap_caller.constprop.0+0x131/0x35d
  Call Trace:
   memremap+0xcb/0x184
   xlate_dev_mem_ptr+0x25/0x2f
   write_mem+0x94/0xfb
   vfs_write+0x128/0x26d
   ksys_write+0xac/0xfe
   do_syscall_64+0x9a/0xfd
   entry_SYSCALL_64_after_hwframe+0x4b/0x53

The details of the command execution are as follows. In the above
resource tree, "System RAM" is a descendant of "CXL Window 0" instead
of a top-level resource. So region_intersects() incorrectly reports no
System RAM resources in the CXL memory region, because it only checks
the top-level resources. Consequently, devmem_is_allowed() incorrectly
returns 1 (allow access via /dev/mem) for the CXL memory region.
Fortunately, ioremap() doesn't allow mapping System RAM and rejects the
access.

So region_intersects() needs to be fixed to work correctly with a
resource tree where "System RAM" is not at the top level, as above. To
fix it, if an unmatched resource is found at the top level, continue to
search for matched resources among its descendants. That way, no
matched resource in the resource tree is missed anymore.

In the new implementation, an example resource tree

  |------------- "CXL Window 0" ------------|
  |-- "System RAM" --|

will behave similarly to the following fake resource tree for
region_intersects(, IORESOURCE_SYSTEM_RAM, ),

  |-- "System RAM" --||-- "CXL Window 0a" --|

where "CXL Window 0a" is the part of the original "CXL Window 0" that
isn't covered by "System RAM".

Link: https://lkml.kernel.org/r/20240906030713.204292-2-ying.huang@intel.com
Fixes: c221c0b0308f ("device-dax: "Hotplug" persistent memory for use like normal RAM")
Signed-off-by: "Huang, Ying"
Cc: Dan Williams
Cc: David Hildenbrand
Cc: Davidlohr Bueso
Cc: Jonathan Cameron
Cc: Dave Jiang
Cc: Alison Schofield
Cc: Vishal Verma
Cc: Ira Weiny
Cc: Alistair Popple
Cc: Andy Shevchenko
Cc: Bjorn Helgaas
Cc: Baoquan He
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 kernel/resource.c | 58 ++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 50 insertions(+), 8 deletions(-)

diff --git a/kernel/resource.c b/kernel/resource.c
index 14777afb0a99..235dc77f8add 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -540,20 +540,62 @@ static int __region_intersects(struct resource *parent, resource_size_t start,
 			       size_t size, unsigned long flags,
 			       unsigned long desc)
 {
-	struct resource res;
+	resource_size_t ostart, oend;
 	int type = 0;
 	int other = 0;
-	struct resource *p;
+	struct resource *p, *dp;
+	bool is_type, covered;
+	struct resource res;
 
 	res.start = start;
 	res.end = start + size - 1;
 
 	for (p = parent->child; p ; p = p->sibling) {
-		bool is_type = (((p->flags & flags) == flags) &&
-				((desc == IORES_DESC_NONE) ||
-				 (desc == p->desc)));
-
-		if (resource_overlaps(p, &res))
-			is_type ? type++ : other++;
+		if (!resource_overlaps(p, &res))
+			continue;
+		is_type = (p->flags & flags) == flags &&
+			(desc == IORES_DESC_NONE || desc == p->desc);
+		if (is_type) {
+			type++;
+			continue;
+		}
+		/*
+		 * Continue to search in descendant resources as if the
+		 * matched descendant resources cover some ranges of 'p'.
+		 *
+		 * |------------- "CXL Window 0" ------------|
+		 * |-- "System RAM" --|
+		 *
+		 * will behave similar as the following fake resource
+		 * tree when searching "System RAM".
+		 *
+		 * |-- "System RAM" --||-- "CXL Window 0a" --|
+		 */
+		covered = false;
+		ostart = max(res.start, p->start);
+		oend = min(res.end, p->end);
+		for_each_resource(p, dp, false) {
+			if (!resource_overlaps(dp, &res))
+				continue;
+			is_type = (dp->flags & flags) == flags &&
+				(desc == IORES_DESC_NONE || desc == dp->desc);
+			if (is_type) {
+				type++;
+				/*
+				 * Range from 'ostart' to 'dp->start'
+				 * isn't covered by matched resource.
+				 */
+				if (dp->start > ostart)
+					break;
+				if (dp->end >= oend) {
+					covered = true;
+					break;
+				}
+				/* Remove covered range */
+				ostart = max(ostart, dp->end + 1);
+			}
+		}
+		if (!covered)
+			other++;
 	}
 
 	if (type == 0)
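
The gist of the new search order can be modeled with a toy resource
tree: when a top-level node overlaps the query but does not match,
descend into its children before counting it as "other". The toy below
simplifies the real code (no ostart/oend tracking of partial coverage)
and uses invented types:

#include <stdbool.h>
#include <stdio.h>

struct res {
	long start, end;
	bool is_ram;
	struct res *child, *sibling;
};

static bool overlaps(struct res *r, long s, long e)
{
	return r->start <= e && r->end >= s;
}

static void classify(struct res *root, long s, long e, int *type, int *other)
{
	for (struct res *p = root->child; p; p = p->sibling) {
		if (!overlaps(p, s, e))
			continue;
		if (p->is_ram) {	/* top-level match, as before */
			(*type)++;
			continue;
		}
		/* new behaviour: look for matches among descendants */
		bool covered = false;
		for (struct res *dp = p->child; dp; dp = dp->sibling) {
			if (overlaps(dp, s, e) && dp->is_ram) {
				(*type)++;
				covered = true;	/* toy: assume full coverage */
			}
		}
		if (!covered)
			(*other)++;
	}
}

int main(void)
{
	struct res ram  = { 0x490, 0x50f, true,  NULL, NULL };
	struct res cxl  = { 0x490, 0x50f, false, &ram, NULL };
	struct res root = { 0x000, 0xfff, false, &cxl, NULL };
	int type = 0, other = 0;

	classify(&root, 0x490, 0x491, &type, &other);
	/* top-level-only logic would print type=0 other=1 */
	printf("type=%d other=%d\n", type, other);
	return 0;
}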
From 2a058ab3286d6475b2082b90c2d2182d2fea4b39 Mon Sep 17 00:00:00 2001
From: "Vishal Moola (Oracle)"
Date: Sat, 14 Sep 2024 12:41:18 -0700
Subject: [PATCH 09/12] mm: change vmf_anon_prepare() to __vmf_anon_prepare()

Some callers of vmf_anon_prepare() may not want us to release the
per-VMA lock ourselves. Rename vmf_anon_prepare() to
__vmf_anon_prepare() and let the callers drop the lock when desired.

Also, make vmf_anon_prepare() a wrapper that releases the per-VMA lock
itself for any callers that don't care.

This is in preparation for fixing this bug reported by syzbot:
https://lore.kernel.org/linux-mm/00000000000067c20b06219fbc26@google.com/

Link: https://lkml.kernel.org/r/20240914194243.245-1-vishal.moola@gmail.com
Fixes: 9acad7ba3e25 ("hugetlb: use vmf_anon_prepare() instead of anon_vma_prepare()")
Reported-by: syzbot+2dab93857ee95f2eeb08@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/00000000000067c20b06219fbc26@google.com/
Signed-off-by: Vishal Moola (Oracle)
Cc: Muchun Song
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 mm/internal.h | 11 ++++++++++-
 mm/memory.c   |  8 +++-----
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index b4d86436565b..a963f67d3452 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -310,7 +310,16 @@ static inline void wake_throttle_isolated(pg_data_t *pgdat)
 		wake_up(wqh);
 }
 
-vm_fault_t vmf_anon_prepare(struct vm_fault *vmf);
+vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf);
+static inline vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
+{
+	vm_fault_t ret = __vmf_anon_prepare(vmf);
+
+	if (unlikely(ret & VM_FAULT_RETRY))
+		vma_end_read(vmf->vma);
+	return ret;
+}
+
 vm_fault_t do_swap_page(struct vm_fault *vmf);
 void folio_rotate_reclaimable(struct folio *folio);
 bool __folio_end_writeback(struct folio *folio);
diff --git a/mm/memory.c b/mm/memory.c
index 3c01d68065be..95c347ba6305 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3259,7 +3259,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
 }
 
 /**
- * vmf_anon_prepare - Prepare to handle an anonymous fault.
+ * __vmf_anon_prepare - Prepare to handle an anonymous fault.
  * @vmf: The vm_fault descriptor passed from the fault handler.
  *
  * When preparing to insert an anonymous page into a VMA from a
@@ -3273,7 +3273,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
 * Return: 0 if fault handling can proceed. Any other value should be
 * returned to the caller.
 */
-vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
+vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	vm_fault_t ret = 0;
@@ -3281,10 +3281,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
 	if (likely(vma->anon_vma))
 		return 0;
 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		if (!mmap_read_trylock(vma->vm_mm)) {
-			vma_end_read(vma);
+		if (!mmap_read_trylock(vma->vm_mm))
 			return VM_FAULT_RETRY;
-		}
 	}
 	if (__anon_vma_prepare(vma))
 		ret = VM_FAULT_OOM;
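
The refactor above is the classic core-plus-wrapper split: the core
helper only reports the retry condition, while a thin inline wrapper
applies the default "drop the per-VMA lock" policy, so special callers
such as hugetlb can defer the drop. A compilable sketch of that shape,
with all names invented:

#include <stdio.h>

#define FAULT_RETRY 0x1

static int core_prepare(int must_retry)	/* ~ __vmf_anon_prepare() */
{
	return must_retry ? FAULT_RETRY : 0;
}

static void vma_end_read_stub(void)
{
	printf("per-VMA lock dropped\n");
}

static inline int prepare(int must_retry)	/* ~ vmf_anon_prepare() */
{
	int ret = core_prepare(must_retry);

	if (ret & FAULT_RETRY)
		vma_end_read_stub();	/* default policy for most callers */
	return ret;
}

int main(void)
{
	prepare(1);		/* ordinary caller: lock dropped for it */
	core_prepare(1);	/* hugetlb-style caller drops it later */
	return 0;
}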
From 98b74bb4d7e96b4da5ef3126511febe55b76b807 Mon Sep 17 00:00:00 2001
From: "Vishal Moola (Oracle)"
Date: Sat, 14 Sep 2024 12:41:19 -0700
Subject: [PATCH 10/12] mm/hugetlb.c: fix UAF of vma in hugetlb fault pathway

Syzbot reports a UAF in hugetlb_fault(). This happens because
vmf_anon_prepare() could drop the per-VMA lock and allow the current
VMA to be freed before hugetlb_vma_unlock_read() is called.

We can fix this by using a modified version of vmf_anon_prepare() that
doesn't release the VMA lock on failure, and then releasing it
ourselves after hugetlb_vma_unlock_read().

Link: https://lkml.kernel.org/r/20240914194243.245-2-vishal.moola@gmail.com
Fixes: 9acad7ba3e25 ("hugetlb: use vmf_anon_prepare() instead of anon_vma_prepare()")
Reported-by: syzbot+2dab93857ee95f2eeb08@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/00000000000067c20b06219fbc26@google.com/
Signed-off-by: Vishal Moola (Oracle)
Cc: Muchun Song
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 mm/hugetlb.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aaf508be0a2b..9a3a6e2dee97 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6048,7 +6048,7 @@ retry_avoidcopy:
 	 * When the original hugepage is shared one, it does not have
 	 * anon_vma prepared.
 	 */
-	ret = vmf_anon_prepare(vmf);
+	ret = __vmf_anon_prepare(vmf);
 	if (unlikely(ret))
 		goto out_release_all;
 
@@ -6247,7 +6247,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 		}
 
 		if (!(vma->vm_flags & VM_MAYSHARE)) {
-			ret = vmf_anon_prepare(vmf);
+			ret = __vmf_anon_prepare(vmf);
 			if (unlikely(ret))
 				goto out;
 		}
@@ -6378,6 +6378,14 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 	folio_unlock(folio);
 out:
 	hugetlb_vma_unlock_read(vma);
+
+	/*
+	 * We must check to release the per-VMA lock. __vmf_anon_prepare() is
+	 * the only way ret can be set to VM_FAULT_RETRY.
+	 */
+	if (unlikely(ret & VM_FAULT_RETRY))
+		vma_end_read(vma);
+
 	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 	return ret;
 
@@ -6599,6 +6607,14 @@ out_ptl:
 	}
 out_mutex:
 	hugetlb_vma_unlock_read(vma);
+
+	/*
+	 * We must check to release the per-VMA lock. __vmf_anon_prepare() in
+	 * hugetlb_wp() is the only way ret can be set to VM_FAULT_RETRY.
+	 */
+	if (unlikely(ret & VM_FAULT_RETRY))
+		vma_end_read(vma);
+
 	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 	/*
 	 * Generally it's safe to hold refcount during waiting page lock. But

From 2a1b8648d9be9f37f808a36c0f74adb8c53d06e6 Mon Sep 17 00:00:00 2001
From: Miaohe Lin
Date: Sat, 14 Sep 2024 09:53:06 +0800
Subject: [PATCH 11/12] mm/huge_memory: ensure huge_zero_folio won't have large_rmappable flag set

Ensure that huge_zero_folio won't have the large_rmappable flag set, so
that it can be reported as thp,zero correctly through
stable_page_flags().

Link: https://lkml.kernel.org/r/20240914015306.3656791-1-linmiaohe@huawei.com
Fixes: 5691753d73a2 ("mm: convert huge_zero_page to huge_zero_folio")
Signed-off-by: Miaohe Lin
Cc: David Hildenbrand
Cc: Matthew Wilcox (Oracle)
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 mm/huge_memory.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 67c86a5d64a6..99b146d16a18 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -220,6 +220,8 @@ retry:
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return false;
 	}
+	/* Ensure zero folio won't have large_rmappable flag set. */
+	folio_clear_large_rmappable(zero_folio);
 	preempt_disable();
 	if (cmpxchg(&huge_zero_folio, NULL, zero_folio)) {
 		preempt_enable();
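
The huge_zero_folio fix also illustrates a general publish pattern: fix
up an object's state before making it visible with a compare-and-swap,
because other CPUs may inspect it as soon as it is published. A
userspace sketch using GCC's atomic builtins in place of the kernel's
cmpxchg(); a plain int stands in for the folio and its flag:

#include <stdio.h>

static int *huge_zero;	/* ~ huge_zero_folio, NULL until published */

static void publish(int *candidate)
{
	int *expected = NULL;

	*candidate = 0;	/* ~ folio_clear_large_rmappable(): fix state first */
	if (!__atomic_compare_exchange_n(&huge_zero, &expected, candidate,
					 0, __ATOMIC_SEQ_CST,
					 __ATOMIC_SEQ_CST))
		printf("lost the race; would free the candidate\n");
}

int main(void)
{
	int page = 1;	/* 1 models the stale flag being set */

	publish(&page);
	printf("published state=%d\n", *huge_zero);	/* 0: flag cleared */
	return 0;
}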
From 22af8caff7d1ca22a1ff1a554180e53f7a6555af Mon Sep 17 00:00:00 2001
From: Lorenzo Stoakes
Date: Fri, 13 Sep 2024 15:06:28 +0100
Subject: [PATCH 12/12] mm/madvise: process_madvise() drop capability check if same mm

In commit 96cfe2c0fd23 ("mm/madvise: replace ptrace attach requirement
for process_madvise") process_madvise() was updated to require the
caller to possess the CAP_SYS_NICE capability to perform the operation,
in addition to a check against PTRACE_MODE_READ performed by
mm_access().

The mm_access() function explicitly checks to see if the address space
of the process being referenced is the current one, in which case no
check is performed. We, however, do not do this when checking the
CAP_SYS_NICE capability. This means that we insist on the caller
possessing this capability in order to perform madvise() operations on
its own address space, which seems nonsensical.

Simply add a check to allow for an invocation of this function with
pidfd set to the current process without elevation.

Link: https://lkml.kernel.org/r/20240913140628.77047-1-lorenzo.stoakes@oracle.com
Fixes: 96cfe2c0fd23 ("mm/madvise: replace ptrace attach requirement for process_madvise")
Signed-off-by: Lorenzo Stoakes
Reviewed-by: Liam R. Howlett
Acked-by: Vlastimil Babka
Acked-by: Shakeel Butt
Acked-by: David Rientjes
Cc: Kees Cook
Cc: Minchan Kim
Cc: Suren Baghdasaryan
Signed-off-by: Andrew Morton
---
 mm/madvise.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 89089d84f8df..6e3a137b8e50 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1527,7 +1527,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 	 * Require CAP_SYS_NICE for influencing process performance. Note that
 	 * only non-destructive hints are currently supported.
 	 */
-	if (!capable(CAP_SYS_NICE)) {
+	if (mm != current->mm && !capable(CAP_SYS_NICE)) {
 		ret = -EPERM;
 		goto release_mm;
 	}
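
The permission change in the last patch is easiest to see in isolation:
only demand the capability when the target address space is not the
caller's own. A self-contained sketch of just that predicate, with
integers standing in for mm pointers and a boolean for capable():

#include <stdbool.h>
#include <stdio.h>

static int check_madvise_perm(int target_mm, int current_mm, bool has_cap)
{
	/* self-targeting callers need no elevation */
	if (target_mm != current_mm && !has_cap)
		return -1;	/* ~ -EPERM */
	return 0;
}

int main(void)
{
	/* pidfd refers to the caller itself: allowed without the cap */
	printf("%d\n", check_madvise_perm(42, 42, false));	/* 0 */
	/* another process, no CAP_SYS_NICE: rejected */
	printf("%d\n", check_madvise_perm(7, 42, false));	/* -1 */
	return 0;
}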