Mirror of https://github.com/torvalds/linux.git
Patch series "Improve UFFDIO_MOVE scalability by removing anon_vma lock", v2.

Userfaultfd has a scalability issue in its UFFDIO_MOVE ioctl, which is heavily used in Android, whose Java garbage collector uses it for concurrent heap compaction. The issue arises because UFFDIO_MOVE updates folio->mapping to an anon_vma with a different root, in order to move the folio from a src VMA to a dst VMA. It performs the operation with the folio locked, but this is insufficient, because rmap_walk() can be performed on non-KSM anonymous folios without the folio lock. This means that UFFDIO_MOVE has to acquire the anon_vma write lock of the root anon_vma belonging to the folio it wishes to move. This causes a scalability bottleneck when multiple threads perform UFFDIO_MOVE simultaneously on distinct pages of the same src VMA.

In field traces from arm64 Android devices, we have observed janky user interactions due to long (sometimes over ~50ms) uninterruptible sleeps on the main UI thread caused by anon_vma lock contention in UFFDIO_MOVE. This is particularly severe at the beginning of the GC's compaction phase, when multiple threads are likely to be involved.

This series resolves the issue by removing the exception in rmap_walk() for non-KSM anon folios, ensuring that all folios are locked during the rmap walk. This is less problematic than it might seem, as the only major caller which utilises this mode is shrink_active_list(), which is covered in detail in the first patch of this series.

As a result of changing our approach to locking, we can remove all the code that took steps to acquire an anon_vma write lock instead of a folio lock. This results in a significant simplification and scalability improvement of the code (currently only in UFFDIO_MOVE). Furthermore, as a side-effect, folio_lock_anon_vma_read() gets simpler as we don't need to worry that folio->mapping may have changed under us.

This patch (of 2):

Guarantee that rmap_walk() is called on locked folios so that threads changing folio->mapping and folio->index for non-KSM anon folios can serialize on the fine-grained folio lock rather than the anon_vma lock. Other folio types are already always locked before rmap_walk(). With this, we go from 'not necessarily' locking the non-KSM anon folio to 'definitely' locking it during rmap walks.

This patch is in preparation for removing the anon_vma write lock from UFFDIO_MOVE.

With this patch, three functions are now expected to be called with a locked folio. To be careful not to miss any case, here is the exhaustive list of all their callers.

1) rmap_walk() is called from:
   a) folio_referenced()
   b) damon_folio_mkold()
   c) damon_folio_young()
   d) page_idle_clear_pte_refs()
   e) try_to_unmap()
   f) try_to_migrate()
   g) folio_mkclean()
   h) remove_migration_ptes()

   In the above list, the first four are changed in this patch to try-lock non-KSM anon folios, similar to other types of folios (a minimal sketch of this pattern follows the message). The remaining functions in the list already hold the folio lock when calling rmap_walk().

2) folio_lock_anon_vma_read() is called from the following functions:
   a) collect_procs_anon()
   b) page_idle_clear_pte_refs()
   c) damon_folio_mkold()
   d) damon_folio_young()
   e) folio_referenced()
   f) try_to_unmap()
   g) try_to_migrate()

   All the functions in the above list, except collect_procs_anon(), are covered by the rmap_walk() list above. For collect_procs_anon(), kill_procs_now() is changed in this patch to take the folio lock, which ensures that all callers of folio_lock_anon_vma_read() now hold the lock.

3) folio_get_anon_vma() is called from the following functions, all of which already hold the folio lock:
   a) move_pages_huge_pmd()
   b) __folio_split()
   c) move_pages_ptes()
   d) migrate_folio_unmap()
   e) unmap_and_move_huge_page()

Functionally, this patch doesn't break the logic, because rmap walkers generally perform some other check to see whether what is expected to be mapped actually is, or otherwise treat things as best-effort.

Among the four functions changed in this patch, folio_referenced() is the only core-mm function, and it is also frequently called. To assess the impact of locking non-KSM anon folios in the shrink_active_list()->folio_referenced() path, we performed an app cycle test on an arm64 Android device. During the whole duration of the test there were over 140k invocations of shrink_active_list(), out of which over 29k had at least one non-KSM anon folio on which folio_referenced() was called. In none of these invocations did folio_trylock() fail.

Of course, we now take a lock where we wouldn't previously have. In the past this would have had a major impact, causing a CoW write fault to copy a page in do_wp_page(), because commit 09854ba94c ("mm: do_wp_page() simplification") made a failure to obtain the folio lock result in a page copy even when one wasn't necessary. However, since commit 6c287605fd ("mm: remember exclusively mapped anonymous pages with PG_anon_exclusive") and the introduction of the folio anon exclusive flag, this issue is significantly mitigated.

The only remaining case we might worry about from this perspective is that of read-only folios immediately after fork, where the anon exclusive bit will not yet have been set. We note, however, that for such read-only just-forked folios wp_can_reuse_anon_folio() will notice the raised reference count established by shrink_active_list() via isolate_lru_folios() and refuse to reuse the folio in any case, so this will in fact have no impact - the folio lock is ultimately immaterial here.

All in all, there appears to be little opportunity for meaningful negative impact from this change.

Link: https://lkml.kernel.org/r/20250923071019.775806-1-lokeshgidra@google.com
Link: https://lkml.kernel.org/r/20250923071019.775806-2-lokeshgidra@google.com
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Barry Song <baohua@kernel.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Jann Horn <jannh@google.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Nicolas Geoffray <ngeoffray@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
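To make the locking pattern described above concrete, here is a minimal sketch (not the actual diff) of how an rmap_walk() caller try-locks the folio instead of relying on the anon_vma lock. It mirrors what page_idle_clear_pte_refs() does in the mm/page_idle.c listing below, with the rmap_walk_control passed in and the walker callback elided; the function name clear_refs_sketch() is illustrative only.

/*
 * Sketch of the try-lock-before-rmap_walk() pattern described in the
 * commit message above; error handling and the rmap_walk_control setup
 * are elided for brevity.
 */
static void clear_refs_sketch(struct folio *folio,
                              struct rmap_walk_control *rwc)
{
        /* Nothing to do if the folio is no longer mapped anywhere. */
        if (!folio_mapped(folio) || !folio_raw_mapping(folio))
                return;

        /*
         * Best-effort: if someone else holds the folio lock, skip the
         * walk rather than sleeping on the anon_vma lock.
         */
        if (!folio_trylock(folio))
                return;

        /* The folio lock now serializes against folio->mapping changes. */
        rmap_walk(folio, rwc);
        folio_unlock(folio);
}

Because these walks are best-effort (see the folio_referenced() discussion above), skipping the folio when folio_trylock() fails is acceptable. The file below is mm/page_idle.c, one of the four callers changed to this pattern.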
// SPDX-License-Identifier: GPL-2.0
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/fs.h>
#include <linux/sysfs.h>
#include <linux/kobject.h>
#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/pagemap.h>
#include <linux/rmap.h>
#include <linux/mmu_notifier.h>
#include <linux/page_ext.h>
#include <linux/page_idle.h>

#include "internal.h"

#define BITMAP_CHUNK_SIZE	sizeof(u64)
#define BITMAP_CHUNK_BITS	(BITMAP_CHUNK_SIZE * BITS_PER_BYTE)

/*
 * Idle page tracking only considers user memory pages, for other types of
 * pages the idle flag is always unset and an attempt to set it is silently
 * ignored.
 *
 * We treat a page as a user memory page if it is on an LRU list, because it is
 * always safe to pass such a page to rmap_walk(), which is essential for idle
 * page tracking. With such an indicator of user pages we can skip isolated
 * pages, but since there are not usually many of them, it will hardly affect
 * the overall result.
 *
 * This function tries to get a user memory page by pfn as described above.
 */
static struct folio *page_idle_get_folio(unsigned long pfn)
{
        struct page *page = pfn_to_online_page(pfn);
        struct folio *folio;

        if (!page || PageTail(page))
                return NULL;

        folio = page_folio(page);
        if (!folio_test_lru(folio) || !folio_try_get(folio))
                return NULL;
        if (unlikely(page_folio(page) != folio || !folio_test_lru(folio))) {
                folio_put(folio);
                folio = NULL;
        }
        return folio;
}

static bool page_idle_clear_pte_refs_one(struct folio *folio,
                                         struct vm_area_struct *vma,
                                         unsigned long addr, void *arg)
{
        DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
        bool referenced = false;

        while (page_vma_mapped_walk(&pvmw)) {
                addr = pvmw.address;
                if (pvmw.pte) {
                        /*
                         * For PTE-mapped THP, one sub page is referenced,
                         * the whole THP is referenced.
                         *
                         * PFN swap PTEs, such as device-exclusive ones, that
                         * actually map pages are "old" from a CPU perspective.
                         * The MMU notifier takes care of any device aspects.
                         */
                        if (likely(pte_present(ptep_get(pvmw.pte))))
                                referenced |= ptep_test_and_clear_young(vma, addr, pvmw.pte);
                        referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
                } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
                        if (pmdp_clear_young_notify(vma, addr, pvmw.pmd))
                                referenced = true;
                } else {
                        /* unexpected pmd-mapped page? */
                        WARN_ON_ONCE(1);
                }
        }

        if (referenced) {
                folio_clear_idle(folio);
                /*
                 * We cleared the referenced bit in a mapping to this page. To
                 * avoid interference with page reclaim, mark it young so that
                 * folio_referenced() will return > 0.
                 */
                folio_set_young(folio);
        }
        return true;
}

static void page_idle_clear_pte_refs(struct folio *folio)
{
        /*
         * Since rwc.try_lock is unused, rwc is effectively immutable, so we
         * can make it static to save some cycles and stack.
         */
        static struct rmap_walk_control rwc = {
                .rmap_one = page_idle_clear_pte_refs_one,
                .anon_lock = folio_lock_anon_vma_read,
        };

        if (!folio_mapped(folio) || !folio_raw_mapping(folio))
                return;

        if (!folio_trylock(folio))
                return;

        rmap_walk(folio, &rwc);
        folio_unlock(folio);
}

static ssize_t page_idle_bitmap_read(struct file *file, struct kobject *kobj,
                                     const struct bin_attribute *attr, char *buf,
                                     loff_t pos, size_t count)
{
        u64 *out = (u64 *)buf;
        struct folio *folio;
        unsigned long pfn, end_pfn;
        int bit;

        if (pos % BITMAP_CHUNK_SIZE || count % BITMAP_CHUNK_SIZE)
                return -EINVAL;

        pfn = pos * BITS_PER_BYTE;
        if (pfn >= max_pfn)
                return 0;

        end_pfn = pfn + count * BITS_PER_BYTE;
        if (end_pfn > max_pfn)
                end_pfn = max_pfn;

        for (; pfn < end_pfn; pfn++) {
                bit = pfn % BITMAP_CHUNK_BITS;
                if (!bit)
                        *out = 0ULL;
                folio = page_idle_get_folio(pfn);
                if (folio) {
                        if (folio_test_idle(folio)) {
                                /*
                                 * The page might have been referenced via a
                                 * pte, in which case it is not idle. Clear
                                 * refs and recheck.
                                 */
                                page_idle_clear_pte_refs(folio);
                                if (folio_test_idle(folio))
                                        *out |= 1ULL << bit;
                        }
                        folio_put(folio);
                }
                if (bit == BITMAP_CHUNK_BITS - 1)
                        out++;
                cond_resched();
        }
        return (char *)out - buf;
}

static ssize_t page_idle_bitmap_write(struct file *file, struct kobject *kobj,
                                      const struct bin_attribute *attr, char *buf,
                                      loff_t pos, size_t count)
{
        const u64 *in = (u64 *)buf;
        struct folio *folio;
        unsigned long pfn, end_pfn;
        int bit;

        if (pos % BITMAP_CHUNK_SIZE || count % BITMAP_CHUNK_SIZE)
                return -EINVAL;

        pfn = pos * BITS_PER_BYTE;
        if (pfn >= max_pfn)
                return -ENXIO;

        end_pfn = pfn + count * BITS_PER_BYTE;
        if (end_pfn > max_pfn)
                end_pfn = max_pfn;

        for (; pfn < end_pfn; pfn++) {
                bit = pfn % BITMAP_CHUNK_BITS;
                if ((*in >> bit) & 1) {
                        folio = page_idle_get_folio(pfn);
                        if (folio) {
                                page_idle_clear_pte_refs(folio);
                                folio_set_idle(folio);
                                folio_put(folio);
                        }
                }
                if (bit == BITMAP_CHUNK_BITS - 1)
                        in++;
                cond_resched();
        }
        return (char *)in - buf;
}

static const struct bin_attribute page_idle_bitmap_attr =
                __BIN_ATTR(bitmap, 0600,
                           page_idle_bitmap_read, page_idle_bitmap_write, 0);

static const struct bin_attribute *const page_idle_bin_attrs[] = {
        &page_idle_bitmap_attr,
        NULL,
};

static const struct attribute_group page_idle_attr_group = {
        .bin_attrs = page_idle_bin_attrs,
        .name = "page_idle",
};

static int __init page_idle_init(void)
{
        int err;

        err = sysfs_create_group(mm_kobj, &page_idle_attr_group);
        if (err) {
                pr_err("page_idle: register sysfs failed\n");
                return err;
        }
        return 0;
}
subsys_initcall(page_idle_init);
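For context on how the handlers above are driven from userspace: page_idle_bitmap_read() and page_idle_bitmap_write() back the /sys/kernel/mm/page_idle/bitmap file, where the u64 chunk at byte offset pos covers the 64 PFNs starting at pos * BITS_PER_BYTE. Below is a minimal userspace sketch of the mark-idle/recheck cycle, assuming CONFIG_IDLE_PAGE_TRACKING=y, root privileges, and a PFN obtained elsewhere (for example from /proc/<pid>/pagemap); check_pfn_idle() is a hypothetical helper, not part of the kernel.

/*
 * Userspace sketch: mark the 64 PFNs covering @pfn idle, then later
 * report whether @pfn is still idle (i.e. was not referenced).
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define BITMAP_PATH "/sys/kernel/mm/page_idle/bitmap"
#define CHUNK_BITS  64  /* each u64 chunk covers 64 PFNs */

static int check_pfn_idle(uint64_t pfn)
{
        uint64_t chunk = ~0ULL;                         /* set-idle request for all 64 PFNs */
        off_t off = (pfn / CHUNK_BITS) * sizeof(chunk); /* chunk-aligned, as the kernel requires */
        int bit = pfn % CHUNK_BITS;
        int fd = open(BITMAP_PATH, O_RDWR);

        if (fd < 0)
                return -1;

        /* Writing a set bit marks that PFN idle and clears its PTE accessed bits. */
        if (pwrite(fd, &chunk, sizeof(chunk), off) != (ssize_t)sizeof(chunk)) {
                close(fd);
                return -1;
        }

        /* ... let the workload run for a while here ... */

        /* Reading reports which PFNs are still idle. */
        if (pread(fd, &chunk, sizeof(chunk), off) != (ssize_t)sizeof(chunk)) {
                close(fd);
                return -1;
        }
        close(fd);

        return (chunk >> bit) & 1;
}

Writing a set bit makes the kernel call page_idle_clear_pte_refs() and folio_set_idle() on the corresponding folio; a later read returns the bit still set only if the page was not referenced in the meantime, since any access clears the idle flag.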