mm: swap: remove scan_swap_map_slots() references from comments

The scan_swap_map_slots() helper has been removed, but several comments in
the swap allocation and reclaim paths still referred to it.  Remove those
stale references and reflow the affected comment blocks to match kernel
coding style.

Link: https://lkml.kernel.org/r/20251031065011.40863-6-youngjun.park@lge.com
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Barry Song <baohua@kernel.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -236,11 +236,10 @@ again:
 	ret = -nr_pages;
 	/*
-	 * When this function is called from scan_swap_map_slots() and it's
-	 * called by vmscan.c at reclaiming folios. So we hold a folio lock
-	 * here. We have to use trylock for avoiding deadlock. This is a special
-	 * case and you should use folio_free_swap() with explicit folio_lock()
-	 * in usual operations.
+	 * We hold a folio lock here. We have to use trylock for
+	 * avoiding deadlock. This is a special case and you should
+	 * use folio_free_swap() with explicit folio_lock() in usual
+	 * operations.
 	 */
 	if (!folio_trylock(folio))
 		goto out;
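
For reference, the locking rule the reworded comment keeps can be sketched as
follows.  This is an illustrative fragment, not code from the patch: the
example_*() helpers are hypothetical, while folio_lock(), folio_trylock(),
folio_unlock() and folio_free_swap() are the existing kernel primitives the
comment refers to.

#include <linux/pagemap.h>	/* folio_lock(), folio_trylock(), folio_unlock() */
#include <linux/swap.h>		/* folio_free_swap() */

/* Usual case: take the folio lock explicitly before freeing its swap. */
static void example_free_swap(struct folio *folio)
{
	folio_lock(folio);
	folio_free_swap(folio);		/* releases the swap slot if no longer needed */
	folio_unlock(folio);
}

/*
 * Reclaim-side case (as in the hunk above): sleeping on the folio lock
 * could deadlock against reclaim, so only a trylock is safe and the
 * caller simply backs off on failure.
 */
static bool example_try_free_swap(struct folio *folio)
{
	if (!folio_trylock(folio))
		return false;
	folio_free_swap(folio);
	folio_unlock(folio);
	return true;
}
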
@@ -1365,14 +1364,13 @@ start_over:
 		spin_lock(&swap_avail_lock);
 		/*
 		 * if we got here, it's likely that si was almost full before,
-		 * and since scan_swap_map_slots() can drop the si->lock,
-		 * multiple callers probably all tried to get a page from the
-		 * same si and it filled up before we could get one; or, the si
-		 * filled up between us dropping swap_avail_lock and taking
-		 * si->lock. Since we dropped the swap_avail_lock, the
-		 * swap_avail_head list may have been modified; so if next is
-		 * still in the swap_avail_head list then try it, otherwise
-		 * start over if we have not gotten any slots.
+		 * filled up between us dropping swap_avail_lock.
+		 * Since we dropped the swap_avail_lock, the swap_avail_list
+		 * may have been modified; so if next is still in the
+		 * swap_avail_head list then try it, otherwise start over if we
+		 * have not gotten any slots.
 		 */
 		if (plist_node_empty(&next->avail_list))
 			goto start_over;
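
The updated comment documents when the plist walk has to restart after
swap_avail_lock was dropped and re-taken.  A simplified sketch of that
pattern, reusing the names visible in this hunk (swap_avail_lock,
swap_avail_head, avail_list) but leaving out the actual slot allocation,
could look like the fragment below; it is an illustration only, not the
mm/swapfile.c implementation.

	struct swap_info_struct *si, *next;

	spin_lock(&swap_avail_lock);
start_over:
	plist_for_each_entry_safe(si, next, &swap_avail_head, avail_list) {
		spin_unlock(&swap_avail_lock);	/* allocation may block or contend */

		/* ... try to allocate a swap slot from si ... */

		spin_lock(&swap_avail_lock);
		/*
		 * 'next' was sampled before the lock was dropped; if it has
		 * since been removed from swap_avail_head, the cursor is
		 * stale, so restart the walk from the head of the list.
		 */
		if (plist_node_empty(&next->avail_list))
			goto start_over;
	}
	spin_unlock(&swap_avail_lock);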