arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic

Consider the following code path:

(1) vmalloc -> (2) set_vm_flush_reset_perms -> (3) set_memory_ro/set_memory_rox
-> .... (4) use the mapping .... -> (5) vfree -> (6) vm_reset_perms
-> (7) set_area_direct_map.
Alternatively, we may encounter a failure at (3) and jump directly to (5).
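
For illustration, here is a minimal sketch of a caller that walks this path
(not taken from this patch or from the kernel; alloc_exec_page() is a made-up
name, the calls themselves are the usual vmalloc/set_memory API):

#include <linux/vmalloc.h>
#include <linux/set_memory.h>

/* Hypothetical caller, for illustration only. */
static void *alloc_exec_page(void)
{
	void *p = vmalloc(PAGE_SIZE);			/* (1) */

	if (!p)
		return NULL;

	set_vm_flush_reset_perms(p);			/* (2) */

	if (set_memory_rox((unsigned long)p, 1)) {	/* (3) failed ... */
		vfree(p);				/* ... jump straight to (5) */
		return NULL;
	}

	/* (4) the caller now uses the mapping ... */
	return p;
}

/*
 * ... and eventually calls vfree(p), which is (5) and leads to
 * (6) vm_reset_perms() -> (7) set_area_direct_map().
 */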

In both cases, (7) may fail with -ENOMEM due to a linear map split failure.
However, we care about its success *only* for the region whose permissions
were successfully changed by (3), and such a region is guaranteed to already
be pte-mapped, because (3) can change those permissions only after splitting
that region.

TL;DR: (7) is guaranteed to succeed for the regions we care about.
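
For reference, the two sides meet on the same linear map alias. The following
call-chain sketch reflects my reading of the surrounding code rather than
anything introduced by this patch:

/*
 * Rough call-chain sketch for the rodata_full case; treat the details
 * beyond the function names quoted above as an approximation.
 *
 * (3) set_memory_ro()/set_memory_rox()
 *       -> change_memory_common()
 *            -> __change_memory_common() on the linear map alias of
 *               each page: this is where the linear map is split down
 *               to PTEs, and hence where -ENOMEM can be reported.
 *
 * (7) vfree() -> vm_reset_perms() -> set_area_direct_map()
 *       -> resets those same linear map aliases page by page. Wherever
 *          (3) succeeded, the aliases are already pte-mapped, so this
 *          pass has no split left to fail.
 */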

Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

@@ -185,6 +185,15 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
+		/*
+		 * Note: One may wonder what happens if the calls to
+		 * set_area_direct_map() in vm_reset_perms() fail due to ENOMEM
+		 * on linear map split failure. Observe that we care about those
+		 * calls to succeed *only* for the region whose permissions
+		 * are not default. Such a region is guaranteed to be
+		 * pte-mapped, because the below call can change those
+		 * permissions to non-default only after splitting that region.
+		 */
 		for (i = 0; i < area->nr_pages; i++) {
 			ret = __change_memory_common((u64)page_address(area->pages[i]),
 						     PAGE_SIZE, set_mask, clear_mask);