Merge tag 'mm-nonmm-stable-2025-12-06-11-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

 - "panic: sys_info: Refactor and fix a potential issue" (Andy Shevchenko)
   fixes a build issue and does some cleanup in lib/sys_info.c

 - "Implement mul_u64_u64_div_u64_roundup()" (David Laight)
   enhances the 64-bit math code on behalf of a PWM driver and beefs up
   the test module for these library functions
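
   As a hedged sketch of the new helper in use (only the helper's name
   comes from the series title; the clock-rescaling scenario and all other
   names below are illustrative):

      #include <linux/math64.h>

      /*
       * Illustrative only: rescale a duty value from one clock rate to
       * another, rounding up so the programmed value never undershoots.
       * Computes (duty * dst_rate + src_rate - 1) / src_rate without
       * overflowing a 64-bit intermediate.
       */
      static u64 rescale_duty(u64 duty, u64 dst_rate, u64 src_rate)
      {
              return mul_u64_u64_div_u64_roundup(duty, dst_rate, src_rate);
      }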

 - "scripts/gdb/symbols: make BPF debug info available to GDB" (Ilya Leoshkevich)
   makes BPF symbol names, sizes, and line numbers available to the GDB
   debugger

 - "Enable hung_task and lockup cases to dump system info on demand" (Feng Tang)
   adds a sysctl which can be used to cause additional info dumping when
   the hung-task and lockup detectors fire
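
   (For example, writing "tasks,mem" to the new kernel.hung_task_sys_info
   sysctl, documented later in this diff, makes the detector dump task and
   memory state when it fires.)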

 - "lib/base64: add generic encoder/decoder, migrate users" (Kuan-Wei Chiu)
   adds a general base64 encoder/decoder to lib/ and migrates several
   users away from their private implementations
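
   The new helpers take a padding flag and an alphabet selector (see the
   include/linux/base64.h hunk later in this diff). A minimal round-trip
   sketch using the unpadded standard alphabet, with buffer sizing via
   BASE64_CHARS() (the function and buffer names are illustrative):

      #include <linux/base64.h>
      #include <linux/errno.h>

      /* Illustrative only: encode then decode a small buffer. */
      static int base64_roundtrip(const u8 *data, int len, u8 *decoded)
      {
              char encoded[BASE64_CHARS(64)];   /* enough for len <= 64 */
              int enc_len;

              if (len > 64)
                      return -EINVAL;

              enc_len = base64_encode(data, len, encoded, false, BASE64_STD);

              /* Returns the decoded length, or a negative value on bad input. */
              return base64_decode(encoded, enc_len, decoded, false, BASE64_STD);
      }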

 - "rbree: inline rb_first() and rb_last()" (Eric Dumazet)
   makes TCP a little faster
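
   rb_first() and rb_last() are tiny pointer chases, so inlining them
   removes a function call from hot paths such as TCP's rbtree-based
   queues. A simplified sketch of what rb_first() does (mirroring the
   long-standing lib/rbtree.c implementation):

      static inline struct rb_node *rb_first(const struct rb_root *root)
      {
              struct rb_node *n = root->rb_node;

              /* The leftmost node is the smallest in sort order. */
              if (!n)
                      return NULL;
              while (n->rb_left)
                      n = n->rb_left;
              return n;
      }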

 - "liveupdate: Rework KHO for in-kernel users" (Pasha Tatashin)
   reworks the KEXEC Handover interfaces in preparation for Live Update
   Orchestrator (LUO), and possibly for other future clients
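
   A sketch of the reworked calling convention, inferred from the
   include/linux/kexec_handover.h hunk later in this diff (error paths
   abbreviated; the subtree name and wrapper function are made up):

      #include <linux/kexec_handover.h>

      static int example_kho_save(void *fdt)
      {
              /* Allocate memory that KHO preserves across the kexec reboot. */
              void *state = kho_alloc_preserve(PAGE_SIZE);

              if (IS_ERR(state))
                      return PTR_ERR(state);

              /*
               * Publish an FDT blob describing the state; the old
               * kho_serialization argument and notifier registration
               * are gone.
               */
              return kho_add_subtree("example", fdt);
      }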

 - "kho: simplify state machine and enable dynamic updates" (Pasha Tatashin)
   increases the flexibility of KEXEC Handover. Also preparation for LUO

 - "Live Update Orchestrator" (Pasha Tatashin)
   is a major new feature targeted at cloud environments. Quoting the
   cover letter:

      This series introduces the Live Update Orchestrator, a kernel
      subsystem designed to facilitate live kernel updates using a
      kexec-based reboot. This capability is critical for cloud
      environments, allowing hypervisors to be updated with minimal
      downtime for running virtual machines. LUO achieves this by
      preserving the state of selected resources, such as memory,
      devices and their dependencies, across the kernel transition.

      As a key feature, this series includes support for preserving
      memfd file descriptors, which allows critical in-memory data, such
      as guest RAM or any other large memory region, to be maintained in
      RAM across the kexec reboot.

   Mike Rapoport merits a mention here, for his extensive review and
   testing work.

 - "kexec: reorganize kexec and kdump sysfs" (Sourabh Jain)
   moves the kexec and kdump sysfs entries from /sys/kernel/ to
   /sys/kernel/kexec/ and adds back-compatibility symlinks which can
   hopefully be removed one day

 - "kho: fixes for vmalloc restoration" (Mike Rapoport)
   fixes a BUG which was being hit during KHO restoration of vmalloc()
   regions

* tag 'mm-nonmm-stable-2025-12-06-11-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (139 commits)
  calibrate: update header inclusion
  Reinstate "resource: avoid unnecessary lookups in find_next_iomem_res()"
  vmcoreinfo: track and log recoverable hardware errors
  kho: fix restoring of contiguous ranges of order-0 pages
  kho: kho_restore_vmalloc: fix initialization of pages array
  MAINTAINERS: TPM DEVICE DRIVER: update the W-tag
  init: replace simple_strtoul with kstrtoul to improve lpj_setup
  KHO: fix boot failure due to kmemleak access to non-PRESENT pages
  Documentation/ABI: new kexec and kdump sysfs interface
  Documentation/ABI: mark old kexec sysfs deprecated
  kexec: move sysfs entries to /sys/kernel/kexec
  test_kho: always print restore status
  kho: free chunks using free_page() instead of kfree()
  selftests/liveupdate: add kexec test for multiple and empty sessions
  selftests/liveupdate: add simple kexec-based selftest for LUO
  selftests/liveupdate: add userspace API selftests
  docs: add documentation for memfd preservation via LUO
  mm: memfd_luo: allow preserving memfd
  liveupdate: luo_file: add private argument to store runtime state
  mm: shmem: export some functions to internal.h
  ...
Committed by Linus Torvalds on 2025-12-06 14:01:20 -08:00
439 changed files with 8050 additions and 1820 deletions

View File

@@ -303,6 +303,7 @@ Hans de Goede <hansg@kernel.org> <hdegoede@redhat.com>
Hans Verkuil <hverkuil@kernel.org> <hverkuil@xs4all.nl>
Hans Verkuil <hverkuil@kernel.org> <hverkuil-cisco@xs4all.nl>
Hans Verkuil <hverkuil@kernel.org> <hansverk@cisco.com>
Hao Ge <hao.ge@linux.dev> <gehao@kylinos.cn>
Harry Yoo <harry.yoo@oracle.com> <42.hyeyoo@gmail.com>
Heiko Carstens <hca@linux.ibm.com> <h.carstens@de.ibm.com>
Heiko Carstens <hca@linux.ibm.com> <heiko.carstens@de.ibm.com>
@@ -503,9 +504,7 @@ Mark Brown <broonie@sirena.org.uk>
Mark Starovoytov <mstarovo@pm.me> <mstarovoitov@marvell.com>
Markus Schneider-Pargmann <msp@baylibre.com> <mpa@pengutronix.de>
Mark Yao <markyao0591@gmail.com> <mark.yao@rock-chips.com>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@ginzinger.com>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
Martin Kepplinger-Novakovic <martink@posteo.de> <martin.kepplinger-novakovic@ginzinger.com>
Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> <martyna.szapar-mudlaw@intel.com>
Mathieu Othacehe <othacehe@gnu.org> <m.othacehe@gmail.com>
Mat Martineau <martineau@kernel.org> <mathew.j.martineau@linux.intel.com>
@@ -856,6 +855,9 @@ Vivien Didelot <vivien.didelot@gmail.com> <vivien.didelot@savoirfairelinux.com>
Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com>
Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
WangYuli <wangyuli@aosc.io> <wangyl5933@chinaunicom.cn>
WangYuli <wangyuli@aosc.io> <wangyuli@deepin.org>
WangYuli <wangyuli@aosc.io> <wangyuli@uniontech.com>
Weiwen Hu <huweiwen@linux.alibaba.com> <sehuww@mail.scut.edu.cn>
WeiXiong Liao <gmpy.liaowx@gmail.com> <liaoweixiong@allwinnertech.com>
Wen Gong <quic_wgong@quicinc.com> <wgong@codeaurora.org>
@@ -867,6 +869,7 @@ Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn>
Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com>
Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com>
Yu-Chun Lin <eleanor.lin@realtek.com> <eleanor15x@gmail.com>
Yusuke Goda <goda.yusuke@renesas.com>
Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>

View File

@@ -2056,16 +2056,15 @@ S: Korte Heul 95
S: 1403 ND BUSSUM
S: The Netherlands
N: Martin Kepplinger
N: Martin Kepplinger-Novakovic
E: martink@posteo.de
E: martin.kepplinger@puri.sm
W: http://www.martinkepplinger.com
P: 4096R/5AB387D3 F208 2B88 0F9E 4239 3468 6E3F 5003 98DF 5AB3 87D3
D: mma8452 accelerators iio driver
D: pegasus_notetaker input driver
D: imx8m media and hi846 sensor driver
D: Kernel fixes and cleanups
S: Garnisonstraße 26
S: 4020 Linz
S: Keplerstr. 6
S: 4050 Traun
S: Austria
N: Karl Keyte

View File

@@ -0,0 +1,71 @@
NOTE: all the ABIs listed in this file are deprecated and will be removed after 2028.
Here are the alternative ABIs:
+------------------------------------+-----------------------------------------+
| Deprecated | Alternative |
+------------------------------------+-----------------------------------------+
| /sys/kernel/kexec_loaded | /sys/kernel/kexec/loaded |
+------------------------------------+-----------------------------------------+
| /sys/kernel/kexec_crash_loaded | /sys/kernel/kexec/crash_loaded |
+------------------------------------+-----------------------------------------+
| /sys/kernel/kexec_crash_size | /sys/kernel/kexec/crash_size |
+------------------------------------+-----------------------------------------+
| /sys/kernel/crash_elfcorehdr_size | /sys/kernel/kexec/crash_elfcorehdr_size |
+------------------------------------+-----------------------------------------+
| /sys/kernel/kexec_crash_cma_ranges | /sys/kernel/kexec/crash_cma_ranges |
+------------------------------------+-----------------------------------------+
What: /sys/kernel/kexec_loaded
Date: Jun 2006
Contact: kexec@lists.infradead.org
Description: read only
Indicates whether a new kernel image has been loaded
into memory using the kexec system call. It shows 1 if
a kexec image is present and ready to boot, or 0 if none
is loaded.
User: kexec tools, kdump service
What: /sys/kernel/kexec_crash_loaded
Date: Jun 2006
Contact: kexec@lists.infradead.org
Description: read only
Indicates whether a crash (kdump) kernel is currently
loaded into memory. It shows 1 if a crash kernel has been
successfully loaded for panic handling, or 0 if no crash
kernel is present.
User: Kexec tools, Kdump service
What: /sys/kernel/kexec_crash_size
Date: Dec 2009
Contact: kexec@lists.infradead.org
Description: read/write
Shows the amount of memory reserved for loading the crash
(kdump) kernel. It reports the size, in bytes, of the
crash kernel area defined by the crashkernel= parameter.
This interface also allows reducing the crashkernel
reservation by writing a smaller value, and the reclaimed
space is added back to the system RAM.
User: Kdump service
What: /sys/kernel/crash_elfcorehdr_size
Date: Aug 2023
Contact: kexec@lists.infradead.org
Description: read only
Indicates the preferred size of the memory buffer for the
ELF core header used by the crash (kdump) kernel. It defines
how much space is needed to hold metadata about the crashed
system, including CPU and memory information. This information
is used by the user space utility kexec to support updating the
in-kernel kdump image during hotplug operations.
User: Kexec tools
What: /sys/kernel/kexec_crash_cma_ranges
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description: read only
Provides information about the memory ranges reserved from
the Contiguous Memory Allocator (CMA) area that are allocated
to the crash (kdump) kernel. It lists the start and end physical
addresses of CMA regions assigned for crashkernel use.
User: kdump service

View File

@@ -0,0 +1,61 @@
What: /sys/kernel/kexec/*
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description:
The /sys/kernel/kexec/* directory contains sysfs files
that provide information about the configuration status
of kexec and kdump.
What: /sys/kernel/kexec/loaded
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description: read only
Indicates whether a new kernel image has been loaded
into memory using the kexec system call. It shows 1 if
a kexec image is present and ready to boot, or 0 if none
is loaded.
User: kexec tools, kdump service
What: /sys/kernel/kexec/crash_loaded
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description: read only
Indicates whether a crash (kdump) kernel is currently
loaded into memory. It shows 1 if a crash kernel has been
successfully loaded for panic handling, or 0 if no crash
kernel is present.
User: Kexec tools, Kdump service
What: /sys/kernel/kexec/crash_size
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description: read/write
Shows the amount of memory reserved for loading the crash
(kdump) kernel. It reports the size, in bytes, of the
crash kernel area defined by the crashkernel= parameter.
This interface also allows reducing the crashkernel
reservation by writing a smaller value, and the reclaimed
space is added back to the system RAM.
User: Kdump service
What: /sys/kernel/kexec/crash_elfcorehdr_size
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description: read only
Indicates the preferred size of the memory buffer for the
ELF core header used by the crash (kdump) kernel. It defines
how much space is needed to hold metadata about the crashed
system, including CPU and memory information. This information
is used by the user space utility kexec to support updating the
in-kernel kdump image during hotplug operations.
User: Kexec tools
What: /sys/kernel/kexec/crash_cma_ranges
Date: Nov 2025
Contact: kexec@lists.infradead.org
Description: read only
Provides information about the memory ranges reserved from
the Contiguous Memory Allocator (CMA) area that are allocated
to the crash (kdump) kernel. It lists the start and end physical
addresses of CMA regions assigned for crashkernel use.
User: kdump service

View File

@@ -223,12 +223,13 @@ The flags are::
f Include the function name
s Include the source file name
l Include line number
d Include call trace
For ``print_hex_dump_debug()`` and ``print_hex_dump_bytes()``, only
the ``p`` flag has meaning, other flags are ignored.
Note the regexp ``^[-+=][fslmpt_]+$`` matches a flags specification.
To clear all flags at once, use ``=_`` or ``-fslmpt``.
Note the regexp ``^[-+=][fslmptd_]+$`` matches a flags specification.
To clear all flags at once, use ``=_`` or ``-fslmptd``.
Debug messages during Boot Process

View File

@@ -2114,14 +2114,20 @@ Kernel parameters
the added memory block itself is not affected.
hung_task_panic=
[KNL] Should the hung task detector generate panics.
Format: 0 | 1
[KNL] Number of hung tasks to trigger kernel panic.
Format: <int>
A value of 1 instructs the kernel to panic when a
hung task is detected. The default value is controlled
by the CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time
option. The value selected by this boot parameter can
be changed later by the kernel.hung_task_panic sysctl.
When set to a non-zero value, a kernel panic will be triggered if
the number of detected hung tasks reaches this value.
0: don't panic
1: panic immediately on first hung task
N: panic after N hung tasks are detected in a single scan
The default value is controlled by the
CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time option. The value
selected by this boot parameter can be changed later by the
kernel.hung_task_panic sysctl.
hvc_iucv= [S390] Number of z/VM IUCV hypervisor console (HVC)
terminal devices. Valid values: 0..8

View File

@@ -397,13 +397,14 @@ a hung task is detected.
hung_task_panic
===============
Controls the kernel's behavior when a hung task is detected.
When set to a non-zero value, a kernel panic will be triggered if the
number of hung tasks found during a single scan reaches this value.
This file shows up if ``CONFIG_DETECT_HUNG_TASK`` is enabled.
= =================================================
= =======================================================
0 Continue operation. This is the default behavior.
1 Panic immediately.
= =================================================
N Panic when N hung tasks are found during a single scan.
= =======================================================
hung_task_check_count
@@ -421,6 +422,11 @@ the system boot.
This file shows up if ``CONFIG_DETECT_HUNG_TASK`` is enabled.
hung_task_sys_info
==================
A comma separated list of extra system information to be dumped when
a hung task is detected, for example, "tasks,mem,timers,locks,...".
Refer to the 'panic_sys_info' section below for more details.
hung_task_timeout_secs
======================
@@ -515,6 +521,15 @@ default), only processes with the CAP_SYS_ADMIN capability may create
io_uring instances.
kernel_sys_info
===============
A comma separated list of extra system information to be dumped when
a soft or hard lockup is detected, for example, "tasks,mem,timers,locks,...".
Refer to the 'panic_sys_info' section below for more details.
It serves as the default kernel control knob, which takes effect when a
kernel module calls sys_info() with a parameter of 0.
kexec_load_disabled
===================
@@ -576,6 +591,11 @@ if leaking kernel pointer values to unprivileged users is a concern.
When ``kptr_restrict`` is set to 2, kernel pointers printed using
%pK will be replaced with 0s regardless of privileges.
softlockup_sys_info & hardlockup_sys_info
=========================================
A comma separated list of extra system information to be dumped when
a soft or hard lockup is detected, for example, "tasks,mem,timers,locks,...".
Refer to the 'panic_sys_info' section below for more details.
modprobe
========
@@ -910,8 +930,8 @@ to 'panic_print'. Possible values are:
============= ===================================================
tasks print all tasks info
mem print system memory info
timer print timers info
lock print locks info if CONFIG_LOCKDEP is on
timers print timers info
locks print locks info if CONFIG_LOCKDEP is on
ftrace print ftrace buffer
all_bt print all CPUs backtrace (if available in the arch)
blocked_tasks print only tasks in uninterruptible (blocked) state

View File

@@ -138,6 +138,7 @@ Documents that don't fit elsewhere or which have yet to be categorized.
:maxdepth: 1
librs
liveupdate
netlink
.. only:: subproject and html

View File

@@ -70,5 +70,5 @@ in the FDT. That state is called the KHO finalization phase.
Public API
==========
.. kernel-doc:: kernel/kexec_handover.c
.. kernel-doc:: kernel/liveupdate/kexec_handover.c
:export:

View File

@@ -0,0 +1,61 @@
.. SPDX-License-Identifier: GPL-2.0
========================
Live Update Orchestrator
========================
:Author: Pasha Tatashin <pasha.tatashin@soleen.com>
.. kernel-doc:: kernel/liveupdate/luo_core.c
:doc: Live Update Orchestrator (LUO)
LUO Sessions
============
.. kernel-doc:: kernel/liveupdate/luo_session.c
:doc: LUO Sessions
LUO Preserving File Descriptors
===============================
.. kernel-doc:: kernel/liveupdate/luo_file.c
:doc: LUO File Descriptors
Live Update Orchestrator ABI
============================
.. kernel-doc:: include/linux/kho/abi/luo.h
:doc: Live Update Orchestrator ABI
The following types of file descriptors can be preserved
.. toctree::
:maxdepth: 1
../mm/memfd_preservation
Public API
==========
.. kernel-doc:: include/linux/liveupdate.h
.. kernel-doc:: include/linux/kho/abi/luo.h
:functions:
.. kernel-doc:: kernel/liveupdate/luo_core.c
:export:
.. kernel-doc:: kernel/liveupdate/luo_file.c
:export:
Internal API
============
.. kernel-doc:: kernel/liveupdate/luo_core.c
:internal:
.. kernel-doc:: kernel/liveupdate/luo_session.c
:internal:
.. kernel-doc:: kernel/liveupdate/luo_file.c
:internal:
See Also
========
- :doc:`Live Update uAPI </userspace-api/liveupdate>`
- :doc:`/core-api/kho/concepts`

View File

@@ -1238,6 +1238,16 @@ Others
The patch file does not appear to be in unified-diff format. Please
regenerate the patch file before sending it to the maintainer.
**PLACEHOLDER_USE**
Detects unhandled placeholder text left in cover letters or commit headers/logs.
Common placeholders include lines like::
*** SUBJECT HERE ***
*** BLURB HERE ***
These typically come from autogenerated templates. Replace them with a proper
subject and description before sending.
**PRINTF_0XDECIMAL**
Prefixing 0x with decimal output is defective and should be corrected.

View File

@@ -0,0 +1,60 @@
.. SPDX-License-Identifier: GPL-2.0
=================================================
Recoverable Hardware Error Tracking in vmcoreinfo
=================================================
Overview
--------
This feature provides a generic infrastructure within the Linux kernel to track
and log recoverable hardware errors. These are recoverable errors that might
not cause immediate panics but may influence system health, mainly because
new code paths will be executed in the kernel.
By recording counts and timestamps of recoverable errors into the vmcoreinfo
crash dump notes, this infrastructure aids post-mortem crash analysis tools in
correlating hardware events with kernel failures. This enables faster triage
and better understanding of root causes, especially in large-scale cloud
environments where hardware issues are common.
Benefits
--------
- Facilitates correlation of hardware recoverable errors with kernel panics or
unusual code paths that lead to system crashes.
- Provides operators and cloud providers quick insights, improving reliability
and reducing troubleshooting time.
- Complements existing full hardware diagnostics without replacing them.
Data Exposure and Consumption
-----------------------------
- The tracked error data consists of per-error-type counts and timestamps of
last occurrence.
- This data is stored in the `hwerror_data` array, categorized by error source
types like CPU, memory, PCI, CXL, and others.
- It is exposed via vmcoreinfo crash dump notes and can be read using tools
like `crash`, `drgn`, or other kernel crash analysis utilities.
- There is no way to read this data other than from crash dumps.
- These errors are divided by area, which includes CPU, Memory, PCI, CXL and
others.
Typical usage example (in drgn REPL):
.. code-block:: python
>>> prog['hwerror_data']
(struct hwerror_info[HWERR_RECOV_MAX]){
{
.count = (int)844,
.timestamp = (time64_t)1752852018,
},
...
}
Enabling
--------
- This feature is enabled when CONFIG_VMCORE_INFO is set.

View File

@@ -97,6 +97,7 @@ Subsystem-specific APIs
gpio/index
hsi
hte/index
hw-recoverable-errors
i2c
iio/index
infiniband

View File

@@ -48,6 +48,7 @@ documentation, or deleted if it has served its purpose.
hugetlbfs_reserv
ksm
memory-model
memfd_preservation
mmu_notifier
multigen_lru
numa

View File

@@ -0,0 +1,23 @@
.. SPDX-License-Identifier: GPL-2.0-or-later
==========================
Memfd Preservation via LUO
==========================
.. kernel-doc:: mm/memfd_luo.c
:doc: Memfd Preservation via LUO
Memfd Preservation ABI
======================
.. kernel-doc:: include/linux/kho/abi/memfd.h
:doc: DOC: memfd Live Update ABI
.. kernel-doc:: include/linux/kho/abi/memfd.h
:internal:
See Also
========
- :doc:`/core-api/liveupdate`
- :doc:`/core-api/kho/concepts`

View File

@@ -61,6 +61,7 @@ Everything else
:maxdepth: 1
ELF
liveupdate
netlink/index
sysfs-platform_profile
vduse

View File

@@ -385,6 +385,8 @@ Code Seq# Include File Comments
0xB8 01-02 uapi/misc/mrvl_cn10k_dpi.h Marvell CN10K DPI driver
0xB8 all uapi/linux/mshv.h Microsoft Hyper-V /dev/mshv driver
<mailto:linux-hyperv@vger.kernel.org>
0xBA 00-0F uapi/linux/liveupdate.h Pasha Tatashin
<mailto:pasha.tatashin@soleen.com>
0xC0 00-0F linux/usb/iowarrior.h
0xCA 00-0F uapi/misc/cxl.h Dead since 6.15
0xCA 10-2F uapi/misc/ocxl.h

View File

@@ -0,0 +1,20 @@
.. SPDX-License-Identifier: GPL-2.0
================
Live Update uAPI
================
:Author: Pasha Tatashin <pasha.tatashin@soleen.com>
ioctl interface
===============
.. kernel-doc:: kernel/liveupdate/luo_core.c
:doc: LUO ioctl Interface
ioctl uAPI
===========
.. kernel-doc:: include/uapi/linux/liveupdate.h
See Also
========
- :doc:`Live Update Orchestrator </core-api/liveupdate>`

View File

@@ -11659,7 +11659,7 @@ T: git git://linuxtv.org/media.git
F: drivers/media/i2c/hi556.c
HYNIX HI846 SENSOR DRIVER
M: Martin Kepplinger <martin.kepplinger@puri.sm>
M: Martin Kepplinger-Novakovic <martink@posteo.de>
L: linux-media@vger.kernel.org
S: Maintained
F: drivers/media/i2c/hi846.c
@@ -11744,6 +11744,7 @@ HUNG TASK DETECTOR
M: Andrew Morton <akpm@linux-foundation.org>
R: Lance Yang <lance.yang@linux.dev>
R: Masami Hiramatsu <mhiramat@kernel.org>
R: Petr Mladek <pmladek@suse.com>
L: linux-kernel@vger.kernel.org
S: Maintained
F: include/linux/hung_task.h
@@ -13891,14 +13892,15 @@ F: kernel/kexec*
KEXEC HANDOVER (KHO)
M: Alexander Graf <graf@amazon.com>
M: Mike Rapoport <rppt@kernel.org>
M: Changyuan Lyu <changyuanl@google.com>
M: Pasha Tatashin <pasha.tatashin@soleen.com>
R: Pratyush Yadav <pratyush@kernel.org>
L: kexec@lists.infradead.org
L: linux-mm@kvack.org
S: Maintained
F: Documentation/admin-guide/mm/kho.rst
F: Documentation/core-api/kho/*
F: include/linux/kexec_handover.h
F: kernel/kexec_handover.c
F: kernel/liveupdate/kexec_handover*
F: lib/test_kho.c
F: tools/testing/selftests/kho/
@@ -14567,6 +14569,22 @@ F: samples/livepatch/
F: scripts/livepatch/
F: tools/testing/selftests/livepatch/
LIVE UPDATE
M: Pasha Tatashin <pasha.tatashin@soleen.com>
M: Mike Rapoport <rppt@kernel.org>
R: Pratyush Yadav <pratyush@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/core-api/liveupdate.rst
F: Documentation/mm/memfd_preservation.rst
F: Documentation/userspace-api/liveupdate.rst
F: include/linux/liveupdate.h
F: include/linux/liveupdate/
F: include/uapi/linux/liveupdate.h
F: kernel/liveupdate/
F: mm/memfd_luo.c
F: tools/testing/selftests/liveupdate/
LLC (802.2)
L: netdev@vger.kernel.org
S: Odd fixes
@@ -15668,7 +15686,7 @@ F: include/media/imx.h
MEDIA DRIVERS FOR FREESCALE IMX7/8
M: Rui Miguel Silva <rmfrfs@gmail.com>
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
M: Martin Kepplinger <martin.kepplinger@puri.sm>
M: Martin Kepplinger-Novakovic <martink@posteo.de>
R: Purism Kernel Team <kernel@puri.sm>
R: Frank Li <Frank.Li@nxp.com>
L: imx@lists.linux.dev
@@ -18420,10 +18438,11 @@ F: net/sunrpc/
NILFS2 FILESYSTEM
M: Ryusuke Konishi <konishi.ryusuke@gmail.com>
M: Viacheslav Dubeyko <slava@dubeyko.com>
L: linux-nilfs@vger.kernel.org
S: Supported
S: Maintained
W: https://nilfs.sourceforge.io/
T: git https://github.com/konis/nilfs2.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/vdubeyko/nilfs2.git
F: Documentation/filesystems/nilfs2.rst
F: fs/nilfs2/
F: include/trace/events/nilfs2.h
@@ -25103,7 +25122,6 @@ F: drivers/regulator/sy8106a-regulator.c
SYNC FILE FRAMEWORK
M: Sumit Semwal <sumit.semwal@linaro.org>
R: Gustavo Padovan <gustavo@padovan.org>
L: linux-media@vger.kernel.org
L: dri-devel@lists.freedesktop.org
S: Maintained
@@ -26308,7 +26326,7 @@ M: Jarkko Sakkinen <jarkko@kernel.org>
R: Jason Gunthorpe <jgg@ziepe.ca>
L: linux-integrity@vger.kernel.org
S: Maintained
W: https://codeberg.org/jarkko/linux-tpmdd-test
W: https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd-test.git/about/
Q: https://patchwork.kernel.org/project/linux-integrity/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd.git
F: Documentation/devicetree/bindings/tpm/

View File

@@ -232,17 +232,14 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
config ARCH_USE_BUILTIN_BSWAP
bool
help
Modern versions of GCC (since 4.4) have builtin functions
for handling byte-swapping. Using these, instead of the old
inline assembler that the architecture code provides in the
__arch_bswapXX() macros, allows the compiler to see what's
happening and offers more opportunity for optimisation. In
particular, the compiler will be able to combine the byteswap
with a nearby load or store and use load-and-swap or
store-and-swap instructions if the architecture has them. It
should almost *never* result in code which is worse than the
hand-coded assembler in <asm/swab.h>. But just in case it
does, the use of the builtins is optional.
GCC and Clang have builtin functions for handling byte-swapping.
Using these allows the compiler to see what's happening and
offers more opportunity for optimisation. In particular, the
compiler will be able to combine the byteswap with a nearby load
or store and use load-and-swap or store-and-swap instructions if
the architecture has them. It should almost *never* result in code
which is worse than the hand-coded assembler in <asm/swab.h>.
But just in case it does, the use of the builtins is optional.
Any architecture with load-and-swap or store-and-swap
instructions should set this. And it shouldn't hurt to set it

View File

@@ -1161,8 +1161,6 @@ config AEABI
disambiguate both ABIs and allow for backward compatibility support
(selected with CONFIG_OABI_COMPAT).
To use this you need GCC version 4.0.0 or later.
config OABI_COMPAT
bool "Allow old ABI binaries to run with this kernel (EXPERIMENTAL)"
depends on AEABI && !THUMB2_KERNEL

View File

@@ -308,7 +308,7 @@ CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_TIMEOUT=-1
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=1
CONFIG_WQ_WATCHDOG=y
# CONFIG_SCHED_DEBUG is not set
CONFIG_FUNCTION_TRACER=y

View File

@@ -5,4 +5,12 @@
/* crash kernel regions are Page size aligned */
#define CRASH_ALIGN PAGE_SIZE
#ifdef CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
static inline bool arch_add_crash_res_to_iomem(void)
{
return false;
}
#define arch_add_crash_res_to_iomem arch_add_crash_res_to_iomem
#endif
#endif /* _ASM_POWERPC_CRASH_RESERVE_H */

View File

@@ -60,6 +60,12 @@ static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
}
#define div_u64_rem div_u64_rem
/*
* gcc tends to zero extend 32bit values and do full 64bit maths.
* Define asm functions that avoid this.
* (clang generates better code for the C versions.)
*/
#ifndef __clang__
static inline u64 mul_u32_u32(u32 a, u32 b)
{
u32 high, low;
@@ -71,6 +77,19 @@ static inline u64 mul_u32_u32(u32 a, u32 b)
}
#define mul_u32_u32 mul_u32_u32
static inline u64 add_u64_u32(u64 a, u32 b)
{
u32 high = a >> 32, low = a;
asm ("addl %[b], %[low]; adcl $0, %[high]"
: [low] "+r" (low), [high] "+r" (high)
: [b] "rm" (b) );
return low | (u64)high << 32;
}
#define add_u64_u32 add_u64_u32
#endif
/*
* __div64_32() is never called on x86, so prevent the
* generic definition from getting built.
@@ -84,21 +103,25 @@ static inline u64 mul_u32_u32(u32 a, u32 b)
* Will generate an #DE when the result doesn't fit u64, could fix with an
* __ex_table[] entry when it becomes an issue.
*/
static inline u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
static inline u64 mul_u64_add_u64_div_u64(u64 rax, u64 mul, u64 add, u64 div)
{
u64 q;
u64 rdx;
asm ("mulq %2; divq %3" : "=a" (q)
: "a" (a), "rm" (mul), "rm" (div)
: "rdx");
asm ("mulq %[mul]" : "+a" (rax), "=d" (rdx) : [mul] "rm" (mul));
return q;
if (!statically_true(!add))
asm ("addq %[add], %[lo]; adcq $0, %[hi]" :
[lo] "+r" (rax), [hi] "+r" (rdx) : [add] "irm" (add));
asm ("divq %[div]" : "+a" (rax), "+d" (rdx) : [div] "rm" (div));
return rax;
}
#define mul_u64_u64_div_u64 mul_u64_u64_div_u64
#define mul_u64_add_u64_div_u64 mul_u64_add_u64_div_u64
static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 div)
{
return mul_u64_u64_div_u64(a, mul, div);
return mul_u64_add_u64_div_u64(a, mul, 0, div);
}
#define mul_u64_u32_div mul_u64_u32_div

View File

@@ -45,6 +45,7 @@
#include <linux/task_work.h>
#include <linux/hardirq.h>
#include <linux/kexec.h>
#include <linux/vmcore_info.h>
#include <asm/fred.h>
#include <asm/cpu_device_id.h>
@@ -1729,6 +1730,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
}
out:
/* Given it didn't panic, mark it as recoverable */
hwerr_log_error_type(HWERR_RECOV_OTHERS);
instrumentation_end();
clear:

View File

@@ -44,6 +44,7 @@
#include <linux/uuid.h>
#include <linux/ras.h>
#include <linux/task_work.h>
#include <linux/vmcore_info.h>
#include <acpi/actbl1.h>
#include <acpi/ghes.h>
@@ -864,6 +865,40 @@ int cxl_cper_kfifo_get(struct cxl_cper_work_data *wd)
}
EXPORT_SYMBOL_NS_GPL(cxl_cper_kfifo_get, "CXL");
static void ghes_log_hwerr(int sev, guid_t *sec_type)
{
if (sev != CPER_SEV_RECOVERABLE)
return;
if (guid_equal(sec_type, &CPER_SEC_PROC_ARM) ||
guid_equal(sec_type, &CPER_SEC_PROC_GENERIC) ||
guid_equal(sec_type, &CPER_SEC_PROC_IA)) {
hwerr_log_error_type(HWERR_RECOV_CPU);
return;
}
if (guid_equal(sec_type, &CPER_SEC_CXL_PROT_ERR) ||
guid_equal(sec_type, &CPER_SEC_CXL_GEN_MEDIA_GUID) ||
guid_equal(sec_type, &CPER_SEC_CXL_DRAM_GUID) ||
guid_equal(sec_type, &CPER_SEC_CXL_MEM_MODULE_GUID)) {
hwerr_log_error_type(HWERR_RECOV_CXL);
return;
}
if (guid_equal(sec_type, &CPER_SEC_PCIE) ||
guid_equal(sec_type, &CPER_SEC_PCI_X_BUS)) {
hwerr_log_error_type(HWERR_RECOV_PCI);
return;
}
if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
hwerr_log_error_type(HWERR_RECOV_MEMORY);
return;
}
hwerr_log_error_type(HWERR_RECOV_OTHERS);
}
static void ghes_do_proc(struct ghes *ghes,
const struct acpi_hest_generic_status *estatus)
{
@@ -885,6 +920,7 @@ static void ghes_do_proc(struct ghes *ghes,
if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
fru_text = gdata->fru_text;
ghes_log_hwerr(sev, sec_type);
if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);

View File

@@ -178,7 +178,7 @@ struct nvme_dhchap_key *nvme_auth_extract_key(unsigned char *secret,
if (!key)
return ERR_PTR(-ENOMEM);
key_len = base64_decode(secret, allocated_len, key->key);
key_len = base64_decode(secret, allocated_len, key->key, true, BASE64_STD);
if (key_len < 0) {
pr_debug("base64 key decoding error %d\n",
key_len);
@@ -663,7 +663,7 @@ int nvme_auth_generate_digest(u8 hmac_id, u8 *psk, size_t psk_len,
if (ret)
goto out_free_digest;
ret = base64_encode(digest, digest_len, enc);
ret = base64_encode(digest, digest_len, enc, true, BASE64_STD);
if (ret < hmac_len) {
ret = -ENOKEY;
goto out_free_digest;

View File

@@ -30,6 +30,7 @@
#include <linux/kfifo.h>
#include <linux/ratelimit.h>
#include <linux/slab.h>
#include <linux/vmcore_info.h>
#include <acpi/apei.h>
#include <acpi/ghes.h>
#include <ras/ras_event.h>
@@ -765,6 +766,7 @@ static void pci_dev_aer_stats_incr(struct pci_dev *pdev,
break;
case AER_NONFATAL:
aer_info->dev_total_nonfatal_errs++;
hwerr_log_error_type(HWERR_RECOV_PCI);
counter = &aer_info->dev_nonfatal_errs[0];
max = AER_MAX_TYPEOF_UNCOR_ERRS;
break;

View File

@@ -15,59 +15,6 @@
#include "mds_client.h"
#include "crypto.h"
/*
* The base64url encoding used by fscrypt includes the '_' character, which may
* cause problems in snapshot names (which can not start with '_'). Thus, we
* used the base64 encoding defined for IMAP mailbox names (RFC 3501) instead,
* which replaces '-' and '_' by '+' and ','.
*/
static const char base64_table[65] =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";
int ceph_base64_encode(const u8 *src, int srclen, char *dst)
{
u32 ac = 0;
int bits = 0;
int i;
char *cp = dst;
for (i = 0; i < srclen; i++) {
ac = (ac << 8) | src[i];
bits += 8;
do {
bits -= 6;
*cp++ = base64_table[(ac >> bits) & 0x3f];
} while (bits >= 6);
}
if (bits)
*cp++ = base64_table[(ac << (6 - bits)) & 0x3f];
return cp - dst;
}
int ceph_base64_decode(const char *src, int srclen, u8 *dst)
{
u32 ac = 0;
int bits = 0;
int i;
u8 *bp = dst;
for (i = 0; i < srclen; i++) {
const char *p = strchr(base64_table, src[i]);
if (p == NULL || src[i] == 0)
return -1;
ac = (ac << 6) | (p - base64_table);
bits += 6;
if (bits >= 8) {
bits -= 8;
*bp++ = (u8)(ac >> bits);
}
}
if (ac & ((1 << bits) - 1))
return -1;
return bp - dst;
}
static int ceph_crypt_get_context(struct inode *inode, void *ctx, size_t len)
{
struct ceph_inode_info *ci = ceph_inode(inode);
@@ -318,7 +265,7 @@ int ceph_encode_encrypted_dname(struct inode *parent, char *buf, int elen)
}
/* base64 encode the encrypted name */
elen = ceph_base64_encode(cryptbuf, len, p);
elen = base64_encode(cryptbuf, len, p, false, BASE64_IMAP);
doutc(cl, "base64-encoded ciphertext name = %.*s\n", elen, p);
/* To understand the 240 limit, see CEPH_NOHASH_NAME_MAX comments */
@@ -412,7 +359,8 @@ int ceph_fname_to_usr(const struct ceph_fname *fname, struct fscrypt_str *tname,
tname = &_tname;
}
declen = ceph_base64_decode(name, name_len, tname->name);
declen = base64_decode(name, name_len,
tname->name, false, BASE64_IMAP);
if (declen <= 0) {
ret = -EIO;
goto out;
@@ -426,7 +374,7 @@ int ceph_fname_to_usr(const struct ceph_fname *fname, struct fscrypt_str *tname,
ret = fscrypt_fname_disk_to_usr(dir, 0, 0, &iname, oname);
if (!ret && (dir != fname->dir)) {
char tmp_buf[CEPH_BASE64_CHARS(NAME_MAX)];
char tmp_buf[BASE64_CHARS(NAME_MAX)];
name_len = snprintf(tmp_buf, sizeof(tmp_buf), "_%.*s_%ld",
oname->len, oname->name, dir->i_ino);

View File

@@ -8,6 +8,7 @@
#include <crypto/sha2.h>
#include <linux/fscrypt.h>
#include <linux/base64.h>
#define CEPH_FSCRYPT_BLOCK_SHIFT 12
#define CEPH_FSCRYPT_BLOCK_SIZE (_AC(1, UL) << CEPH_FSCRYPT_BLOCK_SHIFT)
@@ -89,11 +90,6 @@ static inline u32 ceph_fscrypt_auth_len(struct ceph_fscrypt_auth *fa)
*/
#define CEPH_NOHASH_NAME_MAX (180 - SHA256_DIGEST_SIZE)
#define CEPH_BASE64_CHARS(nbytes) DIV_ROUND_UP((nbytes) * 4, 3)
int ceph_base64_encode(const u8 *src, int srclen, char *dst);
int ceph_base64_decode(const char *src, int srclen, u8 *dst);
void ceph_fscrypt_set_ops(struct super_block *sb);
void ceph_fscrypt_free_dummy_policy(struct ceph_fs_client *fsc);

View File

@@ -998,13 +998,14 @@ static int prep_encrypted_symlink_target(struct ceph_mds_request *req,
if (err)
goto out;
req->r_path2 = kmalloc(CEPH_BASE64_CHARS(osd_link.len) + 1, GFP_KERNEL);
req->r_path2 = kmalloc(BASE64_CHARS(osd_link.len) + 1, GFP_KERNEL);
if (!req->r_path2) {
err = -ENOMEM;
goto out;
}
len = ceph_base64_encode(osd_link.name, osd_link.len, req->r_path2);
len = base64_encode(osd_link.name, osd_link.len,
req->r_path2, false, BASE64_IMAP);
req->r_path2[len] = '\0';
out:
fscrypt_fname_free_buffer(&osd_link);

View File

@@ -947,7 +947,7 @@ static int decode_encrypted_symlink(struct ceph_mds_client *mdsc,
if (!sym)
return -ENOMEM;
declen = ceph_base64_decode(encsym, enclen, sym);
declen = base64_decode(encsym, enclen, sym, false, BASE64_IMAP);
if (declen < 0) {
pr_err_client(cl,
"can't decode symlink (%d). Content: %.*s\n",

View File

@@ -16,6 +16,7 @@
#include <linux/export.h>
#include <linux/namei.h>
#include <linux/scatterlist.h>
#include <linux/base64.h>
#include "fscrypt_private.h"
@@ -71,7 +72,7 @@ struct fscrypt_nokey_name {
/* Encoded size of max-size no-key name */
#define FSCRYPT_NOKEY_NAME_MAX_ENCODED \
FSCRYPT_BASE64URL_CHARS(FSCRYPT_NOKEY_NAME_MAX)
BASE64_CHARS(FSCRYPT_NOKEY_NAME_MAX)
static inline bool fscrypt_is_dot_dotdot(const struct qstr *str)
{
@@ -162,84 +163,6 @@ static int fname_decrypt(const struct inode *inode,
return 0;
}
static const char base64url_table[65] =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
#define FSCRYPT_BASE64URL_CHARS(nbytes) DIV_ROUND_UP((nbytes) * 4, 3)
/**
* fscrypt_base64url_encode() - base64url-encode some binary data
* @src: the binary data to encode
* @srclen: the length of @src in bytes
* @dst: (output) the base64url-encoded string. Not NUL-terminated.
*
* Encodes data using base64url encoding, i.e. the "Base 64 Encoding with URL
* and Filename Safe Alphabet" specified by RFC 4648. '='-padding isn't used,
* as it's unneeded and not required by the RFC. base64url is used instead of
* base64 to avoid the '/' character, which isn't allowed in filenames.
*
* Return: the length of the resulting base64url-encoded string in bytes.
* This will be equal to FSCRYPT_BASE64URL_CHARS(srclen).
*/
static int fscrypt_base64url_encode(const u8 *src, int srclen, char *dst)
{
u32 ac = 0;
int bits = 0;
int i;
char *cp = dst;
for (i = 0; i < srclen; i++) {
ac = (ac << 8) | src[i];
bits += 8;
do {
bits -= 6;
*cp++ = base64url_table[(ac >> bits) & 0x3f];
} while (bits >= 6);
}
if (bits)
*cp++ = base64url_table[(ac << (6 - bits)) & 0x3f];
return cp - dst;
}
/**
* fscrypt_base64url_decode() - base64url-decode a string
* @src: the string to decode. Doesn't need to be NUL-terminated.
* @srclen: the length of @src in bytes
* @dst: (output) the decoded binary data
*
* Decodes a string using base64url encoding, i.e. the "Base 64 Encoding with
* URL and Filename Safe Alphabet" specified by RFC 4648. '='-padding isn't
* accepted, nor are non-encoding characters such as whitespace.
*
* This implementation hasn't been optimized for performance.
*
* Return: the length of the resulting decoded binary data in bytes,
* or -1 if the string isn't a valid base64url string.
*/
static int fscrypt_base64url_decode(const char *src, int srclen, u8 *dst)
{
u32 ac = 0;
int bits = 0;
int i;
u8 *bp = dst;
for (i = 0; i < srclen; i++) {
const char *p = strchr(base64url_table, src[i]);
if (p == NULL || src[i] == 0)
return -1;
ac = (ac << 6) | (p - base64url_table);
bits += 6;
if (bits >= 8) {
bits -= 8;
*bp++ = (u8)(ac >> bits);
}
}
if (ac & ((1 << bits) - 1))
return -1;
return bp - dst;
}
bool __fscrypt_fname_encrypted_size(const union fscrypt_policy *policy,
u32 orig_len, u32 max_len,
u32 *encrypted_len_ret)
@@ -387,8 +310,8 @@ int fscrypt_fname_disk_to_usr(const struct inode *inode,
nokey_name.sha256);
size = FSCRYPT_NOKEY_NAME_MAX;
}
oname->len = fscrypt_base64url_encode((const u8 *)&nokey_name, size,
oname->name);
oname->len = base64_encode((const u8 *)&nokey_name, size,
oname->name, false, BASE64_URLSAFE);
return 0;
}
EXPORT_SYMBOL(fscrypt_fname_disk_to_usr);
@@ -467,8 +390,8 @@ int fscrypt_setup_filename(struct inode *dir, const struct qstr *iname,
if (fname->crypto_buf.name == NULL)
return -ENOMEM;
ret = fscrypt_base64url_decode(iname->name, iname->len,
fname->crypto_buf.name);
ret = base64_decode(iname->name, iname->len,
fname->crypto_buf.name, false, BASE64_URLSAFE);
if (ret < (int)offsetof(struct fscrypt_nokey_name, bytes[1]) ||
(ret > offsetof(struct fscrypt_nokey_name, sha256) &&
ret != FSCRYPT_NOKEY_NAME_MAX)) {

View File

@@ -49,7 +49,7 @@ static int nilfs_ioctl_wrap_copy(struct the_nilfs *nilfs,
void *, size_t, size_t))
{
void *buf;
void __user *base = (void __user *)(unsigned long)argv->v_base;
void __user *base = u64_to_user_ptr(argv->v_base);
size_t maxmembs, total, n;
ssize_t nr;
int ret, i;
@@ -836,7 +836,6 @@ static int nilfs_ioctl_clean_segments(struct inode *inode, struct file *filp,
sizeof(struct nilfs_bdesc),
sizeof(__u64),
};
void __user *base;
void *kbufs[5];
struct the_nilfs *nilfs;
size_t len, nsegs;
@@ -863,7 +862,7 @@ static int nilfs_ioctl_clean_segments(struct inode *inode, struct file *filp,
* use kmalloc() for its buffer because the memory used for the
* segment numbers is small enough.
*/
kbufs[4] = memdup_array_user((void __user *)(unsigned long)argv[4].v_base,
kbufs[4] = memdup_array_user(u64_to_user_ptr(argv[4].v_base),
nsegs, sizeof(__u64));
if (IS_ERR(kbufs[4])) {
ret = PTR_ERR(kbufs[4]);
@@ -883,20 +882,14 @@ static int nilfs_ioctl_clean_segments(struct inode *inode, struct file *filp,
goto out_free;
len = argv[n].v_size * argv[n].v_nmembs;
base = (void __user *)(unsigned long)argv[n].v_base;
if (len == 0) {
kbufs[n] = NULL;
continue;
}
kbufs[n] = vmalloc(len);
if (!kbufs[n]) {
ret = -ENOMEM;
goto out_free;
}
if (copy_from_user(kbufs[n], base, len)) {
ret = -EFAULT;
vfree(kbufs[n]);
kbufs[n] = vmemdup_user(u64_to_user_ptr(argv[n].v_base), len);
if (IS_ERR(kbufs[n])) {
ret = PTR_ERR(kbufs[n]);
goto out_free;
}
}
@@ -928,7 +921,7 @@ static int nilfs_ioctl_clean_segments(struct inode *inode, struct file *filp,
out_free:
while (--n >= 0)
vfree(kbufs[n]);
kvfree(kbufs[n]);
kfree(kbufs[4]);
out:
mnt_drop_write_file(filp);
@@ -1181,7 +1174,6 @@ static int nilfs_ioctl_set_suinfo(struct inode *inode, struct file *filp,
struct nilfs_transaction_info ti;
struct nilfs_argv argv;
size_t len;
void __user *base;
void *kbuf;
int ret;
@@ -1212,18 +1204,12 @@ static int nilfs_ioctl_set_suinfo(struct inode *inode, struct file *filp,
goto out;
}
base = (void __user *)(unsigned long)argv.v_base;
kbuf = vmalloc(len);
if (!kbuf) {
ret = -ENOMEM;
kbuf = vmemdup_user(u64_to_user_ptr(argv.v_base), len);
if (IS_ERR(kbuf)) {
ret = PTR_ERR(kbuf);
goto out;
}
if (copy_from_user(kbuf, base, len)) {
ret = -EFAULT;
goto out_free;
}
nilfs_transaction_begin(inode->i_sb, &ti, 0);
ret = nilfs_sufile_set_suinfo(nilfs->ns_sufile, kbuf, argv.v_size,
argv.v_nmembs);
@@ -1232,8 +1218,7 @@ static int nilfs_ioctl_set_suinfo(struct inode *inode, struct file *filp,
else
nilfs_transaction_commit(inode->i_sb); /* never fails */
out_free:
vfree(kbuf);
kvfree(kbuf);
out:
mnt_drop_write_file(filp);
return ret;

View File

@@ -302,8 +302,21 @@ static int ocfs2_check_dir_entry(struct inode *dir,
unsigned long offset)
{
const char *error_msg = NULL;
const int rlen = le16_to_cpu(de->rec_len);
const unsigned long next_offset = ((char *) de - buf) + rlen;
unsigned long next_offset;
int rlen;
if (offset > size - OCFS2_DIR_REC_LEN(1)) {
/* Dirent is (maybe partially) beyond the buffer
* boundaries so touching 'de' members is unsafe.
*/
mlog(ML_ERROR, "directory entry (#%llu: offset=%lu) "
"too close to end or out-of-bounds",
(unsigned long long)OCFS2_I(dir)->ip_blkno, offset);
return 0;
}
rlen = le16_to_cpu(de->rec_len);
next_offset = ((char *) de - buf) + rlen;
if (unlikely(rlen < OCFS2_DIR_REC_LEN(1)))
error_msg = "rec_len is smaller than minimal";
@@ -778,6 +791,14 @@ static int ocfs2_dx_dir_lookup_rec(struct inode *inode,
struct ocfs2_extent_block *eb;
struct ocfs2_extent_rec *rec = NULL;
if (le16_to_cpu(el->l_count) !=
ocfs2_extent_recs_per_dx_root(inode->i_sb)) {
ret = ocfs2_error(inode->i_sb,
"Inode %lu has invalid extent list length %u\n",
inode->i_ino, le16_to_cpu(el->l_count));
goto out;
}
if (el->l_tree_depth) {
ret = ocfs2_find_leaf(INODE_CACHE(inode), el, major_hash,
&eb_bh);
@@ -3423,6 +3444,14 @@ static int ocfs2_find_dir_space_id(struct inode *dir, struct buffer_head *di_bh,
offset += le16_to_cpu(de->rec_len);
}
if (!last_de) {
ret = ocfs2_error(sb, "Directory entry (#%llu: size=%lld) "
"is unexpectedly short",
(unsigned long long)OCFS2_I(dir)->ip_blkno,
i_size_read(dir));
goto out;
}
/*
* We're going to require expansion of the directory - figure
* out how many blocks we'll need so that a place for the
@@ -4104,10 +4133,15 @@ static int ocfs2_expand_inline_dx_root(struct inode *dir,
}
dx_root->dr_flags &= ~OCFS2_DX_FLAG_INLINE;
memset(&dx_root->dr_list, 0, osb->sb->s_blocksize -
offsetof(struct ocfs2_dx_root_block, dr_list));
dx_root->dr_list.l_tree_depth = 0;
dx_root->dr_list.l_count =
cpu_to_le16(ocfs2_extent_recs_per_dx_root(osb->sb));
dx_root->dr_list.l_next_free_rec = 0;
memset(&dx_root->dr_list.l_recs, 0,
osb->sb->s_blocksize -
(offsetof(struct ocfs2_dx_root_block, dr_list) +
offsetof(struct ocfs2_extent_list, l_recs)));
/* This should never fail considering we start with an empty
* dx_root. */

View File

@@ -201,13 +201,15 @@ bail:
static int ocfs2_dinode_has_extents(struct ocfs2_dinode *di)
{
/* inodes flagged with other stuff in id2 */
if (di->i_flags & (OCFS2_SUPER_BLOCK_FL | OCFS2_LOCAL_ALLOC_FL |
OCFS2_CHAIN_FL | OCFS2_DEALLOC_FL))
if (le32_to_cpu(di->i_flags) &
(OCFS2_SUPER_BLOCK_FL | OCFS2_LOCAL_ALLOC_FL | OCFS2_CHAIN_FL |
OCFS2_DEALLOC_FL))
return 0;
/* i_flags doesn't indicate when id2 is a fast symlink */
if (S_ISLNK(di->i_mode) && di->i_size && di->i_clusters == 0)
if (S_ISLNK(le16_to_cpu(di->i_mode)) && le64_to_cpu(di->i_size) &&
!le32_to_cpu(di->i_clusters))
return 0;
if (di->i_dyn_features & OCFS2_INLINE_DATA_FL)
if (le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL)
return 0;
return 1;
@@ -1460,7 +1462,7 @@ int ocfs2_validate_inode_block(struct super_block *sb,
goto bail;
}
if (!(di->i_flags & cpu_to_le32(OCFS2_VALID_FL))) {
if (!(le32_to_cpu(di->i_flags) & OCFS2_VALID_FL)) {
rc = ocfs2_error(sb,
"Invalid dinode #%llu: OCFS2_VALID_FL not set\n",
(unsigned long long)bh->b_blocknr);
@@ -1484,6 +1486,41 @@ int ocfs2_validate_inode_block(struct super_block *sb,
goto bail;
}
if ((le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL) &&
le32_to_cpu(di->i_clusters)) {
rc = ocfs2_error(sb, "Invalid dinode %llu: %u clusters\n",
(unsigned long long)bh->b_blocknr,
le32_to_cpu(di->i_clusters));
goto bail;
}
if (le32_to_cpu(di->i_flags) & OCFS2_CHAIN_FL) {
struct ocfs2_chain_list *cl = &di->id2.i_chain;
u16 bpc = 1 << (OCFS2_SB(sb)->s_clustersize_bits -
sb->s_blocksize_bits);
if (le16_to_cpu(cl->cl_count) != ocfs2_chain_recs_per_inode(sb)) {
rc = ocfs2_error(sb, "Invalid dinode %llu: chain list count %u\n",
(unsigned long long)bh->b_blocknr,
le16_to_cpu(cl->cl_count));
goto bail;
}
if (le16_to_cpu(cl->cl_next_free_rec) > le16_to_cpu(cl->cl_count)) {
rc = ocfs2_error(sb, "Invalid dinode %llu: chain list index %u\n",
(unsigned long long)bh->b_blocknr,
le16_to_cpu(cl->cl_next_free_rec));
goto bail;
}
if (OCFS2_SB(sb)->bitmap_blkno &&
OCFS2_SB(sb)->bitmap_blkno != le64_to_cpu(di->i_blkno) &&
le16_to_cpu(cl->cl_bpc) != bpc) {
rc = ocfs2_error(sb, "Invalid dinode %llu: bits per cluster %u\n",
(unsigned long long)bh->b_blocknr,
le16_to_cpu(cl->cl_bpc));
goto bail;
}
}
rc = 0;
bail:
@@ -1671,6 +1708,8 @@ int ocfs2_read_inode_block_full(struct inode *inode, struct buffer_head **bh,
rc = ocfs2_read_blocks(INODE_CACHE(inode), OCFS2_I(inode)->ip_blkno,
1, &tmp, flags, ocfs2_validate_inode_block);
if (rc < 0)
make_bad_inode(inode);
/* If ocfs2_read_blocks() got us a new bh, pass it up. */
if (!rc && !*bh)
*bh = tmp;

View File

@@ -98,7 +98,13 @@ static int __ocfs2_move_extent(handle_t *handle,
rec = &el->l_recs[index];
BUG_ON(ext_flags != rec->e_flags);
if (ext_flags != rec->e_flags) {
ret = ocfs2_error(inode->i_sb,
"Inode %llu has corrupted extent %d with flags 0x%x at cpos %u\n",
(unsigned long long)ino, index, rec->e_flags, cpos);
goto out;
}
/*
* after moving/defraging to new location, the extent is not going
* to be refcounted anymore.
@@ -1036,6 +1042,12 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
if (range.me_threshold > i_size_read(inode))
range.me_threshold = i_size_read(inode);
if (range.me_flags & ~(OCFS2_MOVE_EXT_FL_AUTO_DEFRAG |
OCFS2_MOVE_EXT_FL_PART_DEFRAG)) {
status = -EINVAL;
goto out_free;
}
if (range.me_flags & OCFS2_MOVE_EXT_FL_AUTO_DEFRAG) {
context->auto_defrag = 1;

View File

@@ -468,7 +468,8 @@ struct ocfs2_extent_list {
__le16 l_reserved1;
__le64 l_reserved2; /* Pad to
sizeof(ocfs2_extent_rec) */
/*10*/ struct ocfs2_extent_rec l_recs[]; /* Extent records */
/* Extent records */
/*10*/ struct ocfs2_extent_rec l_recs[] __counted_by_le(l_count);
};
/*
@@ -482,7 +483,8 @@ struct ocfs2_chain_list {
__le16 cl_count; /* Total chains in this list */
__le16 cl_next_free_rec; /* Next unused chain slot */
__le64 cl_reserved1;
/*10*/ struct ocfs2_chain_rec cl_recs[]; /* Chain records */
/* Chain records */
/*10*/ struct ocfs2_chain_rec cl_recs[] __counted_by_le(cl_count);
};
/*
@@ -494,7 +496,8 @@ struct ocfs2_truncate_log {
/*00*/ __le16 tl_count; /* Total records in this log */
__le16 tl_used; /* Number of records in use */
__le32 tl_reserved1;
/*08*/ struct ocfs2_truncate_rec tl_recs[]; /* Truncate records */
/* Truncate records */
/*08*/ struct ocfs2_truncate_rec tl_recs[] __counted_by_le(tl_count);
};
/*
@@ -796,9 +799,10 @@ struct ocfs2_dx_entry_list {
* possible in de_entries */
__le16 de_num_used; /* Current number of
* de_entries entries */
struct ocfs2_dx_entry de_entries[]; /* Indexed dir entries
* in a packed array of
* length de_num_used */
/* Indexed dir entries in a packed
* array of length de_num_used.
*/
struct ocfs2_dx_entry de_entries[] __counted_by_le(de_count);
};
#define OCFS2_DX_FLAG_INLINE 0x01
@@ -934,7 +938,8 @@ struct ocfs2_refcount_list {
__le16 rl_used; /* Current number of used records */
__le32 rl_reserved2;
__le64 rl_reserved1; /* Pad to sizeof(ocfs2_refcount_record) */
/*10*/ struct ocfs2_refcount_rec rl_recs[]; /* Refcount records */
/* Refcount records */
/*10*/ struct ocfs2_refcount_rec rl_recs[] __counted_by_le(rl_count);
};
@@ -1020,7 +1025,8 @@ struct ocfs2_xattr_header {
buckets. A block uses
xb_check and sets
this field to zero.) */
struct ocfs2_xattr_entry xh_entries[]; /* xattr entry list. */
/* xattr entry list. */
struct ocfs2_xattr_entry xh_entries[] __counted_by_le(xh_count);
};
/*

View File

@@ -34,6 +34,7 @@
#include <linux/pagevec.h>
#include <linux/swap.h>
#include <linux/security.h>
#include <linux/string.h>
#include <linux/fsnotify.h>
#include <linux/quotaops.h>
#include <linux/namei.h>
@@ -621,7 +622,7 @@ static int ocfs2_create_refcount_tree(struct inode *inode,
/* Initialize ocfs2_refcount_block. */
rb = (struct ocfs2_refcount_block *)new_bh->b_data;
memset(rb, 0, inode->i_sb->s_blocksize);
strcpy((void *)rb, OCFS2_REFCOUNT_BLOCK_SIGNATURE);
strscpy(rb->rf_signature, OCFS2_REFCOUNT_BLOCK_SIGNATURE);
rb->rf_suballoc_slot = cpu_to_le16(meta_ac->ac_alloc_slot);
rb->rf_suballoc_loc = cpu_to_le64(suballoc_loc);
rb->rf_suballoc_bit = cpu_to_le16(suballoc_bit_start);
@@ -1562,7 +1563,7 @@ static int ocfs2_new_leaf_refcount_block(handle_t *handle,
/* Initialize ocfs2_refcount_block. */
new_rb = (struct ocfs2_refcount_block *)new_bh->b_data;
memset(new_rb, 0, sb->s_blocksize);
strcpy((void *)new_rb, OCFS2_REFCOUNT_BLOCK_SIGNATURE);
strscpy(new_rb->rf_signature, OCFS2_REFCOUNT_BLOCK_SIGNATURE);
new_rb->rf_suballoc_slot = cpu_to_le16(meta_ac->ac_alloc_slot);
new_rb->rf_suballoc_loc = cpu_to_le64(suballoc_loc);
new_rb->rf_suballoc_bit = cpu_to_le16(suballoc_bit_start);

View File

@@ -2908,7 +2908,7 @@ static int ocfs2_create_xattr_block(struct inode *inode,
/* Initialize ocfs2_xattr_block */
xblk = (struct ocfs2_xattr_block *)new_bh->b_data;
memset(xblk, 0, inode->i_sb->s_blocksize);
strcpy((void *)xblk, OCFS2_XATTR_BLOCK_SIGNATURE);
strscpy(xblk->xb_signature, OCFS2_XATTR_BLOCK_SIGNATURE);
xblk->xb_suballoc_slot = cpu_to_le16(ctxt->meta_ac->ac_alloc_slot);
xblk->xb_suballoc_loc = cpu_to_le64(suballoc_loc);
xblk->xb_suballoc_bit = cpu_to_le16(suballoc_bit_start);

View File

@@ -20,7 +20,6 @@
#define KPMSIZE sizeof(u64)
#define KPMMASK (KPMSIZE - 1)
#define KPMBITS (KPMSIZE * BITS_PER_BYTE)
enum kpage_operation {
KPAGE_FLAGS,

View File

@@ -8,9 +8,15 @@
#include <linux/types.h>
enum base64_variant {
BASE64_STD, /* RFC 4648 (standard) */
BASE64_URLSAFE, /* RFC 4648 (base64url) */
BASE64_IMAP, /* RFC 3501 */
};
#define BASE64_CHARS(nbytes) DIV_ROUND_UP((nbytes) * 4, 3)
int base64_encode(const u8 *src, int len, char *dst);
int base64_decode(const char *src, int len, u8 *dst);
int base64_encode(const u8 *src, int len, char *dst, bool padding, enum base64_variant variant);
int base64_decode(const char *src, int len, u8 *dst, bool padding, enum base64_variant variant);
#endif /* _LINUX_BASE64_H */

View File

@@ -273,12 +273,6 @@ static inline void *offset_to_ptr(const int *off)
#endif /* __ASSEMBLY__ */
#ifdef CONFIG_64BIT
#define ARCH_SEL(a,b) a
#else
#define ARCH_SEL(a,b) b
#endif
/*
* Force the compiler to emit 'sym' as a symbol, so that we can reference
* it from inline assembler. Necessary in case 'sym' could be inlined

View File

@@ -32,6 +32,12 @@ int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
void __init reserve_crashkernel_cma(unsigned long long cma_size);
#ifdef CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
#ifndef arch_add_crash_res_to_iomem
static inline bool arch_add_crash_res_to_iomem(void)
{
return true;
}
#endif
#ifndef DEFAULT_CRASH_KERNEL_LOW_SIZE
#define DEFAULT_CRASH_KERNEL_LOW_SIZE (128UL << 20)
#endif

View File

@@ -38,11 +38,12 @@ struct _ddebug {
#define _DPRINTK_FLAGS_INCL_LINENO (1<<3)
#define _DPRINTK_FLAGS_INCL_TID (1<<4)
#define _DPRINTK_FLAGS_INCL_SOURCENAME (1<<5)
#define _DPRINTK_FLAGS_INCL_STACK (1<<6)
#define _DPRINTK_FLAGS_INCL_ANY \
(_DPRINTK_FLAGS_INCL_MODNAME | _DPRINTK_FLAGS_INCL_FUNCNAME |\
_DPRINTK_FLAGS_INCL_LINENO | _DPRINTK_FLAGS_INCL_TID |\
_DPRINTK_FLAGS_INCL_SOURCENAME)
_DPRINTK_FLAGS_INCL_SOURCENAME | _DPRINTK_FLAGS_INCL_STACK)
#if defined DEBUG
#define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT
@@ -160,6 +161,12 @@ void __dynamic_ibdev_dbg(struct _ddebug *descriptor,
const struct ib_device *ibdev,
const char *fmt, ...);
#define __dynamic_dump_stack(desc) \
{ \
if (desc.flags & _DPRINTK_FLAGS_INCL_STACK) \
dump_stack(); \
}
#define DEFINE_DYNAMIC_DEBUG_METADATA_CLS(name, cls, fmt) \
static struct _ddebug __aligned(8) \
__section("__dyndbg") name = { \
@@ -220,8 +227,10 @@ void __dynamic_ibdev_dbg(struct _ddebug *descriptor,
*/
#define __dynamic_func_call_cls(id, cls, fmt, func, ...) do { \
DEFINE_DYNAMIC_DEBUG_METADATA_CLS(id, cls, fmt); \
if (DYNAMIC_DEBUG_BRANCH(id)) \
if (DYNAMIC_DEBUG_BRANCH(id)) { \
func(&id, ##__VA_ARGS__); \
__dynamic_dump_stack(id); \
} \
} while (0)
#define __dynamic_func_call(id, fmt, func, ...) \
__dynamic_func_call_cls(id, _DPRINTK_CLASS_DFLT, fmt, \
@@ -229,8 +238,10 @@ void __dynamic_ibdev_dbg(struct _ddebug *descriptor,
#define __dynamic_func_call_cls_no_desc(id, cls, fmt, func, ...) do { \
DEFINE_DYNAMIC_DEBUG_METADATA_CLS(id, cls, fmt); \
if (DYNAMIC_DEBUG_BRANCH(id)) \
if (DYNAMIC_DEBUG_BRANCH(id)) { \
func(__VA_ARGS__); \
__dynamic_dump_stack(id); \
} \
} while (0)
#define __dynamic_func_call_no_desc(id, fmt, func, ...) \
__dynamic_func_call_cls_no_desc(id, _DPRINTK_CLASS_DFLT, \

View File

@@ -2,22 +2,16 @@
#ifndef LINUX_KEXEC_HANDOVER_H
#define LINUX_KEXEC_HANDOVER_H
#include <linux/types.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/types.h>
struct kho_scratch {
phys_addr_t addr;
phys_addr_t size;
};
/* KHO Notifier index */
enum kho_event {
KEXEC_KHO_FINALIZE = 0,
KEXEC_KHO_ABORT = 1,
};
struct folio;
struct notifier_block;
struct page;
#define DECLARE_KHOSER_PTR(name, type) \
@@ -37,8 +31,6 @@ struct page;
(typeof((s).ptr))((s).phys ? phys_to_virt((s).phys) : NULL); \
})
struct kho_serialization;
struct kho_vmalloc_chunk;
struct kho_vmalloc {
DECLARE_KHOSER_PTR(first, struct kho_vmalloc_chunk *);
@@ -52,17 +44,21 @@ bool kho_is_enabled(void);
bool is_kho_boot(void);
int kho_preserve_folio(struct folio *folio);
void kho_unpreserve_folio(struct folio *folio);
int kho_preserve_pages(struct page *page, unsigned int nr_pages);
void kho_unpreserve_pages(struct page *page, unsigned int nr_pages);
int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation);
void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation);
void *kho_alloc_preserve(size_t size);
void kho_unpreserve_free(void *mem);
void kho_restore_free(void *mem);
struct folio *kho_restore_folio(phys_addr_t phys);
struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages);
void *kho_restore_vmalloc(const struct kho_vmalloc *preservation);
int kho_add_subtree(struct kho_serialization *ser, const char *name, void *fdt);
int kho_add_subtree(const char *name, void *fdt);
void kho_remove_subtree(void *fdt);
int kho_retrieve_subtree(const char *name, phys_addr_t *phys);
int register_kho_notifier(struct notifier_block *nb);
int unregister_kho_notifier(struct notifier_block *nb);
void kho_memory_init(void);
void kho_populate(phys_addr_t fdt_phys, u64 fdt_len, phys_addr_t scratch_phys,
@@ -83,17 +79,31 @@ static inline int kho_preserve_folio(struct folio *folio)
return -EOPNOTSUPP;
}
static inline void kho_unpreserve_folio(struct folio *folio) { }
static inline int kho_preserve_pages(struct page *page, unsigned int nr_pages)
{
return -EOPNOTSUPP;
}
static inline void kho_unpreserve_pages(struct page *page, unsigned int nr_pages) { }
static inline int kho_preserve_vmalloc(void *ptr,
struct kho_vmalloc *preservation)
{
return -EOPNOTSUPP;
}
static inline void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation) { }
static inline void *kho_alloc_preserve(size_t size)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline void kho_unpreserve_free(void *mem) { }
static inline void kho_restore_free(void *mem) { }
static inline struct folio *kho_restore_folio(phys_addr_t phys)
{
return NULL;
@@ -110,30 +120,19 @@ static inline void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
return NULL;
}
static inline int kho_add_subtree(struct kho_serialization *ser,
const char *name, void *fdt)
static inline int kho_add_subtree(const char *name, void *fdt)
{
return -EOPNOTSUPP;
}
static inline void kho_remove_subtree(void *fdt) { }
static inline int kho_retrieve_subtree(const char *name, phys_addr_t *phys)
{
return -EOPNOTSUPP;
}
static inline int register_kho_notifier(struct notifier_block *nb)
{
return -EOPNOTSUPP;
}
static inline int unregister_kho_notifier(struct notifier_block *nb)
{
return -EOPNOTSUPP;
}
static inline void kho_memory_init(void)
{
}
static inline void kho_memory_init(void) { }
static inline void kho_populate(phys_addr_t fdt_phys, u64 fdt_len,
phys_addr_t scratch_phys, u64 scratch_len)

include/linux/kho/abi/luo.h (new file)

@@ -0,0 +1,166 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
/**
* DOC: Live Update Orchestrator ABI
*
* This header defines the stable Application Binary Interface used by the
* Live Update Orchestrator to pass state from a pre-update kernel to a
* post-update kernel. The ABI is built upon the Kexec HandOver framework
* and uses a Flattened Device Tree to describe the preserved data.
*
* This interface is a contract. Any modification to the FDT structure, node
* properties, compatible strings, or the layout of the `__packed` serialization
* structures defined here constitutes a breaking change. Such changes require
* incrementing the version number in the relevant `_COMPATIBLE` string to
* prevent a new kernel from misinterpreting data from an old kernel.
*
* Changes are allowed provided the compatibility version is incremented;
* however, backward/forward compatibility is only guaranteed for kernels
* supporting the same ABI version.
*
* FDT Structure Overview:
* The entire LUO state is encapsulated within a single KHO entry named "LUO".
* This entry contains an FDT with the following layout:
*
* .. code-block:: none
*
* / {
* compatible = "luo-v1";
* liveupdate-number = <...>;
*
* luo-session {
* compatible = "luo-session-v2";
* luo-session-header = <phys_addr_of_session_header_ser>;
* };
* };
*
* Main LUO Node (/):
*
* - compatible: "luo-v1"
* Identifies the overall LUO ABI version.
* - liveupdate-number: u64
* A counter tracking the number of successful live updates performed.
*
* Session Node (luo-session):
* This node describes all preserved user-space sessions.
*
* - compatible: "luo-session-v2"
* Identifies the session ABI version.
* - luo-session-header: u64
* The physical address of a `struct luo_session_header_ser`. This structure
* is the header for a contiguous block of memory containing an array of
* `struct luo_session_ser`, one for each preserved session.
*
* Serialization Structures:
* The FDT properties point to memory regions containing arrays of simple,
* `__packed` structures. These structures contain the actual preserved state.
*
* - struct luo_session_header_ser:
* Header for the session array. Contains the total page count of the
* preserved memory block and the number of `struct luo_session_ser`
* entries that follow.
*
* - struct luo_session_ser:
* Metadata for a single session, including its name and a physical pointer
* to another preserved memory block containing an array of
* `struct luo_file_ser` for all files in that session.
*
* - struct luo_file_ser:
* Metadata for a single preserved file. Contains the `compatible` string to
* find the correct handler in the new kernel, a user-provided `token` for
* identification, and an opaque `data` handle for the handler to use.
*/
#ifndef _LINUX_KHO_ABI_LUO_H
#define _LINUX_KHO_ABI_LUO_H
#include <uapi/linux/liveupdate.h>
/*
* The LUO FDT hooks all LUO state for sessions, fds, etc.
* In the root it also carries "liveupdate-number" 64-bit property that
* corresponds to the number of live-updates performed on this machine.
*/
#define LUO_FDT_SIZE PAGE_SIZE
#define LUO_FDT_KHO_ENTRY_NAME "LUO"
#define LUO_FDT_COMPATIBLE "luo-v1"
#define LUO_FDT_LIVEUPDATE_NUM "liveupdate-number"
#define LIVEUPDATE_HNDL_COMPAT_LENGTH 48
/**
* struct luo_file_ser - Represents a serialized preserved file.
* @compatible: File handler compatible string.
* @data: Private data for the file handler.
* @token: User-provided token for this file.
*
* If this structure is modified, LUO_FDT_SESSION_COMPATIBLE must be updated.
*/
struct luo_file_ser {
char compatible[LIVEUPDATE_HNDL_COMPAT_LENGTH];
u64 data;
u64 token;
} __packed;
/**
* struct luo_file_set_ser - Represents the serialized metadata for a file set
* @files: The physical address of a contiguous memory block that holds
* the serialized state of files (array of luo_file_ser) in this file
* set.
* @count: The total number of files that were part of this session during
* serialization. Used for iteration and validation during
* restoration.
*/
struct luo_file_set_ser {
u64 files;
u64 count;
} __packed;
/*
* LUO FDT session node
* LUO_FDT_SESSION_HEADER: is a u64 physical address of struct
* luo_session_header_ser
*/
#define LUO_FDT_SESSION_NODE_NAME "luo-session"
#define LUO_FDT_SESSION_COMPATIBLE "luo-session-v2"
#define LUO_FDT_SESSION_HEADER "luo-session-header"
/**
* struct luo_session_header_ser - Header for the serialized session data block.
* @count: The number of `struct luo_session_ser` entries that immediately
* follow this header in the memory block.
*
* This structure is located at the beginning of a contiguous block of
* physical memory preserved across the kexec. It provides the necessary
* metadata to interpret the array of session entries that follow.
*
* If this structure is modified, `LUO_FDT_SESSION_COMPATIBLE` must be updated.
*/
struct luo_session_header_ser {
u64 count;
} __packed;
/**
* struct luo_session_ser - Represents the serialized metadata for a LUO session.
* @name: The unique name of the session, provided by the userspace at
* the time of session creation.
* @file_set_ser: Serialized files belonging to this session.
*
* This structure is used to package session-specific metadata for transfer
* between kernels via Kexec Handover. An array of these structures (one per
* session) is created and passed to the new kernel, allowing it to reconstruct
* the session context.
*
* If this structure is modified, `LUO_FDT_SESSION_COMPATIBLE` must be updated.
*/
struct luo_session_ser {
char name[LIVEUPDATE_SESSION_NAME_LENGTH];
struct luo_file_set_ser file_set_ser;
} __packed;
#endif /* _LINUX_KHO_ABI_LUO_H */
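To make the layout above concrete, here is a minimal, hypothetical sketch of how a consumer in the new kernel could walk the preserved session array, assuming it has already read the "luo-session-header" property from the LUO FDT. dump_luo_sessions() and its pr_info() output are illustrative only, not part of the ABI:

#include <linux/io.h>
#include <linux/printk.h>
#include <linux/kho/abi/luo.h>

/*
 * Hypothetical: walk the preserved sessions given the physical address
 * stored in the "luo-session-header" FDT property. The array of
 * struct luo_session_ser immediately follows the header, as documented
 * above.
 */
static void dump_luo_sessions(phys_addr_t header_phys)
{
	struct luo_session_header_ser *hdr = phys_to_virt(header_phys);
	struct luo_session_ser *s = (struct luo_session_ser *)(hdr + 1);
	u64 i;

	for (i = 0; i < hdr->count; i++)
		pr_info("LUO session '%s' with %llu files\n", s[i].name,
			(unsigned long long)s[i].file_set_ser.count);
}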


@@ -0,0 +1,77 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*
* Copyright (C) 2025 Amazon.com Inc. or its affiliates.
* Pratyush Yadav <ptyadav@amazon.de>
*/
#ifndef _LINUX_KHO_ABI_MEMFD_H
#define _LINUX_KHO_ABI_MEMFD_H
#include <linux/types.h>
#include <linux/kexec_handover.h>
/**
* DOC: memfd Live Update ABI
*
* This header defines the ABI for preserving the state of a memfd across a
* kexec reboot using the LUO.
*
* The state is serialized into a packed structure `struct memfd_luo_ser`
* which is handed over to the next kernel via the KHO mechanism.
*
* This interface is a contract. Any modification to the structure layout
* constitutes a breaking change. Such changes require incrementing the
* version number in the MEMFD_LUO_FH_COMPATIBLE string.
*/
/**
* MEMFD_LUO_FOLIO_DIRTY - The folio is dirty.
*
* This flag indicates the folio contains data from the user. A non-dirty folio is
* one that was allocated (say using fallocate(2)) but not written to.
*/
#define MEMFD_LUO_FOLIO_DIRTY BIT(0)
/**
* MEMFD_LUO_FOLIO_UPTODATE - The folio is up-to-date.
*
* An up-to-date folio has been zeroed out. shmem zeroes out folios on first
* use. This flag tracks which folios need zeroing.
*/
#define MEMFD_LUO_FOLIO_UPTODATE BIT(1)
/**
* struct memfd_luo_folio_ser - Serialized state of a single folio.
* @pfn: The page frame number of the folio.
* @flags: Flags to describe the state of the folio.
* @index: The page offset (pgoff_t) of the folio within the original file.
*/
struct memfd_luo_folio_ser {
u64 pfn:52;
u64 flags:12;
u64 index;
} __packed;
/**
* struct memfd_luo_ser - Main serialization structure for a memfd.
* @pos: The file's current position (f_pos).
* @size: The total size of the file in bytes (i_size).
* @nr_folios: Number of folios in the folios array.
* @folios: KHO vmalloc descriptor pointing to the array of
* struct memfd_luo_folio_ser.
*/
struct memfd_luo_ser {
u64 pos;
u64 size;
u64 nr_folios;
struct kho_vmalloc folios;
} __packed;
/* The compatibility string for memfd file handler */
#define MEMFD_LUO_FH_COMPATIBLE "memfd-v1"
#endif /* _LINUX_KHO_ABI_MEMFD_H */
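As an illustration of how these records are meant to be consumed, a hedged sketch follows; dump_memfd_folios() is hypothetical, and the folio array would in practice come from restoring memfd_luo_ser.folios (e.g. via kho_restore_vmalloc()):

#include <linux/printk.h>
#include <linux/kho/abi/memfd.h>

/* Hypothetical consumer: report which preserved folios carry user data. */
static void dump_memfd_folios(const struct memfd_luo_folio_ser *fs, u64 nr)
{
	u64 i;

	for (i = 0; i < nr; i++) {
		if (!(fs[i].flags & MEMFD_LUO_FOLIO_DIRTY))
			continue;	/* allocated but never written to */
		pr_info("pgoff %llu: pfn %llu is dirty\n",
			(unsigned long long)fs[i].index,
			(unsigned long long)fs[i].pfn);
	}
}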

include/linux/liveupdate.h (new file)

@@ -0,0 +1,138 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
#ifndef _LINUX_LIVEUPDATE_H
#define _LINUX_LIVEUPDATE_H
#include <linux/bug.h>
#include <linux/compiler.h>
#include <linux/kho/abi/luo.h>
#include <linux/list.h>
#include <linux/types.h>
#include <uapi/linux/liveupdate.h>
struct liveupdate_file_handler;
struct file;
/**
* struct liveupdate_file_op_args - Arguments for file operation callbacks.
* @handler: The file handler being called.
* @retrieved: The retrieve status for the 'can_finish / finish'
* operation.
* @file: The file object. For retrieve: [OUT] The callback sets
* this to the new file. For other ops: [IN] The caller sets
* this to the file being operated on.
* @serialized_data: The opaque u64 handle, preserve/prepare/freeze may update
* this field.
* @private_data: Private data for the file used to hold runtime state that
* is not preserved. Set by the handler's .preserve()
* callback, and must be freed in the handler's
* .unpreserve() callback.
*
* This structure bundles all parameters for the file operation callbacks.
* The @serialized_data and @file fields are used for both input and output.
*/
struct liveupdate_file_op_args {
struct liveupdate_file_handler *handler;
bool retrieved;
struct file *file;
u64 serialized_data;
void *private_data;
};
/**
* struct liveupdate_file_ops - Callbacks for live-updatable files.
* @can_preserve: Required. Lightweight check to see if this handler is
* compatible with the given file.
* @preserve: Required. Performs state-saving for the file.
* @unpreserve: Required. Cleans up any resources allocated by @preserve.
* @freeze: Optional. Final actions just before kernel transition.
* @unfreeze: Optional. Undo freeze operations.
* @retrieve: Required. Restores the file in the new kernel.
* @can_finish: Optional. Check whether this FD can finish, i.e. whether all
* restoration prerequisites for this FD are satisfied. Called
* before finish so that finish can be invoked successfully for
* all resources in the session.
* @finish: Required. Final cleanup in the new kernel.
* @owner: Module reference
*
* All operations (except can_preserve) receive a pointer to a
* 'struct liveupdate_file_op_args' containing the necessary context.
*/
struct liveupdate_file_ops {
bool (*can_preserve)(struct liveupdate_file_handler *handler,
struct file *file);
int (*preserve)(struct liveupdate_file_op_args *args);
void (*unpreserve)(struct liveupdate_file_op_args *args);
int (*freeze)(struct liveupdate_file_op_args *args);
void (*unfreeze)(struct liveupdate_file_op_args *args);
int (*retrieve)(struct liveupdate_file_op_args *args);
bool (*can_finish)(struct liveupdate_file_op_args *args);
void (*finish)(struct liveupdate_file_op_args *args);
struct module *owner;
};
/**
* struct liveupdate_file_handler - Represents a handler for a live-updatable file type.
* @ops: Callback functions
* @compatible: The compatibility string (e.g., "memfd-v1", "vfiofd-v1")
* that uniquely identifies the file type this handler
* supports. This is matched against the compatible string
* associated with individual &struct file instances.
*
* Modules that want to support live update for specific file types should
* register an instance of this structure. LUO uses this registration to
* determine if a given file can be preserved and to find the appropriate
* operations to manage its state across the update.
*/
struct liveupdate_file_handler {
const struct liveupdate_file_ops *ops;
const char compatible[LIVEUPDATE_HNDL_COMPAT_LENGTH];
/* private: */
/*
* Used for linking this handler instance into a global list of
* registered file handlers.
*/
struct list_head __private list;
};
#ifdef CONFIG_LIVEUPDATE
/* Return true if live update orchestrator is enabled */
bool liveupdate_enabled(void);
/* Called during kexec to tell LUO that the kernel is entering reboot */
int liveupdate_reboot(void);
int liveupdate_register_file_handler(struct liveupdate_file_handler *fh);
int liveupdate_unregister_file_handler(struct liveupdate_file_handler *fh);
#else /* CONFIG_LIVEUPDATE */
static inline bool liveupdate_enabled(void)
{
return false;
}
static inline int liveupdate_reboot(void)
{
return 0;
}
static inline int liveupdate_register_file_handler(struct liveupdate_file_handler *fh)
{
return -EOPNOTSUPP;
}
static inline int liveupdate_unregister_file_handler(struct liveupdate_file_handler *fh)
{
return -EOPNOTSUPP;
}
#endif /* CONFIG_LIVEUPDATE */
#endif /* _LINUX_LIVEUPDATE_H */
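For orientation, a minimal sketch of the registration pattern described above; everything prefixed foofd_ is hypothetical, and a real handler would implement actual state save/restore in the callbacks:

#include <linux/errno.h>
#include <linux/liveupdate.h>
#include <linux/module.h>

/* Hypothetical handler skeleton for a made-up "foofd-v1" file type. */
static bool foofd_can_preserve(struct liveupdate_file_handler *h,
			       struct file *file)
{
	return false;		/* real code: check file->f_op, etc. */
}

static int foofd_preserve(struct liveupdate_file_op_args *args)
{
	args->serialized_data = 0;	/* real code: handle to saved state */
	return 0;
}

static void foofd_unpreserve(struct liveupdate_file_op_args *args) { }

static int foofd_retrieve(struct liveupdate_file_op_args *args)
{
	return -ENOENT;		/* real code: recreate args->file */
}

static void foofd_finish(struct liveupdate_file_op_args *args) { }

static const struct liveupdate_file_ops foofd_ops = {
	.can_preserve	= foofd_can_preserve,
	.preserve	= foofd_preserve,
	.unpreserve	= foofd_unpreserve,
	.retrieve	= foofd_retrieve,
	.finish		= foofd_finish,
	.owner		= THIS_MODULE,
};

static struct liveupdate_file_handler foofd_handler = {
	.ops		= &foofd_ops,
	.compatible	= "foofd-v1",
};

static int __init foofd_liveupdate_init(void)
{
	return liveupdate_register_file_handler(&foofd_handler);
}
module_init(foofd_liveupdate_init);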


@@ -148,11 +148,16 @@ __STRUCT_FRACT(u32)
/**
* abs - return absolute value of an argument
* @x: the value. If it is unsigned type, it is converted to signed type first.
* char is treated as if it was signed (regardless of whether it really is)
* but the macro's return type is preserved as char.
* @x: the value.
*
* Return: an absolute value of x.
* If it is unsigned type, @x is converted to signed type first.
* char is treated as if it was signed (regardless of whether it really is)
* but the macro's return type is preserved as char.
*
* NOTE, for signed type if @x is the minimum, the returned result is undefined
* as there is not enough bits to represent it as a positive number.
*
* Return: an absolute value of @x.
*/
#define abs(x) __abs_choose_expr(x, long long, \
__abs_choose_expr(x, long, \


@@ -158,6 +158,17 @@ static inline u64 mul_u32_u32(u32 a, u32 b)
}
#endif
#ifndef add_u64_u32
/*
* Many a GCC version also messes this up, zero extending b and then
* spilling everything to the stack.
*/
static inline u64 add_u64_u32(u64 a, u32 b)
{
return a + b;
}
#endif
#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__)
#ifndef mul_u64_u32_shr
@@ -282,7 +293,53 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
}
#endif /* mul_u64_u32_div */
u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div);
/**
* mul_u64_add_u64_div_u64 - unsigned 64bit multiply, add, and divide
* @a: first unsigned 64bit multiplicand
* @b: second unsigned 64bit multiplicand
* @c: unsigned 64bit addend
* @d: unsigned 64bit divisor
*
* Multiply two 64bit values together to generate a 128bit product,
* add a third value and then divide by a fourth.
* The generic code divides by 0 if @d is zero and returns ~0 on overflow.
* Architecture specific code may trap on zero or overflow.
*
* Return: (@a * @b + @c) / @d
*/
u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d);
/**
* mul_u64_u64_div_u64 - unsigned 64bit multiply and divide
* @a: first unsigned 64bit multiplicand
* @b: second unsigned 64bit multiplicand
* @d: unsigned 64bit divisor
*
* Multiply two 64bit values together to generate a 128bit product
* and then divide by a third value.
* The generic code divides by 0 if @d is zero and returns ~0 on overflow.
* Architecture specific code may trap on zero or overflow.
*
* Return: @a * @b / @d
*/
#define mul_u64_u64_div_u64(a, b, d) mul_u64_add_u64_div_u64(a, b, 0, d)
/**
* mul_u64_u64_div_u64_roundup - unsigned 64bit multiply and divide rounded up
* @a: first unsigned 64bit multiplicand
* @b: second unsigned 64bit multiplicand
* @d: unsigned 64bit divisor
*
* Multiply two 64bit values together to generate a 128bit product
* and then divide and round up.
* The generic code divides by 0 if @d is zero and returns ~0 on overflow.
* Architecture specific code may trap on zero or overflow.
*
* Return: (@a * @b + @d - 1) / @d
*/
#define mul_u64_u64_div_u64_roundup(a, b, d) \
({ u64 _tmp = (d); mul_u64_add_u64_div_u64(a, b, _tmp - 1, _tmp); })
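A quick hedged usage sketch (cycles_to_ns_roundup() is a hypothetical helper, in the spirit of the PWM use case that motivated the series): converting cycles at a given rate into nanoseconds without the intermediate product overflowing 64 bits, and without undershooting:

#include <linux/math64.h>
#include <linux/time64.h>	/* NSEC_PER_SEC */

/*
 * Hypothetical helper: ns = ceil(cycles * NSEC_PER_SEC / rate).
 * The 128bit intermediate product is handled internally.
 */
static inline u64 cycles_to_ns_roundup(u64 cycles, u64 rate)
{
	return mul_u64_u64_div_u64_roundup(cycles, NSEC_PER_SEC, rate);
}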
/**
* DIV64_U64_ROUND_UP - unsigned 64bit divide with 64bit divisor rounded up


@@ -16,7 +16,7 @@
bool __ret_cond = !!(condition); \
bool __ret_once = false; \
\
if (unlikely(__ret_cond && !__already_done)) { \
if (unlikely(__ret_cond) && unlikely(!__already_done)) {\
__already_done = true; \
__ret_once = true; \
} \


@@ -86,7 +86,6 @@ static inline void set_arch_panic_timeout(int timeout, int arch_default_timeout)
struct taint_flag {
char c_true; /* character printed when tainted */
char c_false; /* character printed when not tainted */
bool module; /* also show as a per-module taint flag */
const char *desc; /* verbose description of the set taint flag */
};


@@ -43,8 +43,36 @@ extern void rb_erase(struct rb_node *, struct rb_root *);
/* Find logical next and previous nodes in a tree */
extern struct rb_node *rb_next(const struct rb_node *);
extern struct rb_node *rb_prev(const struct rb_node *);
extern struct rb_node *rb_first(const struct rb_root *);
extern struct rb_node *rb_last(const struct rb_root *);
/*
* This function returns the first node (in sort order) of the tree.
*/
static inline struct rb_node *rb_first(const struct rb_root *root)
{
struct rb_node *n;
n = root->rb_node;
if (!n)
return NULL;
while (n->rb_left)
n = n->rb_left;
return n;
}
/*
* This function returns the last node (in sort order) of the tree.
*/
static inline struct rb_node *rb_last(const struct rb_root *root)
{
struct rb_node *n;
n = root->rb_node;
if (!n)
return NULL;
while (n->rb_right)
n = n->rb_right;
return n;
}
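A hedged sketch of the pattern this change speeds up (struct my_item and its key field are hypothetical): a full in-order traversal now starts without a function call for the root-to-leftmost descent:

#include <linux/rbtree.h>

struct my_item {		/* hypothetical payload type */
	struct rb_node node;
	int key;
};

/* In-order walk; rb_first() is now inlined at the call site. */
static int my_tree_sum(const struct rb_root *root)
{
	struct rb_node *n;
	int sum = 0;

	for (n = rb_first(root); n; n = rb_next(n))
		sum += rb_entry(n, struct my_item, node)->key;
	return sum;
}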
/* Postorder iteration - always visit the parent after its children */
extern struct rb_node *rb_first_postorder(const struct rb_root *);


@@ -10,6 +10,7 @@
#include <linux/xattr.h>
#include <linux/fs_parser.h>
#include <linux/userfaultfd_k.h>
#include <linux/bits.h>
struct swap_iocb;
@@ -19,6 +20,19 @@ struct swap_iocb;
#define SHMEM_MAXQUOTAS 2
#endif
/* Suppress pre-accounting of the entire object size. */
#define SHMEM_F_NORESERVE BIT(0)
/* Disallow swapping. */
#define SHMEM_F_LOCKED BIT(1)
/*
* Disallow growing, shrinking, or hole punching in the inode. Combined with
* folio pinning, makes sure the inode's mapping stays fixed.
*
* In some ways similar to F_SEAL_GROW | F_SEAL_SHRINK, but can be removed and
* isn't directly visible to userspace.
*/
#define SHMEM_F_MAPPING_FROZEN BIT(2)
struct shmem_inode_info {
spinlock_t lock;
unsigned int seals; /* shmem seals */
@@ -186,6 +200,15 @@ static inline bool shmem_file(struct file *file)
return shmem_mapping(file->f_mapping);
}
/* Must be called with inode lock taken exclusive. */
static inline void shmem_freeze(struct inode *inode, bool freeze)
{
if (freeze)
SHMEM_I(inode)->flags |= SHMEM_F_MAPPING_FROZEN;
else
SHMEM_I(inode)->flags &= ~SHMEM_F_MAPPING_FROZEN;
}
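A hedged sketch of a caller honoring the locking rule stated above (freeze_inode_mapping() is hypothetical); this is the sort of step a preservation path would take before relying on the mapping staying fixed:

#include <linux/fs.h>
#include <linux/shmem_fs.h>

/* Hypothetical caller: pin the mapping under the exclusive inode lock. */
static void freeze_inode_mapping(struct inode *inode)
{
	inode_lock(inode);
	shmem_freeze(inode, true);
	inode_unlock(inode);
}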
/*
* If fallocate(FALLOC_FL_KEEP_SIZE) has been used, there may be pages
* beyond i_size's notion of EOF, which fallocate has committed to reserving:


@@ -14,7 +14,7 @@
#define SYS_INFO_LOCKS 0x00000008
#define SYS_INFO_FTRACE 0x00000010
#define SYS_INFO_PANIC_CONSOLE_REPLAY 0x00000020
#define SYS_INFO_ALL_CPU_BT 0x00000040
#define SYS_INFO_ALL_BT 0x00000040
#define SYS_INFO_BLOCKED_TASKS 0x00000080
void sys_info(unsigned long si_mask);


@@ -161,8 +161,6 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
* directly in the normal copy_to/from_user(), the other ones go
* through an extern _copy_to/from_user(), which expands the same code
* here.
*
* Rust code always uses the extern definition.
*/
static inline __must_check unsigned long
_inline_copy_from_user(void *to, const void __user *from, unsigned long n)
@@ -192,8 +190,10 @@ fail:
memset(to + (n - res), 0, res);
return res;
}
#ifndef INLINE_COPY_FROM_USER
extern __must_check unsigned long
_copy_from_user(void *, const void __user *, unsigned long);
#endif
static inline __must_check unsigned long
_inline_copy_to_user(void __user *to, const void *from, unsigned long n)
@@ -207,8 +207,10 @@ _inline_copy_to_user(void __user *to, const void *from, unsigned long n)
}
return n;
}
#ifndef INLINE_COPY_TO_USER
extern __must_check unsigned long
_copy_to_user(void __user *, const void *, unsigned long);
#endif
static __always_inline unsigned long __must_check
copy_from_user(void *to, const void __user *from, unsigned long n)


@@ -136,10 +136,10 @@
#define PTR_IF(cond, ptr) ((cond) ? (ptr) : NULL)
/**
* to_user_ptr - cast a pointer passed as u64 from user space to void __user *
* u64_to_user_ptr - cast a pointer passed as u64 from user space to void __user *
* @x: The u64 value from user space, usually via IOCTL
*
* to_user_ptr() simply casts a pointer passed as u64 from user space to void
* u64_to_user_ptr() simply casts a pointer passed as u64 from user space to void
* __user * correctly. Using this lets us get rid of all the tiresome casts.
*/
#define u64_to_user_ptr(x) \


@@ -5,6 +5,7 @@
#include <linux/linkage.h>
#include <linux/elfcore.h>
#include <linux/elf.h>
#include <uapi/linux/vmcore.h>
#define CRASH_CORE_NOTE_HEAD_BYTES ALIGN(sizeof(struct elf_note), 4)
#define CRASH_CORE_NOTE_NAME_BYTES ALIGN(sizeof(NN_PRSTATUS), 4)
@@ -77,4 +78,11 @@ extern u32 *vmcoreinfo_note;
Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
void *data, size_t data_len);
void final_note(Elf_Word *buf);
#ifdef CONFIG_VMCORE_INFO
void hwerr_log_error_type(enum hwerr_error_type src);
#else
static inline void hwerr_log_error_type(enum hwerr_error_type src) { }
#endif
#endif /* LINUX_VMCORE_INFO_H */
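For illustration, a hedged sketch of a call site for the hwerr_log_error_type() hook declared above (the surrounding recovery path is hypothetical): a driver that has just recovered from a correctable PCI error records the event so it shows up in a later crash dump:

#include <linux/vmcore_info.h>

/*
 * Hypothetical: called from an error-recovery path after the error
 * has been handled and the device is usable again.
 */
static void example_pci_error_recovered(void)
{
	hwerr_log_error_type(HWERR_RECOV_PCI);
}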


@@ -141,21 +141,7 @@ static inline unsigned long xxhash(const void *input, size_t length,
*/
/**
* struct xxh32_state - private xxh32 state, do not use members directly
*/
struct xxh32_state {
uint32_t total_len_32;
uint32_t large_len;
uint32_t v1;
uint32_t v2;
uint32_t v3;
uint32_t v4;
uint32_t mem32[4];
uint32_t memsize;
};
/**
* struct xxh32_state - private xxh64 state, do not use members directly
* struct xxh64_state - private xxh64 state, do not use members directly
*/
struct xxh64_state {
uint64_t total_len;
@@ -167,16 +153,6 @@ struct xxh64_state {
uint32_t memsize;
};
/**
* xxh32_reset() - reset the xxh32 state to start a new hashing operation
*
* @state: The xxh32 state to reset.
* @seed: Initialize the hash state with this seed.
*
* Call this function on any xxh32_state to prepare for a new hashing operation.
*/
void xxh32_reset(struct xxh32_state *state, uint32_t seed);
/**
* xxh64_reset() - reset the xxh64 state to start a new hashing operation
*
@@ -210,24 +186,4 @@ int xxh64_update(struct xxh64_state *state, const void *input, size_t length);
*/
uint64_t xxh64_digest(const struct xxh64_state *state);
/*-**************************
* Utils
***************************/
/**
* xxh32_copy_state() - copy the source state into the destination state
*
* @src: The source xxh32 state.
* @dst: The destination xxh32 state.
*/
void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src);
/**
* xxh64_copy_state() - copy the source state into the destination state
*
* @src: The source xxh64 state.
* @dst: The destination xxh64 state.
*/
void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src);
#endif /* XXHASH_H */


@@ -0,0 +1,216 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Userspace interface for /dev/liveupdate
* Live Update Orchestrator
*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
#ifndef _UAPI_LIVEUPDATE_H
#define _UAPI_LIVEUPDATE_H
#include <linux/ioctl.h>
#include <linux/types.h>
/**
* DOC: General ioctl format
*
* The ioctl interface follows a general format to allow for extensibility. Each
* ioctl is passed in a structure pointer as the argument providing the size of
* the structure in the first u32. The kernel checks that any structure space
* beyond what it understands is 0. This allows userspace to use the backward
* compatible portion while consistently using the newer, larger, structures.
*
* ioctls use a standard meaning for common errnos:
*
* - ENOTTY: The IOCTL number itself is not supported at all
* - E2BIG: The IOCTL number is supported, but the provided structure has
* non-zero in a part the kernel does not understand.
* - EOPNOTSUPP: The IOCTL number is supported, and the structure is
* understood, however a known field has a value the kernel does not
* understand or support.
* - EINVAL: Everything about the IOCTL was understood, but a field is not
* correct.
* - ENOENT: A provided token does not exist.
* - ENOMEM: Out of memory.
* - EOVERFLOW: Mathematics overflowed.
*
* Individual ioctls may return additional errnos beyond those listed above.
*/
/* The ioctl type, documented in ioctl-number.rst */
#define LIVEUPDATE_IOCTL_TYPE 0xBA
/* The maximum length of session name including null termination */
#define LIVEUPDATE_SESSION_NAME_LENGTH 64
/* The /dev/liveupdate ioctl commands */
enum {
LIVEUPDATE_CMD_BASE = 0x00,
LIVEUPDATE_CMD_CREATE_SESSION = LIVEUPDATE_CMD_BASE,
LIVEUPDATE_CMD_RETRIEVE_SESSION = 0x01,
};
/* ioctl commands for session file descriptors */
enum {
LIVEUPDATE_CMD_SESSION_BASE = 0x40,
LIVEUPDATE_CMD_SESSION_PRESERVE_FD = LIVEUPDATE_CMD_SESSION_BASE,
LIVEUPDATE_CMD_SESSION_RETRIEVE_FD = 0x41,
LIVEUPDATE_CMD_SESSION_FINISH = 0x42,
};
/**
* struct liveupdate_ioctl_create_session - ioctl(LIVEUPDATE_IOCTL_CREATE_SESSION)
* @size: Input; sizeof(struct liveupdate_ioctl_create_session)
* @fd: Output; The new file descriptor for the created session.
* @name: Input; A null-terminated string for the session name, max
* length %LIVEUPDATE_SESSION_NAME_LENGTH including termination
* character.
*
* Creates a new live update session for managing preserved resources.
* This ioctl can only be called on the main /dev/liveupdate device.
*
* Return: 0 on success, negative error code on failure.
*/
struct liveupdate_ioctl_create_session {
__u32 size;
__s32 fd;
__u8 name[LIVEUPDATE_SESSION_NAME_LENGTH];
};
#define LIVEUPDATE_IOCTL_CREATE_SESSION \
_IO(LIVEUPDATE_IOCTL_TYPE, LIVEUPDATE_CMD_CREATE_SESSION)
/**
* struct liveupdate_ioctl_retrieve_session - ioctl(LIVEUPDATE_IOCTL_RETRIEVE_SESSION)
* @size: Input; sizeof(struct liveupdate_ioctl_retrieve_session)
* @fd: Output; The new file descriptor for the retrieved session.
* @name: Input; A null-terminated string identifying the session to retrieve.
* The name must exactly match the name used when the session was
* created in the previous kernel.
*
* Retrieves a handle (a new file descriptor) for a preserved session by its
* name. This is the primary mechanism for a userspace agent to regain control
* of its preserved resources after a live update.
*
* The userspace application provides the null-terminated `name` of a session
* it created before the live update. If a preserved session with a matching
* name is found, the kernel instantiates it and returns a new file descriptor
* in the `fd` field. This new session FD can then be used for all file-specific
* operations, such as restoring individual file descriptors with
* LIVEUPDATE_SESSION_RETRIEVE_FD.
*
* It is the responsibility of the userspace application to know the names of
* the sessions it needs to retrieve. If no session with the given name is
* found, the ioctl will fail with -ENOENT.
*
* This ioctl can only be called on the main /dev/liveupdate device when the
* system is in the LIVEUPDATE_STATE_UPDATED state.
*/
struct liveupdate_ioctl_retrieve_session {
__u32 size;
__s32 fd;
__u8 name[LIVEUPDATE_SESSION_NAME_LENGTH];
};
#define LIVEUPDATE_IOCTL_RETRIEVE_SESSION \
_IO(LIVEUPDATE_IOCTL_TYPE, LIVEUPDATE_CMD_RETRIEVE_SESSION)
/* Session specific IOCTLs */
/**
* struct liveupdate_session_preserve_fd - ioctl(LIVEUPDATE_SESSION_PRESERVE_FD)
* @size: Input; sizeof(struct liveupdate_session_preserve_fd)
* @fd: Input; The user-space file descriptor to be preserved.
* @token: Input; An opaque, unique token for the preserved resource.
*
* Holds parameters for preserving a file descriptor.
*
* User sets the @fd field identifying the file descriptor to preserve
* (e.g., memfd, kvm, iommufd, VFIO). The kernel validates if this FD type
* and its dependencies are supported for preservation. If validation passes,
* the kernel marks the FD internally and *initiates the process* of preparing
* its state for saving. The actual snapshotting of the state typically occurs
* during the subsequent prepare phase of the live update, though
* some finalization might occur during freeze.
* On successful validation and initiation, the kernel associates the opaque
* identifier in the @token field with the resource being preserved.
* This token confirms the FD is targeted for preservation and is required for
* the subsequent %LIVEUPDATE_SESSION_RETRIEVE_FD call after the live update.
*
* Return: 0 on success (validation passed, preservation initiated), negative
* error code on failure (e.g., unsupported FD type, dependency issue,
* validation failed).
*/
struct liveupdate_session_preserve_fd {
__u32 size;
__s32 fd;
__aligned_u64 token;
};
#define LIVEUPDATE_SESSION_PRESERVE_FD \
_IO(LIVEUPDATE_IOCTL_TYPE, LIVEUPDATE_CMD_SESSION_PRESERVE_FD)
/**
* struct liveupdate_session_retrieve_fd - ioctl(LIVEUPDATE_SESSION_RETRIEVE_FD)
* @size: Input; sizeof(struct liveupdate_session_retrieve_fd)
* @fd: Output; The new file descriptor representing the fully restored
* kernel resource.
* @token: Input; An opaque token that was used to preserve the resource.
*
* Retrieve a previously preserved file descriptor.
*
* User sets the @token field to the value obtained from a successful
* %LIVEUPDATE_SESSION_PRESERVE_FD call before the live update. On success,
* the kernel restores the state (saved during the PREPARE/FREEZE phases)
* associated with the token and populates the @fd field with a new file
* descriptor referencing the restored resource in the current (new) kernel.
* This operation must be performed *before* signaling completion via
* %LIVEUPDATE_SESSION_FINISH.
*
* Return: 0 on success, negative error code on failure (e.g., invalid token).
*/
struct liveupdate_session_retrieve_fd {
__u32 size;
__s32 fd;
__aligned_u64 token;
};
#define LIVEUPDATE_SESSION_RETRIEVE_FD \
_IO(LIVEUPDATE_IOCTL_TYPE, LIVEUPDATE_CMD_SESSION_RETRIEVE_FD)
/**
* struct liveupdate_session_finish - ioctl(LIVEUPDATE_SESSION_FINISH)
* @size: Input; sizeof(struct liveupdate_session_finish)
* @reserved: Input; Must be zero. Reserved for future use.
*
* Signals the completion of the restoration process for a retrieved session.
* This is the final operation that should be performed on a session file
* descriptor after a live update.
*
* This ioctl must be called once all required file descriptors for the session
* have been successfully retrieved (using %LIVEUPDATE_SESSION_RETRIEVE_FD) and
* are fully restored from the userspace and kernel perspective.
*
* Upon success, the kernel releases its ownership of the preserved resources
* associated with this session. This allows internal resources to be freed,
* typically by decrementing reference counts on the underlying preserved
* objects.
*
* If this operation fails, the resources remain preserved in memory. Userspace
* may attempt to call finish again. The resources will otherwise be reset
* during the next live update cycle.
*
* Return: 0 on success, negative error code on failure.
*/
struct liveupdate_session_finish {
__u32 size;
__u32 reserved;
};
#define LIVEUPDATE_SESSION_FINISH \
_IO(LIVEUPDATE_IOCTL_TYPE, LIVEUPDATE_CMD_SESSION_FINISH)
#endif /* _UAPI_LIVEUPDATE_H */
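To tie the pieces together, here is a hedged userspace sketch of the intended flow (the session name, token value, and helper names are illustrative; error handling is abbreviated): the first function runs before the live update, the second in the new kernel after it.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/liveupdate.h>

/* Pre-update: create a session and preserve one fd under a chosen token. */
static int preserve_one_fd(int fd_to_keep, uint64_t token)
{
	struct liveupdate_ioctl_create_session cs = { .size = sizeof(cs) };
	struct liveupdate_session_preserve_fd pf = { .size = sizeof(pf) };
	int lu = open("/dev/liveupdate", O_RDWR);

	strcpy((char *)cs.name, "example-session");
	if (lu < 0 || ioctl(lu, LIVEUPDATE_IOCTL_CREATE_SESSION, &cs))
		return -1;
	pf.fd = fd_to_keep;
	pf.token = token;
	return ioctl(cs.fd, LIVEUPDATE_SESSION_PRESERVE_FD, &pf);
}

/* Post-update: retrieve the session, restore the fd, then finish. */
static int retrieve_one_fd(uint64_t token)
{
	struct liveupdate_ioctl_retrieve_session rs = { .size = sizeof(rs) };
	struct liveupdate_session_retrieve_fd rf = { .size = sizeof(rf) };
	struct liveupdate_session_finish fin = { .size = sizeof(fin) };
	int lu = open("/dev/liveupdate", O_RDWR);

	strcpy((char *)rs.name, "example-session");
	if (lu < 0 || ioctl(lu, LIVEUPDATE_IOCTL_RETRIEVE_SESSION, &rs))
		return -1;
	rf.token = token;
	if (ioctl(rs.fd, LIVEUPDATE_SESSION_RETRIEVE_FD, &rf))
		return -1;
	if (ioctl(rs.fd, LIVEUPDATE_SESSION_FINISH, &fin))
		return -1;
	return rf.fd;	/* the restored file descriptor */
}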


@@ -15,4 +15,13 @@ struct vmcoredd_header {
__u8 dump_name[VMCOREDD_MAX_NAME_BYTES]; /* Device dump's name */
};
enum hwerr_error_type {
HWERR_RECOV_CPU,
HWERR_RECOV_MEMORY,
HWERR_RECOV_PCI,
HWERR_RECOV_CXL,
HWERR_RECOV_OTHERS,
HWERR_RECOV_MAX,
};
#endif /* _UAPI_VMCORE_H */


@@ -1519,6 +1519,24 @@ config BOOT_CONFIG_EMBED_FILE
This bootconfig will be used if there is no initrd or no other
bootconfig in the initrd.
config CMDLINE_LOG_WRAP_IDEAL_LEN
int "Length to try to wrap the cmdline when logged at boot"
default 1021
range 0 1021
help
At boot time, the kernel command line is logged to the console.
The log message will start with the prefix "Kernel command line: ".
The log message will attempt to be wrapped (split into multiple log
messages) at spaces based on CMDLINE_LOG_WRAP_IDEAL_LEN characters.
If wrapping happens, each log message will start with the prefix and
all but the last message will end with " \". Messages may exceed the
ideal length if a place to wrap isn't found before the specified
number of characters.
A value of 0 disables wrapping, though be warned that the maximum
length of a log message (1021 characters) may cause the cmdline to
be truncated.
config INITRAMFS_PRESERVE_MTIME
bool "Preserve cpio archive mtimes in initramfs"
depends on BLK_DEV_INITRD
@@ -2171,6 +2189,8 @@ config TRACEPOINTS
source "kernel/Kconfig.kexec"
source "kernel/liveupdate/Kconfig"
endmenu # General setup
source "arch/Kconfig"


@@ -5,19 +5,22 @@
* Copyright (C) 1991, 1992 Linus Torvalds
*/
#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/timex.h>
#include <linux/smp.h>
#include <linux/jiffies.h>
#include <linux/kstrtox.h>
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/smp.h>
#include <linux/stddef.h>
#include <linux/timex.h>
unsigned long lpj_fine;
unsigned long preset_lpj;
static int __init lpj_setup(char *str)
{
preset_lpj = simple_strtoul(str,NULL,0);
return 1;
return kstrtoul(str, 0, &preset_lpj) == 0;
}
__setup("lpj=", lpj_setup);


@@ -906,6 +906,101 @@ static void __init early_numa_node_init(void)
#endif
}
#define KERNEL_CMDLINE_PREFIX "Kernel command line: "
#define KERNEL_CMDLINE_PREFIX_LEN (sizeof(KERNEL_CMDLINE_PREFIX) - 1)
#define KERNEL_CMDLINE_CONTINUATION " \\"
#define KERNEL_CMDLINE_CONTINUATION_LEN (sizeof(KERNEL_CMDLINE_CONTINUATION) - 1)
#define MIN_CMDLINE_LOG_WRAP_IDEAL_LEN (KERNEL_CMDLINE_PREFIX_LEN + \
KERNEL_CMDLINE_CONTINUATION_LEN)
#define CMDLINE_LOG_WRAP_IDEAL_LEN (CONFIG_CMDLINE_LOG_WRAP_IDEAL_LEN > \
MIN_CMDLINE_LOG_WRAP_IDEAL_LEN ? \
CONFIG_CMDLINE_LOG_WRAP_IDEAL_LEN : \
MIN_CMDLINE_LOG_WRAP_IDEAL_LEN)
#define IDEAL_CMDLINE_LEN (CMDLINE_LOG_WRAP_IDEAL_LEN - KERNEL_CMDLINE_PREFIX_LEN)
#define IDEAL_CMDLINE_SPLIT_LEN (IDEAL_CMDLINE_LEN - KERNEL_CMDLINE_CONTINUATION_LEN)
/**
* print_kernel_cmdline() - Print the kernel cmdline with wrapping.
* @cmdline: The cmdline to print.
*
* Print the kernel command line, trying to wrap based on the Kconfig knob
* CONFIG_CMDLINE_LOG_WRAP_IDEAL_LEN.
*
* Wrapping is based on spaces, ignoring quotes. All lines are prefixed
* with "Kernel command line: " and lines that are not the last line have
* a " \" suffix added to them. The prefix and suffix count towards the
* line length for wrapping purposes. The ideal length will be exceeded
* if no appropriate place to wrap is found.
*
* Example output if CONFIG_CMDLINE_LOG_WRAP_IDEAL_LEN is 40:
* Kernel command line: loglevel=7 \
* Kernel command line: init=/sbin/init \
* Kernel command line: root=PARTUUID=8c3efc1a-768b-6642-8d0c-89eb782f19f0/PARTNROFF=1 \
* Kernel command line: rootwait ro \
* Kernel command line: my_quoted_arg="The \
* Kernel command line: quick brown fox \
* Kernel command line: jumps over the \
* Kernel command line: lazy dog."
*/
static void __init print_kernel_cmdline(const char *cmdline)
{
size_t len;
/* Config option of 0 or anything longer than the max disables wrapping */
if (CONFIG_CMDLINE_LOG_WRAP_IDEAL_LEN == 0 ||
IDEAL_CMDLINE_LEN >= COMMAND_LINE_SIZE - 1) {
pr_notice("%s%s\n", KERNEL_CMDLINE_PREFIX, cmdline);
return;
}
len = strlen(cmdline);
while (len > IDEAL_CMDLINE_LEN) {
const char *first_space;
const char *prev_cutoff;
const char *cutoff;
int to_print;
size_t used;
/* Find the last ' ' that wouldn't make the line too long */
prev_cutoff = NULL;
cutoff = cmdline;
while (true) {
cutoff = strchr(cutoff + 1, ' ');
if (!cutoff || cutoff - cmdline > IDEAL_CMDLINE_SPLIT_LEN)
break;
prev_cutoff = cutoff;
}
if (prev_cutoff)
cutoff = prev_cutoff;
else if (!cutoff)
break;
/* Find the beginning and end of the string of spaces */
first_space = cutoff;
while (first_space > cmdline && first_space[-1] == ' ')
first_space--;
to_print = first_space - cmdline;
while (*cutoff == ' ')
cutoff++;
used = cutoff - cmdline;
/* If the whole string is used, break and do the final printout */
if (len == used)
break;
if (to_print)
pr_notice("%s%.*s%s\n", KERNEL_CMDLINE_PREFIX,
to_print, cmdline, KERNEL_CMDLINE_CONTINUATION);
len -= used;
cmdline += used;
}
if (len)
pr_notice("%s%s\n", KERNEL_CMDLINE_PREFIX, cmdline);
}
asmlinkage __visible __init __no_sanitize_address __noreturn __no_stack_protector
void start_kernel(void)
{
@@ -942,7 +1037,7 @@ void start_kernel(void)
early_numa_node_init();
boot_cpu_hotplug_init();
pr_notice("Kernel command line: %s\n", saved_command_line);
print_kernel_cmdline(saved_command_line);
/* parameters may set static keys */
parse_early_param();
after_dashes = parse_args("Booting kernel",


@@ -76,10 +76,10 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
err = -ENOMEM;
if (!setup_mq_sysctls(ns))
goto fail_put;
goto fail_mq_mount;
if (!setup_ipc_sysctls(ns))
goto fail_mq;
goto fail_mq_sysctls;
err = msg_init_ns(ns);
if (err)
@@ -93,9 +93,10 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
fail_ipc:
retire_ipc_sysctls(ns);
fail_mq:
fail_mq_sysctls:
retire_mq_sysctls(ns);
fail_mq_mount:
mntput(ns->mq_mnt);
fail_put:
put_user_ns(ns->user_ns);
ns_common_free(ns);


@@ -94,30 +94,6 @@ config KEXEC_JUMP
Jump between original kernel and kexeced kernel and invoke
code in physical address mode via KEXEC
config KEXEC_HANDOVER
bool "kexec handover"
depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
depends on !DEFERRED_STRUCT_PAGE_INIT
select MEMBLOCK_KHO_SCRATCH
select KEXEC_FILE
select DEBUG_FS
select LIBFDT
select CMA
help
Allow kexec to hand over state across kernels by generating and
passing additional metadata to the target kernel. This is useful
to keep data or state alive across the kexec. For this to work,
both source and target kernels need to have this option enabled.
config KEXEC_HANDOVER_DEBUG
bool "Enable Kexec Handover debug checks"
depends on KEXEC_HANDOVER
help
This option enables extra sanity checks for the Kexec Handover
subsystem. Since, KHO performance is crucial in live update
scenarios and the extra code might be adding overhead it is
only optionally enabled.
config CRASH_DUMP
bool "kernel crash dumps"
default ARCH_DEFAULT_CRASH_DUMP


@@ -52,6 +52,7 @@ obj-y += printk/
obj-y += irq/
obj-y += rcu/
obj-y += livepatch/
obj-y += liveupdate/
obj-y += dma/
obj-y += entry/
obj-y += unwind/
@@ -82,8 +83,6 @@ obj-$(CONFIG_CRASH_DUMP_KUNIT_TEST) += crash_core_test.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o
obj-$(CONFIG_KEXEC_HANDOVER_DEBUG) += kexec_handover_debug.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
obj-$(CONFIG_COMPAT) += compat.o
obj-$(CONFIG_CGROUPS) += cgroup/


@@ -83,7 +83,7 @@ CONFIG_SLUB_DEBUG_ON=y
#
# Debug Oops, Lockups and Hangs
#
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=0
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DETECT_HUNG_TASK=y


@@ -524,6 +524,9 @@ void __init reserve_crashkernel_cma(unsigned long long cma_size)
#ifndef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY
static __init int insert_crashkernel_resources(void)
{
if (!arch_add_crash_res_to_iomem())
return 0;
if (crashk_res.start < crashk_res.end)
insert_resource(&iomem_resource, &crashk_res);


@@ -251,10 +251,8 @@ repeat:
memset(&post, 0, sizeof(post));
/* don't need to get the RCU readlock here - the process is dead and
* can't be modifying its own credentials. But shut RCU-lockdep up */
rcu_read_lock();
* can't be modifying its own credentials. */
dec_rlimit_ucounts(task_ucounts(p), UCOUNT_RLIMIT_NPROC, 1);
rcu_read_unlock();
pidfs_exit(p);
cgroup_task_release(p);


@@ -208,15 +208,62 @@ struct vm_stack {
struct vm_struct *stack_vm_area;
};
static struct vm_struct *alloc_thread_stack_node_from_cache(struct task_struct *tsk, int node)
{
struct vm_struct *vm_area;
unsigned int i;
/*
* If the node has memory, we are guaranteed the stacks are backed by local pages.
* Otherwise the pages are arbitrary.
*
* Note that depending on cpuset it is possible we will get migrated to a different
* node immediately after allocating here, so this does *not* guarantee locality for
* arbitrary callers.
*/
scoped_guard(preempt) {
if (node != NUMA_NO_NODE && numa_node_id() != node)
return NULL;
for (i = 0; i < NR_CACHED_STACKS; i++) {
vm_area = this_cpu_xchg(cached_stacks[i], NULL);
if (vm_area)
return vm_area;
}
}
return NULL;
}
static bool try_release_thread_stack_to_cache(struct vm_struct *vm_area)
{
unsigned int i;
int nid;
for (i = 0; i < NR_CACHED_STACKS; i++) {
struct vm_struct *tmp = NULL;
/*
* Don't cache stacks if any of the pages don't match the local domain, unless
* there is no local memory to begin with.
*
* Note that lack of local memory does not automatically mean it makes no difference
* performance-wise which other domain backs the stack. In this case we are merely
* trying to avoid constantly going to vmalloc.
*/
scoped_guard(preempt) {
nid = numa_node_id();
if (node_state(nid, N_MEMORY)) {
for (i = 0; i < vm_area->nr_pages; i++) {
struct page *page = vm_area->pages[i];
if (page_to_nid(page) != nid)
return false;
}
}
if (this_cpu_try_cmpxchg(cached_stacks[i], &tmp, vm_area))
return true;
for (i = 0; i < NR_CACHED_STACKS; i++) {
struct vm_struct *tmp = NULL;
if (this_cpu_try_cmpxchg(cached_stacks[i], &tmp, vm_area))
return true;
}
}
return false;
}
@@ -283,13 +330,9 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
{
struct vm_struct *vm_area;
void *stack;
int i;
for (i = 0; i < NR_CACHED_STACKS; i++) {
vm_area = this_cpu_xchg(cached_stacks[i], NULL);
if (!vm_area)
continue;
vm_area = alloc_thread_stack_node_from_cache(tsk, node);
if (vm_area) {
if (memcg_charge_kernel_stack(vm_area)) {
vfree(vm_area->addr);
return -ENOMEM;


@@ -24,6 +24,7 @@
#include <linux/sched/sysctl.h>
#include <linux/hung_task.h>
#include <linux/rwsem.h>
#include <linux/sys_info.h>
#include <trace/events/sched.h>
@@ -50,7 +51,6 @@ static unsigned long __read_mostly sysctl_hung_task_detect_count;
* Zero means infinite timeout - no checking done:
*/
unsigned long __read_mostly sysctl_hung_task_timeout_secs = CONFIG_DEFAULT_HUNG_TASK_TIMEOUT;
EXPORT_SYMBOL_GPL(sysctl_hung_task_timeout_secs);
/*
* Zero (default value) means use sysctl_hung_task_timeout_secs:
@@ -60,12 +60,17 @@ static unsigned long __read_mostly sysctl_hung_task_check_interval_secs;
static int __read_mostly sysctl_hung_task_warnings = 10;
static int __read_mostly did_panic;
static bool hung_task_show_lock;
static bool hung_task_call_panic;
static bool hung_task_show_all_bt;
static struct task_struct *watchdog_task;
/*
* A bitmask controlling what kinds of system info are printed when
* a hung task is detected: task, memory, lock info, etc. Refer to
* include/linux/sys_info.h for the detailed bit definitions.
*/
static unsigned long hung_task_si_mask;
#ifdef CONFIG_SMP
/*
* Should we dump all CPUs backtraces in a hung task event?
@@ -81,7 +86,7 @@ static unsigned int __read_mostly sysctl_hung_task_all_cpu_backtrace;
* hung task is detected:
*/
static unsigned int __read_mostly sysctl_hung_task_panic =
IS_ENABLED(CONFIG_BOOTPARAM_HUNG_TASK_PANIC);
CONFIG_BOOTPARAM_HUNG_TASK_PANIC;
static int
hung_task_panic(struct notifier_block *this, unsigned long event, void *ptr)
@@ -218,8 +223,11 @@ static inline void debug_show_blocker(struct task_struct *task, unsigned long ti
}
#endif
static void check_hung_task(struct task_struct *t, unsigned long timeout)
static void check_hung_task(struct task_struct *t, unsigned long timeout,
unsigned long prev_detect_count)
{
unsigned long total_hung_task;
if (!task_is_hung(t, timeout))
return;
@@ -229,11 +237,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
*/
sysctl_hung_task_detect_count++;
total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
trace_sched_process_hang(t);
if (sysctl_hung_task_panic) {
if (sysctl_hung_task_panic && total_hung_task >= sysctl_hung_task_panic) {
console_verbose();
hung_task_show_lock = true;
hung_task_call_panic = true;
}
@@ -256,10 +264,7 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
" disables this message.\n");
sched_show_task(t);
debug_show_blocker(t, timeout);
hung_task_show_lock = true;
if (sysctl_hung_task_all_cpu_backtrace)
hung_task_show_all_bt = true;
if (!sysctl_hung_task_warnings)
pr_info("Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings\n");
}
@@ -300,6 +305,9 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
int max_count = sysctl_hung_task_check_count;
unsigned long last_break = jiffies;
struct task_struct *g, *t;
unsigned long prev_detect_count = sysctl_hung_task_detect_count;
int need_warning = sysctl_hung_task_warnings;
unsigned long si_mask = hung_task_si_mask;
/*
* If the system crashed already then all bets are off,
@@ -308,7 +316,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
if (test_taint(TAINT_DIE) || did_panic)
return;
hung_task_show_lock = false;
rcu_read_lock();
for_each_process_thread(g, t) {
@@ -320,18 +328,23 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
last_break = jiffies;
}
check_hung_task(t, timeout);
check_hung_task(t, timeout, prev_detect_count);
}
unlock:
rcu_read_unlock();
if (hung_task_show_lock)
debug_show_all_locks();
if (hung_task_show_all_bt) {
hung_task_show_all_bt = false;
trigger_all_cpu_backtrace();
if (!(sysctl_hung_task_detect_count - prev_detect_count))
return;
if (need_warning || hung_task_call_panic) {
si_mask |= SYS_INFO_LOCKS;
if (sysctl_hung_task_all_cpu_backtrace)
si_mask |= SYS_INFO_ALL_BT;
}
sys_info(si_mask);
if (hung_task_call_panic)
panic("hung_task: blocked tasks");
}
@@ -389,7 +402,7 @@ static const struct ctl_table hung_task_sysctls[] = {
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
.extra2 = SYSCTL_INT_MAX,
},
{
.procname = "hung_task_check_count",
@@ -430,6 +443,13 @@ static const struct ctl_table hung_task_sysctls[] = {
.mode = 0444,
.proc_handler = proc_doulongvec_minmax,
},
{
.procname = "hung_task_sys_info",
.data = &hung_task_si_mask,
.maxlen = sizeof(hung_task_si_mask),
.mode = 0644,
.proc_handler = sysctl_sys_info_handler,
},
};
static void __init hung_task_sysctl_init(void)


@@ -15,6 +15,7 @@
#include <linux/kexec.h>
#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/liveupdate.h>
#include <linux/highmem.h>
#include <linux/syscalls.h>
#include <linux/reboot.h>
@@ -41,6 +42,7 @@
#include <linux/objtool.h>
#include <linux/kmsg_dump.h>
#include <linux/dma-map-ops.h>
#include <linux/sysfs.h>
#include <asm/page.h>
#include <asm/sections.h>
@@ -742,7 +744,6 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
struct kexec_segment *segment = &image->segment[idx];
struct page *cma = image->segment_cma[idx];
char *ptr = page_address(cma);
unsigned long maddr;
size_t ubytes, mbytes;
int result = 0;
unsigned char __user *buf = NULL;
@@ -754,15 +755,12 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
buf = segment->buf;
ubytes = segment->bufsz;
mbytes = segment->memsz;
maddr = segment->mem;
/* Then copy from source buffer to the CMA one */
while (mbytes) {
size_t uchunk, mchunk;
ptr += maddr & ~PAGE_MASK;
mchunk = min_t(size_t, mbytes,
PAGE_SIZE - (maddr & ~PAGE_MASK));
mchunk = min_t(size_t, mbytes, PAGE_SIZE);
uchunk = min(ubytes, mchunk);
if (uchunk) {
@@ -784,7 +782,6 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
}
ptr += mchunk;
maddr += mchunk;
mbytes -= mchunk;
cond_resched();
@@ -839,9 +836,7 @@ static int kimage_load_normal_segment(struct kimage *image, int idx)
ptr = kmap_local_page(page);
/* Start with a clear page */
clear_page(ptr);
ptr += maddr & ~PAGE_MASK;
mchunk = min_t(size_t, mbytes,
PAGE_SIZE - (maddr & ~PAGE_MASK));
mchunk = min_t(size_t, mbytes, PAGE_SIZE);
uchunk = min(ubytes, mchunk);
if (uchunk) {
@@ -904,9 +899,7 @@ static int kimage_load_crash_segment(struct kimage *image, int idx)
}
arch_kexec_post_alloc_pages(page_address(page), 1, 0);
ptr = kmap_local_page(page);
ptr += maddr & ~PAGE_MASK;
mchunk = min_t(size_t, mbytes,
PAGE_SIZE - (maddr & ~PAGE_MASK));
mchunk = min_t(size_t, mbytes, PAGE_SIZE);
uchunk = min(ubytes, mchunk);
if (mchunk > uchunk) {
/* Zero the trailing part of the page */
@@ -1146,6 +1139,10 @@ int kernel_kexec(void)
goto Unlock;
}
error = liveupdate_reboot();
if (error)
goto Unlock;
#ifdef CONFIG_KEXEC_JUMP
if (kexec_image->preserve_context) {
/*
@@ -1229,3 +1226,143 @@ int kernel_kexec(void)
kexec_unlock();
return error;
}
static ssize_t loaded_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%d\n", !!kexec_image);
}
static struct kobj_attribute loaded_attr = __ATTR_RO(loaded);
#ifdef CONFIG_CRASH_DUMP
static ssize_t crash_loaded_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%d\n", kexec_crash_loaded());
}
static struct kobj_attribute crash_loaded_attr = __ATTR_RO(crash_loaded);
#ifdef CONFIG_CRASH_RESERVE
static ssize_t crash_cma_ranges_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
ssize_t len = 0;
int i;
for (i = 0; i < crashk_cma_cnt; ++i) {
len += sysfs_emit_at(buf, len, "%08llx-%08llx\n",
crashk_cma_ranges[i].start,
crashk_cma_ranges[i].end);
}
return len;
}
static struct kobj_attribute crash_cma_ranges_attr = __ATTR_RO(crash_cma_ranges);
#endif
static ssize_t crash_size_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
ssize_t size = crash_get_memory_size();
if (size < 0)
return size;
return sysfs_emit(buf, "%zd\n", size);
}
static ssize_t crash_size_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
unsigned long cnt;
int ret;
if (kstrtoul(buf, 0, &cnt))
return -EINVAL;
ret = crash_shrink_memory(cnt);
return ret < 0 ? ret : count;
}
static struct kobj_attribute crash_size_attr = __ATTR_RW(crash_size);
#ifdef CONFIG_CRASH_HOTPLUG
static ssize_t crash_elfcorehdr_size_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
unsigned int sz = crash_get_elfcorehdr_size();
return sysfs_emit(buf, "%u\n", sz);
}
static struct kobj_attribute crash_elfcorehdr_size_attr = __ATTR_RO(crash_elfcorehdr_size);
#endif /* CONFIG_CRASH_HOTPLUG */
#endif /* CONFIG_CRASH_DUMP */
static struct attribute *kexec_attrs[] = {
&loaded_attr.attr,
#ifdef CONFIG_CRASH_DUMP
&crash_loaded_attr.attr,
&crash_size_attr.attr,
#ifdef CONFIG_CRASH_RESERVE
&crash_cma_ranges_attr.attr,
#endif
#ifdef CONFIG_CRASH_HOTPLUG
&crash_elfcorehdr_size_attr.attr,
#endif
#endif
NULL
};
struct kexec_link_entry {
const char *target;
const char *name;
};
static struct kexec_link_entry kexec_links[] = {
{ "loaded", "kexec_loaded" },
#ifdef CONFIG_CRASH_DUMP
{ "crash_loaded", "kexec_crash_loaded" },
{ "crash_size", "kexec_crash_size" },
#ifdef CONFIG_CRASH_RESERVE
{"crash_cma_ranges", "kexec_crash_cma_ranges"},
#endif
#ifdef CONFIG_CRASH_HOTPLUG
{ "crash_elfcorehdr_size", "crash_elfcorehdr_size" },
#endif
#endif
};
static struct kobject *kexec_kobj;
ATTRIBUTE_GROUPS(kexec);
static int __init init_kexec_sysctl(void)
{
int error;
int i;
kexec_kobj = kobject_create_and_add("kexec", kernel_kobj);
if (!kexec_kobj) {
pr_err("failed to create kexec kobject\n");
return -ENOMEM;
}
error = sysfs_create_groups(kexec_kobj, kexec_groups);
if (error)
goto kset_exit;
for (i = 0; i < ARRAY_SIZE(kexec_links); i++) {
error = compat_only_sysfs_link_entry_to_kobj(kernel_kobj, kexec_kobj,
kexec_links[i].target,
kexec_links[i].name);
if (error)
pr_err("Unable to create %s symlink (%d)", kexec_links[i].name, error);
}
return 0;
kset_exit:
kobject_put(kexec_kobj);
return error;
}
subsys_initcall(init_kexec_sysctl);


@@ -1,20 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef LINUX_KEXEC_HANDOVER_INTERNAL_H
#define LINUX_KEXEC_HANDOVER_INTERNAL_H
#include <linux/kexec_handover.h>
#include <linux/types.h>
extern struct kho_scratch *kho_scratch;
extern unsigned int kho_scratch_cnt;
#ifdef CONFIG_KEXEC_HANDOVER_DEBUG
bool kho_scratch_overlap(phys_addr_t phys, size_t size);
#else
static inline bool kho_scratch_overlap(phys_addr_t phys, size_t size)
{
return false;
}
#endif /* CONFIG_KEXEC_HANDOVER_DEBUG */
#endif /* LINUX_KEXEC_HANDOVER_INTERNAL_H */


@@ -12,7 +12,7 @@
#include <linux/sysfs.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/kexec.h>
#include <linux/vmcore_info.h>
#include <linux/profile.h>
#include <linux/stat.h>
#include <linux/sched.h>
@@ -119,50 +119,6 @@ static ssize_t profiling_store(struct kobject *kobj,
KERNEL_ATTR_RW(profiling);
#endif
#ifdef CONFIG_KEXEC_CORE
static ssize_t kexec_loaded_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%d\n", !!kexec_image);
}
KERNEL_ATTR_RO(kexec_loaded);
#ifdef CONFIG_CRASH_DUMP
static ssize_t kexec_crash_loaded_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%d\n", kexec_crash_loaded());
}
KERNEL_ATTR_RO(kexec_crash_loaded);
static ssize_t kexec_crash_size_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
ssize_t size = crash_get_memory_size();
if (size < 0)
return size;
return sysfs_emit(buf, "%zd\n", size);
}
static ssize_t kexec_crash_size_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
unsigned long cnt;
int ret;
if (kstrtoul(buf, 0, &cnt))
return -EINVAL;
ret = crash_shrink_memory(cnt);
return ret < 0 ? ret : count;
}
KERNEL_ATTR_RW(kexec_crash_size);
#endif /* CONFIG_CRASH_DUMP*/
#endif /* CONFIG_KEXEC_CORE */
#ifdef CONFIG_VMCORE_INFO
static ssize_t vmcoreinfo_show(struct kobject *kobj,
@@ -174,18 +130,6 @@ static ssize_t vmcoreinfo_show(struct kobject *kobj,
}
KERNEL_ATTR_RO(vmcoreinfo);
#ifdef CONFIG_CRASH_HOTPLUG
static ssize_t crash_elfcorehdr_size_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
unsigned int sz = crash_get_elfcorehdr_size();
return sysfs_emit(buf, "%u\n", sz);
}
KERNEL_ATTR_RO(crash_elfcorehdr_size);
#endif
#endif /* CONFIG_VMCORE_INFO */
/* whether file capabilities are enabled */
@@ -255,18 +199,8 @@ static struct attribute * kernel_attrs[] = {
#ifdef CONFIG_PROFILING
&profiling_attr.attr,
#endif
#ifdef CONFIG_KEXEC_CORE
&kexec_loaded_attr.attr,
#ifdef CONFIG_CRASH_DUMP
&kexec_crash_loaded_attr.attr,
&kexec_crash_size_attr.attr,
#endif
#endif
#ifdef CONFIG_VMCORE_INFO
&vmcoreinfo_attr.attr,
#ifdef CONFIG_CRASH_HOTPLUG
&crash_elfcorehdr_size_attr.attr,
#endif
#endif
#ifndef CONFIG_TINY_RCU
&rcu_expedited_attr.attr,

kernel/liveupdate/Kconfig

@@ -0,0 +1,75 @@
# SPDX-License-Identifier: GPL-2.0-only
menu "Live Update and Kexec HandOver"
depends on !DEFERRED_STRUCT_PAGE_INIT
config KEXEC_HANDOVER
bool "kexec handover"
depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
depends on !DEFERRED_STRUCT_PAGE_INIT
select MEMBLOCK_KHO_SCRATCH
select KEXEC_FILE
select LIBFDT
select CMA
help
Allow kexec to hand over state across kernels by generating and
passing additional metadata to the target kernel. This is useful
to keep data or state alive across the kexec. For this to work,
both source and target kernels need to have this option enabled.
config KEXEC_HANDOVER_DEBUG
bool "Enable Kexec Handover debug checks"
depends on KEXEC_HANDOVER
help
This option enables extra sanity checks for the Kexec Handover
subsystem. Since KHO performance is crucial in live update
scenarios and the extra code might add overhead, it is only
optionally enabled.
config KEXEC_HANDOVER_DEBUGFS
bool "kexec handover debugfs interface"
default KEXEC_HANDOVER
depends on KEXEC_HANDOVER
select DEBUG_FS
help
Allows controlling the kexec handover device tree via a debugfs
interface, i.e. finalizing the state or aborting the finalization.
Also enables inspecting the KHO FDT trees via the debugfs binary
blobs.
config KEXEC_HANDOVER_ENABLE_DEFAULT
bool "Enable kexec handover by default"
depends on KEXEC_HANDOVER
help
Enable Kexec Handover by default. This avoids the need to
explicitly pass 'kho=on' on the kernel command line.
This is useful for systems where KHO is a prerequisite for other
features, such as Live Update, ensuring the mechanism is always
active.
The default behavior can still be overridden at boot time by
passing 'kho=off'.
config LIVEUPDATE
bool "Live Update Orchestrator"
depends on KEXEC_HANDOVER
help
Enable the Live Update Orchestrator. Live Update is a mechanism,
typically based on kexec, that allows the kernel to be updated
while keeping selected devices operational across the transition.
These devices are intended to be reclaimed by the new kernel and
re-attached to their original workload without requiring a device
reset.
The ability to hand over a device from the current kernel to the
next depends on specific support within device drivers and related
kernel subsystems.
This feature primarily targets virtual machine hosts to quickly update
the kernel hypervisor with minimal disruption to the running virtual
machines.
If unsure, say N.
endmenu
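For reference, a minimal .config fragment enabling the options declared above (assuming the ARCH_SUPPORTS_* dependencies are met):

CONFIG_KEXEC_HANDOVER=y
CONFIG_KEXEC_HANDOVER_DEBUGFS=y
CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
CONFIG_LIVEUPDATE=y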


@@ -0,0 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
luo-y := \
luo_core.o \
luo_file.o \
luo_session.o
obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o
obj-$(CONFIG_KEXEC_HANDOVER_DEBUG) += kexec_handover_debug.o
obj-$(CONFIG_KEXEC_HANDOVER_DEBUGFS) += kexec_handover_debugfs.o
obj-$(CONFIG_LIVEUPDATE) += luo.o


@@ -0,0 +1,221 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* kexec_handover_debugfs.c - kexec handover debugfs interfaces
* Copyright (C) 2023 Alexander Graf <graf@amazon.com>
* Copyright (C) 2025 Microsoft Corporation, Mike Rapoport <rppt@kernel.org>
* Copyright (C) 2025 Google LLC, Changyuan Lyu <changyuanl@google.com>
* Copyright (C) 2025 Google LLC, Pasha Tatashin <pasha.tatashin@soleen.com>
*/
#define pr_fmt(fmt) "KHO: " fmt
#include <linux/init.h>
#include <linux/io.h>
#include <linux/libfdt.h>
#include <linux/mm.h>
#include "kexec_handover_internal.h"
static struct dentry *debugfs_root;
struct fdt_debugfs {
struct list_head list;
struct debugfs_blob_wrapper wrapper;
struct dentry *file;
};
static int __kho_debugfs_fdt_add(struct list_head *list, struct dentry *dir,
const char *name, const void *fdt)
{
struct fdt_debugfs *f;
struct dentry *file;
f = kmalloc(sizeof(*f), GFP_KERNEL);
if (!f)
return -ENOMEM;
f->wrapper.data = (void *)fdt;
f->wrapper.size = fdt_totalsize(fdt);
file = debugfs_create_blob(name, 0400, dir, &f->wrapper);
if (IS_ERR(file)) {
kfree(f);
return PTR_ERR(file);
}
f->file = file;
list_add(&f->list, list);
return 0;
}
int kho_debugfs_fdt_add(struct kho_debugfs *dbg, const char *name,
const void *fdt, bool root)
{
struct dentry *dir;
if (root)
dir = dbg->dir;
else
dir = dbg->sub_fdt_dir;
return __kho_debugfs_fdt_add(&dbg->fdt_list, dir, name, fdt);
}
void kho_debugfs_fdt_remove(struct kho_debugfs *dbg, void *fdt)
{
struct fdt_debugfs *ff;
list_for_each_entry(ff, &dbg->fdt_list, list) {
if (ff->wrapper.data == fdt) {
debugfs_remove(ff->file);
list_del(&ff->list);
kfree(ff);
break;
}
}
}
static int kho_out_finalize_get(void *data, u64 *val)
{
*val = kho_finalized();
return 0;
}
static int kho_out_finalize_set(void *data, u64 val)
{
if (val)
return kho_finalize();
else
return -EINVAL;
}
DEFINE_DEBUGFS_ATTRIBUTE(kho_out_finalize_fops, kho_out_finalize_get,
kho_out_finalize_set, "%llu\n");
static int scratch_phys_show(struct seq_file *m, void *v)
{
for (int i = 0; i < kho_scratch_cnt; i++)
seq_printf(m, "0x%llx\n", kho_scratch[i].addr);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(scratch_phys);
static int scratch_len_show(struct seq_file *m, void *v)
{
for (int i = 0; i < kho_scratch_cnt; i++)
seq_printf(m, "0x%llx\n", kho_scratch[i].size);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(scratch_len);
__init void kho_in_debugfs_init(struct kho_debugfs *dbg, const void *fdt)
{
struct dentry *dir, *sub_fdt_dir;
int err, child;
INIT_LIST_HEAD(&dbg->fdt_list);
dir = debugfs_create_dir("in", debugfs_root);
if (IS_ERR(dir)) {
err = PTR_ERR(dir);
goto err_out;
}
sub_fdt_dir = debugfs_create_dir("sub_fdts", dir);
if (IS_ERR(sub_fdt_dir)) {
err = PTR_ERR(sub_fdt_dir);
goto err_rmdir;
}
err = __kho_debugfs_fdt_add(&dbg->fdt_list, dir, "fdt", fdt);
if (err)
goto err_rmdir;
fdt_for_each_subnode(child, fdt, 0) {
int len = 0;
const char *name = fdt_get_name(fdt, child, NULL);
const u64 *fdt_phys;
fdt_phys = fdt_getprop(fdt, child, "fdt", &len);
if (!fdt_phys)
continue;
if (len != sizeof(*fdt_phys)) {
pr_warn("node %s prop fdt has invalid length: %d\n",
name, len);
continue;
}
err = __kho_debugfs_fdt_add(&dbg->fdt_list, sub_fdt_dir, name,
phys_to_virt(*fdt_phys));
if (err) {
pr_warn("failed to add fdt %s to debugfs: %pe\n", name,
ERR_PTR(err));
continue;
}
}
dbg->dir = dir;
dbg->sub_fdt_dir = sub_fdt_dir;
return;
err_rmdir:
debugfs_remove_recursive(dir);
err_out:
/*
* Failure to create /sys/kernel/debug/kho/in does not prevent
* reviving state from KHO and setting up KHO for the next
* kexec.
*/
if (err) {
pr_err("failed exposing handover FDT in debugfs: %pe\n",
ERR_PTR(err));
}
}
__init int kho_out_debugfs_init(struct kho_debugfs *dbg)
{
struct dentry *dir, *f, *sub_fdt_dir;
INIT_LIST_HEAD(&dbg->fdt_list);
dir = debugfs_create_dir("out", debugfs_root);
if (IS_ERR(dir))
return -ENOMEM;
sub_fdt_dir = debugfs_create_dir("sub_fdts", dir);
if (IS_ERR(sub_fdt_dir))
goto err_rmdir;
f = debugfs_create_file("scratch_phys", 0400, dir, NULL,
&scratch_phys_fops);
if (IS_ERR(f))
goto err_rmdir;
f = debugfs_create_file("scratch_len", 0400, dir, NULL,
&scratch_len_fops);
if (IS_ERR(f))
goto err_rmdir;
f = debugfs_create_file("finalize", 0600, dir, NULL,
&kho_out_finalize_fops);
if (IS_ERR(f))
goto err_rmdir;
dbg->dir = dir;
dbg->sub_fdt_dir = sub_fdt_dir;
return 0;
err_rmdir:
debugfs_remove_recursive(dir);
return -ENOENT;
}
__init int kho_debugfs_init(void)
{
debugfs_root = debugfs_create_dir("kho", NULL);
if (IS_ERR(debugfs_root))
return -ENOENT;
return 0;
}
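Taken together, the helpers above produce the following debugfs layout (the in/ directory exists only when the kernel was booted with a handover FDT):

/sys/kernel/debug/kho/
	in/
		fdt
		sub_fdts/<node name>
	out/
		finalize
		scratch_len
		scratch_phys
		sub_fdts/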


@@ -0,0 +1,55 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef LINUX_KEXEC_HANDOVER_INTERNAL_H
#define LINUX_KEXEC_HANDOVER_INTERNAL_H
#include <linux/kexec_handover.h>
#include <linux/list.h>
#include <linux/types.h>
#ifdef CONFIG_KEXEC_HANDOVER_DEBUGFS
#include <linux/debugfs.h>
struct kho_debugfs {
struct dentry *dir;
struct dentry *sub_fdt_dir;
struct list_head fdt_list;
};
#else
struct kho_debugfs {};
#endif
extern struct kho_scratch *kho_scratch;
extern unsigned int kho_scratch_cnt;
bool kho_finalized(void);
int kho_finalize(void);
#ifdef CONFIG_KEXEC_HANDOVER_DEBUGFS
int kho_debugfs_init(void);
void kho_in_debugfs_init(struct kho_debugfs *dbg, const void *fdt);
int kho_out_debugfs_init(struct kho_debugfs *dbg);
int kho_debugfs_fdt_add(struct kho_debugfs *dbg, const char *name,
const void *fdt, bool root);
void kho_debugfs_fdt_remove(struct kho_debugfs *dbg, void *fdt);
#else
static inline int kho_debugfs_init(void) { return 0; }
static inline void kho_in_debugfs_init(struct kho_debugfs *dbg,
const void *fdt) { }
static inline int kho_out_debugfs_init(struct kho_debugfs *dbg) { return 0; }
static inline int kho_debugfs_fdt_add(struct kho_debugfs *dbg, const char *name,
const void *fdt, bool root) { return 0; }
static inline void kho_debugfs_fdt_remove(struct kho_debugfs *dbg,
void *fdt) { }
#endif /* CONFIG_KEXEC_HANDOVER_DEBUGFS */
#ifdef CONFIG_KEXEC_HANDOVER_DEBUG
bool kho_scratch_overlap(phys_addr_t phys, size_t size);
#else
static inline bool kho_scratch_overlap(phys_addr_t phys, size_t size)
{
return false;
}
#endif /* CONFIG_KEXEC_HANDOVER_DEBUG */
#endif /* LINUX_KEXEC_HANDOVER_INTERNAL_H */


@@ -0,0 +1,450 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
/**
* DOC: Live Update Orchestrator (LUO)
*
* Live Update is a specialized, kexec-based reboot process that allows a
* running kernel to be updated from one version to another while preserving
* the state of selected resources and keeping designated hardware devices
* operational. For these devices, DMA activity may continue throughout the
* kernel transition.
*
* While the primary use case driving this work is supporting live updates of
* the Linux kernel when it is used as a hypervisor in cloud environments, the
* LUO framework itself is designed to be workload-agnostic. Live Update
* facilitates a full kernel version upgrade for any type of system.
*
* For example, a non-hypervisor system running an in-memory cache like
* memcached with many gigabytes of data can use LUO. The userspace service
* can place its cache into a memfd, have its state preserved by LUO, and
* restore it immediately after the kernel kexec.
*
* Whether the system is running virtual machines, containers, a
* high-performance database, or networking services, LUO's primary goal is to
* enable a full kernel update by preserving critical userspace state and
* keeping essential devices operational.
*
* The core of LUO is a mechanism that tracks the progress of a live update,
* along with a callback API that allows other kernel subsystems to participate
* in the process. Example subsystems that can hook into LUO include: kvm,
* iommu, interrupts, vfio, participating filesystems, and memory management.
*
* LUO uses Kexec Handover to transfer memory state from the current kernel to
* the next kernel. For more details see
* Documentation/core-api/kho/concepts.rst.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/kexec_handover.h>
#include <linux/kho/abi/luo.h>
#include <linux/kobject.h>
#include <linux/libfdt.h>
#include <linux/liveupdate.h>
#include <linux/miscdevice.h>
#include <linux/mm.h>
#include <linux/sizes.h>
#include <linux/string.h>
#include <linux/unaligned.h>
#include "kexec_handover_internal.h"
#include "luo_internal.h"
static struct {
bool enabled;
void *fdt_out;
void *fdt_in;
u64 liveupdate_num;
} luo_global;
static int __init early_liveupdate_param(char *buf)
{
return kstrtobool(buf, &luo_global.enabled);
}
early_param("liveupdate", early_liveupdate_param);
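/*
 * The value is parsed with kstrtobool(), so the usual boolean spellings
 * work on the kernel command line, e.g. "liveupdate=1", "liveupdate=on"
 * or "liveupdate=y".
 */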
static int __init luo_early_startup(void)
{
phys_addr_t fdt_phys;
int err, ln_size;
const void *ptr;
if (!kho_is_enabled()) {
if (liveupdate_enabled())
pr_warn("Disabling liveupdate because KHO is disabled\n");
luo_global.enabled = false;
return 0;
}
/* Retrieve LUO subtree, and verify its format. */
err = kho_retrieve_subtree(LUO_FDT_KHO_ENTRY_NAME, &fdt_phys);
if (err) {
if (err != -ENOENT) {
pr_err("failed to retrieve FDT '%s' from KHO: %pe\n",
LUO_FDT_KHO_ENTRY_NAME, ERR_PTR(err));
return err;
}
return 0;
}
luo_global.fdt_in = phys_to_virt(fdt_phys);
err = fdt_node_check_compatible(luo_global.fdt_in, 0,
LUO_FDT_COMPATIBLE);
if (err) {
pr_err("FDT '%s' is incompatible with '%s' [%d]\n",
LUO_FDT_KHO_ENTRY_NAME, LUO_FDT_COMPATIBLE, err);
return -EINVAL;
}
ln_size = 0;
ptr = fdt_getprop(luo_global.fdt_in, 0, LUO_FDT_LIVEUPDATE_NUM,
&ln_size);
if (!ptr || ln_size != sizeof(luo_global.liveupdate_num)) {
pr_err("Unable to get live update number '%s' [%d]\n",
LUO_FDT_LIVEUPDATE_NUM, ln_size);
return -EINVAL;
}
luo_global.liveupdate_num = get_unaligned((u64 *)ptr);
pr_info("Retrieved live update data, liveupdate number: %lld\n",
luo_global.liveupdate_num);
err = luo_session_setup_incoming(luo_global.fdt_in);
if (err)
return err;
return 0;
}
static int __init liveupdate_early_init(void)
{
int err;
err = luo_early_startup();
if (err) {
luo_global.enabled = false;
luo_restore_fail("The incoming tree failed to initialize properly [%pe], disabling live update\n",
ERR_PTR(err));
}
return err;
}
early_initcall(liveupdate_early_init);
/* Called during boot to create the outgoing LUO FDT tree */
static int __init luo_fdt_setup(void)
{
const u64 ln = luo_global.liveupdate_num + 1;
void *fdt_out;
int err;
fdt_out = kho_alloc_preserve(LUO_FDT_SIZE);
if (IS_ERR(fdt_out)) {
pr_err("failed to allocate/preserve FDT memory\n");
return PTR_ERR(fdt_out);
}
err = fdt_create(fdt_out, LUO_FDT_SIZE);
err |= fdt_finish_reservemap(fdt_out);
err |= fdt_begin_node(fdt_out, "");
err |= fdt_property_string(fdt_out, "compatible", LUO_FDT_COMPATIBLE);
err |= fdt_property(fdt_out, LUO_FDT_LIVEUPDATE_NUM, &ln, sizeof(ln));
err |= luo_session_setup_outgoing(fdt_out);
err |= fdt_end_node(fdt_out);
err |= fdt_finish(fdt_out);
if (err)
goto exit_free;
err = kho_add_subtree(LUO_FDT_KHO_ENTRY_NAME, fdt_out);
if (err)
goto exit_free;
luo_global.fdt_out = fdt_out;
return 0;
exit_free:
kho_unpreserve_free(fdt_out);
pr_err("failed to prepare LUO FDT: %d\n", err);
return err;
}
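/*
 * The outgoing tree built above is roughly (names per the LUO_FDT_*
 * constants):
 *
 *	/ {
 *		compatible = LUO_FDT_COMPATIBLE;
 *		LUO_FDT_LIVEUPDATE_NUM = <u64>;
 *		... nodes added by luo_session_setup_outgoing() ...
 *	};
 */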
/*
* late initcall because it initializes the outgoing tree that is needed only
* once userspace starts using /dev/liveupdate.
*/
static int __init luo_late_startup(void)
{
int err;
if (!liveupdate_enabled())
return 0;
err = luo_fdt_setup();
if (err)
luo_global.enabled = false;
return err;
}
late_initcall(luo_late_startup);
/* Public Functions */
/**
* liveupdate_reboot() - Kernel reboot notifier for live update final
* serialization.
*
* This function is invoked directly from the reboot() syscall pathway
* if kexec is in progress.
*
* If any callback fails, this function aborts KHO, undoes the freeze()
* callbacks, and returns an error.
*/
int liveupdate_reboot(void)
{
int err;
if (!liveupdate_enabled())
return 0;
err = luo_session_serialize();
if (err)
return err;
err = kho_finalize();
if (err) {
pr_err("kho_finalize failed %d\n", err);
/*
* kho_finalize() may return libfdt errors; to avoid passing
* unknown errors to userspace, change this to EAGAIN.
*/
err = -EAGAIN;
}
return err;
}
/**
* liveupdate_enabled - Check if the live update feature is enabled.
*
* This function returns the state of the live update feature flag, which
* can be controlled via the ``liveupdate`` kernel command-line parameter.
*
* Return: true if live update is enabled, false otherwise.
*/
bool liveupdate_enabled(void)
{
return luo_global.enabled;
}
/**
* DOC: LUO ioctl Interface
*
* The IOCTL user-space control interface for the LUO subsystem.
* It registers a character device, typically found at ``/dev/liveupdate``,
* which allows a userspace agent to manage the LUO state machine and its
* associated resources, such as preservable file descriptors.
*
* To ensure that the state machine is controlled by a single entity, access
* to this device is exclusive: only one process is permitted to have
* ``/dev/liveupdate`` open at any given time. Subsequent open attempts will
* fail with -EBUSY until the first process closes its file descriptor.
* This singleton model simplifies state management by preventing conflicting
* commands from multiple userspace agents.
*/
struct luo_device_state {
struct miscdevice miscdev;
atomic_t in_use;
};
static int luo_ioctl_create_session(struct luo_ucmd *ucmd)
{
struct liveupdate_ioctl_create_session *argp = ucmd->cmd;
struct file *file;
int err;
argp->fd = get_unused_fd_flags(O_CLOEXEC);
if (argp->fd < 0)
return argp->fd;
err = luo_session_create(argp->name, &file);
if (err)
goto err_put_fd;
err = luo_ucmd_respond(ucmd, sizeof(*argp));
if (err)
goto err_put_file;
fd_install(argp->fd, file);
return 0;
err_put_file:
fput(file);
err_put_fd:
put_unused_fd(argp->fd);
return err;
}
static int luo_ioctl_retrieve_session(struct luo_ucmd *ucmd)
{
struct liveupdate_ioctl_retrieve_session *argp = ucmd->cmd;
struct file *file;
int err;
argp->fd = get_unused_fd_flags(O_CLOEXEC);
if (argp->fd < 0)
return argp->fd;
err = luo_session_retrieve(argp->name, &file);
if (err < 0)
goto err_put_fd;
err = luo_ucmd_respond(ucmd, sizeof(*argp));
if (err)
goto err_put_file;
fd_install(argp->fd, file);
return 0;
err_put_file:
fput(file);
err_put_fd:
put_unused_fd(argp->fd);
return err;
}
static int luo_open(struct inode *inodep, struct file *filep)
{
struct luo_device_state *ldev = container_of(filep->private_data,
struct luo_device_state,
miscdev);
if (atomic_cmpxchg(&ldev->in_use, 0, 1))
return -EBUSY;
/* Always return -EIO to user if deserialization fails */
if (luo_session_deserialize()) {
atomic_set(&ldev->in_use, 0);
return -EIO;
}
return 0;
}
static int luo_release(struct inode *inodep, struct file *filep)
{
struct luo_device_state *ldev = container_of(filep->private_data,
struct luo_device_state,
miscdev);
atomic_set(&ldev->in_use, 0);
return 0;
}
union ucmd_buffer {
struct liveupdate_ioctl_create_session create;
struct liveupdate_ioctl_retrieve_session retrieve;
};
struct luo_ioctl_op {
unsigned int size;
unsigned int min_size;
unsigned int ioctl_num;
int (*execute)(struct luo_ucmd *ucmd);
};
#define IOCTL_OP(_ioctl, _fn, _struct, _last) \
[_IOC_NR(_ioctl) - LIVEUPDATE_CMD_BASE] = { \
.size = sizeof(_struct) + \
BUILD_BUG_ON_ZERO(sizeof(union ucmd_buffer) < \
sizeof(_struct)), \
.min_size = offsetofend(_struct, _last), \
.ioctl_num = _ioctl, \
.execute = _fn, \
}
static const struct luo_ioctl_op luo_ioctl_ops[] = {
IOCTL_OP(LIVEUPDATE_IOCTL_CREATE_SESSION, luo_ioctl_create_session,
struct liveupdate_ioctl_create_session, name),
IOCTL_OP(LIVEUPDATE_IOCTL_RETRIEVE_SESSION, luo_ioctl_retrieve_session,
struct liveupdate_ioctl_retrieve_session, name),
};
static long luo_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
{
const struct luo_ioctl_op *op;
struct luo_ucmd ucmd = {};
union ucmd_buffer buf;
unsigned int nr;
int err;
nr = _IOC_NR(cmd);
if (nr < LIVEUPDATE_CMD_BASE ||
(nr - LIVEUPDATE_CMD_BASE) >= ARRAY_SIZE(luo_ioctl_ops)) {
return -EINVAL;
}
ucmd.ubuffer = (void __user *)arg;
err = get_user(ucmd.user_size, (u32 __user *)ucmd.ubuffer);
if (err)
return err;
op = &luo_ioctl_ops[nr - LIVEUPDATE_CMD_BASE];
if (op->ioctl_num != cmd)
return -ENOIOCTLCMD;
if (ucmd.user_size < op->min_size)
return -EINVAL;
ucmd.cmd = &buf;
err = copy_struct_from_user(ucmd.cmd, op->size, ucmd.ubuffer,
ucmd.user_size);
if (err)
return err;
return op->execute(&ucmd);
}
static const struct file_operations luo_fops = {
.owner = THIS_MODULE,
.open = luo_open,
.release = luo_release,
.unlocked_ioctl = luo_ioctl,
};
static struct luo_device_state luo_dev = {
.miscdev = {
.minor = MISC_DYNAMIC_MINOR,
.name = "liveupdate",
.fops = &luo_fops,
},
.in_use = ATOMIC_INIT(0),
};
static int __init liveupdate_ioctl_init(void)
{
if (!liveupdate_enabled())
return 0;
return misc_register(&luo_dev.miscdev);
}
late_initcall(liveupdate_ioctl_init);
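To illustrate the interface, here is a hypothetical userspace sketch for creating a session. The authoritative layout of struct liveupdate_ioctl_create_session is in the uapi header; assumed here is only what the code above implies: a leading size field (read via get_user() in luo_ioctl()) plus the name and fd members used by luo_ioctl_create_session().

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/liveupdate.h>

/* Create a named LUO session; returns the session fd or -1 on error. */
static int create_session(int luo_fd, const char *name)
{
	struct liveupdate_ioctl_create_session arg = {
		.size = sizeof(arg),	/* assumed leading size field */
	};

	strncpy(arg.name, name, sizeof(arg.name) - 1);
	if (ioctl(luo_fd, LIVEUPDATE_IOCTL_CREATE_SESSION, &arg))
		return -1;
	return arg.fd;	/* fd_install()ed by the kernel on success */
}

int main(void)
{
	/* Only one process may hold /dev/liveupdate open at a time. */
	int luo_fd = open("/dev/liveupdate", O_RDWR);
	int session_fd;

	if (luo_fd < 0)
		return 1;
	session_fd = create_session(luo_fd, "vm0");
	/* ... LIVEUPDATE_SESSION_PRESERVE_FD etc. against session_fd ... */
	close(session_fd);
	close(luo_fd);
	return 0;
}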


@@ -0,0 +1,889 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
/**
* DOC: LUO File Descriptors
*
* LUO provides the infrastructure to preserve specific, stateful file
* descriptors across a kexec-based live update. The primary goal is to allow
* workloads, such as virtual machines using vfio, memfd, or iommufd, to
* retain access to their essential resources without interruption.
*
* The framework is built around a callback-based handler model and a well-
* defined lifecycle for each preserved file.
*
* Handler Registration:
* Kernel modules responsible for a specific file type (e.g., memfd, vfio)
* register a &struct liveupdate_file_handler. This handler provides a set of
* callbacks that LUO invokes at different stages of the update process, most
* notably:
*
* - can_preserve(): A lightweight check to determine if the handler is
* compatible with a given 'struct file'.
* - preserve(): The heavyweight operation that saves the file's state and
* returns an opaque u64 handle. This is typically performed while the
* workload is still active to minimize the downtime during the
* actual reboot transition.
* - unpreserve(): Cleans up any resources allocated by .preserve(), called
* if the preservation process is aborted before the reboot (i.e. session is
* closed).
* - freeze(): A final pre-reboot opportunity to prepare the state for kexec.
* We are already in the reboot syscall, and therefore userspace cannot
* mutate the file anymore.
* - unfreeze(): Undoes the actions of .freeze(), called if the live update
* is aborted after the freeze phase.
* - retrieve(): Reconstructs the file in the new kernel from the preserved
* handle.
* - finish(): Performs a final check and cleanup in the new kernel. After a
* successful finish call, LUO gives up ownership of this file.
*
* File Preservation Lifecycle happy path:
*
* 1. Preserve (Normal Operation): A userspace agent preserves files one by one
* via an ioctl. For each file, luo_preserve_file() finds a compatible
* handler, calls its .preserve() operation, and creates an internal &struct
* luo_file to track the live state.
*
* 2. Freeze (Pre-Reboot): Just before the kexec, luo_file_freeze() is called.
* It iterates through all preserved files, calls their respective .freeze()
* operation, and serializes their final metadata (compatible string, token,
* and data handle) into a contiguous memory block for KHO.
*
* 3. Deserialize: After kexec, luo_file_deserialize() runs when the session is
* deserialized (which is when /dev/liveupdate is first opened). It reads the
* serialized data from the KHO memory region and reconstructs the in-memory
* list of &struct luo_file instances for the new kernel, linking them to
* their corresponding handlers.
*
* 4. Retrieve (New Kernel - Userspace Ready): The userspace agent can now
* restore file descriptors by providing a token. luo_retrieve_file()
* searches for the matching token, calls the handler's .retrieve() op to
* re-create the 'struct file', and returns a new FD. Files can be
* retrieved in ANY order.
*
* 5. Finish (New Kernel - Cleanup): Once a session retrieval is complete,
* luo_file_finish() is called. It iterates through all files, invokes their
* .finish() operations for final cleanup, and releases all associated kernel
* resources.
*
* File Preservation Lifecycle unhappy paths:
*
* 1. Abort Before Reboot: If the userspace agent aborts the live update
* process before calling reboot (e.g., by closing the session file
* descriptor), the session's release handler calls
* luo_file_unpreserve_files(). This invokes the .unpreserve() callback on
* all preserved files, ensuring all allocated resources are cleaned up and
* returning the system to a clean state.
*
* 2. Freeze Failure: During the reboot() syscall, if any handler's .freeze()
* op fails, the .unfreeze() op is invoked on all previously *successful*
* freezes to roll back their state. The reboot() syscall then returns an
* error to userspace, canceling the live update.
*
* 3. Finish Failure: In the new kernel, if a handler's .finish() op fails,
* the luo_file_finish() operation is aborted. LUO retains ownership of
* all files within that session, including those that were not yet
* processed. The userspace agent can attempt to call the finish operation
* again later. If the issue cannot be resolved, these resources will be held
* by LUO until the next live update cycle, at which point they will be
* discarded.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/cleanup.h>
#include <linux/compiler.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/kexec_handover.h>
#include <linux/kho/abi/luo.h>
#include <linux/liveupdate.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/string.h>
#include "luo_internal.h"
static LIST_HEAD(luo_file_handler_list);
/* Two 4K pages give space for 128 files per file_set */
#define LUO_FILE_PGCNT 2ul
#define LUO_FILE_MAX \
((LUO_FILE_PGCNT << PAGE_SHIFT) / sizeof(struct luo_file_ser))
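/*
 * With 4K pages this is 8192 bytes of serialization space; the stated
 * capacity of 128 files implies sizeof(struct luo_file_ser) == 64.
 */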
/**
* struct luo_file - Represents a single preserved file instance.
* @fh: Pointer to the &struct liveupdate_file_handler that manages
* this type of file.
* @file: Pointer to the kernel's &struct file that is being preserved.
* This is NULL in the new kernel until the file is successfully
* retrieved.
* @serialized_data: The opaque u64 handle to the serialized state of the file.
* This handle is passed back to the handler's .freeze(),
* .retrieve(), and .finish() callbacks, allowing it to track
* and update its serialized state across phases.
* @private_data: Pointer to the private data for the file used to hold runtime
* state that is not preserved. Set by the handler's .preserve()
* callback, and must be freed in the handler's .unpreserve()
* callback.
* @retrieved: A flag indicating whether a user/kernel in the new kernel has
* successfully called retrieve() on this file. This prevents
* multiple retrieval attempts.
* @mutex: A mutex that protects the fields of this specific instance
* (e.g., @retrieved, @file), ensuring that operations like
* retrieving or finishing a file are atomic.
* @list: The list_head linking this instance into its parent
* file_set's list of preserved files.
* @token: The user-provided unique token used to identify this file.
*
* This structure is the core in-kernel representation of a single file being
* managed through a live update. An instance is created by luo_preserve_file()
* to link a 'struct file' to its corresponding handler, a user-provided token,
* and the serialized state handle returned by the handler's .preserve()
* operation.
*
* These instances are tracked in a per-file_set list. The @serialized_data
* field, which holds a handle to the file's serialized state, may be updated
* during the .freeze() callback before being serialized for the next kernel.
* After reboot, these structures are recreated by luo_file_deserialize() and
* are finally cleaned up by luo_file_finish().
*/
struct luo_file {
struct liveupdate_file_handler *fh;
struct file *file;
u64 serialized_data;
void *private_data;
bool retrieved;
struct mutex mutex;
struct list_head list;
u64 token;
};
static int luo_alloc_files_mem(struct luo_file_set *file_set)
{
size_t size;
void *mem;
if (file_set->files)
return 0;
WARN_ON_ONCE(file_set->count);
size = LUO_FILE_PGCNT << PAGE_SHIFT;
mem = kho_alloc_preserve(size);
if (IS_ERR(mem))
return PTR_ERR(mem);
file_set->files = mem;
return 0;
}
static void luo_free_files_mem(struct luo_file_set *file_set)
{
/* If file_set has files, no need to free preservation memory */
if (file_set->count)
return;
if (!file_set->files)
return;
kho_unpreserve_free(file_set->files);
file_set->files = NULL;
}
static bool luo_token_is_used(struct luo_file_set *file_set, u64 token)
{
struct luo_file *iter;
list_for_each_entry(iter, &file_set->files_list, list) {
if (iter->token == token)
return true;
}
return false;
}
/**
* luo_preserve_file - Initiate the preservation of a file descriptor.
* @file_set: The file_set to which the preserved file will be added.
* @token: A unique, user-provided identifier for the file.
* @fd: The file descriptor to be preserved.
*
* This function orchestrates the first phase of preserving a file. Upon entry,
* it takes a reference to the 'struct file' via fget(), effectively making LUO
* a co-owner of the file. This reference is held until the file is either
* unpreserved or successfully finished in the next kernel, preventing the file
* from being prematurely destroyed.
*
* It performs the following steps:
*
* 1. Validates that the @token is not already in use within the file_set.
* 2. Ensures the file_set's memory for files serialization is allocated
* (allocates if needed).
* 3. Iterates through registered handlers, calling can_preserve() to find one
* compatible with the given @fd.
* 4. Calls the handler's .preserve() operation, which saves the file's state
* and returns an opaque private data handle.
* 5. Adds the new instance to the file_set's internal list.
*
* On success, LUO takes a reference to the 'struct file' and considers it
* under its management until it is unpreserved or finished.
*
* In case of any failure, all intermediate allocations (file reference, memory
* for the 'luo_file' struct, etc.) are cleaned up before returning an error.
*
* Context: Can be called from an ioctl handler during normal system operation.
* Return: 0 on success. Returns a negative errno on failure:
* -EEXIST if the token is already used.
* -EBADF if the file descriptor is invalid.
* -ENOSPC if the file_set is full.
* -ENOENT if no compatible handler is found.
* -ENOMEM on memory allocation failure.
* Other errors may be returned by .preserve().
*/
int luo_preserve_file(struct luo_file_set *file_set, u64 token, int fd)
{
struct liveupdate_file_op_args args = {0};
struct liveupdate_file_handler *fh;
struct luo_file *luo_file;
struct file *file;
int err;
if (luo_token_is_used(file_set, token))
return -EEXIST;
if (file_set->count == LUO_FILE_MAX)
return -ENOSPC;
file = fget(fd);
if (!file)
return -EBADF;
err = luo_alloc_files_mem(file_set);
if (err)
goto err_fput;
err = -ENOENT;
luo_list_for_each_private(fh, &luo_file_handler_list, list) {
if (fh->ops->can_preserve(fh, file)) {
err = 0;
break;
}
}
/* err is still -ENOENT if no handler was found */
if (err)
goto err_free_files_mem;
luo_file = kzalloc(sizeof(*luo_file), GFP_KERNEL);
if (!luo_file) {
err = -ENOMEM;
goto err_free_files_mem;
}
luo_file->file = file;
luo_file->fh = fh;
luo_file->token = token;
luo_file->retrieved = false;
mutex_init(&luo_file->mutex);
args.handler = fh;
args.file = file;
err = fh->ops->preserve(&args);
if (err)
goto err_kfree;
luo_file->serialized_data = args.serialized_data;
luo_file->private_data = args.private_data;
list_add_tail(&luo_file->list, &file_set->files_list);
file_set->count++;
return 0;
err_kfree:
kfree(luo_file);
err_free_files_mem:
luo_free_files_mem(file_set);
err_fput:
fput(file);
return err;
}
/**
* luo_file_unpreserve_files - Unpreserves all files from a file_set.
* @file_set: The files to be cleaned up.
*
* This function serves as the primary cleanup path for a file_set. It is
* invoked when the userspace agent closes the file_set's file descriptor.
*
* For each file, it performs the following cleanup actions:
* 1. Calls the handler's .unpreserve() callback to allow the handler to
* release any resources it allocated.
* 2. Removes the file from the file_set's internal tracking list.
* 3. Releases the reference to the 'struct file' that was taken by
* luo_preserve_file() via fput(), returning ownership.
* 4. Frees the memory associated with the internal 'struct luo_file'.
*
* After all individual files are unpreserved, it frees the contiguous memory
* block that was allocated to hold their serialization data.
*/
void luo_file_unpreserve_files(struct luo_file_set *file_set)
{
struct luo_file *luo_file;
while (!list_empty(&file_set->files_list)) {
struct liveupdate_file_op_args args = {0};
luo_file = list_last_entry(&file_set->files_list,
struct luo_file, list);
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.private_data = luo_file->private_data;
luo_file->fh->ops->unpreserve(&args);
list_del(&luo_file->list);
file_set->count--;
fput(luo_file->file);
mutex_destroy(&luo_file->mutex);
kfree(luo_file);
}
luo_free_files_mem(file_set);
}
static int luo_file_freeze_one(struct luo_file_set *file_set,
struct luo_file *luo_file)
{
int err = 0;
guard(mutex)(&luo_file->mutex);
if (luo_file->fh->ops->freeze) {
struct liveupdate_file_op_args args = {0};
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.private_data = luo_file->private_data;
err = luo_file->fh->ops->freeze(&args);
if (!err)
luo_file->serialized_data = args.serialized_data;
}
return err;
}
static void luo_file_unfreeze_one(struct luo_file_set *file_set,
struct luo_file *luo_file)
{
guard(mutex)(&luo_file->mutex);
if (luo_file->fh->ops->unfreeze) {
struct liveupdate_file_op_args args = {0};
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.private_data = luo_file->private_data;
luo_file->fh->ops->unfreeze(&args);
}
luo_file->serialized_data = 0;
}
static void __luo_file_unfreeze(struct luo_file_set *file_set,
struct luo_file *failed_entry)
{
struct list_head *files_list = &file_set->files_list;
struct luo_file *luo_file;
list_for_each_entry(luo_file, files_list, list) {
if (luo_file == failed_entry)
break;
luo_file_unfreeze_one(file_set, luo_file);
}
memset(file_set->files, 0, LUO_FILE_PGCNT << PAGE_SHIFT);
}
/**
* luo_file_freeze - Freezes all preserved files and serializes their metadata.
* @file_set: The file_set whose files are to be frozen.
* @file_set_ser: Where to put the serialized file_set.
*
* This function is called from the reboot() syscall path, just before the
* kernel transitions to the new image via kexec. Its purpose is to perform the
* final preparation and serialization of all preserved files in the file_set.
*
* It iterates through each preserved file in FIFO order (the order of
* preservation) and performs two main actions:
*
* 1. Freezes the File: It calls the handler's .freeze() callback for each
* file. This gives the handler a final opportunity to quiesce the device or
* prepare its state for the upcoming reboot. The handler may update its
* private data handle during this step.
*
* 2. Serializes Metadata: After a successful freeze, it copies the final file
* metadata (the handler's compatible string, the user token, and the final
* private data handle) into the pre-allocated contiguous memory buffer
* (file_set->files) that will be handed over to the next kernel via KHO.
*
* Error Handling (Rollback):
* This function is atomic. If any handler's .freeze() operation fails, the
* entire live update is aborted. The __luo_file_unfreeze() helper is
* immediately called to invoke the .unfreeze() op on all files that were
* successfully frozen before the point of failure, rolling them back to a
* running state. The function then returns an error, causing the reboot()
* syscall to fail.
*
* Context: Called only from the liveupdate_reboot() path.
* Return: 0 on success, or a negative errno on failure.
*/
int luo_file_freeze(struct luo_file_set *file_set,
struct luo_file_set_ser *file_set_ser)
{
struct luo_file_ser *file_ser = file_set->files;
struct luo_file *luo_file;
int err;
int i;
if (!file_set->count)
return 0;
if (WARN_ON(!file_ser))
return -EINVAL;
i = 0;
list_for_each_entry(luo_file, &file_set->files_list, list) {
err = luo_file_freeze_one(file_set, luo_file);
if (err < 0) {
pr_warn("Freeze failed for token[%#0llx] handler[%s] err[%pe]\n",
luo_file->token, luo_file->fh->compatible,
ERR_PTR(err));
goto err_unfreeze;
}
strscpy(file_ser[i].compatible, luo_file->fh->compatible,
sizeof(file_ser[i].compatible));
file_ser[i].data = luo_file->serialized_data;
file_ser[i].token = luo_file->token;
i++;
}
file_set_ser->count = file_set->count;
if (file_set->files)
file_set_ser->files = virt_to_phys(file_set->files);
return 0;
err_unfreeze:
__luo_file_unfreeze(file_set, luo_file);
return err;
}
/**
* luo_file_unfreeze - Unfreezes all files in a file_set and clears serialization
* @file_set: The file_set whose files are to be unfrozen.
* @file_set_ser: Serialized file_set.
*
* This function rolls back the state of all files in a file_set after the
* freeze phase has begun but must be aborted. It is the counterpart to
* luo_file_freeze().
*
* It invokes the __luo_file_unfreeze() helper with a NULL argument, which
* signals the helper to iterate through all files in the file_set and call
* their respective .unfreeze() handler callbacks.
*
* Context: This is called when the live update is aborted during
* the reboot() syscall, after luo_file_freeze() has been called.
*/
void luo_file_unfreeze(struct luo_file_set *file_set,
struct luo_file_set_ser *file_set_ser)
{
if (!file_set->count)
return;
__luo_file_unfreeze(file_set, NULL);
memset(file_set_ser, 0, sizeof(*file_set_ser));
}
/**
* luo_retrieve_file - Restores a preserved file from a file_set by its token.
* @file_set: The file_set from which to retrieve the file.
* @token: The unique token identifying the file to be restored.
* @filep: Output parameter; on success, this is populated with a pointer
* to the newly retrieved 'struct file'.
*
* This function is the primary mechanism for recreating a file in the new
* kernel after a live update. It searches the file_set's list of deserialized
* files for an entry matching the provided @token.
*
* The operation is idempotent: if a file has already been successfully
* retrieved, this function will simply return a pointer to the existing
* 'struct file' and report success without re-executing the retrieve
* operation. This is handled by checking the 'retrieved' flag under a lock.
*
* File retrieval can happen in any order; it is not bound by the order of
* preservation.
*
* Context: Can be called from an ioctl or other in-kernel code in the new
* kernel.
* Return: 0 on success. Returns a negative errno on failure:
* -ENOENT if no file with the matching token is found.
* Any error code returned by the handler's .retrieve() op.
*/
int luo_retrieve_file(struct luo_file_set *file_set, u64 token,
struct file **filep)
{
struct liveupdate_file_op_args args = {0};
struct luo_file *luo_file;
int err;
if (list_empty(&file_set->files_list))
return -ENOENT;
list_for_each_entry(luo_file, &file_set->files_list, list) {
if (luo_file->token == token)
break;
}
if (luo_file->token != token)
return -ENOENT;
guard(mutex)(&luo_file->mutex);
if (luo_file->retrieved) {
/*
* Someone is asking for this file again, so get a reference
* for them.
*/
get_file(luo_file->file);
*filep = luo_file->file;
return 0;
}
args.handler = luo_file->fh;
args.serialized_data = luo_file->serialized_data;
err = luo_file->fh->ops->retrieve(&args);
if (!err) {
luo_file->file = args.file;
/* Get reference so we can keep this file in LUO until finish */
get_file(luo_file->file);
*filep = luo_file->file;
luo_file->retrieved = true;
}
return err;
}
static int luo_file_can_finish_one(struct luo_file_set *file_set,
struct luo_file *luo_file)
{
bool can_finish = true;
guard(mutex)(&luo_file->mutex);
if (luo_file->fh->ops->can_finish) {
struct liveupdate_file_op_args args = {0};
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.retrieved = luo_file->retrieved;
can_finish = luo_file->fh->ops->can_finish(&args);
}
return can_finish ? 0 : -EBUSY;
}
static void luo_file_finish_one(struct luo_file_set *file_set,
struct luo_file *luo_file)
{
struct liveupdate_file_op_args args = {0};
guard(mutex)(&luo_file->mutex);
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.retrieved = luo_file->retrieved;
luo_file->fh->ops->finish(&args);
}
/**
* luo_file_finish - Completes the lifecycle for all files in a file_set.
* @file_set: The file_set to be finalized.
*
* This function orchestrates the final teardown of a live update file_set in
* the new kernel. It should be called after all necessary files have been
* retrieved and the userspace agent is ready to release the preserved state.
*
* The function iterates through all tracked files. For each file, it performs
* the following sequence of cleanup actions:
*
* 1. If a file has not yet been retrieved, retrieves it, and calls
* can_finish() on every file in the file_set. Only if every can_finish()
* call returns true does it continue to finish.
* 2. Calls the handler's .finish() callback (via luo_file_finish_one) to
* allow for final resource cleanup within the handler.
* 3. Releases LUO's ownership reference on the 'struct file' via fput(). This
* is the counterpart to the get_file() call in luo_retrieve_file().
* 4. Removes the 'struct luo_file' from the file_set's internal list.
* 5. Frees the memory for the 'struct luo_file' instance itself.
*
* After successfully finishing all individual files, it frees the
* contiguous memory block that was used to transfer the serialized metadata
* from the previous kernel.
*
* Error Handling (Atomic Failure):
* This operation is atomic. If any handler's .can_finish() op fails, the entire
* function aborts immediately and returns an error.
*
* Context: Can be called from an ioctl handler in the new kernel.
* Return: 0 on success, or a negative errno on failure.
*/
int luo_file_finish(struct luo_file_set *file_set)
{
struct list_head *files_list = &file_set->files_list;
struct luo_file *luo_file;
int err;
if (!file_set->count)
return 0;
list_for_each_entry(luo_file, files_list, list) {
err = luo_file_can_finish_one(file_set, luo_file);
if (err)
return err;
}
while (!list_empty(&file_set->files_list)) {
luo_file = list_last_entry(&file_set->files_list,
struct luo_file, list);
luo_file_finish_one(file_set, luo_file);
if (luo_file->file)
fput(luo_file->file);
list_del(&luo_file->list);
file_set->count--;
mutex_destroy(&luo_file->mutex);
kfree(luo_file);
}
if (file_set->files) {
kho_restore_free(file_set->files);
file_set->files = NULL;
}
return 0;
}
/**
* luo_file_deserialize - Reconstructs the list of preserved files in the new kernel.
* @file_set: The incoming file_set to fill with deserialized data.
* @file_set_ser: Serialized KHO file_set data from the previous kernel.
*
* This function is called during the early boot process of the new kernel. It
* takes the raw, contiguous memory block of 'struct luo_file_ser' entries,
* provided by the previous kernel, and transforms it back into a live,
* in-memory linked list of 'struct luo_file' instances.
*
* For each serialized entry, it performs the following steps:
* 1. Reads the 'compatible' string.
* 2. Searches the global list of registered file handlers for one that
* matches the compatible string.
* 3. Allocates a new 'struct luo_file'.
* 4. Populates the new structure with the deserialized data (token, private
* data handle) and links it to the found handler. The 'file' pointer is
* initialized to NULL, as the file has not been retrieved yet.
* 5. Adds the new 'struct luo_file' to the file_set's files_list.
*
* This prepares the file_set for userspace, which can later call
* luo_retrieve_file() to restore the actual file descriptors.
*
* Context: Called from session deserialization.
*/
int luo_file_deserialize(struct luo_file_set *file_set,
struct luo_file_set_ser *file_set_ser)
{
struct luo_file_ser *file_ser;
u64 i;
if (!file_set_ser->files) {
WARN_ON(file_set_ser->count);
return 0;
}
file_set->count = file_set_ser->count;
file_set->files = phys_to_virt(file_set_ser->files);
/*
* Note on error handling:
*
* If deserialization fails (e.g., allocation failure or corrupt data),
* we intentionally skip cleanup of files that were already restored.
*
* A partial failure leaves the preserved state inconsistent.
* Implementing a safe "undo" to unwind complex dependencies (sessions,
* files, hardware state) is error-prone and provides little value, as
* the system is effectively in a broken state.
*
* We treat these resources as leaked. The expected recovery path is for
* userspace to detect the failure and trigger a reboot, which will
* reliably reset devices and reclaim memory.
*/
file_ser = file_set->files;
for (i = 0; i < file_set->count; i++) {
struct liveupdate_file_handler *fh;
bool handler_found = false;
struct luo_file *luo_file;
luo_list_for_each_private(fh, &luo_file_handler_list, list) {
if (!strcmp(fh->compatible, file_ser[i].compatible)) {
handler_found = true;
break;
}
}
if (!handler_found) {
pr_warn("No registered handler for compatible '%s'\n",
file_ser[i].compatible);
return -ENOENT;
}
luo_file = kzalloc(sizeof(*luo_file), GFP_KERNEL);
if (!luo_file)
return -ENOMEM;
luo_file->fh = fh;
luo_file->file = NULL;
luo_file->serialized_data = file_ser[i].data;
luo_file->token = file_ser[i].token;
luo_file->retrieved = false;
mutex_init(&luo_file->mutex);
list_add_tail(&luo_file->list, &file_set->files_list);
}
return 0;
}
void luo_file_set_init(struct luo_file_set *file_set)
{
INIT_LIST_HEAD(&file_set->files_list);
}
void luo_file_set_destroy(struct luo_file_set *file_set)
{
WARN_ON(file_set->count);
WARN_ON(!list_empty(&file_set->files_list));
}
/**
* liveupdate_register_file_handler - Register a file handler with LUO.
* @fh: Pointer to a caller-allocated &struct liveupdate_file_handler.
* The caller must initialize this structure, including a unique
* 'compatible' string and a valid 'fh' callbacks. This function adds the
* handler to the global list of supported file handlers.
*
* Context: Typically called during module initialization for file types that
* support live update preservation.
*
* Return: 0 on success. Negative errno on failure.
*/
int liveupdate_register_file_handler(struct liveupdate_file_handler *fh)
{
struct liveupdate_file_handler *fh_iter;
int err;
if (!liveupdate_enabled())
return -EOPNOTSUPP;
/* Sanity check that all required callbacks are set */
if (!fh->ops->preserve || !fh->ops->unpreserve || !fh->ops->retrieve ||
!fh->ops->finish || !fh->ops->can_preserve) {
return -EINVAL;
}
/*
* Ensure the system is quiescent (no active sessions).
* This prevents registering new handlers while sessions are active or
* while deserialization is in progress.
*/
if (!luo_session_quiesce())
return -EBUSY;
/* Check for duplicate compatible strings */
luo_list_for_each_private(fh_iter, &luo_file_handler_list, list) {
if (!strcmp(fh_iter->compatible, fh->compatible)) {
pr_err("File handler registration failed: Compatible string '%s' already registered.\n",
fh->compatible);
err = -EEXIST;
goto err_resume;
}
}
/* Pin the module implementing the handler */
if (!try_module_get(fh->ops->owner)) {
err = -EAGAIN;
goto err_resume;
}
INIT_LIST_HEAD(&ACCESS_PRIVATE(fh, list));
list_add_tail(&ACCESS_PRIVATE(fh, list), &luo_file_handler_list);
luo_session_resume();
return 0;
err_resume:
luo_session_resume();
return err;
}
/**
* liveupdate_unregister_file_handler - Unregister a liveupdate file handler
* @fh: The file handler to unregister
*
* Unregisters the file handler from the liveupdate core. This function
* reverses the operations of liveupdate_register_file_handler().
*
* It ensures safe removal by checking that no live update session is
* currently in progress.
*
* If the unregistration fails, the internal test state is reverted.
*
* Return: 0 on success. -EOPNOTSUPP if live update is not enabled. -EBUSY if a
* live update is in progress and live update cannot be quiesced.
*/
int liveupdate_unregister_file_handler(struct liveupdate_file_handler *fh)
{
if (!liveupdate_enabled())
return -EOPNOTSUPP;
if (!luo_session_quiesce())
return -EBUSY;
list_del(&ACCESS_PRIVATE(fh, list));
module_put(fh->ops->owner);
luo_session_resume();
return 0;
}
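A hypothetical registration, showing exactly the callbacks the sanity check above requires (freeze/unfreeze and can_finish are optional); every example_* name is invented, and the ops signatures are inferred from the call sites in this file:

static bool example_can_preserve(struct liveupdate_file_handler *fh,
				 struct file *file)
{
	return false;	/* claim only files this handler understands */
}

static int example_preserve(struct liveupdate_file_op_args *args)
{
	args->serialized_data = 0;	/* opaque u64 handle for the next kernel */
	return 0;
}

static void example_unpreserve(struct liveupdate_file_op_args *args)
{
}

static int example_retrieve(struct liveupdate_file_op_args *args)
{
	return -EOPNOTSUPP;	/* recreate args->file from serialized_data */
}

static void example_finish(struct liveupdate_file_op_args *args)
{
}

static const struct liveupdate_file_ops example_ops = {
	.owner		= THIS_MODULE,
	.can_preserve	= example_can_preserve,
	.preserve	= example_preserve,
	.unpreserve	= example_unpreserve,
	.retrieve	= example_retrieve,
	.finish		= example_finish,
};

static struct liveupdate_file_handler example_handler = {
	.compatible	= "example,file-v1",
	.ops		= &example_ops,
};

/* In module init: liveupdate_register_file_handler(&example_handler); */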


@@ -0,0 +1,110 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
#ifndef _LINUX_LUO_INTERNAL_H
#define _LINUX_LUO_INTERNAL_H
#include <linux/liveupdate.h>
#include <linux/uaccess.h>
struct luo_ucmd {
void __user *ubuffer;
u32 user_size;
void *cmd;
};
static inline int luo_ucmd_respond(struct luo_ucmd *ucmd,
size_t kernel_cmd_size)
{
/*
* Copy the minimum of what the user provided and what we actually
* have.
*/
if (copy_to_user(ucmd->ubuffer, ucmd->cmd,
min_t(size_t, ucmd->user_size, kernel_cmd_size))) {
return -EFAULT;
}
return 0;
}
/*
* Handles a deserialization failure: devices and memory are in an
* unpredictable state.
*
* Continuing the boot process after a failure is dangerous because it could
* lead to leaks of private data.
*/
#define luo_restore_fail(__fmt, ...) panic(__fmt, ##__VA_ARGS__)
/* Mimics list_for_each_entry() but for private list head entries */
#define luo_list_for_each_private(pos, head, member) \
for (struct list_head *__iter = (head)->next; \
__iter != (head) && \
({ pos = container_of(__iter, typeof(*(pos)), member); 1; }); \
__iter = __iter->next)
/**
* struct luo_file_set - A set of files that belong to the same sessions.
* @files_list: A list of files associated with this session, ordered by
* preservation time.
* @files: The physically contiguous memory block that holds the serialized
* state of files.
* @count: A counter tracking the number of files currently stored in the
* @files_list for this session.
*/
struct luo_file_set {
struct list_head files_list;
struct luo_file_ser *files;
long count;
};
/**
* struct luo_session - Represents an active or incoming Live Update session.
* @name: A unique name for this session, used for identification and
* retrieval.
* @ser: Pointer to the serialized data for this session.
* @list: A list_head member used to link this session into a global list
* of either outgoing (to be preserved) or incoming (restored from
* previous kernel) sessions.
* @retrieved: A boolean flag indicating whether this session has been
* retrieved by a consumer in the new kernel.
* @file_set: A set of files that belong to this session.
* @mutex: protects fields in the luo_session.
*/
struct luo_session {
char name[LIVEUPDATE_SESSION_NAME_LENGTH];
struct luo_session_ser *ser;
struct list_head list;
bool retrieved;
struct luo_file_set file_set;
struct mutex mutex;
};
int luo_session_create(const char *name, struct file **filep);
int luo_session_retrieve(const char *name, struct file **filep);
int __init luo_session_setup_outgoing(void *fdt);
int __init luo_session_setup_incoming(void *fdt);
int luo_session_serialize(void);
int luo_session_deserialize(void);
bool luo_session_quiesce(void);
void luo_session_resume(void);
int luo_preserve_file(struct luo_file_set *file_set, u64 token, int fd);
void luo_file_unpreserve_files(struct luo_file_set *file_set);
int luo_file_freeze(struct luo_file_set *file_set,
struct luo_file_set_ser *file_set_ser);
void luo_file_unfreeze(struct luo_file_set *file_set,
struct luo_file_set_ser *file_set_ser);
int luo_retrieve_file(struct luo_file_set *file_set, u64 token,
struct file **filep);
int luo_file_finish(struct luo_file_set *file_set);
int luo_file_deserialize(struct luo_file_set *file_set,
struct luo_file_set_ser *file_set_ser);
void luo_file_set_init(struct luo_file_set *file_set);
void luo_file_set_destroy(struct luo_file_set *file_set);
#endif /* _LINUX_LUO_INTERNAL_H */


@@ -0,0 +1,646 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2025, Google LLC.
* Pasha Tatashin <pasha.tatashin@soleen.com>
*/
/**
* DOC: LUO Sessions
*
* LUO Sessions provide the core mechanism for grouping and managing `struct
* file *` instances that need to be preserved across a kexec-based live
* update. Each session acts as a named container for a set of file objects,
* allowing a userspace agent to manage the lifecycle of resources critical to a
* workload.
*
* Core Concepts:
*
* - Named Containers: Sessions are identified by a unique, user-provided name,
* which is used for both creation in the current kernel and retrieval in the
* next kernel.
*
* - Userspace Interface: Session management is driven from userspace via
* ioctls on /dev/liveupdate.
*
* - Serialization: Session metadata is preserved using the KHO framework. When
* a live update is triggered via kexec, an array of `struct luo_session_ser`
* is populated and placed in a preserved memory region. An FDT node is also
* created, containing the count of sessions and the physical address of this
* array.
*
* Session Lifecycle:
*
* 1. Creation: A userspace agent calls `luo_session_create()` to create a
* new, empty session and receives a file descriptor for it.
*
* 2. Serialization: When the `reboot(LINUX_REBOOT_CMD_KEXEC)` syscall is
* made, `luo_session_serialize()` is called. It iterates through all
* active sessions and writes their metadata into a memory area preserved
* by KHO.
*
* 3. Deserialization (in new kernel): After kexec, `luo_session_deserialize()`
* runs, reading the serialized data and creating a list of `struct
* luo_session` objects representing the preserved sessions.
*
* 4. Retrieval: A userspace agent in the new kernel can then call
* `luo_session_retrieve()` with a session name to get a new file
* descriptor and access the preserved state.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/anon_inodes.h>
#include <linux/cleanup.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/kexec_handover.h>
#include <linux/kho/abi/luo.h>
#include <linux/libfdt.h>
#include <linux/list.h>
#include <linux/liveupdate.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/slab.h>
#include <linux/unaligned.h>
#include <uapi/linux/liveupdate.h>
#include "luo_internal.h"
/* Sixteen 4K pages give space for 744 sessions */
#define LUO_SESSION_PGCNT 16ul
#define LUO_SESSION_MAX (((LUO_SESSION_PGCNT << PAGE_SHIFT) - \
sizeof(struct luo_session_header_ser)) / \
sizeof(struct luo_session_ser))
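/*
 * With 4K pages this is 65536 bytes; the stated capacity of 744 sessions
 * suggests sizeof(struct luo_session_ser) == 88 with a 64-byte
 * struct luo_session_header_ser.
 */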
/**
* struct luo_session_header - Header struct for managing LUO sessions.
* @count: The number of sessions currently tracked in the @list.
* @list: The head of the linked list of `struct luo_session` instances.
* @rwsem: A read-write semaphore providing synchronized access to the
* session list and other fields in this structure.
* @header_ser: The header of the serialization array.
* @ser: The serialized session data (an array of
* `struct luo_session_ser`).
* @active: Set to true when first initialized. If the previous kernel did
* not send session data, @active stays false for the incoming header.
*/
struct luo_session_header {
long count;
struct list_head list;
struct rw_semaphore rwsem;
struct luo_session_header_ser *header_ser;
struct luo_session_ser *ser;
bool active;
};
/**
* struct luo_session_global - Global container for managing LUO sessions.
* @incoming: The sessions passed from the previous kernel.
* @outgoing: The sessions that are going to be passed to the next kernel.
*/
struct luo_session_global {
struct luo_session_header incoming;
struct luo_session_header outgoing;
};
static struct luo_session_global luo_session_global = {
.incoming = {
.list = LIST_HEAD_INIT(luo_session_global.incoming.list),
.rwsem = __RWSEM_INITIALIZER(luo_session_global.incoming.rwsem),
},
.outgoing = {
.list = LIST_HEAD_INIT(luo_session_global.outgoing.list),
.rwsem = __RWSEM_INITIALIZER(luo_session_global.outgoing.rwsem),
},
};
static struct luo_session *luo_session_alloc(const char *name)
{
struct luo_session *session = kzalloc(sizeof(*session), GFP_KERNEL);
if (!session)
return ERR_PTR(-ENOMEM);
strscpy(session->name, name, sizeof(session->name));
INIT_LIST_HEAD(&session->file_set.files_list);
luo_file_set_init(&session->file_set);
INIT_LIST_HEAD(&session->list);
mutex_init(&session->mutex);
return session;
}
static void luo_session_free(struct luo_session *session)
{
luo_file_set_destroy(&session->file_set);
mutex_destroy(&session->mutex);
kfree(session);
}
static int luo_session_insert(struct luo_session_header *sh,
struct luo_session *session)
{
struct luo_session *it;
guard(rwsem_write)(&sh->rwsem);
/*
* For outgoing sessions we should make sure there is room in the
* serialization array for the new session.
*/
if (sh == &luo_session_global.outgoing) {
if (sh->count == LUO_SESSION_MAX)
return -ENOMEM;
}
/*
* For a small number of sessions this loop won't hurt performance,
* but if we ever start using a lot of sessions it might become a
* bottleneck during deserialization, as it would cause O(n^2)
* complexity.
*/
list_for_each_entry(it, &sh->list, list) {
if (!strncmp(it->name, session->name, sizeof(it->name)))
return -EEXIST;
}
list_add_tail(&session->list, &sh->list);
sh->count++;
return 0;
}
static void luo_session_remove(struct luo_session_header *sh,
struct luo_session *session)
{
guard(rwsem_write)(&sh->rwsem);
list_del(&session->list);
sh->count--;
}
static int luo_session_finish_one(struct luo_session *session)
{
guard(mutex)(&session->mutex);
return luo_file_finish(&session->file_set);
}
static void luo_session_unfreeze_one(struct luo_session *session,
struct luo_session_ser *ser)
{
guard(mutex)(&session->mutex);
luo_file_unfreeze(&session->file_set, &ser->file_set_ser);
}
static int luo_session_freeze_one(struct luo_session *session,
struct luo_session_ser *ser)
{
guard(mutex)(&session->mutex);
return luo_file_freeze(&session->file_set, &ser->file_set_ser);
}
static int luo_session_release(struct inode *inodep, struct file *filep)
{
struct luo_session *session = filep->private_data;
struct luo_session_header *sh;
/* If retrieved is set, this session came from the incoming list */
if (session->retrieved) {
int err = luo_session_finish_one(session);
if (err) {
pr_warn("Unable to finish session [%s] on release\n",
session->name);
return err;
}
sh = &luo_session_global.incoming;
} else {
scoped_guard(mutex, &session->mutex)
luo_file_unpreserve_files(&session->file_set);
sh = &luo_session_global.outgoing;
}
luo_session_remove(sh, session);
luo_session_free(session);
return 0;
}
static int luo_session_preserve_fd(struct luo_session *session,
struct luo_ucmd *ucmd)
{
struct liveupdate_session_preserve_fd *argp = ucmd->cmd;
int err;
guard(mutex)(&session->mutex);
err = luo_preserve_file(&session->file_set, argp->token, argp->fd);
if (err)
return err;
err = luo_ucmd_respond(ucmd, sizeof(*argp));
if (err)
pr_warn("The file was successfully preserved, but response to user failed\n");
return err;
}
static int luo_session_retrieve_fd(struct luo_session *session,
struct luo_ucmd *ucmd)
{
struct liveupdate_session_retrieve_fd *argp = ucmd->cmd;
struct file *file;
int err;
argp->fd = get_unused_fd_flags(O_CLOEXEC);
if (argp->fd < 0)
return argp->fd;
guard(mutex)(&session->mutex);
err = luo_retrieve_file(&session->file_set, argp->token, &file);
if (err < 0)
goto err_put_fd;
err = luo_ucmd_respond(ucmd, sizeof(*argp));
if (err)
goto err_put_file;
fd_install(argp->fd, file);
return 0;
err_put_file:
fput(file);
err_put_fd:
put_unused_fd(argp->fd);
return err;
}
static int luo_session_finish(struct luo_session *session,
struct luo_ucmd *ucmd)
{
struct liveupdate_session_finish *argp = ucmd->cmd;
int err = luo_session_finish_one(session);
if (err)
return err;
return luo_ucmd_respond(ucmd, sizeof(*argp));
}
union ucmd_buffer {
struct liveupdate_session_finish finish;
struct liveupdate_session_preserve_fd preserve;
struct liveupdate_session_retrieve_fd retrieve;
};
struct luo_ioctl_op {
unsigned int size;
unsigned int min_size;
unsigned int ioctl_num;
int (*execute)(struct luo_session *session, struct luo_ucmd *ucmd);
};
#define IOCTL_OP(_ioctl, _fn, _struct, _last) \
[_IOC_NR(_ioctl) - LIVEUPDATE_CMD_SESSION_BASE] = { \
.size = sizeof(_struct) + \
BUILD_BUG_ON_ZERO(sizeof(union ucmd_buffer) < \
sizeof(_struct)), \
.min_size = offsetofend(_struct, _last), \
.ioctl_num = _ioctl, \
.execute = _fn, \
}
static const struct luo_ioctl_op luo_session_ioctl_ops[] = {
IOCTL_OP(LIVEUPDATE_SESSION_FINISH, luo_session_finish,
struct liveupdate_session_finish, reserved),
IOCTL_OP(LIVEUPDATE_SESSION_PRESERVE_FD, luo_session_preserve_fd,
struct liveupdate_session_preserve_fd, token),
IOCTL_OP(LIVEUPDATE_SESSION_RETRIEVE_FD, luo_session_retrieve_fd,
struct liveupdate_session_retrieve_fd, token),
};
static long luo_session_ioctl(struct file *filep, unsigned int cmd,
unsigned long arg)
{
struct luo_session *session = filep->private_data;
const struct luo_ioctl_op *op;
struct luo_ucmd ucmd = {};
union ucmd_buffer buf;
unsigned int nr;
int ret;
nr = _IOC_NR(cmd);
if (nr < LIVEUPDATE_CMD_SESSION_BASE || (nr - LIVEUPDATE_CMD_SESSION_BASE) >=
ARRAY_SIZE(luo_session_ioctl_ops)) {
return -EINVAL;
}
ucmd.ubuffer = (void __user *)arg;
ret = get_user(ucmd.user_size, (u32 __user *)ucmd.ubuffer);
if (ret)
return ret;
op = &luo_session_ioctl_ops[nr - LIVEUPDATE_CMD_SESSION_BASE];
if (op->ioctl_num != cmd)
return -ENOIOCTLCMD;
if (ucmd.user_size < op->min_size)
return -EINVAL;
ucmd.cmd = &buf;
ret = copy_struct_from_user(ucmd.cmd, op->size, ucmd.ubuffer,
ucmd.user_size);
if (ret)
return ret;
return op->execute(session, &ucmd);
}
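/*
 * Hedged userspace sketch of the size-prefixed ioctl convention used
 * above (field names are inferred from the handlers in this file; the
 * authoritative layout lives in <uapi/linux/liveupdate.h>):
 *
 *	struct liveupdate_session_preserve_fd arg = {
 *		.size  = sizeof(arg),	// first u32, read via get_user()
 *		.fd    = memfd,		// e.g. a memfd backing guest RAM
 *		.token = 0x1234,	// caller-chosen key for later retrieval
 *	};
 *	if (ioctl(session_fd, LIVEUPDATE_SESSION_PRESERVE_FD, &arg))
 *		err(1, "preserve");
 *
 * copy_struct_from_user() zero-fills fields the caller did not supply
 * and rejects unknown trailing non-zero bytes, so the structures can
 * grow without breaking old binaries.
 */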
static const struct file_operations luo_session_fops = {
.owner = THIS_MODULE,
.release = luo_session_release,
.unlocked_ioctl = luo_session_ioctl,
};
/* Create a "struct file" for the session */
static int luo_session_getfile(struct luo_session *session, struct file **filep)
{
char name_buf[128];
struct file *file;
lockdep_assert_held(&session->mutex);
snprintf(name_buf, sizeof(name_buf), "[luo_session] %s", session->name);
file = anon_inode_getfile(name_buf, &luo_session_fops, session, O_RDWR);
if (IS_ERR(file))
return PTR_ERR(file);
*filep = file;
return 0;
}
int luo_session_create(const char *name, struct file **filep)
{
struct luo_session *session;
int err;
session = luo_session_alloc(name);
if (IS_ERR(session))
return PTR_ERR(session);
err = luo_session_insert(&luo_session_global.outgoing, session);
if (err)
goto err_free;
scoped_guard(mutex, &session->mutex)
err = luo_session_getfile(session, filep);
if (err)
goto err_remove;
return 0;
err_remove:
luo_session_remove(&luo_session_global.outgoing, session);
err_free:
luo_session_free(session);
return err;
}
int luo_session_retrieve(const char *name, struct file **filep)
{
struct luo_session_header *sh = &luo_session_global.incoming;
struct luo_session *session = NULL;
struct luo_session *it;
int err;
scoped_guard(rwsem_read, &sh->rwsem) {
list_for_each_entry(it, &sh->list, list) {
if (!strncmp(it->name, name, sizeof(it->name))) {
session = it;
break;
}
}
}
if (!session)
return -ENOENT;
guard(mutex)(&session->mutex);
if (session->retrieved)
return -EINVAL;
err = luo_session_getfile(session, filep);
if (!err)
session->retrieved = true;
return err;
}
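/*
 * Lifecycle sketch, assembled from the functions in this file rather
 * than from documentation: the outgoing kernel calls
 * luo_session_create(), preserves fds with
 * LIVEUPDATE_SESSION_PRESERVE_FD, and luo_session_serialize() freezes
 * everything before kexec. The incoming kernel calls
 * luo_session_retrieve(), reacquires fds with
 * LIVEUPDATE_SESSION_RETRIEVE_FD, and ends with
 * LIVEUPDATE_SESSION_FINISH or the final fput(), which finishes the
 * session via luo_session_release().
 */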
int __init luo_session_setup_outgoing(void *fdt_out)
{
struct luo_session_header_ser *header_ser;
u64 header_ser_pa;
int err;
header_ser = kho_alloc_preserve(LUO_SESSION_PGCNT << PAGE_SHIFT);
if (IS_ERR(header_ser))
return PTR_ERR(header_ser);
header_ser_pa = virt_to_phys(header_ser);
err = fdt_begin_node(fdt_out, LUO_FDT_SESSION_NODE_NAME);
err |= fdt_property_string(fdt_out, "compatible",
LUO_FDT_SESSION_COMPATIBLE);
err |= fdt_property(fdt_out, LUO_FDT_SESSION_HEADER, &header_ser_pa,
sizeof(header_ser_pa));
err |= fdt_end_node(fdt_out);
if (err)
goto err_unpreserve;
luo_session_global.outgoing.header_ser = header_ser;
luo_session_global.outgoing.ser = (void *)(header_ser + 1);
luo_session_global.outgoing.active = true;
return 0;
err_unpreserve:
kho_unpreserve_free(header_ser);
return err;
}
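/*
 * Sketch of the FDT node emitted above, shown symbolically (the actual
 * node and property names come from the LUO_FDT_* constants):
 *
 *	LUO_FDT_SESSION_NODE_NAME {
 *		compatible = LUO_FDT_SESSION_COMPATIBLE;
 *		LUO_FDT_SESSION_HEADER = <physical address of header_ser,
 *					  stored as a raw u64>;
 *	};
 *
 * luo_session_setup_incoming() below locates this node, verifies the
 * compatible string, and maps the physical address back with
 * phys_to_virt().
 */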
int __init luo_session_setup_incoming(void *fdt_in)
{
struct luo_session_header_ser *header_ser;
int err, header_size, offset;
u64 header_ser_pa;
const void *ptr;
offset = fdt_subnode_offset(fdt_in, 0, LUO_FDT_SESSION_NODE_NAME);
if (offset < 0) {
pr_err("Unable to get session node: [%s]\n",
LUO_FDT_SESSION_NODE_NAME);
return -EINVAL;
}
err = fdt_node_check_compatible(fdt_in, offset,
LUO_FDT_SESSION_COMPATIBLE);
if (err) {
pr_err("Session node incompatible [%s]\n",
LUO_FDT_SESSION_COMPATIBLE);
return -EINVAL;
}
header_size = 0;
ptr = fdt_getprop(fdt_in, offset, LUO_FDT_SESSION_HEADER, &header_size);
if (!ptr || header_size != sizeof(u64)) {
pr_err("Unable to get session header '%s' [%d]\n",
LUO_FDT_SESSION_HEADER, header_size);
return -EINVAL;
}
header_ser_pa = get_unaligned((u64 *)ptr);
header_ser = phys_to_virt(header_ser_pa);
luo_session_global.incoming.header_ser = header_ser;
luo_session_global.incoming.ser = (void *)(header_ser + 1);
luo_session_global.incoming.active = true;
return 0;
}
int luo_session_deserialize(void)
{
struct luo_session_header *sh = &luo_session_global.incoming;
static bool is_deserialized;
static int err;
/* If it has already been deserialized, always return the same error code */
if (is_deserialized)
return err;
is_deserialized = true;
if (!sh->active)
return 0;
/*
* Note on error handling:
*
* If deserialization fails (e.g., allocation failure or corrupt data),
* we intentionally skip cleanup of sessions that were already restored.
*
* A partial failure leaves the preserved state inconsistent.
* Implementing a safe "undo" to unwind complex dependencies (sessions,
* files, hardware state) is error-prone and provides little value, as
* the system is effectively in a broken state.
*
* We treat these resources as leaked. The expected recovery path is for
* userspace to detect the failure and trigger a reboot, which will
* reliably reset devices and reclaim memory.
*/
for (int i = 0; i < sh->header_ser->count; i++) {
struct luo_session *session;
session = luo_session_alloc(sh->ser[i].name);
if (IS_ERR(session)) {
pr_warn("Failed to allocate session [%s] during deserialization %pe\n",
sh->ser[i].name, session);
return PTR_ERR(session);
}
err = luo_session_insert(sh, session);
if (err) {
pr_warn("Failed to insert session [%s] %pe\n",
session->name, ERR_PTR(err));
luo_session_free(session);
return err;
}
scoped_guard(mutex, &session->mutex) {
luo_file_deserialize(&session->file_set,
&sh->ser[i].file_set_ser);
}
}
kho_restore_free(sh->header_ser);
sh->header_ser = NULL;
sh->ser = NULL;
return 0;
}
int luo_session_serialize(void)
{
struct luo_session_header *sh = &luo_session_global.outgoing;
struct luo_session *session;
int i = 0;
int err;
guard(rwsem_write)(&sh->rwsem);
list_for_each_entry(session, &sh->list, list) {
err = luo_session_freeze_one(session, &sh->ser[i]);
if (err)
goto err_undo;
strscpy(sh->ser[i].name, session->name,
sizeof(sh->ser[i].name));
i++;
}
sh->header_ser->count = sh->count;
return 0;
err_undo:
list_for_each_entry_continue_reverse(session, &sh->list, list) {
i--;
luo_session_unfreeze_one(session, &sh->ser[i]);
memset(sh->ser[i].name, 0, sizeof(sh->ser[i].name));
}
return err;
}
/**
* luo_session_quiesce - Ensure no active sessions exist and lock session lists.
*
* Acquires exclusive write locks on both incoming and outgoing session lists.
 * It then verifies that no sessions exist in either list.
*
* This mechanism is used during file handler un/registration to ensure that no
* sessions are currently using the handler, and no new sessions can be created
* while un/registration is in progress.
*
* This prevents registering new handlers while sessions are active or
* while deserialization is in progress.
*
* Return:
* true - System is quiescent (0 sessions) and locked.
* false - Active sessions exist. The locks are released internally.
*/
bool luo_session_quiesce(void)
{
down_write(&luo_session_global.incoming.rwsem);
down_write(&luo_session_global.outgoing.rwsem);
if (luo_session_global.incoming.count ||
luo_session_global.outgoing.count) {
up_write(&luo_session_global.outgoing.rwsem);
up_write(&luo_session_global.incoming.rwsem);
return false;
}
return true;
}
/**
* luo_session_resume - Unlock session lists and resume normal activity.
*
* Releases the exclusive locks acquired by a successful call to
* luo_session_quiesce().
*/
void luo_session_resume(void)
{
up_write(&luo_session_global.outgoing.rwsem);
up_write(&luo_session_global.incoming.rwsem);
}


@@ -954,7 +954,7 @@ size_t module_flags_taint(unsigned long taints, char *buf)
int i;
for (i = 0; i < TAINT_FLAGS_COUNT; i++) {
if (taint_flags[i].module && test_bit(i, &taints))
if (test_bit(i, &taints))
buf[l++] = taint_flags[i].c_true;
}


@@ -401,7 +401,7 @@ static void panic_trigger_all_cpu_backtrace(void)
*/
static void panic_other_cpus_shutdown(bool crash_kexec)
{
if (panic_print & SYS_INFO_ALL_CPU_BT)
if (panic_print & SYS_INFO_ALL_BT)
panic_trigger_all_cpu_backtrace();
/*
@@ -628,38 +628,40 @@ void panic(const char *fmt, ...)
}
EXPORT_SYMBOL(panic);
#define TAINT_FLAG(taint, _c_true, _c_false, _module) \
#define TAINT_FLAG(taint, _c_true, _c_false) \
[ TAINT_##taint ] = { \
.c_true = _c_true, .c_false = _c_false, \
.module = _module, \
.desc = #taint, \
}
/*
* TAINT_FORCED_RMMOD could be a per-module flag but the module
* is being removed anyway.
* NOTE: if you modify the taint_flags or TAINT_FLAGS_COUNT,
* please also modify tools/debugging/kernel-chktaint and
* Documentation/admin-guide/tainted-kernels.rst, including its
* small shell script that prints the TAINT_FLAGS_COUNT bits of
* /proc/sys/kernel/tainted.
*/
const struct taint_flag taint_flags[TAINT_FLAGS_COUNT] = {
TAINT_FLAG(PROPRIETARY_MODULE, 'P', 'G', true),
TAINT_FLAG(FORCED_MODULE, 'F', ' ', true),
TAINT_FLAG(CPU_OUT_OF_SPEC, 'S', ' ', false),
TAINT_FLAG(FORCED_RMMOD, 'R', ' ', false),
TAINT_FLAG(MACHINE_CHECK, 'M', ' ', false),
TAINT_FLAG(BAD_PAGE, 'B', ' ', false),
TAINT_FLAG(USER, 'U', ' ', false),
TAINT_FLAG(DIE, 'D', ' ', false),
TAINT_FLAG(OVERRIDDEN_ACPI_TABLE, 'A', ' ', false),
TAINT_FLAG(WARN, 'W', ' ', false),
TAINT_FLAG(CRAP, 'C', ' ', true),
TAINT_FLAG(FIRMWARE_WORKAROUND, 'I', ' ', false),
TAINT_FLAG(OOT_MODULE, 'O', ' ', true),
TAINT_FLAG(UNSIGNED_MODULE, 'E', ' ', true),
TAINT_FLAG(SOFTLOCKUP, 'L', ' ', false),
TAINT_FLAG(LIVEPATCH, 'K', ' ', true),
TAINT_FLAG(AUX, 'X', ' ', true),
TAINT_FLAG(RANDSTRUCT, 'T', ' ', true),
TAINT_FLAG(TEST, 'N', ' ', true),
TAINT_FLAG(FWCTL, 'J', ' ', true),
TAINT_FLAG(PROPRIETARY_MODULE, 'P', 'G'),
TAINT_FLAG(FORCED_MODULE, 'F', ' '),
TAINT_FLAG(CPU_OUT_OF_SPEC, 'S', ' '),
TAINT_FLAG(FORCED_RMMOD, 'R', ' '),
TAINT_FLAG(MACHINE_CHECK, 'M', ' '),
TAINT_FLAG(BAD_PAGE, 'B', ' '),
TAINT_FLAG(USER, 'U', ' '),
TAINT_FLAG(DIE, 'D', ' '),
TAINT_FLAG(OVERRIDDEN_ACPI_TABLE, 'A', ' '),
TAINT_FLAG(WARN, 'W', ' '),
TAINT_FLAG(CRAP, 'C', ' '),
TAINT_FLAG(FIRMWARE_WORKAROUND, 'I', ' '),
TAINT_FLAG(OOT_MODULE, 'O', ' '),
TAINT_FLAG(UNSIGNED_MODULE, 'E', ' '),
TAINT_FLAG(SOFTLOCKUP, 'L', ' '),
TAINT_FLAG(LIVEPATCH, 'K', ' '),
TAINT_FLAG(AUX, 'X', ' '),
TAINT_FLAG(RANDSTRUCT, 'T', ' '),
TAINT_FLAG(TEST, 'N', ' '),
TAINT_FLAG(FWCTL, 'J', ' '),
};
#undef TAINT_FLAG


@@ -341,6 +341,8 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
unsigned long flags, unsigned long desc,
struct resource *res)
{
/* Skip children until we find a top level range that matches */
bool skip_children = true;
struct resource *p;
if (!res)
@@ -351,7 +353,7 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
read_lock(&resource_lock);
for_each_resource(&iomem_resource, p, false) {
for_each_resource(&iomem_resource, p, skip_children) {
/* If we passed the resource we are looking for, stop */
if (p->start > end) {
p = NULL;
@@ -362,6 +364,12 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
if (p->end < start)
continue;
/*
* We found a top level range that matches what we are looking
* for. Time to start checking children too.
*/
skip_children = false;
/* Found a match, break */
if (is_type_match(p, flags, desc))
break;
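/*
 * Illustrative walk (the tree shape here is hypothetical): given
 *
 *	iomem_resource
 *	  0x00000000-0x0fffffff "System RAM"
 *	    0x00100000-0x001fffff "Kernel code"
 *	  0x10000000-0x1fffffff "PCI Bus"
 *
 * a lookup for [0x10000000, 0x1fffffff] no longer visits the children
 * of "System RAM"; descent starts only after a top-level range
 * overlapping the requested window has been found.
 */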


@@ -135,7 +135,7 @@ static void scs_check_usage(struct task_struct *tsk)
if (!IS_ENABLED(CONFIG_DEBUG_STACK_USAGE))
return;
for (p = task_scs(tsk); p < __scs_magic(tsk); ++p) {
for (p = task_scs(tsk); p < __scs_magic(task_scs(tsk)); ++p) {
if (!READ_ONCE_NOCHECK(*p))
break;
used += sizeof(*p);


@@ -31,6 +31,13 @@ u32 *vmcoreinfo_note;
/* trusted vmcoreinfo, e.g. we can make a copy in the crash memory */
static unsigned char *vmcoreinfo_data_safecopy;
struct hwerr_info {
atomic_t count;
time64_t timestamp;
};
static struct hwerr_info hwerr_data[HWERR_RECOV_MAX];
Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
void *data, size_t data_len)
{
@@ -118,6 +125,16 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
}
EXPORT_SYMBOL(paddr_vmcoreinfo_note);
void hwerr_log_error_type(enum hwerr_error_type src)
{
if (src < 0 || src >= HWERR_RECOV_MAX)
return;
atomic_inc(&hwerr_data[src].count);
WRITE_ONCE(hwerr_data[src].timestamp, ktime_get_real_seconds());
}
EXPORT_SYMBOL_GPL(hwerr_log_error_type);
static int __init crash_save_vmcoreinfo_init(void)
{
vmcoreinfo_data = (unsigned char *)get_zeroed_page(GFP_KERNEL);


@@ -25,6 +25,7 @@
#include <linux/stop_machine.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
#include <linux/sys_info.h>
#include <linux/sched/clock.h>
#include <linux/sched/debug.h>
@@ -65,6 +66,13 @@ int __read_mostly sysctl_hardlockup_all_cpu_backtrace;
unsigned int __read_mostly hardlockup_panic =
IS_ENABLED(CONFIG_BOOTPARAM_HARDLOCKUP_PANIC);
/*
 * Bitmask controlling what kinds of system info are printed when a
 * hard lockup is detected: tasks, memory, locks, etc. Refer to
 * include/linux/sys_info.h for the detailed bit definitions.
 */
static unsigned long hardlockup_si_mask;
#ifdef CONFIG_SYSFS
static unsigned int hardlockup_count;
@@ -178,11 +186,15 @@ static void watchdog_hardlockup_kick(void)
void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
{
int hardlockup_all_cpu_backtrace;
if (per_cpu(watchdog_hardlockup_touched, cpu)) {
per_cpu(watchdog_hardlockup_touched, cpu) = false;
return;
}
hardlockup_all_cpu_backtrace = (hardlockup_si_mask & SYS_INFO_ALL_BT) ?
1 : sysctl_hardlockup_all_cpu_backtrace;
/*
* Check for a hardlockup by making sure the CPU's timer
* interrupt is incrementing. The timer interrupt should have
@@ -214,7 +226,7 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
* Prevent multiple hard-lockup reports if one cpu is already
* engaged in dumping all cpu back traces.
*/
if (sysctl_hardlockup_all_cpu_backtrace) {
if (hardlockup_all_cpu_backtrace) {
if (test_and_set_bit_lock(0, &hard_lockup_nmi_warn))
return;
}
@@ -243,12 +255,13 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
trigger_single_cpu_backtrace(cpu);
}
if (sysctl_hardlockup_all_cpu_backtrace) {
if (hardlockup_all_cpu_backtrace) {
trigger_allbutcpu_cpu_backtrace(cpu);
if (!hardlockup_panic)
clear_bit_unlock(0, &hard_lockup_nmi_warn);
}
sys_info(hardlockup_si_mask & ~SYS_INFO_ALL_BT);
if (hardlockup_panic)
nmi_panic(regs, "Hard LOCKUP");
@@ -339,6 +352,13 @@ static void lockup_detector_update_enable(void)
int __read_mostly sysctl_softlockup_all_cpu_backtrace;
#endif
/*
 * Bitmask controlling what kinds of system info are printed when a
 * soft lockup is detected: tasks, memory, locks, etc. Refer to
 * include/linux/sys_info.h for the detailed bit definitions.
 */
static unsigned long softlockup_si_mask;
static struct cpumask watchdog_allowed_mask __read_mostly;
/* Global variables, exported for sysctl */
@@ -755,7 +775,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
unsigned long touch_ts, period_ts, now;
struct pt_regs *regs = get_irq_regs();
int duration;
int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace;
int softlockup_all_cpu_backtrace;
unsigned long flags;
if (!watchdog_enabled)
@@ -767,6 +787,9 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
if (panic_in_progress())
return HRTIMER_NORESTART;
softlockup_all_cpu_backtrace = (softlockup_si_mask & SYS_INFO_ALL_BT) ?
1 : sysctl_softlockup_all_cpu_backtrace;
watchdog_hardlockup_kick();
/* kick the softlockup detector */
@@ -855,6 +878,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
}
add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK);
sys_info(softlockup_si_mask & ~SYS_INFO_ALL_BT);
if (softlockup_panic)
panic("softlockup: hung tasks");
}
@@ -1206,6 +1230,13 @@ static const struct ctl_table watchdog_sysctls[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
{
.procname = "softlockup_sys_info",
.data = &softlockup_si_mask,
.maxlen = sizeof(softlockup_si_mask),
.mode = 0644,
.proc_handler = sysctl_sys_info_handler,
},
#ifdef CONFIG_SMP
{
.procname = "softlockup_all_cpu_backtrace",
@@ -1228,6 +1259,13 @@ static const struct ctl_table watchdog_sysctls[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
{
.procname = "hardlockup_sys_info",
.data = &hardlockup_si_mask,
.maxlen = sizeof(hardlockup_si_mask),
.mode = 0644,
.proc_handler = sysctl_sys_info_handler,
},
#ifdef CONFIG_SMP
{
.procname = "hardlockup_all_cpu_backtrace",

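/*
 * The two entries surface as /proc/sys/kernel/softlockup_sys_info and
 * /proc/sys/kernel/hardlockup_sys_info. A hedged usage sketch, assuming
 * sysctl_sys_info_handler() accepts the SYS_INFO_* names defined in
 * include/linux/sys_info.h:
 *
 *	echo "tasks,mem" > /proc/sys/kernel/softlockup_sys_info
 */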

@@ -342,8 +342,7 @@ config DEBUG_INFO_COMPRESSED_ZLIB
depends on $(cc-option,-gz=zlib)
depends on $(ld-option,--compress-debug-sections=zlib)
help
Compress the debug information using zlib. Requires GCC 5.0+ or Clang
5.0+, binutils 2.26+, and zlib.
Compress the debug information using zlib.
Users of dpkg-deb via debian/rules may find an increase in
size of their debug .deb packages with this config set, due to the
@@ -493,23 +492,23 @@ config DEBUG_SECTION_MISMATCH
bool "Enable full Section mismatch analysis"
depends on CC_IS_GCC
help
The section mismatch analysis checks if there are illegal
references from one section to another section.
During linktime or runtime, some sections are dropped;
any use of code/data previously in these sections would
most likely result in an oops.
In the code, functions and variables are annotated with
__init,, etc. (see the full list in include/linux/init.h),
which results in the code/data being placed in specific sections.
The section mismatch analysis checks if there are illegal references
from one section to another. During linktime or runtime, some
sections are dropped; any use of code/data previously in these
sections would most likely result in an oops.
In the code, functions and variables are annotated with __init,
__initdata, and so on (see the full list in include/linux/init.h).
This directs the toolchain to place code/data in specific sections.
The section mismatch analysis is always performed after a full
kernel build, and enabling this option causes the following
additional step to occur:
- Add the option -fno-inline-functions-called-once to gcc commands.
When inlining a function annotated with __init in a non-init
function, we would lose the section information and thus
the analysis would not catch the illegal reference.
This option tells gcc to inline less (but it does result in
a larger kernel).
kernel build, and enabling this option causes the option
-fno-inline-functions-called-once to be added to gcc commands.
However, when inlining a function annotated with __init in
a non-init function, we would lose the section information and thus
the analysis would not catch the illegal reference. This option
tells gcc to inline less (but it does result in a larger kernel).
config SECTION_MISMATCH_WARN_ONLY
bool "Make section mismatch errors non-fatal"
@@ -1260,12 +1259,13 @@ config DEFAULT_HUNG_TASK_TIMEOUT
Keeping the default should be fine in most cases.
config BOOTPARAM_HUNG_TASK_PANIC
bool "Panic (Reboot) On Hung Tasks"
int "Number of hung tasks to trigger kernel panic"
depends on DETECT_HUNG_TASK
default 0
help
Say Y here to enable the kernel to panic on "hung tasks",
which are bugs that cause the kernel to leave a task stuck
in uninterruptible "D" state.
When set to a non-zero value, a kernel panic will be triggered
if the number of hung tasks found during a single scan reaches
this value.
The panic can be used in combination with panic_timeout,
to cause the system to reboot automatically after a
@@ -2817,8 +2817,25 @@ config CMDLINE_KUNIT_TEST
If unsure, say N.
config BASE64_KUNIT
tristate "KUnit test for base64 decoding and encoding" if !KUNIT_ALL_TESTS
depends on KUNIT
default KUNIT_ALL_TESTS
help
This builds the base64 unit tests.
The tests cover the encoding and decoding logic of Base64 functions
in the kernel.
In addition to correctness checks, simple performance benchmarks
for both encoding and decoding are also included.
For more information on KUnit and unit tests in general please refer
to the KUnit documentation in Documentation/dev-tools/kunit/.
If unsure, say N.
config BITS_TEST
tristate "KUnit test for bits.h" if !KUNIT_ALL_TESTS
tristate "KUnit test for bit functions and macros" if !KUNIT_ALL_TESTS
depends on KUNIT
default KUNIT_ALL_TESTS
help


@@ -1,12 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
/*
* base64.c - RFC4648-compliant base64 encoding
* base64.c - Base64 with support for multiple variants
*
* Copyright (c) 2020 Hannes Reinecke, SUSE
*
* Based on the base64url routines from fs/crypto/fname.c
* (which are using the URL-safe base64 encoding),
* modified to use the standard coding table from RFC4648 section 4.
* (which are using the URL-safe Base64 encoding),
* modified to support multiple Base64 variants.
*/
#include <linux/kernel.h>
@@ -15,89 +15,170 @@
#include <linux/string.h>
#include <linux/base64.h>
static const char base64_table[65] =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
static const char base64_tables[][65] = {
[BASE64_STD] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",
[BASE64_URLSAFE] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_",
[BASE64_IMAP] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,",
};
/*
 * Initialize the base64 reverse mapping for a single character.
 * This macro maps a character to its corresponding Base64 value,
 * returning -1 if the character is invalid:
 * 'A'-'Z' map to 0-25, 'a'-'z' map to 26-51, '0'-'9' map to 52-61,
 * ch_62 maps to 62, ch_63 maps to 63, and any other character maps to -1.
 */
#define INIT_1(v, ch_62, ch_63) \
[v] = (v) >= 'A' && (v) <= 'Z' ? (v) - 'A' \
: (v) >= 'a' && (v) <= 'z' ? (v) - 'a' + 26 \
: (v) >= '0' && (v) <= '9' ? (v) - '0' + 52 \
: (v) == (ch_62) ? 62 : (v) == (ch_63) ? 63 : -1
/*
* Recursive macros to generate multiple Base64 reverse mapping table entries.
* Each macro generates a sequence of entries in the lookup table:
* INIT_2 generates 2 entries, INIT_4 generates 4, INIT_8 generates 8, and so on up to INIT_32.
*/
#define INIT_2(v, ...) INIT_1(v, __VA_ARGS__), INIT_1((v) + 1, __VA_ARGS__)
#define INIT_4(v, ...) INIT_2(v, __VA_ARGS__), INIT_2((v) + 2, __VA_ARGS__)
#define INIT_8(v, ...) INIT_4(v, __VA_ARGS__), INIT_4((v) + 4, __VA_ARGS__)
#define INIT_16(v, ...) INIT_8(v, __VA_ARGS__), INIT_8((v) + 8, __VA_ARGS__)
#define INIT_32(v, ...) INIT_16(v, __VA_ARGS__), INIT_16((v) + 16, __VA_ARGS__)
#define BASE64_REV_INIT(ch_62, ch_63) { \
[0 ... 0x1f] = -1, \
INIT_32(0x20, ch_62, ch_63), \
INIT_32(0x40, ch_62, ch_63), \
INIT_32(0x60, ch_62, ch_63), \
[0x80 ... 0xff] = -1 }
static const s8 base64_rev_maps[][256] = {
[BASE64_STD] = BASE64_REV_INIT('+', '/'),
[BASE64_URLSAFE] = BASE64_REV_INIT('-', '_'),
[BASE64_IMAP] = BASE64_REV_INIT('+', ',')
};
#undef BASE64_REV_INIT
#undef INIT_32
#undef INIT_16
#undef INIT_8
#undef INIT_4
#undef INIT_2
#undef INIT_1
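/*
 * Spot checks implied by the table construction above:
 *
 *	base64_rev_maps[BASE64_STD]['A']     ==  0
 *	base64_rev_maps[BASE64_STD]['+']     == 62
 *	base64_rev_maps[BASE64_URLSAFE]['-'] == 62
 *	base64_rev_maps[BASE64_IMAP][',']    == 63
 *
 * Every byte outside the variant's alphabet maps to -1.
 */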
/**
* base64_encode() - base64-encode some binary data
* base64_encode() - Base64-encode some binary data
* @src: the binary data to encode
* @srclen: the length of @src in bytes
* @dst: (output) the base64-encoded string. Not NUL-terminated.
* @dst: (output) the Base64-encoded string. Not NUL-terminated.
* @padding: whether to append '=' padding characters
* @variant: which base64 variant to use
*
* Encodes data using base64 encoding, i.e. the "Base 64 Encoding" specified
* by RFC 4648, including the '='-padding.
* Encodes data using the selected Base64 variant.
*
* Return: the length of the resulting base64-encoded string in bytes.
* Return: the length of the resulting Base64-encoded string in bytes.
*/
int base64_encode(const u8 *src, int srclen, char *dst)
int base64_encode(const u8 *src, int srclen, char *dst, bool padding, enum base64_variant variant)
{
u32 ac = 0;
int bits = 0;
int i;
char *cp = dst;
const char *base64_table = base64_tables[variant];
for (i = 0; i < srclen; i++) {
ac = (ac << 8) | src[i];
bits += 8;
do {
bits -= 6;
*cp++ = base64_table[(ac >> bits) & 0x3f];
} while (bits >= 6);
while (srclen >= 3) {
ac = src[0] << 16 | src[1] << 8 | src[2];
*cp++ = base64_table[ac >> 18];
*cp++ = base64_table[(ac >> 12) & 0x3f];
*cp++ = base64_table[(ac >> 6) & 0x3f];
*cp++ = base64_table[ac & 0x3f];
src += 3;
srclen -= 3;
}
if (bits) {
*cp++ = base64_table[(ac << (6 - bits)) & 0x3f];
bits -= 6;
}
while (bits < 0) {
*cp++ = '=';
bits += 2;
switch (srclen) {
case 2:
ac = src[0] << 16 | src[1] << 8;
*cp++ = base64_table[ac >> 18];
*cp++ = base64_table[(ac >> 12) & 0x3f];
*cp++ = base64_table[(ac >> 6) & 0x3f];
if (padding)
*cp++ = '=';
break;
case 1:
ac = src[0] << 16;
*cp++ = base64_table[ac >> 18];
*cp++ = base64_table[(ac >> 12) & 0x3f];
if (padding) {
*cp++ = '=';
*cp++ = '=';
}
break;
}
return cp - dst;
}
EXPORT_SYMBOL_GPL(base64_encode);
/**
* base64_decode() - base64-decode a string
* base64_decode() - Base64-decode a string
* @src: the string to decode. Doesn't need to be NUL-terminated.
* @srclen: the length of @src in bytes
* @dst: (output) the decoded binary data
* @padding: whether '=' padding characters are expected
* @variant: which base64 variant to use
*
* Decodes a string using base64 encoding, i.e. the "Base 64 Encoding"
* specified by RFC 4648, including the '='-padding.
*
* This implementation hasn't been optimized for performance.
* Decodes a string using the selected Base64 variant.
*
* Return: the length of the resulting decoded binary data in bytes,
* or -1 if the string isn't a valid base64 string.
* or -1 if the string isn't a valid Base64 string.
*/
int base64_decode(const char *src, int srclen, u8 *dst)
int base64_decode(const char *src, int srclen, u8 *dst, bool padding, enum base64_variant variant)
{
u32 ac = 0;
int bits = 0;
int i;
u8 *bp = dst;
s8 input[4];
s32 val;
const u8 *s = (const u8 *)src;
const s8 *base64_rev_tables = base64_rev_maps[variant];
for (i = 0; i < srclen; i++) {
const char *p = strchr(base64_table, src[i]);
while (srclen >= 4) {
input[0] = base64_rev_tables[s[0]];
input[1] = base64_rev_tables[s[1]];
input[2] = base64_rev_tables[s[2]];
input[3] = base64_rev_tables[s[3]];
if (src[i] == '=') {
ac = (ac << 6);
bits += 6;
if (bits >= 8)
bits -= 8;
continue;
}
if (p == NULL || src[i] == 0)
return -1;
ac = (ac << 6) | (p - base64_table);
bits += 6;
if (bits >= 8) {
bits -= 8;
*bp++ = (u8)(ac >> bits);
val = input[0] << 18 | input[1] << 12 | input[2] << 6 | input[3];
if (unlikely(val < 0)) {
if (!padding || srclen != 4 || s[3] != '=')
return -1;
padding = 0;
srclen = s[2] == '=' ? 2 : 3;
break;
}
*bp++ = val >> 16;
*bp++ = val >> 8;
*bp++ = val;
s += 4;
srclen -= 4;
}
if (ac & ((1 << bits) - 1))
if (likely(!srclen))
return bp - dst;
if (padding || srclen == 1)
return -1;
val = (base64_rev_tables[s[0]] << 12) | (base64_rev_tables[s[1]] << 6);
*bp++ = val >> 10;
if (srclen == 2) {
if (val & 0x800003ff)
return -1;
} else {
val |= base64_rev_tables[s[2]];
if (val & 0x80000003)
return -1;
*bp++ = val >> 2;
}
return bp - dst;
}
EXPORT_SYMBOL_GPL(base64_decode);
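/*
 * Hedged usage sketch of the new signatures (buffer sizing is the
 * caller's responsibility and is hard-coded here for brevity):
 *
 *	char enc[8];
 *	u8 dec[4];
 *	int n;
 *
 *	n = base64_encode((const u8 *)"hi", 2, enc, true, BASE64_STD);
 *	// n == 4, enc now holds "aGk="
 *	n = base64_decode(enc, 4, dec, true, BASE64_STD);
 *	// n == 2, dec now holds "hi"
 */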


@@ -95,6 +95,7 @@ static const struct { unsigned flag:8; char opt_char; } opt_array[] = {
{ _DPRINTK_FLAGS_INCL_SOURCENAME, 's' },
{ _DPRINTK_FLAGS_INCL_LINENO, 'l' },
{ _DPRINTK_FLAGS_INCL_TID, 't' },
{ _DPRINTK_FLAGS_INCL_STACK, 'd' },
{ _DPRINTK_FLAGS_NONE, '_' },
};
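/*
 * Hedged usage sketch: assuming the new 'd' flag follows its
 * _DPRINTK_FLAGS_INCL_STACK name and emits a stack dump with each
 * matching message, it is toggled like any other flag (the file path
 * below is illustrative):
 *
 *	echo 'file drivers/foo/bar.c +pd' > /sys/kernel/debug/dynamic_debug/control
 */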


@@ -177,94 +177,157 @@ EXPORT_SYMBOL(div64_s64);
* Iterative div/mod for use when dividend is not expected to be much
* bigger than divisor.
*/
#ifndef iter_div_u64_rem
u32 iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder)
{
return __iter_div_u64_rem(dividend, divisor, remainder);
}
EXPORT_SYMBOL(iter_div_u64_rem);
#endif
#ifndef mul_u64_u64_div_u64
u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
#if !defined(mul_u64_add_u64_div_u64) || defined(test_mul_u64_add_u64_div_u64)
#define mul_add(a, b, c) add_u64_u32(mul_u32_u32(a, b), c)
#if defined(__SIZEOF_INT128__) && !defined(test_mul_u64_add_u64_div_u64)
static inline u64 mul_u64_u64_add_u64(u64 *p_lo, u64 a, u64 b, u64 c)
{
if (ilog2(a) + ilog2(b) <= 62)
return div64_u64(a * b, c);
#if defined(__SIZEOF_INT128__)
/* native 64x64=128 bits multiplication */
u128 prod = (u128)a * b;
u64 n_lo = prod, n_hi = prod >> 64;
u128 prod = (u128)a * b + c;
*p_lo = prod;
return prod >> 64;
}
#else
/* perform a 64x64=128 bits multiplication manually */
u32 a_lo = a, a_hi = a >> 32, b_lo = b, b_hi = b >> 32;
static inline u64 mul_u64_u64_add_u64(u64 *p_lo, u64 a, u64 b, u64 c)
{
/* perform a 64x64=128 bits multiplication in 32bit chunks */
u64 x, y, z;
x = (u64)a_lo * b_lo;
y = (u64)a_lo * b_hi + (u32)(x >> 32);
z = (u64)a_hi * b_hi + (u32)(y >> 32);
y = (u64)a_hi * b_lo + (u32)y;
z += (u32)(y >> 32);
x = (y << 32) + (u32)x;
u64 n_lo = x, n_hi = z;
#endif
/* make sure c is not zero, trigger runtime exception otherwise */
if (unlikely(c == 0)) {
unsigned long zero = 0;
OPTIMIZER_HIDE_VAR(zero);
return ~0UL/zero;
}
int shift = __builtin_ctzll(c);
/* try reducing the fraction in case the dividend becomes <= 64 bits */
if ((n_hi >> shift) == 0) {
u64 n = shift ? (n_lo >> shift) | (n_hi << (64 - shift)) : n_lo;
return div64_u64(n, c >> shift);
/*
* The remainder value if needed would be:
* res = div64_u64_rem(n, c >> shift, &rem);
* rem = (rem << shift) + (n_lo - (n << shift));
*/
}
if (n_hi >= c) {
/* overflow: result is unrepresentable in a u64 */
return -1;
}
/* Do the full 128 by 64 bits division */
shift = __builtin_clzll(c);
c <<= shift;
int p = 64 + shift;
u64 res = 0;
bool carry;
do {
carry = n_hi >> 63;
shift = carry ? 1 : __builtin_clzll(n_hi);
if (p < shift)
break;
p -= shift;
n_hi <<= shift;
n_hi |= n_lo >> (64 - shift);
n_lo <<= shift;
if (carry || (n_hi >= c)) {
n_hi -= c;
res |= 1ULL << p;
}
} while (n_hi);
/* The remainder value if needed would be n_hi << p */
return res;
/* Since (x-1)*(x-1) + 2*(x-1) == x*x - 1, two u32 can be added to a u64 */
x = mul_add(a, b, c);
y = mul_add(a, b >> 32, c >> 32);
y = add_u64_u32(y, x >> 32);
z = mul_add(a >> 32, b >> 32, y >> 32);
y = mul_add(a >> 32, b, y);
*p_lo = (y << 32) + (u32)x;
return add_u64_u32(z, y >> 32);
}
EXPORT_SYMBOL(mul_u64_u64_div_u64);
#endif
#ifndef BITS_PER_ITER
#define BITS_PER_ITER (__LONG_WIDTH__ >= 64 ? 32 : 16)
#endif
#if BITS_PER_ITER == 32
#define mul_u64_long_add_u64(p_lo, a, b, c) mul_u64_u64_add_u64(p_lo, a, b, c)
#define add_u64_long(a, b) ((a) + (b))
#else
#undef BITS_PER_ITER
#define BITS_PER_ITER 16
static inline u32 mul_u64_long_add_u64(u64 *p_lo, u64 a, u32 b, u64 c)
{
u64 n_lo = mul_add(a, b, c);
u64 n_med = mul_add(a >> 32, b, c >> 32);
n_med = add_u64_u32(n_med, n_lo >> 32);
*p_lo = n_med << 32 | (u32)n_lo;
return n_med >> 32;
}
#define add_u64_long(a, b) add_u64_u32(a, b)
#endif
u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
{
unsigned long d_msig, q_digit;
unsigned int reps, d_z_hi;
u64 quotient, n_lo, n_hi;
u32 overflow;
n_hi = mul_u64_u64_add_u64(&n_lo, a, b, c);
if (!n_hi)
return div64_u64(n_lo, d);
if (unlikely(n_hi >= d)) {
/* trigger runtime exception if divisor is zero */
if (d == 0) {
unsigned long zero = 0;
OPTIMIZER_HIDE_VAR(zero);
return ~0UL/zero;
}
/* overflow: result is unrepresentable in a u64 */
return ~0ULL;
}
/* Left align the divisor, shifting the dividend to match */
d_z_hi = __builtin_clzll(d);
if (d_z_hi) {
d <<= d_z_hi;
n_hi = n_hi << d_z_hi | n_lo >> (64 - d_z_hi);
n_lo <<= d_z_hi;
}
reps = 64 / BITS_PER_ITER;
/* Optimise loop count for small dividends */
if (!(u32)(n_hi >> 32)) {
reps -= 32 / BITS_PER_ITER;
n_hi = n_hi << 32 | n_lo >> 32;
n_lo <<= 32;
}
#if BITS_PER_ITER == 16
if (!(u32)(n_hi >> 48)) {
reps--;
n_hi = add_u64_u32(n_hi << 16, n_lo >> 48);
n_lo <<= 16;
}
#endif
/* Invert the dividend so we can use add instead of subtract. */
n_lo = ~n_lo;
n_hi = ~n_hi;
/*
* Get the most significant BITS_PER_ITER bits of the divisor.
* This is used to get a low 'guestimate' of the quotient digit.
*/
d_msig = (d >> (64 - BITS_PER_ITER)) + 1;
/*
* Now do a 'long division' with BITS_PER_ITER bit 'digits'.
* The 'guess' quotient digit can be low and BITS_PER_ITER+1 bits.
* The worst case is dividing ~0 by 0x8000 which requires two subtracts.
*/
quotient = 0;
while (reps--) {
q_digit = (unsigned long)(~n_hi >> (64 - 2 * BITS_PER_ITER)) / d_msig;
/* Shift 'n' left to align with the product q_digit * d */
overflow = n_hi >> (64 - BITS_PER_ITER);
n_hi = add_u64_u32(n_hi << BITS_PER_ITER, n_lo >> (64 - BITS_PER_ITER));
n_lo <<= BITS_PER_ITER;
/* Add product to negated divisor */
overflow += mul_u64_long_add_u64(&n_hi, d, q_digit, n_hi);
/* Adjust for the q_digit 'guestimate' being low */
while (overflow < 0xffffffff >> (32 - BITS_PER_ITER)) {
q_digit++;
n_hi += d;
overflow += n_hi < d;
}
quotient = add_u64_long(quotient << BITS_PER_ITER, q_digit);
}
/*
* The above only ensures the remainder doesn't overflow,
* it can still be possible to add (aka subtract) another copy
* of the divisor.
*/
if ((n_hi + d) > n_hi)
quotient++;
return quotient;
}
#if !defined(test_mul_u64_add_u64_div_u64)
EXPORT_SYMBOL(mul_u64_add_u64_div_u64);
#endif
#endif
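/*
 * Worked example, taken from the self-test table in
 * lib/math/test_mul_u64_u64_div_u64.c: for a = 0xb, b = 0x7, d = 0x3,
 * a * b = 77 = 3 * 25 + 2, so mul_u64_u64_div_u64() returns 0x19 while
 * the round-up variant, equivalent to mul_u64_add_u64_div_u64() with
 * c = d - 1, returns 0x1a.
 */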


@@ -10,80 +10,141 @@
#include <linux/printk.h>
#include <linux/math64.h>
typedef struct { u64 a; u64 b; u64 c; u64 result; } test_params;
typedef struct { u64 a; u64 b; u64 d; u64 result; uint round_up;} test_params;
static test_params test_values[] = {
/* this contains many edge values followed by a couple random values */
{ 0xb, 0x7, 0x3, 0x19 },
{ 0xffff0000, 0xffff0000, 0xf, 0x1110eeef00000000 },
{ 0xffffffff, 0xffffffff, 0x1, 0xfffffffe00000001 },
{ 0xffffffff, 0xffffffff, 0x2, 0x7fffffff00000000 },
{ 0x1ffffffff, 0xffffffff, 0x2, 0xfffffffe80000000 },
{ 0x1ffffffff, 0xffffffff, 0x3, 0xaaaaaaa9aaaaaaab },
{ 0x1ffffffff, 0x1ffffffff, 0x4, 0xffffffff00000000 },
{ 0xffff000000000000, 0xffff000000000000, 0xffff000000000001, 0xfffeffffffffffff },
{ 0x3333333333333333, 0x3333333333333333, 0x5555555555555555, 0x1eb851eb851eb851 },
{ 0x7fffffffffffffff, 0x2, 0x3, 0x5555555555555554 },
{ 0xffffffffffffffff, 0x2, 0x8000000000000000, 0x3 },
{ 0xffffffffffffffff, 0x2, 0xc000000000000000, 0x2 },
{ 0xffffffffffffffff, 0x4000000000000004, 0x8000000000000000, 0x8000000000000007 },
{ 0xffffffffffffffff, 0x4000000000000001, 0x8000000000000000, 0x8000000000000001 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xffffffffffffffff, 0x8000000000000001 },
{ 0xfffffffffffffffe, 0x8000000000000001, 0xffffffffffffffff, 0x8000000000000000 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xfffffffffffffffe, 0x8000000000000001 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xfffffffffffffffd, 0x8000000000000002 },
{ 0x7fffffffffffffff, 0xffffffffffffffff, 0xc000000000000000, 0xaaaaaaaaaaaaaaa8 },
{ 0xffffffffffffffff, 0x7fffffffffffffff, 0xa000000000000000, 0xccccccccccccccca },
{ 0xffffffffffffffff, 0x7fffffffffffffff, 0x9000000000000000, 0xe38e38e38e38e38b },
{ 0x7fffffffffffffff, 0x7fffffffffffffff, 0x5000000000000000, 0xccccccccccccccc9 },
{ 0xffffffffffffffff, 0xfffffffffffffffe, 0xffffffffffffffff, 0xfffffffffffffffe },
{ 0xe6102d256d7ea3ae, 0x70a77d0be4c31201, 0xd63ec35ab3220357, 0x78f8bf8cc86c6e18 },
{ 0xf53bae05cb86c6e1, 0x3847b32d2f8d32e0, 0xcfd4f55a647f403c, 0x42687f79d8998d35 },
{ 0x9951c5498f941092, 0x1f8c8bfdf287a251, 0xa3c8dc5f81ea3fe2, 0x1d887cb25900091f },
{ 0x374fee9daa1bb2bb, 0x0d0bfbff7b8ae3ef, 0xc169337bd42d5179, 0x03bb2dbaffcbb961 },
{ 0xeac0d03ac10eeaf0, 0x89be05dfa162ed9b, 0x92bb1679a41f0e4b, 0xdc5f5cc9e270d216 },
{ 0xb, 0x7, 0x3, 0x19, 1 },
{ 0xffff0000, 0xffff0000, 0xf, 0x1110eeef00000000, 0 },
{ 0xffffffff, 0xffffffff, 0x1, 0xfffffffe00000001, 0 },
{ 0xffffffff, 0xffffffff, 0x2, 0x7fffffff00000000, 1 },
{ 0x1ffffffff, 0xffffffff, 0x2, 0xfffffffe80000000, 1 },
{ 0x1ffffffff, 0xffffffff, 0x3, 0xaaaaaaa9aaaaaaab, 0 },
{ 0x1ffffffff, 0x1ffffffff, 0x4, 0xffffffff00000000, 1 },
{ 0xffff000000000000, 0xffff000000000000, 0xffff000000000001, 0xfffeffffffffffff, 1 },
{ 0x3333333333333333, 0x3333333333333333, 0x5555555555555555, 0x1eb851eb851eb851, 1 },
{ 0x7fffffffffffffff, 0x2, 0x3, 0x5555555555555554, 1 },
{ 0xffffffffffffffff, 0x2, 0x8000000000000000, 0x3, 1 },
{ 0xffffffffffffffff, 0x2, 0xc000000000000000, 0x2, 1 },
{ 0xffffffffffffffff, 0x4000000000000004, 0x8000000000000000, 0x8000000000000007, 1 },
{ 0xffffffffffffffff, 0x4000000000000001, 0x8000000000000000, 0x8000000000000001, 1 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xffffffffffffffff, 0x8000000000000001, 0 },
{ 0xfffffffffffffffe, 0x8000000000000001, 0xffffffffffffffff, 0x8000000000000000, 1 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xfffffffffffffffe, 0x8000000000000001, 1 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xfffffffffffffffd, 0x8000000000000002, 1 },
{ 0x7fffffffffffffff, 0xffffffffffffffff, 0xc000000000000000, 0xaaaaaaaaaaaaaaa8, 1 },
{ 0xffffffffffffffff, 0x7fffffffffffffff, 0xa000000000000000, 0xccccccccccccccca, 1 },
{ 0xffffffffffffffff, 0x7fffffffffffffff, 0x9000000000000000, 0xe38e38e38e38e38b, 1 },
{ 0x7fffffffffffffff, 0x7fffffffffffffff, 0x5000000000000000, 0xccccccccccccccc9, 1 },
{ 0xffffffffffffffff, 0xfffffffffffffffe, 0xffffffffffffffff, 0xfffffffffffffffe, 0 },
{ 0xe6102d256d7ea3ae, 0x70a77d0be4c31201, 0xd63ec35ab3220357, 0x78f8bf8cc86c6e18, 1 },
{ 0xf53bae05cb86c6e1, 0x3847b32d2f8d32e0, 0xcfd4f55a647f403c, 0x42687f79d8998d35, 1 },
{ 0x9951c5498f941092, 0x1f8c8bfdf287a251, 0xa3c8dc5f81ea3fe2, 0x1d887cb25900091f, 1 },
{ 0x374fee9daa1bb2bb, 0x0d0bfbff7b8ae3ef, 0xc169337bd42d5179, 0x03bb2dbaffcbb961, 1 },
{ 0xeac0d03ac10eeaf0, 0x89be05dfa162ed9b, 0x92bb1679a41f0e4b, 0xdc5f5cc9e270d216, 1 },
};
/*
* The above table can be verified with the following shell script:
*
* #!/bin/sh
* sed -ne 's/^{ \+\(.*\), \+\(.*\), \+\(.*\), \+\(.*\) },$/\1 \2 \3 \4/p' \
* lib/math/test_mul_u64_u64_div_u64.c |
* while read a b c r; do
* expected=$( printf "obase=16; ibase=16; %X * %X / %X\n" $a $b $c | bc )
* given=$( printf "%X\n" $r )
* if [ "$expected" = "$given" ]; then
* echo "$a * $b / $c = $r OK"
* else
* echo "$a * $b / $c = $r is wrong" >&2
* echo "should be equivalent to 0x$expected" >&2
* exit 1
* fi
* done
#!/bin/sh
sed -ne 's/^{ \+\(.*\), \+\(.*\), \+\(.*\), \+\(.*\), \+\(.*\) },$/\1 \2 \3 \4 \5/p' \
lib/math/test_mul_u64_u64_div_u64.c |
while read a b d r e; do
expected=$( printf "obase=16; ibase=16; %X * %X / %X\n" $a $b $d | bc )
given=$( printf "%X\n" $r )
if [ "$expected" = "$given" ]; then
echo "$a * $b / $d = $r OK"
else
echo "$a * $b / $d = $r is wrong" >&2
echo "should be equivalent to 0x$expected" >&2
exit 1
fi
expected=$( printf "obase=16; ibase=16; (%X * %X + %X) / %X\n" $a $b $((d-1)) $d | bc )
given=$( printf "%X\n" $((r + e)) )
if [ "$expected" = "$given" ]; then
echo "$a * $b +/ $d = $(printf '%#x' $((r + e))) OK"
else
echo "$a * $b +/ $d = $(printf '%#x' $((r + e))) is wrong" >&2
echo "should be equivalent to 0x$expected" >&2
exit 1
fi
done
*/
static int __init test_init(void)
static u64 test_mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d);
#if __LONG_WIDTH__ >= 64
#define TEST_32BIT_DIV
static u64 test_mul_u64_add_u64_div_u64_32bit(u64 a, u64 b, u64 c, u64 d);
#endif
static int __init test_run(unsigned int fn_no, const char *fn_name)
{
u64 start_time;
int errors = 0;
int tests = 0;
int i;
pr_info("Starting mul_u64_u64_div_u64() test\n");
start_time = ktime_get_ns();
for (i = 0; i < ARRAY_SIZE(test_values); i++) {
u64 a = test_values[i].a;
u64 b = test_values[i].b;
u64 c = test_values[i].c;
u64 d = test_values[i].d;
u64 expected_result = test_values[i].result;
u64 result = mul_u64_u64_div_u64(a, b, c);
u64 result, result_up;
switch (fn_no) {
default:
result = mul_u64_u64_div_u64(a, b, d);
result_up = mul_u64_u64_div_u64_roundup(a, b, d);
break;
case 1:
result = test_mul_u64_add_u64_div_u64(a, b, 0, d);
result_up = test_mul_u64_add_u64_div_u64(a, b, d - 1, d);
break;
#ifdef TEST_32BIT_DIV
case 2:
result = test_mul_u64_add_u64_div_u64_32bit(a, b, 0, d);
result_up = test_mul_u64_add_u64_div_u64_32bit(a, b, d - 1, d);
break;
#endif
}
tests += 2;
if (result != expected_result) {
pr_err("ERROR: 0x%016llx * 0x%016llx / 0x%016llx\n", a, b, c);
pr_err("ERROR: 0x%016llx * 0x%016llx / 0x%016llx\n", a, b, d);
pr_err("ERROR: expected result: %016llx\n", expected_result);
pr_err("ERROR: obtained result: %016llx\n", result);
errors++;
}
expected_result += test_values[i].round_up;
if (result_up != expected_result) {
pr_err("ERROR: 0x%016llx * 0x%016llx +/ 0x%016llx\n", a, b, d);
pr_err("ERROR: expected result: %016llx\n", expected_result);
pr_err("ERROR: obtained result: %016llx\n", result_up);
errors++;
}
}
pr_info("Completed mul_u64_u64_div_u64() test\n");
pr_info("Completed %s() test, %d tests, %d errors, %llu ns\n",
fn_name, tests, errors, ktime_get_ns() - start_time);
return errors;
}
static int __init test_init(void)
{
pr_info("Starting mul_u64_u64_div_u64() test\n");
if (test_run(0, "mul_u64_u64_div_u64"))
return -EINVAL;
if (test_run(1, "test_mul_u64_u64_div_u64"))
return -EINVAL;
#ifdef TEST_32BIT_DIV
if (test_run(2, "test_mul_u64_u64_div_u64_32bit"))
return -EINVAL;
#endif
return 0;
}
@@ -91,6 +152,36 @@ static void __exit test_exit(void)
{
}
/* Compile the generic mul_u64_add_u64_div_u64() code */
#undef __div64_32
#define __div64_32 __div64_32
#define div_s64_rem div_s64_rem
#define div64_u64_rem div64_u64_rem
#define div64_u64 div64_u64
#define div64_s64 div64_s64
#define iter_div_u64_rem iter_div_u64_rem
#undef mul_u64_add_u64_div_u64
#define mul_u64_add_u64_div_u64 test_mul_u64_add_u64_div_u64
#define test_mul_u64_add_u64_div_u64 test_mul_u64_add_u64_div_u64
#include "div64.c"
#ifdef TEST_32BIT_DIV
/* Recompile the generic code for 32bit long */
#undef test_mul_u64_add_u64_div_u64
#define test_mul_u64_add_u64_div_u64 test_mul_u64_add_u64_div_u64_32bit
#undef BITS_PER_ITER
#define BITS_PER_ITER 16
#define mul_u64_u64_add_u64 mul_u64_u64_add_u64_32bit
#undef mul_u64_long_add_u64
#undef add_u64_long
#undef mul_add
#include "div64.c"
#endif
module_init(test_init);
module_exit(test_exit);


@@ -47,8 +47,8 @@ static void plist_check_list(struct list_head *top)
plist_check_prev_next(top, prev, next);
while (next != top) {
WRITE_ONCE(prev, next);
WRITE_ONCE(next, prev->next);
prev = next;
next = prev->next;
plist_check_prev_next(top, prev, next);
}
}


@@ -27,7 +27,7 @@
int ___ratelimit(struct ratelimit_state *rs, const char *func)
{
/* Paired with WRITE_ONCE() in .proc_handler().
* Changing two values seperately could be inconsistent
* Changing two values separately could be inconsistent
* and some message could be lost. (See: net_ratelimit_state).
*/
int interval = READ_ONCE(rs->interval);


@@ -460,35 +460,6 @@ void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
}
EXPORT_SYMBOL(__rb_insert_augmented);
/*
* This function returns the first node (in sort order) of the tree.
*/
struct rb_node *rb_first(const struct rb_root *root)
{
struct rb_node *n;
n = root->rb_node;
if (!n)
return NULL;
while (n->rb_left)
n = n->rb_left;
return n;
}
EXPORT_SYMBOL(rb_first);
struct rb_node *rb_last(const struct rb_root *root)
{
struct rb_node *n;
n = root->rb_node;
if (!n)
return NULL;
while (n->rb_right)
n = n->rb_right;
return n;
}
EXPORT_SYMBOL(rb_last);
struct rb_node *rb_next(const struct rb_node *node)
{
struct rb_node *parent;

Some files were not shown because too many files have changed in this diff.