linux/fs/xfs/libxfs/xfs_zones.h
Damien Le Moal ff3d90903f xfs: improve default maximum number of open zones
For regular block devices using the zoned allocator, the default
maximum number of open zones is set to 1/4 of the number of realtime
groups. For a large capacity device, this leads to a very large limit.
E.g. with a 26 TB HDD:

mount /dev/sdb /mnt
...
XFS (sdb): 95836 zones of 65536 blocks size (23959 max open)

In turn, such a large limit on the number of open zones can, depending
on the workload, lead to a very large number of concurrent write streams,
which devices generally do not handle well, resulting in poor performance.

Introduce the default limit XFS_DEFAULT_MAX_OPEN_ZONES, defined as 128
to match the hardware limit of most SMR HDDs available today, and use
this limit to set mp->m_max_open_zones in xfs_calc_open_zones() instead
of calling xfs_max_open_zones(), when the user did not specify a limit
with the max_open_zones mount option.

For the 26 TB HDD example, we now get:

mount /dev/sdb /mnt
...
XFS (sdb): 95836 zones of 65536 blocks (128 max open zones)

This change does not prevent the user from specifying a larger number
for the open zones limit. E.g.

mount -o max_open_zones=4096 /dev/sdb /mnt
...
XFS (sdb): 95836 zones of 65536 blocks (4096 max open zones)

Finally, since xfs_calc_open_zones() checks and caps the
mp->m_max_open_zones limit against the value calculated by
xfs_max_open_zones() for any type of device, this new default limit does
not increase m_max_open_zones for small capacity devices.
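
The capping behavior described above can be illustrated with a minimal
sketch. Note that default_open_zones() is a hypothetical stand-in for the
logic in xfs_calc_open_zones() (which lives elsewhere in fs/xfs and is not
shown in this commit), and "rgcount / 4" stands in for the value computed
by xfs_max_open_zones() for a regular block device:

```c
#include <assert.h>

/* Limits from fs/xfs/libxfs/xfs_zones.h */
#define XFS_DEFAULT_MAX_OPEN_ZONES	128

/*
 * Hypothetical sketch of the capping described in this commit: start from
 * the new default, then cap it against the per-device maximum (here
 * approximated as 1/4 of the number of realtime groups, the old default).
 */
static unsigned int default_open_zones(unsigned int rgcount)
{
	unsigned int device_cap = rgcount / 4;
	unsigned int limit = XFS_DEFAULT_MAX_OPEN_ZONES;

	/* Never exceed what the device geometry supports. */
	if (limit > device_cap)
		limit = device_cap;
	return limit;
}
```

With the 26 TB HDD example (95836 zones), the device cap is 23959, so the
default of 128 applies; a small device with only 40 realtime groups would
instead be capped down to 10, which is why the new default cannot increase
the limit for small capacity devices.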

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Carlos Maiolino <cem@kernel.org>
2025-09-18 17:32:39 +02:00


/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LIBXFS_ZONES_H
#define _LIBXFS_ZONES_H

struct xfs_rtgroup;

/*
 * In order to guarantee forward progress for GC we need to reserve at least
 * two zones: one that will be used for moving data into and one spare zone
 * making sure that we have enough space to relocate a nearly-full zone.
 * To allow for slightly sloppy accounting for when we need to reserve the
 * second zone, we actually reserve three as that is easier than doing fully
 * accurate bookkeeping.
 */
#define XFS_GC_ZONES		3U

/*
 * In addition we need two zones for user writes, one open zone for writing
 * and one to still have available blocks without resetting the open zone
 * when data in the open zone has been freed.
 */
#define XFS_RESERVED_ZONES	(XFS_GC_ZONES + 1)
#define XFS_MIN_ZONES		(XFS_RESERVED_ZONES + 1)

/*
 * Always keep one zone out of the general open zone pool to allow for GC to
 * happen while other writers are waiting for free space.
 */
#define XFS_OPEN_GC_ZONES	1U
#define XFS_MIN_OPEN_ZONES	(XFS_OPEN_GC_ZONES + 1U)

/*
 * For zoned devices that do not have a limit on the number of open zones, and
 * for regular devices using the zoned allocator, use the most common SMR disks
 * limit (128) as the default limit on the number of open zones.
 */
#define XFS_DEFAULT_MAX_OPEN_ZONES	128

bool xfs_zone_validate(struct blk_zone *zone, struct xfs_rtgroup *rtg,
		xfs_rgblock_t *write_pointer);

#endif /* _LIBXFS_ZONES_H */