Merge tag 'for-6.18/block-20250929' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux

Pull block updates from Jens Axboe:

 - NVMe pull request via Keith:
     - FC target fixes (Daniel)
     - Authentication fixes and updates (Martin, Chris)
     - Admin controller handling (Kamaljit)
     - Target lockdep assertions (Max)
     - Keep-alive updates for discovery (Alastair)
     - Suspend quirk (Georg)

 - MD pull request via Yu:
     - Add support for a lockless bitmap.

       A key feature of the new bitmap is that the IO fastpath is
       lockless. If a user issues lots of write IO to the same bitmap
       bit in a short time, only the first write incurs the additional
       overhead of updating the bitmap bit; the following writes incur
       no additional overhead.

       By supporting resync or recovery of only the data that has been
       written, there is no need to do a full disk resync/recovery when
       creating a new array or replacing a disk with a new one.
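
       As a rough illustration of the idea (a hedged sketch of the
       general pattern only, not the md llbitmap implementation), the
       write fastpath pays for an atomic bit operation only when it is
       the first writer to dirty a chunk:

           #include <linux/bitops.h>

           /* Hypothetical helper, not the md code. */
           static bool mark_chunk_dirty(unsigned long *bits, unsigned int chunk)
           {
                   /* Fast path: already dirty, no extra work for this write. */
                   if (test_bit(chunk, bits))
                           return false;

                   /*
                    * Slow path: the first writer sets the bit (and would kick
                    * off the on-disk bitmap update before the data write).
                    */
                   return !test_and_set_bit(chunk, bits);
           }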

 - Switch ->getgeo() and ->bios_param() to using struct gendisk rather
   than struct block_device.

 - Rust block changes via Andreas. This series adds configuration via
   configfs and remote completion to the rnull driver. The series also
   includes a set of changes to the rust block device driver API: a few
   cleanup patches, and a few features supporting the rnull changes.

   The series removes the raw buffer formatting logic from
   `kernel::block` and improves the logic available in `kernel::string`
   to support the same use as the removed logic.

 - floppy arch cleanups

 - Reduce the number of pointer dereferences needed for ublk commands

 - Restrict supported sockets for nbd. Mostly done to eliminate a class
   of issues perpetually reported by syzbot, which triggers them by
   using nonsensical socket setups.

 - A few s390 dasd block fixes

 - Fix a few issues around atomic writes

 - Improve DMA iteration for integrity requests
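
   The series adds start/next style DMA mapping iterators for integrity
   data (see the blk-mq-dma hunks below). As a rough driver-side sketch
   based only on the kernel-doc added here (the iterator and field names
   come from this series, the surrounding driver code is hypothetical):

       struct dma_iova_state state;
       struct blk_dma_iter iter;

       if (blk_rq_integrity_dma_map_iter_start(req, dma_dev, &state, &iter)) {
               do {
                       /*
                        * Program iter.addr / iter.len into the device's
                        * metadata descriptors.
                        */
               } while (blk_rq_integrity_dma_map_iter_next(req, dma_dev, &iter));
       }
       if (iter.status != BLK_STS_OK)
               return iter.status;     /* fail the request with this status */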

 - Improve how iovecs are treated with regard to O_DIRECT alignment
   constraints.

   We used to require each segment to adhere to the constraints; now
   only the request as a whole needs to.
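
   As a hypothetical example, with a 512-byte logical block size, an
   O_DIRECT read split across two 256-byte iovec segments used to be
   rejected outright; it can now succeed because the total length is
   block aligned, provided each segment still satisfies the device's
   DMA alignment:

       #define _GNU_SOURCE
       #include <fcntl.h>
       #include <unistd.h>
       #include <sys/uio.h>

       static char buf[512] __attribute__((aligned(4096)));

       /* Illustrative only; success still depends on queue/dma_alignment. */
       static int read_split(const char *path)
       {
               struct iovec iov[2] = {
                       { .iov_base = buf,       .iov_len = 256 },
                       { .iov_base = buf + 256, .iov_len = 256 },
               };
               int fd = open(path, O_RDONLY | O_DIRECT);
               ssize_t ret;

               if (fd < 0)
                       return -1;
               ret = preadv(fd, iov, 2, 0);
               close(fd);
               return ret == 512 ? 0 : -1;
       }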

 - Clean up and improve p2p support, enabling use of p2p for metadata
   payloads

 - Improve locking of request lookup, using SRCU where appropriate
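
   Condensed from the blk_mq_tagset_busy_iter() change in this series:
   the tag walk now runs inside an SRCU read-side section, and the tag
   maps are freed via an RCU callback, so a racing iteration never
   dereferences freed tag memory:

       int srcu_idx;

       srcu_idx = srcu_read_lock(&tagset->tags_srcu);
       /* ... walk tagset->tags[] and the per-tag request maps ... */
       srcu_read_unlock(&tagset->tags_srcu, srcu_idx);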

 - Use page references properly for brd, avoiding very long RCU sections

 - Fix ordering of recursively submitted IOs

 - Clean up and improve updating nr_requests for a live device

 - Various fixes and cleanups

* tag 'for-6.18/block-20250929' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: (164 commits)
  s390/dasd: enforce dma_alignment to ensure proper buffer validation
  s390/dasd: Return BLK_STS_INVAL for EINVAL from do_dasd_request
  ublk: remove redundant zone op check in ublk_setup_iod()
  nvme: Use non zero KATO for persistent discovery connections
  nvmet: add safety check for subsys lock
  nvme-core: use nvme_is_io_ctrl() for I/O controller check
  nvme-core: do ioccsz/iorcsz validation only for I/O controllers
  nvme-core: add method to check for an I/O controller
  blk-cgroup: fix possible deadlock while configuring policy
  blk-mq: fix null-ptr-deref in blk_mq_free_tags() from error path
  blk-mq: Fix more tag iteration function documentation
  selftests: ublk: fix behavior when fio is not installed
  ublk: don't access ublk_queue in ublk_unmap_io()
  ublk: pass ublk_io to __ublk_complete_rq()
  ublk: don't access ublk_queue in ublk_need_complete_req()
  ublk: don't access ublk_queue in ublk_check_commit_and_fetch()
  ublk: don't pass ublk_queue to ublk_fetch()
  ublk: don't access ublk_queue in ublk_config_io_buf()
  ublk: don't access ublk_queue in ublk_check_fetch_buf()
  ublk: pass q_id and tag to __ublk_check_and_get_req()
  ...
Committed by: Linus Torvalds, 2025-10-02 10:16:56 -07:00
190 changed files with 4541 additions and 1833 deletions


@@ -603,16 +603,10 @@ Date: July 2003
Contact: linux-block@vger.kernel.org
Description:
[RW] This controls how many requests may be allocated in the
block layer for read or write requests. Note that the total
allocated number may be twice this amount, since it applies only
to reads or writes (not the accumulated sum).
To avoid priority inversion through request starvation, a
request queue maintains a separate request pool per each cgroup
when CONFIG_BLK_CGROUP is enabled, and this parameter applies to
each such per-block-cgroup request pool. IOW, if there are N
block cgroups, each request queue may have up to N request
pools, each independently regulated by nr_requests.
block layer. Noted this value only represents the quantity for a
single blk_mq_tags instance. The actual number for the entire
device depends on the hardware queue count, whether elevator is
enabled, and whether tags are shared.
What: /sys/block/<disk>/queue/nr_zones


@@ -347,6 +347,54 @@ All md devices contain:
active-idle
like active, but no writes have been seen for a while (safe_mode_delay).
consistency_policy
This indicates how the array maintains consistency in case of unexpected
shutdown. It can be:
none
Array has no redundancy information, e.g. raid0, linear.
resync
Full resync is performed and all redundancy is regenerated when the
array is started after unclean shutdown.
bitmap
Resync assisted by a write-intent bitmap.
journal
For raid4/5/6, journal device is used to log transactions and replay
after unclean shutdown.
ppl
For raid5 only, Partial Parity Log is used to close the write hole and
eliminate resync.
The accepted values when writing to this file are ``ppl`` and ``resync``,
used to enable and disable PPL.
uuid
This indicates the UUID of the array in the following format:
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
bitmap_type
[RW] When read, this file will display the current and available
bitmap for this array. The currently active bitmap will be enclosed
in [] brackets. Writing an bitmap name or ID to this file will switch
control of this array to that new bitmap. Note that writing a new
bitmap for created array is forbidden.
none
No bitmap
bitmap
The default internal bitmap
llbitmap
The lockless internal bitmap
If bitmap_type is not none, then additional bitmap attributes bitmap/xxx or
llbitmap/xxx will be created after md device KOBJ_CHANGE event.
If bitmap_type is bitmap, then the md device will also contain:
bitmap/location
This indicates where the write-intent bitmap for the array is
stored.
@@ -401,35 +449,23 @@ All md devices contain:
once the array becomes non-degraded, and this fact has been
recorded in the metadata.
consistency_policy
This indicates how the array maintains consistency in case of unexpected
shutdown. It can be:
If bitmap_type is llbitmap, then the md device will also contain:
none
Array has no redundancy information, e.g. raid0, linear.
llbitmap/bits
This is read-only, show status of bitmap bits, the number of each
value.
resync
Full resync is performed and all redundancy is regenerated when the
array is started after unclean shutdown.
llbitmap/metadata
This is read-only, show bitmap metadata, include chunksize, chunkshift,
chunks, offset and daemon_sleep.
bitmap
Resync assisted by a write-intent bitmap.
journal
For raid4/5/6, journal device is used to log transactions and replay
after unclean shutdown.
ppl
For raid5 only, Partial Parity Log is used to close the write hole and
eliminate resync.
The accepted values when writing to this file are ``ppl`` and ``resync``,
used to enable and disable PPL.
uuid
This indicates the UUID of the array in the following format:
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
llbitmap/daemon_sleep
This is read-write, time in seconds that daemon function will be
triggered to clear dirty bits.
llbitmap/barrier_idle
This is read-write, time in seconds that page barrier will be idled,
means dirty bits in the page will be cleared.
As component devices are added to an md array, they appear in the ``md``
directory as new directories named::


@@ -443,7 +443,7 @@ prototypes::
int (*direct_access) (struct block_device *, sector_t, void **,
unsigned long *);
void (*unlock_native_capacity) (struct gendisk *);
int (*getgeo)(struct block_device *, struct hd_geometry *);
int (*getgeo)(struct gendisk *, struct hd_geometry *);
void (*swap_slot_free_notify) (struct block_device *, unsigned long);
locking rules:


@@ -380,7 +380,7 @@ Details::
/**
* scsi_bios_ptable - return copy of block device's partition table
* @dev: pointer to block device
* @dev: pointer to gendisk
*
* Returns pointer to partition table, or NULL for failure
*
@@ -390,7 +390,7 @@ Details::
*
* Defined in: drivers/scsi/scsicam.c
**/
unsigned char *scsi_bios_ptable(struct block_device *dev)
unsigned char *scsi_bios_ptable(struct gendisk *dev)
/**
@@ -623,7 +623,7 @@ Details::
* bios_param - fetch head, sector, cylinder info for a disk
* @sdev: pointer to scsi device context (defined in
* include/scsi/scsi_device.h)
* @bdev: pointer to block device context (defined in fs.h)
* @disk: pointer to gendisk (defined in blkdev.h)
* @capacity: device size (in 512 byte sectors)
* @params: three element array to place output:
* params[0] number of heads (max 255)
@@ -643,7 +643,7 @@ Details::
*
* Optionally defined in: LLD
**/
int bios_param(struct scsi_device * sdev, struct block_device *bdev,
int bios_param(struct scsi_device * sdev, struct gendisk *disk,
sector_t capacity, int params[3])


@@ -4382,7 +4382,7 @@ W: https://rust-for-linux.com
B: https://github.com/Rust-for-Linux/linux/issues
C: https://rust-for-linux.zulipchat.com/#narrow/stream/Block
T: git https://github.com/Rust-for-Linux/linux.git rust-block-next
F: drivers/block/rnull.rs
F: drivers/block/rnull/
F: rust/kernel/block.rs
F: rust/kernel/block/


@@ -90,25 +90,6 @@ static int FDC2 = -1;
#define N_FDC 2
#define N_DRIVE 8
/*
* Most Alphas have no problems with floppy DMA crossing 64k borders,
* except for certain ones, like XL and RUFFIAN.
*
* However, the test is simple and fast, and this *is* floppy, after all,
* so we do it for all platforms, just to make sure.
*
* This is advantageous in other circumstances as well, as in moving
* about the PCI DMA windows and forcing the floppy to start doing
* scatter-gather when it never had before, and there *is* a problem
* on that platform... ;-}
*/
static inline unsigned long CROSS_64KB(void *a, unsigned long s)
{
unsigned long p = (unsigned long)a;
return ((p + s - 1) ^ p) & ~0xffffUL;
}
#define EXTRA_FLOPPY_PARAMS
#endif /* __ASM_ALPHA_FLOPPY_H */


@@ -65,8 +65,6 @@ static unsigned char floppy_selects[4] = { 0x10, 0x21, 0x23, 0x33 };
#define N_FDC 1
#define N_DRIVE 4
#define CROSS_64KB(a,s) (0)
/*
* This allows people to reverse the order of
* fd0 and fd1, in case their hardware is


@@ -77,9 +77,9 @@ static void nfhd_submit_bio(struct bio *bio)
bio_endio(bio);
}
static int nfhd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int nfhd_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct nfhd_device *dev = bdev->bd_disk->private_data;
struct nfhd_device *dev = disk->private_data;
geo->cylinders = dev->blocks >> (6 - dev->bshift);
geo->heads = 4;


@@ -107,13 +107,9 @@ static void fd_free_irq(void)
#define fd_free_dma() /* nothing */
/* No 64k boundary crossing problems on Q40 - no DMA at all */
#define CROSS_64KB(a,s) (0)
#define DMA_MODE_READ 0x44 /* i386 look-alike */
#define DMA_MODE_WRITE 0x48
static int m68k_floppy_init(void)
{
use_virtual_dma =1;


@@ -34,21 +34,6 @@ static inline void fd_cacheflush(char * addr, long size)
#define N_FDC 1 /* do you *really* want a second controller? */
#define N_DRIVE 8
/*
* The DMA channel used by the floppy controller cannot access data at
* addresses >= 16MB
*
* Went back to the 1MB limit, as some people had problems with the floppy
* driver otherwise. It doesn't matter much for performance anyway, as most
* floppy accesses go through the track buffer.
*
* On MIPSes using vdma, this actually means that *all* transfers go thru
* the * track buffer since 0x1000000 is always smaller than KSEG0/1.
* Actually this needs to be a bit more complicated since the so much different
* hardware available with MIPS CPUs ...
*/
#define CROSS_64KB(a, s) ((unsigned long)(a)/K_64 != ((unsigned long)(a) + (s) - 1) / K_64)
#define EXTRA_FLOPPY_PARAMS
#include <floppy.h>


@@ -8,9 +8,9 @@
#ifndef __ASM_PARISC_FLOPPY_H
#define __ASM_PARISC_FLOPPY_H
#include <linux/sizes.h>
#include <linux/vmalloc.h>
/*
* The DMA channel used by the floppy controller cannot access data at
* addresses >= 16MB
@@ -20,15 +20,12 @@
* floppy accesses go through the track buffer.
*/
#define _CROSS_64KB(a,s,vdma) \
(!(vdma) && ((unsigned long)(a)/K_64 != ((unsigned long)(a) + (s) - 1) / K_64))
#define CROSS_64KB(a,s) _CROSS_64KB(a,s,use_virtual_dma & 1)
(!(vdma) && \
((unsigned long)(a) / SZ_64K != ((unsigned long)(a) + (s) - 1) / SZ_64K))
#define SW fd_routine[use_virtual_dma&1]
#define CSW fd_routine[can_use_virtual_dma & 1]
#define fd_inb(base, reg) readb((base) + (reg))
#define fd_outb(value, base, reg) writeb(value, (base) + (reg))
@@ -206,7 +203,7 @@ static int vdma_dma_setup(char *addr, unsigned long size, int mode, int io)
static int hard_dma_setup(char *addr, unsigned long size, int mode, int io)
{
#ifdef FLOPPY_SANITY_CHECK
if (CROSS_64KB(addr, size)) {
if (_CROSS_64KB(addr, size, use_virtual_dma & 1)) {
printk("DMA crossing 64-K boundary %p-%p\n", addr, addr+size);
return -1;
}


@@ -206,11 +206,6 @@ static int FDC2 = -1;
#define N_FDC 2 /* Don't change this! */
#define N_DRIVE 8
/*
* The PowerPC has no problems with floppy DMA crossing 64k borders.
*/
#define CROSS_64KB(a,s) (0)
#define EXTRA_FLOPPY_PARAMS
#endif /* __KERNEL__ */


@@ -96,9 +96,6 @@ static struct sun_floppy_ops sun_fdops;
#define N_FDC 1
#define N_DRIVE 8
/* No 64k boundary crossing problems on the Sparc. */
#define CROSS_64KB(a,s) (0)
/* Routines unique to each controller type on a Sun. */
static void sun_set_dor(unsigned char value, int fdc_82077)
{


@@ -95,9 +95,6 @@ static int sun_floppy_types[2] = { 0, 0 };
#define N_FDC 1
#define N_DRIVE 8
/* No 64k boundary crossing problems on the Sparc. */
#define CROSS_64KB(a,s) (0)
static unsigned char sun_82077_fd_inb(unsigned long base, unsigned int reg)
{
udelay(5);


@@ -108,7 +108,7 @@ static DEFINE_MUTEX(ubd_lock);
static int ubd_ioctl(struct block_device *bdev, blk_mode_t mode,
unsigned int cmd, unsigned long arg);
static int ubd_getgeo(struct block_device *bdev, struct hd_geometry *geo);
static int ubd_getgeo(struct gendisk *disk, struct hd_geometry *geo);
#define MAX_DEV (16)
@@ -1324,9 +1324,9 @@ static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx,
return res;
}
static int ubd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int ubd_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct ubd *ubd_dev = bdev->bd_disk->private_data;
struct ubd *ubd_dev = disk->private_data;
geo->heads = 128;
geo->sectors = 32;


@@ -10,6 +10,7 @@
#ifndef _ASM_X86_FLOPPY_H
#define _ASM_X86_FLOPPY_H
#include <linux/sizes.h>
#include <linux/vmalloc.h>
/*
@@ -22,10 +23,7 @@
*/
#define _CROSS_64KB(a, s, vdma) \
(!(vdma) && \
((unsigned long)(a)/K_64 != ((unsigned long)(a) + (s) - 1) / K_64))
#define CROSS_64KB(a, s) _CROSS_64KB(a, s, use_virtual_dma & 1)
((unsigned long)(a) / SZ_64K != ((unsigned long)(a) + (s) - 1) / SZ_64K))
#define SW fd_routine[use_virtual_dma & 1]
#define CSW fd_routine[can_use_virtual_dma & 1]
@@ -206,7 +204,7 @@ static int vdma_dma_setup(char *addr, unsigned long size, int mode, int io)
static int hard_dma_setup(char *addr, unsigned long size, int mode, int io)
{
#ifdef FLOPPY_SANITY_CHECK
if (CROSS_64KB(addr, size)) {
if (_CROSS_64KB(addr, size, use_virtual_dma & 1)) {
printk("DMA crossing 64-K boundary %p-%p\n", addr, addr+size);
return -1;
}


@@ -7109,9 +7109,10 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
* See the comments on bfq_limit_depth for the purpose of
* the depths set in the function. Return minimum shallow depth we'll use.
*/
static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
static void bfq_depth_updated(struct request_queue *q)
{
unsigned int nr_requests = bfqd->queue->nr_requests;
struct bfq_data *bfqd = q->elevator->elevator_data;
unsigned int nr_requests = q->nr_requests;
/*
* In-word depths if no bfq_queue is being weight-raised:
@@ -7143,21 +7144,8 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
bfqd->async_depths[1][0] = max((nr_requests * 3) >> 4, 1U);
/* no more than ~37% of tags for sync writes (~20% extra tags) */
bfqd->async_depths[1][1] = max((nr_requests * 6) >> 4, 1U);
}
static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx)
{
struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
struct blk_mq_tags *tags = hctx->sched_tags;
bfq_update_depths(bfqd, &tags->bitmap_tags);
sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
}
static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
{
bfq_depth_updated(hctx);
return 0;
blk_mq_set_min_shallow_depth(q, 1);
}
static void bfq_exit_queue(struct elevator_queue *e)
@@ -7369,6 +7357,7 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_queue *eq)
goto out_free;
bfq_init_root_group(bfqd->root_group, bfqd);
bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
bfq_depth_updated(q);
/* We dispatch from request queue wide instead of hw queue */
blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
@@ -7628,7 +7617,6 @@ static struct elevator_type iosched_bfq_mq = {
.request_merged = bfq_request_merged,
.has_work = bfq_has_work,
.depth_updated = bfq_depth_updated,
.init_hctx = bfq_init_hctx,
.init_sched = bfq_init_queue,
.exit_sched = bfq_exit_queue,
},


@@ -230,7 +230,8 @@ static int bio_integrity_init_user(struct bio *bio, struct bio_vec *bvec,
}
static unsigned int bvec_from_pages(struct bio_vec *bvec, struct page **pages,
int nr_vecs, ssize_t bytes, ssize_t offset)
int nr_vecs, ssize_t bytes, ssize_t offset,
bool *is_p2p)
{
unsigned int nr_bvecs = 0;
int i, j;
@@ -251,6 +252,9 @@ static unsigned int bvec_from_pages(struct bio_vec *bvec, struct page **pages,
bytes -= next;
}
if (is_pci_p2pdma_page(pages[i]))
*is_p2p = true;
bvec_set_page(&bvec[nr_bvecs], pages[i], size, offset);
offset = 0;
nr_bvecs++;
@@ -262,13 +266,13 @@ static unsigned int bvec_from_pages(struct bio_vec *bvec, struct page **pages,
int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
{
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
unsigned int align = blk_lim_dma_alignment_and_pad(&q->limits);
struct page *stack_pages[UIO_FASTIOV], **pages = stack_pages;
struct bio_vec stack_vec[UIO_FASTIOV], *bvec = stack_vec;
iov_iter_extraction_t extraction_flags = 0;
size_t offset, bytes = iter->count;
bool copy, is_p2p = false;
unsigned int nr_bvecs;
int ret, nr_vecs;
bool copy;
if (bio_integrity(bio))
return -EINVAL;
@@ -285,16 +289,25 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
pages = NULL;
}
copy = !iov_iter_is_aligned(iter, align, align);
ret = iov_iter_extract_pages(iter, &pages, bytes, nr_vecs, 0, &offset);
copy = iov_iter_alignment(iter) &
blk_lim_dma_alignment_and_pad(&q->limits);
if (blk_queue_pci_p2pdma(q))
extraction_flags |= ITER_ALLOW_P2PDMA;
ret = iov_iter_extract_pages(iter, &pages, bytes, nr_vecs,
extraction_flags, &offset);
if (unlikely(ret < 0))
goto free_bvec;
nr_bvecs = bvec_from_pages(bvec, pages, nr_vecs, bytes, offset);
nr_bvecs = bvec_from_pages(bvec, pages, nr_vecs, bytes, offset,
&is_p2p);
if (pages != stack_pages)
kvfree(pages);
if (nr_bvecs > queue_max_integrity_segments(q))
copy = true;
if (is_p2p)
bio->bi_opf |= REQ_NOMERGE;
if (copy)
ret = bio_integrity_copy_user(bio, bvec, nr_bvecs, bytes);


@@ -261,7 +261,7 @@ void bio_init(struct bio *bio, struct block_device *bdev, struct bio_vec *table,
bio->bi_private = NULL;
#ifdef CONFIG_BLK_CGROUP
bio->bi_blkg = NULL;
bio->bi_issue.value = 0;
bio->issue_time_ns = 0;
if (bdev)
bio_associate_blkg(bio);
#ifdef CONFIG_BLK_CGROUP_IOCOST
@@ -462,7 +462,10 @@ static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
cache->nr--;
put_cpu();
bio_init(bio, bdev, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs, opf);
if (nr_vecs)
bio_init_inline(bio, bdev, nr_vecs, opf);
else
bio_init(bio, bdev, NULL, nr_vecs, opf);
bio->bi_pool = bs;
return bio;
}
@@ -578,7 +581,7 @@ struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
bio_init(bio, bdev, bvl, nr_vecs, opf);
} else if (nr_vecs) {
bio_init(bio, bdev, bio->bi_inline_vecs, BIO_INLINE_VECS, opf);
bio_init_inline(bio, bdev, BIO_INLINE_VECS, opf);
} else {
bio_init(bio, bdev, NULL, 0, opf);
}
@@ -614,7 +617,8 @@ struct bio *bio_kmalloc(unsigned short nr_vecs, gfp_t gfp_mask)
if (nr_vecs > BIO_MAX_INLINE_VECS)
return NULL;
return kmalloc(struct_size(bio, bi_inline_vecs, nr_vecs), gfp_mask);
return kmalloc(sizeof(*bio) + nr_vecs * sizeof(struct bio_vec),
gfp_mask);
}
EXPORT_SYMBOL(bio_kmalloc);
@@ -981,7 +985,7 @@ void __bio_add_page(struct bio *bio, struct page *page,
WARN_ON_ONCE(bio_full(bio, len));
if (is_pci_p2pdma_page(page))
bio->bi_opf |= REQ_P2PDMA | REQ_NOMERGE;
bio->bi_opf |= REQ_NOMERGE;
bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, off);
bio->bi_iter.bi_size += len;
@@ -1227,13 +1231,6 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
if (bio->bi_bdev && blk_queue_pci_p2pdma(bio->bi_bdev->bd_disk->queue))
extraction_flags |= ITER_ALLOW_P2PDMA;
/*
* Each segment in the iov is required to be a block size multiple.
* However, we may not be able to get the entire segment if it spans
* more pages than bi_max_vecs allows, so we have to ALIGN_DOWN the
* result to ensure the bio's total size is correct. The remainder of
* the iov data will be picked up in the next bio iteration.
*/
size = iov_iter_extract_pages(iter, &pages,
UINT_MAX - bio->bi_iter.bi_size,
nr_pages, extraction_flags, &offset);
@@ -1241,18 +1238,6 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
return size ? size : -EFAULT;
nr_pages = DIV_ROUND_UP(offset + size, PAGE_SIZE);
if (bio->bi_bdev) {
size_t trim = size & (bdev_logical_block_size(bio->bi_bdev) - 1);
iov_iter_revert(iter, trim);
size -= trim;
}
if (unlikely(!size)) {
ret = -EFAULT;
goto out;
}
for (left = size, i = 0; left > 0; left -= len, i += num_pages) {
struct page *page = pages[i];
struct folio *folio = page_folio(page);
@@ -1297,10 +1282,44 @@ out:
return ret;
}
/*
* Aligns the bio size to the len_align_mask, releasing excessive bio vecs that
* __bio_iov_iter_get_pages may have inserted, and reverts the trimmed length
* for the next iteration.
*/
static int bio_iov_iter_align_down(struct bio *bio, struct iov_iter *iter,
unsigned len_align_mask)
{
size_t nbytes = bio->bi_iter.bi_size & len_align_mask;
if (!nbytes)
return 0;
iov_iter_revert(iter, nbytes);
bio->bi_iter.bi_size -= nbytes;
do {
struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
if (nbytes < bv->bv_len) {
bv->bv_len -= nbytes;
break;
}
bio_release_page(bio, bv->bv_page);
bio->bi_vcnt--;
nbytes -= bv->bv_len;
} while (nbytes);
if (!bio->bi_vcnt)
return -EFAULT;
return 0;
}
/**
* bio_iov_iter_get_pages - add user or kernel pages to a bio
* bio_iov_iter_get_pages_aligned - add user or kernel pages to a bio
* @bio: bio to add pages to
* @iter: iov iterator describing the region to be added
* @len_align_mask: the mask to align the total size to, 0 for any length
*
* This takes either an iterator pointing to user memory, or one pointing to
* kernel pages (BVEC iterator). If we're adding user pages, we pin them and
@@ -1317,7 +1336,8 @@ out:
* MM encounters an error pinning the requested pages, it stops. Error
* is returned only if 0 pages could be pinned.
*/
int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
int bio_iov_iter_get_pages_aligned(struct bio *bio, struct iov_iter *iter,
unsigned len_align_mask)
{
int ret = 0;
@@ -1336,9 +1356,11 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
ret = __bio_iov_iter_get_pages(bio, iter);
} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
return bio->bi_vcnt ? 0 : ret;
if (bio->bi_vcnt)
return bio_iov_iter_align_down(bio, iter, len_align_mask);
return ret;
}
EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages_aligned);
static void submit_bio_wait_endio(struct bio *bio)
{


@@ -110,12 +110,6 @@ static struct cgroup_subsys_state *blkcg_css(void)
return task_css(current, io_cgrp_id);
}
static bool blkcg_policy_enabled(struct request_queue *q,
const struct blkcg_policy *pol)
{
return pol && test_bit(pol->plid, q->blkcg_pols);
}
static void blkg_free_workfn(struct work_struct *work)
{
struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
@@ -883,14 +877,8 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
disk = ctx->bdev->bd_disk;
q = disk->queue;
/*
* blkcg_deactivate_policy() requires queue to be frozen, we can grab
* q_usage_counter to prevent concurrent with blkcg_deactivate_policy().
*/
ret = blk_queue_enter(q, 0);
if (ret)
goto fail;
/* Prevent concurrent with blkcg_deactivate_policy() */
mutex_lock(&q->blkcg_mutex);
spin_lock_irq(&q->queue_lock);
if (!blkcg_policy_enabled(q, pol)) {
@@ -920,16 +908,16 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
/* Drop locks to do new blkg allocation with GFP_KERNEL. */
spin_unlock_irq(&q->queue_lock);
new_blkg = blkg_alloc(pos, disk, GFP_KERNEL);
new_blkg = blkg_alloc(pos, disk, GFP_NOIO);
if (unlikely(!new_blkg)) {
ret = -ENOMEM;
goto fail_exit_queue;
goto fail_exit;
}
if (radix_tree_preload(GFP_KERNEL)) {
blkg_free(new_blkg);
ret = -ENOMEM;
goto fail_exit_queue;
goto fail_exit;
}
spin_lock_irq(&q->queue_lock);
@@ -957,7 +945,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
goto success;
}
success:
blk_queue_exit(q);
mutex_unlock(&q->blkcg_mutex);
ctx->blkg = blkg;
return 0;
@@ -965,9 +953,8 @@ fail_preloaded:
radix_tree_preload_end();
fail_unlock:
spin_unlock_irq(&q->queue_lock);
fail_exit_queue:
blk_queue_exit(q);
fail:
fail_exit:
mutex_unlock(&q->blkcg_mutex);
/*
* If queue was bypassing, we should retry. Do so after a
* short msleep(). It isn't strictly necessary but queue


@@ -370,11 +370,6 @@ static inline void blkg_put(struct blkcg_gq *blkg)
if (((d_blkg) = blkg_lookup(css_to_blkcg(pos_css), \
(p_blkg)->q)))
static inline void blkcg_bio_issue_init(struct bio *bio)
{
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
}
static inline void blkcg_use_delay(struct blkcg_gq *blkg)
{
if (WARN_ON_ONCE(atomic_read(&blkg->use_delay) < 0))
@@ -459,6 +454,12 @@ static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio)
bio_issue_as_root_blkg(rq->bio) == bio_issue_as_root_blkg(bio);
}
static inline bool blkcg_policy_enabled(struct request_queue *q,
const struct blkcg_policy *pol)
{
return pol && test_bit(pol->plid, q->blkcg_pols);
}
void blk_cgroup_bio_start(struct bio *bio);
void blkcg_add_delay(struct blkcg_gq *blkg, u64 now, u64 delta);
#else /* CONFIG_BLK_CGROUP */
@@ -491,7 +492,6 @@ static inline struct blkg_policy_data *blkg_to_pd(struct blkcg_gq *blkg,
static inline struct blkcg_gq *pd_to_blkg(struct blkg_policy_data *pd) { return NULL; }
static inline void blkg_get(struct blkcg_gq *blkg) { }
static inline void blkg_put(struct blkcg_gq *blkg) { }
static inline void blkcg_bio_issue_init(struct bio *bio) { }
static inline void blk_cgroup_bio_start(struct bio *bio) { }
static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio) { return true; }


@@ -539,7 +539,7 @@ static inline void bio_check_ro(struct bio *bio)
}
}
static noinline int should_fail_bio(struct bio *bio)
int should_fail_bio(struct bio *bio)
{
if (should_fail_request(bdev_whole(bio->bi_bdev), bio->bi_iter.bi_size))
return -EIO;
@@ -727,10 +727,9 @@ static void __submit_bio_noacct_mq(struct bio *bio)
current->bio_list = NULL;
}
void submit_bio_noacct_nocheck(struct bio *bio)
void submit_bio_noacct_nocheck(struct bio *bio, bool split)
{
blk_cgroup_bio_start(bio);
blkcg_bio_issue_init(bio);
if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
trace_block_bio_queue(bio);
@@ -747,12 +746,16 @@ void submit_bio_noacct_nocheck(struct bio *bio)
* to collect a list of requests submited by a ->submit_bio method while
* it is active, and then process them after it returned.
*/
if (current->bio_list)
bio_list_add(&current->bio_list[0], bio);
else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO))
if (current->bio_list) {
if (split)
bio_list_add_head(&current->bio_list[0], bio);
else
bio_list_add(&current->bio_list[0], bio);
} else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
__submit_bio_noacct_mq(bio);
else
} else {
__submit_bio_noacct(bio);
}
}
static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
@@ -873,7 +876,7 @@ void submit_bio_noacct(struct bio *bio)
if (blk_throtl_bio(bio))
return;
submit_bio_noacct_nocheck(bio);
submit_bio_noacct_nocheck(bio, false);
return;
not_supported:


@@ -167,8 +167,7 @@ static struct bio *blk_crypto_fallback_clone_bio(struct bio *bio_src)
bio = bio_kmalloc(nr_segs, GFP_NOIO);
if (!bio)
return NULL;
bio_init(bio, bio_src->bi_bdev, bio->bi_inline_vecs, nr_segs,
bio_src->bi_opf);
bio_init_inline(bio, bio_src->bi_bdev, nr_segs, bio_src->bi_opf);
if (bio_flagged(bio_src, BIO_REMAPPED))
bio_set_flag(bio, BIO_REMAPPED);
bio->bi_ioprio = bio_src->bi_ioprio;
@@ -222,18 +221,14 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
if (++i == BIO_MAX_VECS)
break;
}
if (num_sectors < bio_sectors(bio)) {
struct bio *split_bio;
split_bio = bio_split(bio, num_sectors, GFP_NOIO,
&crypto_bio_split);
if (IS_ERR(split_bio)) {
bio->bi_status = BLK_STS_RESOURCE;
if (num_sectors < bio_sectors(bio)) {
bio = bio_submit_split_bioset(bio, num_sectors,
&crypto_bio_split);
if (!bio)
return false;
}
bio_chain(split_bio, bio);
submit_bio_noacct(bio);
*bio_ptr = split_bio;
*bio_ptr = bio;
}
return true;


@@ -120,64 +120,6 @@ out:
NULL);
}
/**
* blk_rq_map_integrity_sg - Map integrity metadata into a scatterlist
* @rq: request to map
* @sglist: target scatterlist
*
* Description: Map the integrity vectors in request into a
* scatterlist. The scatterlist must be big enough to hold all
* elements. I.e. sized using blk_rq_count_integrity_sg() or
* rq->nr_integrity_segments.
*/
int blk_rq_map_integrity_sg(struct request *rq, struct scatterlist *sglist)
{
struct bio_vec iv, ivprv = { NULL };
struct request_queue *q = rq->q;
struct scatterlist *sg = NULL;
struct bio *bio = rq->bio;
unsigned int segments = 0;
struct bvec_iter iter;
int prev = 0;
bio_for_each_integrity_vec(iv, bio, iter) {
if (prev) {
if (!biovec_phys_mergeable(q, &ivprv, &iv))
goto new_segment;
if (sg->length + iv.bv_len > queue_max_segment_size(q))
goto new_segment;
sg->length += iv.bv_len;
} else {
new_segment:
if (!sg)
sg = sglist;
else {
sg_unmark_end(sg);
sg = sg_next(sg);
}
sg_set_page(sg, iv.bv_page, iv.bv_len, iv.bv_offset);
segments++;
}
prev = 1;
ivprv = iv;
}
if (sg)
sg_mark_end(sg);
/*
* Something must have been wrong if the figured number of segment
* is bigger than number of req's physical integrity segments
*/
BUG_ON(segments > rq->nr_integrity_segments);
BUG_ON(segments > queue_max_integrity_segments(q));
return segments;
}
EXPORT_SYMBOL(blk_rq_map_integrity_sg);
int blk_rq_integrity_map_user(struct request *rq, void __user *ubuf,
ssize_t bytes)
{


@@ -485,19 +485,11 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
mod_timer(&blkiolat->timer, jiffies + HZ);
}
static void iolatency_record_time(struct iolatency_grp *iolat,
struct bio_issue *issue, u64 now,
bool issue_as_root)
static void iolatency_record_time(struct iolatency_grp *iolat, u64 start,
u64 now, bool issue_as_root)
{
u64 start = bio_issue_time(issue);
u64 req_time;
/*
* Have to do this so we are truncated to the correct time that our
* issue is truncated to.
*/
now = __bio_issue_time(now);
if (now <= start)
return;
@@ -625,7 +617,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
* submitted, so do not account for it.
*/
if (iolat->min_lat_nsec && bio->bi_status != BLK_STS_AGAIN) {
iolatency_record_time(iolat, &bio->bi_issue, now,
iolatency_record_time(iolat, bio->issue_time_ns, now,
issue_as_root);
window_start = atomic64_read(&iolat->window_start);
if (now > window_start &&
@@ -750,10 +742,15 @@ static void blkiolatency_enable_work_fn(struct work_struct *work)
*/
enabled = atomic_read(&blkiolat->enable_cnt);
if (enabled != blkiolat->enabled) {
struct request_queue *q = blkiolat->rqos.disk->queue;
unsigned int memflags;
memflags = blk_mq_freeze_queue(blkiolat->rqos.disk->queue);
blkiolat->enabled = enabled;
if (enabled)
blk_queue_flag_set(QUEUE_FLAG_BIO_ISSUE_TIME, q);
else
blk_queue_flag_clear(QUEUE_FLAG_BIO_ISSUE_TIME, q);
blk_mq_unfreeze_queue(blkiolat->rqos.disk->queue, memflags);
}
}


@@ -157,7 +157,7 @@ static int bio_copy_user_iov(struct request *rq, struct rq_map_data *map_data,
bio = bio_kmalloc(nr_pages, gfp_mask);
if (!bio)
goto out_bmd;
bio_init(bio, NULL, bio->bi_inline_vecs, nr_pages, req_op(rq));
bio_init_inline(bio, NULL, nr_pages, req_op(rq));
if (map_data) {
nr_pages = 1U << map_data->page_order;
@@ -253,10 +253,11 @@ static void blk_mq_map_bio_put(struct bio *bio)
static struct bio *blk_rq_map_bio_alloc(struct request *rq,
unsigned int nr_vecs, gfp_t gfp_mask)
{
struct block_device *bdev = rq->q->disk ? rq->q->disk->part0 : NULL;
struct bio *bio;
if (rq->cmd_flags & REQ_ALLOC_CACHE && (nr_vecs <= BIO_INLINE_VECS)) {
bio = bio_alloc_bioset(NULL, nr_vecs, rq->cmd_flags, gfp_mask,
bio = bio_alloc_bioset(bdev, nr_vecs, rq->cmd_flags, gfp_mask,
&fs_bio_set);
if (!bio)
return NULL;
@@ -264,7 +265,7 @@ static struct bio *blk_rq_map_bio_alloc(struct request *rq,
bio = bio_kmalloc(nr_vecs, gfp_mask);
if (!bio)
return NULL;
bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, req_op(rq));
bio_init_inline(bio, bdev, nr_vecs, req_op(rq));
}
return bio;
}
@@ -326,7 +327,7 @@ static struct bio *bio_map_kern(void *data, unsigned int len, enum req_op op,
bio = bio_kmalloc(nr_vecs, gfp_mask);
if (!bio)
return ERR_PTR(-ENOMEM);
bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, op);
bio_init_inline(bio, NULL, nr_vecs, op);
if (is_vmalloc_addr(data)) {
bio->bi_private = data;
if (!bio_add_vmalloc(bio, data, len)) {
@@ -392,7 +393,7 @@ static struct bio *bio_copy_kern(void *data, unsigned int len, enum req_op op,
bio = bio_kmalloc(nr_pages, gfp_mask);
if (!bio)
return ERR_PTR(-ENOMEM);
bio_init(bio, NULL, bio->bi_inline_vecs, nr_pages, op);
bio_init_inline(bio, NULL, nr_pages, op);
while (len) {
struct page *page;
@@ -443,7 +444,7 @@ int blk_rq_append_bio(struct request *rq, struct bio *bio)
int ret;
/* check that the data layout matches the hardware restrictions */
ret = bio_split_rw_at(bio, lim, &nr_segs, max_bytes);
ret = bio_split_io_at(bio, lim, &nr_segs, max_bytes, 0);
if (ret) {
/* if we would have to split the bio, copy instead */
if (ret > 0)


@@ -104,34 +104,58 @@ static unsigned int bio_allowed_max_sectors(const struct queue_limits *lim)
return round_down(UINT_MAX, lim->logical_block_size) >> SECTOR_SHIFT;
}
/*
* bio_submit_split_bioset - Submit a bio, splitting it at a designated sector
* @bio: the original bio to be submitted and split
* @split_sectors: the sector count at which to split
* @bs: the bio set used for allocating the new split bio
*
* The original bio is modified to contain the remaining sectors and submitted.
* The caller is responsible for submitting the returned bio.
*
* If succeed, the newly allocated bio representing the initial part will be
* returned, on failure NULL will be returned and original bio will fail.
*/
struct bio *bio_submit_split_bioset(struct bio *bio, unsigned int split_sectors,
struct bio_set *bs)
{
struct bio *split = bio_split(bio, split_sectors, GFP_NOIO, bs);
if (IS_ERR(split)) {
bio->bi_status = errno_to_blk_status(PTR_ERR(split));
bio_endio(bio);
return NULL;
}
bio_chain(split, bio);
trace_block_split(split, bio->bi_iter.bi_sector);
WARN_ON_ONCE(bio_zone_write_plugging(bio));
if (should_fail_bio(bio))
bio_io_error(bio);
else if (!blk_throtl_bio(bio))
submit_bio_noacct_nocheck(bio, true);
return split;
}
EXPORT_SYMBOL_GPL(bio_submit_split_bioset);
static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
{
if (unlikely(split_sectors < 0))
goto error;
if (unlikely(split_sectors < 0)) {
bio->bi_status = errno_to_blk_status(split_sectors);
bio_endio(bio);
return NULL;
}
if (split_sectors) {
struct bio *split;
split = bio_split(bio, split_sectors, GFP_NOIO,
bio = bio_submit_split_bioset(bio, split_sectors,
&bio->bi_bdev->bd_disk->bio_split);
if (IS_ERR(split)) {
split_sectors = PTR_ERR(split);
goto error;
}
split->bi_opf |= REQ_NOMERGE;
blkcg_bio_issue_init(split);
bio_chain(split, bio);
trace_block_split(split, bio->bi_iter.bi_sector);
WARN_ON_ONCE(bio_zone_write_plugging(bio));
submit_bio_noacct(bio);
return split;
if (bio)
bio->bi_opf |= REQ_NOMERGE;
}
return bio;
error:
bio->bi_status = errno_to_blk_status(split_sectors);
bio_endio(bio);
return NULL;
}
struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
@@ -279,25 +303,30 @@ static unsigned int bio_split_alignment(struct bio *bio,
}
/**
* bio_split_rw_at - check if and where to split a read/write bio
* bio_split_io_at - check if and where to split a bio
* @bio: [in] bio to be split
* @lim: [in] queue limits to split based on
* @segs: [out] number of segments in the bio with the first half of the sectors
* @max_bytes: [in] maximum number of bytes per bio
* @len_align_mask: [in] length alignment mask for each vector
*
* Find out if @bio needs to be split to fit the queue limits in @lim and a
* maximum size of @max_bytes. Returns a negative error number if @bio can't be
* split, 0 if the bio doesn't have to be split, or a positive sector offset if
* @bio needs to be split.
*/
int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim,
unsigned *segs, unsigned max_bytes)
int bio_split_io_at(struct bio *bio, const struct queue_limits *lim,
unsigned *segs, unsigned max_bytes, unsigned len_align_mask)
{
struct bio_vec bv, bvprv, *bvprvp = NULL;
struct bvec_iter iter;
unsigned nsegs = 0, bytes = 0;
bio_for_each_bvec(bv, bio, iter) {
if (bv.bv_offset & lim->dma_alignment ||
bv.bv_len & len_align_mask)
return -EINVAL;
/*
* If the queue doesn't support SG gaps and adding this
* offset would create a gap, disallow it.
@@ -339,8 +368,16 @@ split:
* Individual bvecs might not be logical block aligned. Round down the
* split size so that each bio is properly block size aligned, even if
* we do not use the full hardware limits.
*
* It is possible to submit a bio that can't be split into a valid io:
* there may either be too many discontiguous vectors for the max
* segments limit, or contain virtual boundary gaps without having a
* valid block sized split. A zero byte result means one of those
* conditions occured.
*/
bytes = ALIGN_DOWN(bytes, bio_split_alignment(bio, lim));
if (!bytes)
return -EINVAL;
/*
* Bio splitting may cause subtle trouble such as hang when doing sync
@@ -350,7 +387,7 @@ split:
bio_clear_polled(bio);
return bytes >> SECTOR_SHIFT;
}
EXPORT_SYMBOL_GPL(bio_split_rw_at);
EXPORT_SYMBOL_GPL(bio_split_io_at);
struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
unsigned *nr_segs)


@@ -96,6 +96,7 @@ static const char *const blk_queue_flag_name[] = {
QUEUE_FLAG_NAME(DISABLE_WBT_DEF),
QUEUE_FLAG_NAME(NO_ELV_SWITCH),
QUEUE_FLAG_NAME(QOS_ENABLED),
QUEUE_FLAG_NAME(BIO_ISSUE_TIME),
};
#undef QUEUE_FLAG_NAME


@@ -2,6 +2,7 @@
/*
* Copyright (C) 2025 Christoph Hellwig
*/
#include <linux/blk-integrity.h>
#include <linux/blk-mq-dma.h>
#include "blk.h"
@@ -10,29 +11,38 @@ struct phys_vec {
u32 len;
};
static bool blk_map_iter_next(struct request *req, struct req_iterator *iter,
static bool __blk_map_iter_next(struct blk_map_iter *iter)
{
if (iter->iter.bi_size)
return true;
if (!iter->bio || !iter->bio->bi_next)
return false;
iter->bio = iter->bio->bi_next;
if (iter->is_integrity) {
iter->iter = bio_integrity(iter->bio)->bip_iter;
iter->bvecs = bio_integrity(iter->bio)->bip_vec;
} else {
iter->iter = iter->bio->bi_iter;
iter->bvecs = iter->bio->bi_io_vec;
}
return true;
}
static bool blk_map_iter_next(struct request *req, struct blk_map_iter *iter,
struct phys_vec *vec)
{
unsigned int max_size;
struct bio_vec bv;
if (req->rq_flags & RQF_SPECIAL_PAYLOAD) {
if (!iter->bio)
return false;
vec->paddr = bvec_phys(&req->special_vec);
vec->len = req->special_vec.bv_len;
iter->bio = NULL;
return true;
}
if (!iter->iter.bi_size)
return false;
bv = mp_bvec_iter_bvec(iter->bio->bi_io_vec, iter->iter);
bv = mp_bvec_iter_bvec(iter->bvecs, iter->iter);
vec->paddr = bvec_phys(&bv);
max_size = get_max_segment_size(&req->q->limits, vec->paddr, UINT_MAX);
bv.bv_len = min(bv.bv_len, max_size);
bio_advance_iter_single(iter->bio, &iter->iter, bv.bv_len);
bvec_iter_advance_single(iter->bvecs, &iter->iter, bv.bv_len);
/*
* If we are entirely done with this bi_io_vec entry, check if the next
@@ -42,20 +52,16 @@ static bool blk_map_iter_next(struct request *req, struct req_iterator *iter,
while (!iter->iter.bi_size || !iter->iter.bi_bvec_done) {
struct bio_vec next;
if (!iter->iter.bi_size) {
if (!iter->bio->bi_next)
break;
iter->bio = iter->bio->bi_next;
iter->iter = iter->bio->bi_iter;
}
if (!__blk_map_iter_next(iter))
break;
next = mp_bvec_iter_bvec(iter->bio->bi_io_vec, iter->iter);
next = mp_bvec_iter_bvec(iter->bvecs, iter->iter);
if (bv.bv_len + next.bv_len > max_size ||
!biovec_phys_mergeable(req->q, &bv, &next))
break;
bv.bv_len += next.bv_len;
bio_advance_iter_single(iter->bio, &iter->iter, next.bv_len);
bvec_iter_advance_single(iter->bvecs, &iter->iter, next.bv_len);
}
vec->len = bv.bv_len;
@@ -125,6 +131,72 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
return true;
}
static inline void blk_rq_map_iter_init(struct request *rq,
struct blk_map_iter *iter)
{
struct bio *bio = rq->bio;
if (rq->rq_flags & RQF_SPECIAL_PAYLOAD) {
*iter = (struct blk_map_iter) {
.bvecs = &rq->special_vec,
.iter = {
.bi_size = rq->special_vec.bv_len,
}
};
} else if (bio) {
*iter = (struct blk_map_iter) {
.bio = bio,
.bvecs = bio->bi_io_vec,
.iter = bio->bi_iter,
};
} else {
/* the internal flush request may not have bio attached */
*iter = (struct blk_map_iter) {};
}
}
static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
struct dma_iova_state *state, struct blk_dma_iter *iter,
unsigned int total_len)
{
struct phys_vec vec;
memset(&iter->p2pdma, 0, sizeof(iter->p2pdma));
iter->status = BLK_STS_OK;
/*
* Grab the first segment ASAP because we'll need it to check for P2P
* transfers.
*/
if (!blk_map_iter_next(req, &iter->iter, &vec))
return false;
switch (pci_p2pdma_state(&iter->p2pdma, dma_dev,
phys_to_page(vec.paddr))) {
case PCI_P2PDMA_MAP_BUS_ADDR:
if (iter->iter.is_integrity)
bio_integrity(req->bio)->bip_flags |= BIP_P2P_DMA;
else
req->cmd_flags |= REQ_P2PDMA;
return blk_dma_map_bus(iter, &vec);
case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
/*
* P2P transfers through the host bridge are treated the
* same as non-P2P transfers below and during unmap.
*/
case PCI_P2PDMA_MAP_NONE:
break;
default:
iter->status = BLK_STS_INVAL;
return false;
}
if (blk_can_dma_map_iova(req, dma_dev) &&
dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
return blk_dma_map_direct(req, dma_dev, iter, &vec);
}
/**
* blk_rq_dma_map_iter_start - map the first DMA segment for a request
* @req: request to map
@@ -150,43 +222,9 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
struct dma_iova_state *state, struct blk_dma_iter *iter)
{
unsigned int total_len = blk_rq_payload_bytes(req);
struct phys_vec vec;
iter->iter.bio = req->bio;
iter->iter.iter = req->bio->bi_iter;
memset(&iter->p2pdma, 0, sizeof(iter->p2pdma));
iter->status = BLK_STS_OK;
/*
* Grab the first segment ASAP because we'll need it to check for P2P
* transfers.
*/
if (!blk_map_iter_next(req, &iter->iter, &vec))
return false;
if (IS_ENABLED(CONFIG_PCI_P2PDMA) && (req->cmd_flags & REQ_P2PDMA)) {
switch (pci_p2pdma_state(&iter->p2pdma, dma_dev,
phys_to_page(vec.paddr))) {
case PCI_P2PDMA_MAP_BUS_ADDR:
return blk_dma_map_bus(iter, &vec);
case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
/*
* P2P transfers through the host bridge are treated the
* same as non-P2P transfers below and during unmap.
*/
req->cmd_flags &= ~REQ_P2PDMA;
break;
default:
iter->status = BLK_STS_INVAL;
return false;
}
}
if (blk_can_dma_map_iova(req, dma_dev) &&
dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
return blk_dma_map_direct(req, dma_dev, iter, &vec);
blk_rq_map_iter_init(req, &iter->iter);
return blk_dma_map_iter_start(req, dma_dev, state, iter,
blk_rq_payload_bytes(req));
}
EXPORT_SYMBOL_GPL(blk_rq_dma_map_iter_start);
@@ -246,16 +284,11 @@ blk_next_sg(struct scatterlist **sg, struct scatterlist *sglist)
int __blk_rq_map_sg(struct request *rq, struct scatterlist *sglist,
struct scatterlist **last_sg)
{
struct req_iterator iter = {
.bio = rq->bio,
};
struct blk_map_iter iter;
struct phys_vec vec;
int nsegs = 0;
/* the internal flush request may not have bio attached */
if (iter.bio)
iter.iter = iter.bio->bi_iter;
blk_rq_map_iter_init(rq, &iter);
while (blk_map_iter_next(rq, &iter, &vec)) {
*last_sg = blk_next_sg(last_sg, sglist);
sg_set_page(*last_sg, phys_to_page(vec.paddr), vec.len,
@@ -275,3 +308,124 @@ int __blk_rq_map_sg(struct request *rq, struct scatterlist *sglist,
return nsegs;
}
EXPORT_SYMBOL(__blk_rq_map_sg);
#ifdef CONFIG_BLK_DEV_INTEGRITY
/**
* blk_rq_integrity_dma_map_iter_start - map the first integrity DMA segment
* for a request
* @req: request to map
* @dma_dev: device to map to
* @state: DMA IOVA state
* @iter: block layer DMA iterator
*
* Start DMA mapping @req integrity data to @dma_dev. @state and @iter are
* provided by the caller and don't need to be initialized. @state needs to be
* stored for use at unmap time, @iter is only needed at map time.
*
* Returns %false if there is no segment to map, including due to an error, or
* %true if it did map a segment.
*
* If a segment was mapped, the DMA address for it is returned in @iter.addr
* and the length in @iter.len. If no segment was mapped the status code is
* returned in @iter.status.
*
* The caller can call blk_rq_dma_map_coalesce() to check if further segments
* need to be mapped after this, or go straight to blk_rq_dma_map_iter_next()
* to try to map the following segments.
*/
bool blk_rq_integrity_dma_map_iter_start(struct request *req,
struct device *dma_dev, struct dma_iova_state *state,
struct blk_dma_iter *iter)
{
unsigned len = bio_integrity_bytes(&req->q->limits.integrity,
blk_rq_sectors(req));
struct bio *bio = req->bio;
iter->iter = (struct blk_map_iter) {
.bio = bio,
.iter = bio_integrity(bio)->bip_iter,
.bvecs = bio_integrity(bio)->bip_vec,
.is_integrity = true,
};
return blk_dma_map_iter_start(req, dma_dev, state, iter, len);
}
EXPORT_SYMBOL_GPL(blk_rq_integrity_dma_map_iter_start);
/**
* blk_rq_integrity_dma_map_iter_next - map the next integrity DMA segment for
* a request
* @req: request to map
* @dma_dev: device to map to
* @state: DMA IOVA state
* @iter: block layer DMA iterator
*
* Iterate to the next integrity mapping after a previous call to
* blk_rq_integrity_dma_map_iter_start(). See there for a detailed description
* of the arguments.
*
* Returns %false if there is no segment to map, including due to an error, or
* %true if it did map a segment.
*
* If a segment was mapped, the DMA address for it is returned in @iter.addr and
* the length in @iter.len. If no segment was mapped the status code is
* returned in @iter.status.
*/
bool blk_rq_integrity_dma_map_iter_next(struct request *req,
struct device *dma_dev, struct blk_dma_iter *iter)
{
struct phys_vec vec;
if (!blk_map_iter_next(req, &iter->iter, &vec))
return false;
if (iter->p2pdma.map == PCI_P2PDMA_MAP_BUS_ADDR)
return blk_dma_map_bus(iter, &vec);
return blk_dma_map_direct(req, dma_dev, iter, &vec);
}
EXPORT_SYMBOL_GPL(blk_rq_integrity_dma_map_iter_next);
/**
* blk_rq_map_integrity_sg - Map integrity metadata into a scatterlist
* @rq: request to map
* @sglist: target scatterlist
*
* Description: Map the integrity vectors in request into a
* scatterlist. The scatterlist must be big enough to hold all
* elements. I.e. sized using blk_rq_count_integrity_sg() or
* rq->nr_integrity_segments.
*/
int blk_rq_map_integrity_sg(struct request *rq, struct scatterlist *sglist)
{
struct request_queue *q = rq->q;
struct scatterlist *sg = NULL;
struct bio *bio = rq->bio;
unsigned int segments = 0;
struct phys_vec vec;
struct blk_map_iter iter = {
.bio = bio,
.iter = bio_integrity(bio)->bip_iter,
.bvecs = bio_integrity(bio)->bip_vec,
.is_integrity = true,
};
while (blk_map_iter_next(rq, &iter, &vec)) {
sg = blk_next_sg(&sg, sglist);
sg_set_page(sg, phys_to_page(vec.paddr), vec.len,
offset_in_page(vec.paddr));
segments++;
}
if (sg)
sg_mark_end(sg);
/*
* Something must have been wrong if the figured number of segment
* is bigger than number of req's physical integrity segments
*/
BUG_ON(segments > rq->nr_integrity_segments);
BUG_ON(segments > queue_max_integrity_segments(q));
return segments;
}
EXPORT_SYMBOL(blk_rq_map_integrity_sg);
#endif


@@ -454,7 +454,7 @@ void blk_mq_free_sched_tags_batch(struct xarray *et_table,
}
struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set,
unsigned int nr_hw_queues)
unsigned int nr_hw_queues, unsigned int nr_requests)
{
unsigned int nr_tags;
int i;
@@ -470,13 +470,8 @@ struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set,
nr_tags * sizeof(struct blk_mq_tags *), gfp);
if (!et)
return NULL;
/*
* Default to double of smaller one between hw queue_depth and
* 128, since we don't split into sync/async like the old code
* did. Additionally, this is a per-hw queue depth.
*/
et->nr_requests = 2 * min_t(unsigned int, set->queue_depth,
BLKDEV_DEFAULT_RQ);
et->nr_requests = nr_requests;
et->nr_hw_queues = nr_hw_queues;
if (blk_mq_is_shared_tags(set->flags)) {
@@ -521,7 +516,8 @@ int blk_mq_alloc_sched_tags_batch(struct xarray *et_table,
* concurrently.
*/
if (q->elevator) {
et = blk_mq_alloc_sched_tags(set, nr_hw_queues);
et = blk_mq_alloc_sched_tags(set, nr_hw_queues,
blk_mq_default_nr_requests(set));
if (!et)
goto out_unwind;
if (xa_insert(et_table, q->id, et, gfp))


@@ -24,7 +24,7 @@ void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e);
void blk_mq_sched_free_rqs(struct request_queue *q);
struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set,
unsigned int nr_hw_queues);
unsigned int nr_hw_queues, unsigned int nr_requests);
int blk_mq_alloc_sched_tags_batch(struct xarray *et_table,
struct blk_mq_tag_set *set, unsigned int nr_hw_queues);
void blk_mq_free_sched_tags(struct elevator_tags *et,
@@ -92,4 +92,15 @@ static inline bool blk_mq_sched_needs_restart(struct blk_mq_hw_ctx *hctx)
return test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
}
static inline void blk_mq_set_min_shallow_depth(struct request_queue *q,
unsigned int depth)
{
struct blk_mq_hw_ctx *hctx;
unsigned long i;
queue_for_each_hw_ctx(q, hctx, i)
sbitmap_queue_min_shallow_depth(&hctx->sched_tags->bitmap_tags,
depth);
}
#endif


@@ -34,7 +34,6 @@ static void blk_mq_hw_sysfs_release(struct kobject *kobj)
struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
kobj);
blk_free_flush_queue(hctx->fq);
sbitmap_free(&hctx->ctx_map);
free_cpumask_var(hctx->cpumask);
kfree(hctx->ctxs);
@@ -150,9 +149,11 @@ static void blk_mq_unregister_hctx(struct blk_mq_hw_ctx *hctx)
return;
hctx_for_each_ctx(hctx, ctx, i)
kobject_del(&ctx->kobj);
if (ctx->kobj.state_in_sysfs)
kobject_del(&ctx->kobj);
kobject_del(&hctx->kobj);
if (hctx->kobj.state_in_sysfs)
kobject_del(&hctx->kobj);
}
static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)


@@ -8,6 +8,9 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/kmemleak.h>
#include <linux/delay.h>
#include "blk.h"
@@ -253,13 +256,10 @@ static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
unsigned int bitnr)
{
struct request *rq;
unsigned long flags;
spin_lock_irqsave(&tags->lock, flags);
rq = tags->rqs[bitnr];
if (!rq || rq->tag != bitnr || !req_ref_inc_not_zero(rq))
rq = NULL;
spin_unlock_irqrestore(&tags->lock, flags);
return rq;
}
@@ -297,15 +297,15 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
/**
* bt_for_each - iterate over the requests associated with a hardware queue
* @hctx: Hardware queue to examine.
* @q: Request queue to examine.
* @q: Request queue @hctx is associated with (@hctx->queue).
* @bt: sbitmap to examine. This is either the breserved_tags member
* or the bitmap_tags member of struct blk_mq_tags.
* @fn: Pointer to the function that will be called for each request
* associated with @hctx that has been assigned a driver tag.
* @fn will be called as follows: @fn(@hctx, rq, @data, @reserved)
* where rq is a pointer to a request. Return true to continue
* iterating tags, false to stop.
* @data: Will be passed as third argument to @fn.
* @fn will be called as follows: @fn(rq, @data) where rq is a
* pointer to a request. Return %true to continue iterating tags;
* %false to stop.
* @data: Will be passed as second argument to @fn.
* @reserved: Indicates whether @bt is the breserved_tags member or the
* bitmap_tags member of struct blk_mq_tags.
*/
@@ -371,9 +371,9 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
* @bt: sbitmap to examine. This is either the breserved_tags member
* or the bitmap_tags member of struct blk_mq_tags.
* @fn: Pointer to the function that will be called for each started
* request. @fn will be called as follows: @fn(rq, @data,
* @reserved) where rq is a pointer to a request. Return true
* to continue iterating tags, false to stop.
* request. @fn will be called as follows: @fn(rq, @data) where rq
* is a pointer to a request. Return %true to continue iterating
* tags; %false to stop.
* @data: Will be passed as second argument to @fn.
* @flags: BT_TAG_ITER_*
*/
@@ -406,10 +406,9 @@ static void __blk_mq_all_tag_iter(struct blk_mq_tags *tags,
* blk_mq_all_tag_iter - iterate over all requests in a tag map
* @tags: Tag map to iterate over.
* @fn: Pointer to the function that will be called for each
* request. @fn will be called as follows: @fn(rq, @priv,
* reserved) where rq is a pointer to a request. 'reserved'
* indicates whether or not @rq is a reserved request. Return
* true to continue iterating tags, false to stop.
* request. @fn will be called as follows: @fn(rq, @priv) where rq
* is a pointer to a request. Return %true to continue iterating
* tags; %false to stop.
* @priv: Will be passed as second argument to @fn.
*
* Caller has to pass the tag map from which requests are allocated.
@@ -424,10 +423,9 @@ void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
* blk_mq_tagset_busy_iter - iterate over all started requests in a tag set
* @tagset: Tag set to iterate over.
* @fn: Pointer to the function that will be called for each started
* request. @fn will be called as follows: @fn(rq, @priv,
* reserved) where rq is a pointer to a request. 'reserved'
* indicates whether or not @rq is a reserved request. Return
* true to continue iterating tags, false to stop.
* request. @fn will be called as follows: @fn(rq, @priv) where
* rq is a pointer to a request. Return true to continue iterating
* tags, false to stop.
* @priv: Will be passed as second argument to @fn.
*
* We grab one request reference before calling @fn and release it after
@@ -437,7 +435,9 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
busy_tag_iter_fn *fn, void *priv)
{
unsigned int flags = tagset->flags;
int i, nr_tags;
int i, nr_tags, srcu_idx;
srcu_idx = srcu_read_lock(&tagset->tags_srcu);
nr_tags = blk_mq_is_shared_tags(flags) ? 1 : tagset->nr_hw_queues;
@@ -446,6 +446,7 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
__blk_mq_all_tag_iter(tagset->tags[i], fn, priv,
BT_TAG_ITER_STARTED);
}
srcu_read_unlock(&tagset->tags_srcu, srcu_idx);
}
EXPORT_SYMBOL(blk_mq_tagset_busy_iter);
@@ -483,11 +484,10 @@ EXPORT_SYMBOL(blk_mq_tagset_wait_completed_request);
* blk_mq_queue_tag_busy_iter - iterate over all requests with a driver tag
* @q: Request queue to examine.
* @fn: Pointer to the function that will be called for each request
* on @q. @fn will be called as follows: @fn(hctx, rq, @priv,
* reserved) where rq is a pointer to a request and hctx points
* to the hardware queue associated with the request. 'reserved'
* indicates whether or not @rq is a reserved request.
* @priv: Will be passed as third argument to @fn.
* on @q. @fn will be called as follows: @fn(rq, @priv) where rq
* is a pointer to a request and hctx points to the hardware queue
* associated with the request.
* @priv: Will be passed as second argument to @fn.
*
* Note: if @q->tag_set is shared with other request queues then @fn will be
* called for all requests on all queues that share that tag set and not only
@@ -496,6 +496,8 @@ EXPORT_SYMBOL(blk_mq_tagset_wait_completed_request);
void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
void *priv)
{
int srcu_idx;
/*
* __blk_mq_update_nr_hw_queues() updates nr_hw_queues and hctx_table
* while the queue is frozen. So we can use q_usage_counter to avoid
@@ -504,6 +506,7 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
if (!percpu_ref_tryget(&q->q_usage_counter))
return;
srcu_idx = srcu_read_lock(&q->tag_set->tags_srcu);
if (blk_mq_is_shared_tags(q->tag_set->flags)) {
struct blk_mq_tags *tags = q->tag_set->shared_tags;
struct sbitmap_queue *bresv = &tags->breserved_tags;
@@ -533,6 +536,7 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
bt_for_each(hctx, q, btags, fn, priv, false);
}
}
srcu_read_unlock(&q->tag_set->tags_srcu, srcu_idx);
blk_queue_exit(q);
}
@@ -562,6 +566,8 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
tags->nr_tags = total_tags;
tags->nr_reserved_tags = reserved_tags;
spin_lock_init(&tags->lock);
INIT_LIST_HEAD(&tags->page_list);
if (bt_alloc(&tags->bitmap_tags, depth, round_robin, node))
goto out_free_tags;
if (bt_alloc(&tags->breserved_tags, reserved_tags, round_robin, node))
@@ -576,63 +582,37 @@ out_free_tags:
return NULL;
}
void blk_mq_free_tags(struct blk_mq_tags *tags)
static void blk_mq_free_tags_callback(struct rcu_head *head)
{
sbitmap_queue_free(&tags->bitmap_tags);
sbitmap_queue_free(&tags->breserved_tags);
struct blk_mq_tags *tags = container_of(head, struct blk_mq_tags,
rcu_head);
struct page *page;
while (!list_empty(&tags->page_list)) {
page = list_first_entry(&tags->page_list, struct page, lru);
list_del_init(&page->lru);
/*
* Remove kmemleak object previously allocated in
* blk_mq_alloc_rqs().
*/
kmemleak_free(page_address(page));
__free_pages(page, page->private);
}
kfree(tags);
}
int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
struct blk_mq_tags **tagsptr, unsigned int tdepth,
bool can_grow)
void blk_mq_free_tags(struct blk_mq_tag_set *set, struct blk_mq_tags *tags)
{
struct blk_mq_tags *tags = *tagsptr;
sbitmap_queue_free(&tags->bitmap_tags);
sbitmap_queue_free(&tags->breserved_tags);
if (tdepth <= tags->nr_reserved_tags)
return -EINVAL;
/*
* If we are allowed to grow beyond the original size, allocate
* a new set of tags before freeing the old one.
*/
if (tdepth > tags->nr_tags) {
struct blk_mq_tag_set *set = hctx->queue->tag_set;
struct blk_mq_tags *new;
if (!can_grow)
return -EINVAL;
/*
* We need some sort of upper limit, set it high enough that
* no valid use cases should require more.
*/
if (tdepth > MAX_SCHED_RQ)
return -EINVAL;
/*
* Only the sbitmap needs resizing since we allocated the max
* initially.
*/
if (blk_mq_is_shared_tags(set->flags))
return 0;
new = blk_mq_alloc_map_and_rqs(set, hctx->queue_num, tdepth);
if (!new)
return -ENOMEM;
blk_mq_free_map_and_rqs(set, *tagsptr, hctx->queue_num);
*tagsptr = new;
} else {
/*
* Don't need (or can't) update reserved tags here, they
* remain static and should never need resizing.
*/
sbitmap_queue_resize(&tags->bitmap_tags,
tdepth - tags->nr_reserved_tags);
/* if tags pages is not allocated yet, free tags directly */
if (list_empty(&tags->page_list)) {
kfree(tags);
return;
}
return 0;
call_srcu(&set->tags_srcu, &tags->rcu_head, blk_mq_free_tags_callback);
}
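Freeing the tag map (and, below, the request pages and flush queue) through call_srcu() on tags_srcu pairs with the srcu_read_lock()/srcu_read_unlock() added around the tag iterators above: an iterator still walking tags->rqs cannot have those structures freed underneath it. This replaces the old trick of taking and releasing tags->lock as a barrier in blk_mq_clear_rq_mapping() and blk_mq_clear_flush_rq_mapping(), which the later hunks delete.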
void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set, unsigned int size)


@@ -396,6 +396,15 @@ static inline void blk_mq_rq_time_init(struct request *rq, u64 alloc_time_ns)
#endif
}
static inline void blk_mq_bio_issue_init(struct request_queue *q,
struct bio *bio)
{
#ifdef CONFIG_BLK_CGROUP
if (test_bit(QUEUE_FLAG_BIO_ISSUE_TIME, &q->queue_flags))
bio->issue_time_ns = blk_time_get_ns();
#endif
}
static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
struct blk_mq_tags *tags, unsigned int tag)
{
@@ -3168,6 +3177,7 @@ void blk_mq_submit_bio(struct bio *bio)
if (!bio_integrity_prep(bio))
goto queue_exit;
blk_mq_bio_issue_init(q, bio);
if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
goto queue_exit;
@@ -3415,7 +3425,6 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
struct blk_mq_tags *tags)
{
struct page *page;
unsigned long flags;
/*
* There is no need to clear mapping if driver tags is not initialized
@@ -3439,22 +3448,12 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
}
}
}
/*
* Wait until all pending iteration is done.
*
* Request reference is cleared and it is guaranteed to be observed
* after the ->lock is released.
*/
spin_lock_irqsave(&drv_tags->lock, flags);
spin_unlock_irqrestore(&drv_tags->lock, flags);
}
void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
unsigned int hctx_idx)
{
struct blk_mq_tags *drv_tags;
struct page *page;
if (list_empty(&tags->page_list))
return;
@@ -3478,27 +3477,20 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
}
blk_mq_clear_rq_mapping(drv_tags, tags);
while (!list_empty(&tags->page_list)) {
page = list_first_entry(&tags->page_list, struct page, lru);
list_del_init(&page->lru);
/*
* Remove kmemleak object previously allocated in
* blk_mq_alloc_rqs().
*/
kmemleak_free(page_address(page));
__free_pages(page, page->private);
}
/*
* Free request pages in SRCU callback, which is called from
* blk_mq_free_tags().
*/
}
void blk_mq_free_rq_map(struct blk_mq_tags *tags)
void blk_mq_free_rq_map(struct blk_mq_tag_set *set, struct blk_mq_tags *tags)
{
kfree(tags->rqs);
tags->rqs = NULL;
kfree(tags->static_rqs);
tags->static_rqs = NULL;
blk_mq_free_tags(tags);
blk_mq_free_tags(set, tags);
}
static enum hctx_type hctx_idx_to_type(struct blk_mq_tag_set *set,
@@ -3560,7 +3552,7 @@ static struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
err_free_rqs:
kfree(tags->rqs);
err_free_tags:
blk_mq_free_tags(tags);
blk_mq_free_tags(set, tags);
return NULL;
}
@@ -3590,8 +3582,6 @@ static int blk_mq_alloc_rqs(struct blk_mq_tag_set *set,
if (node == NUMA_NO_NODE)
node = set->numa_node;
INIT_LIST_HEAD(&tags->page_list);
/*
* rq_size is the size of the request plus driver payload, rounded
* to the cacheline size
@@ -3678,8 +3668,12 @@ static bool blk_mq_hctx_has_requests(struct blk_mq_hw_ctx *hctx)
struct rq_iter_data data = {
.hctx = hctx,
};
int srcu_idx;
srcu_idx = srcu_read_lock(&hctx->queue->tag_set->tags_srcu);
blk_mq_all_tag_iter(tags, blk_mq_has_request, &data);
srcu_read_unlock(&hctx->queue->tag_set->tags_srcu, srcu_idx);
return data.has_rq;
}
@@ -3899,7 +3893,6 @@ static void blk_mq_clear_flush_rq_mapping(struct blk_mq_tags *tags,
unsigned int queue_depth, struct request *flush_rq)
{
int i;
unsigned long flags;
/* The hw queue may not be mapped yet */
if (!tags)
@@ -3909,15 +3902,14 @@ static void blk_mq_clear_flush_rq_mapping(struct blk_mq_tags *tags,
for (i = 0; i < queue_depth; i++)
cmpxchg(&tags->rqs[i], flush_rq, NULL);
}
/*
* Wait until all pending iteration is done.
*
* Request reference is cleared and it is guaranteed to be observed
* after the ->lock is released.
*/
spin_lock_irqsave(&tags->lock, flags);
spin_unlock_irqrestore(&tags->lock, flags);
static void blk_free_flush_queue_callback(struct rcu_head *head)
{
struct blk_flush_queue *fq =
container_of(head, struct blk_flush_queue, rcu_head);
blk_free_flush_queue(fq);
}
/* hctx->ctxs will be freed in queue's release handler */
@@ -3939,6 +3931,10 @@ static void blk_mq_exit_hctx(struct request_queue *q,
if (set->ops->exit_hctx)
set->ops->exit_hctx(hctx, hctx_idx);
call_srcu(&set->tags_srcu, &hctx->fq->rcu_head,
blk_free_flush_queue_callback);
hctx->fq = NULL;
xa_erase(&q->hctx_table, hctx_idx);
spin_lock(&q->unused_hctx_lock);
@@ -3964,13 +3960,19 @@ static int blk_mq_init_hctx(struct request_queue *q,
struct blk_mq_tag_set *set,
struct blk_mq_hw_ctx *hctx, unsigned hctx_idx)
{
gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;
hctx->fq = blk_alloc_flush_queue(hctx->numa_node, set->cmd_size, gfp);
if (!hctx->fq)
goto fail;
hctx->queue_num = hctx_idx;
hctx->tags = set->tags[hctx_idx];
if (set->ops->init_hctx &&
set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
goto fail;
goto fail_free_fq;
if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx,
hctx->numa_node))
@@ -3987,6 +3989,9 @@ static int blk_mq_init_hctx(struct request_queue *q,
exit_hctx:
if (set->ops->exit_hctx)
set->ops->exit_hctx(hctx, hctx_idx);
fail_free_fq:
blk_free_flush_queue(hctx->fq);
hctx->fq = NULL;
fail:
return -1;
}
@@ -4038,16 +4043,10 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
init_waitqueue_func_entry(&hctx->dispatch_wait, blk_mq_dispatch_wake);
INIT_LIST_HEAD(&hctx->dispatch_wait.entry);
hctx->fq = blk_alloc_flush_queue(hctx->numa_node, set->cmd_size, gfp);
if (!hctx->fq)
goto free_bitmap;
blk_mq_hctx_kobj_init(hctx);
return hctx;
free_bitmap:
sbitmap_free(&hctx->ctx_map);
free_ctxs:
kfree(hctx->ctxs);
free_cpumask:
@@ -4101,7 +4100,7 @@ struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set,
ret = blk_mq_alloc_rqs(set, tags, hctx_idx, depth);
if (ret) {
blk_mq_free_rq_map(tags);
blk_mq_free_rq_map(set, tags);
return NULL;
}
@@ -4129,7 +4128,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
{
if (tags) {
blk_mq_free_rqs(set, tags, hctx_idx);
blk_mq_free_rq_map(tags);
blk_mq_free_rq_map(set, tags);
}
}
@@ -4828,6 +4827,9 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
if (ret)
goto out_free_srcu;
}
ret = init_srcu_struct(&set->tags_srcu);
if (ret)
goto out_cleanup_srcu;
init_rwsem(&set->update_nr_hwq_lock);
@@ -4836,7 +4838,7 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
sizeof(struct blk_mq_tags *), GFP_KERNEL,
set->numa_node);
if (!set->tags)
goto out_cleanup_srcu;
goto out_cleanup_tags_srcu;
for (i = 0; i < set->nr_maps; i++) {
set->map[i].mq_map = kcalloc_node(nr_cpu_ids,
@@ -4865,6 +4867,8 @@ out_free_mq_map:
}
kfree(set->tags);
set->tags = NULL;
out_cleanup_tags_srcu:
cleanup_srcu_struct(&set->tags_srcu);
out_cleanup_srcu:
if (set->flags & BLK_MQ_F_BLOCKING)
cleanup_srcu_struct(set->srcu);
@@ -4910,6 +4914,9 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
kfree(set->tags);
set->tags = NULL;
srcu_barrier(&set->tags_srcu);
cleanup_srcu_struct(&set->tags_srcu);
if (set->flags & BLK_MQ_F_BLOCKING) {
cleanup_srcu_struct(set->srcu);
kfree(set->srcu);
@@ -4917,57 +4924,59 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
}
EXPORT_SYMBOL(blk_mq_free_tag_set);
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
struct elevator_tags *blk_mq_update_nr_requests(struct request_queue *q,
struct elevator_tags *et,
unsigned int nr)
{
struct blk_mq_tag_set *set = q->tag_set;
struct elevator_tags *old_et = NULL;
struct blk_mq_hw_ctx *hctx;
int ret;
unsigned long i;
if (WARN_ON_ONCE(!q->mq_freeze_depth))
return -EINVAL;
if (!set)
return -EINVAL;
if (q->nr_requests == nr)
return 0;
blk_mq_quiesce_queue(q);
ret = 0;
queue_for_each_hw_ctx(q, hctx, i) {
if (!hctx->tags)
continue;
if (blk_mq_is_shared_tags(set->flags)) {
/*
* If we're using an MQ scheduler, just update the scheduler
* queue depth. This is similar to what the old code would do.
* Shared tags: sched tags are allocated at their maximum size up
* front, so they can't grow here, see blk_mq_alloc_sched_tags().
*/
if (hctx->sched_tags) {
ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
nr, true);
} else {
ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
false);
if (q->elevator)
blk_mq_tag_update_sched_shared_tags(q);
else
blk_mq_tag_resize_shared_tags(set, nr);
} else if (!q->elevator) {
/*
* Non-shared hardware tags, nr is already checked from
* queue_requests_store() and tags can't grow.
*/
queue_for_each_hw_ctx(q, hctx, i) {
if (!hctx->tags)
continue;
sbitmap_queue_resize(&hctx->tags->bitmap_tags,
nr - hctx->tags->nr_reserved_tags);
}
if (ret)
break;
if (q->elevator && q->elevator->type->ops.depth_updated)
q->elevator->type->ops.depth_updated(hctx);
}
if (!ret) {
q->nr_requests = nr;
if (blk_mq_is_shared_tags(set->flags)) {
if (q->elevator)
blk_mq_tag_update_sched_shared_tags(q);
else
blk_mq_tag_resize_shared_tags(set, nr);
} else if (nr <= q->elevator->et->nr_requests) {
/* Non-shared sched tags, and tags don't grow. */
queue_for_each_hw_ctx(q, hctx, i) {
if (!hctx->sched_tags)
continue;
sbitmap_queue_resize(&hctx->sched_tags->bitmap_tags,
nr - hctx->sched_tags->nr_reserved_tags);
}
} else {
/* Non-shared sched tags, and tags grow */
queue_for_each_hw_ctx(q, hctx, i)
hctx->sched_tags = et->tags[i];
old_et = q->elevator->et;
q->elevator->et = et;
}
q->nr_requests = nr;
if (q->elevator && q->elevator->type->ops.depth_updated)
q->elevator->type->ops.depth_updated(q);
blk_mq_unquiesce_queue(q);
return ret;
return old_et;
}
/*


@@ -6,6 +6,7 @@
#include "blk-stat.h"
struct blk_mq_tag_set;
struct elevator_tags;
struct blk_mq_ctxs {
struct kobject kobj;
@@ -45,7 +46,9 @@ void blk_mq_submit_bio(struct bio *bio);
int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
unsigned int flags);
void blk_mq_exit_queue(struct request_queue *q);
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
struct elevator_tags *blk_mq_update_nr_requests(struct request_queue *q,
struct elevator_tags *tags,
unsigned int nr);
void blk_mq_wake_waiters(struct request_queue *q);
bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *,
bool);
@@ -59,7 +62,7 @@ void blk_mq_put_rq_ref(struct request *rq);
*/
void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
unsigned int hctx_idx);
void blk_mq_free_rq_map(struct blk_mq_tags *tags);
void blk_mq_free_rq_map(struct blk_mq_tag_set *set, struct blk_mq_tags *tags);
struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set,
unsigned int hctx_idx, unsigned int depth);
void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
@@ -109,6 +112,17 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(blk_opf_t opf,
return ctx->hctxs[blk_mq_get_hctx_type(opf)];
}
/*
* Default to twice the smaller of the hardware queue depth and
* 128, since we don't split into sync/async like the old code
* did. Additionally, this is a per-hw queue depth.
*/
static inline unsigned int blk_mq_default_nr_requests(
struct blk_mq_tag_set *set)
{
return 2 * min_t(unsigned int, set->queue_depth, BLKDEV_DEFAULT_RQ);
}
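For example, a controller advertising a hardware queue depth of 31 gets 2 * min(31, 128) = 62 requests per hardware queue, while a deep device with a queue depth of 1024 is capped at 2 * min(1024, 128) = 256 (BLKDEV_DEFAULT_RQ being 128).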
/*
* sysfs helpers
*/
@@ -162,7 +176,7 @@ struct blk_mq_alloc_data {
struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
unsigned int reserved_tags, unsigned int flags, int node);
void blk_mq_free_tags(struct blk_mq_tags *tags);
void blk_mq_free_tags(struct blk_mq_tag_set *set, struct blk_mq_tags *tags);
unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
@@ -170,8 +184,6 @@ unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx,
unsigned int tag);
void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);
int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
struct blk_mq_tags **tags, unsigned int depth, bool can_grow);
void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,
unsigned int size);
void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);


@@ -56,6 +56,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
lim->max_user_wzeroes_unmap_sectors = UINT_MAX;
lim->max_hw_zone_append_sectors = UINT_MAX;
lim->max_user_discard_sectors = UINT_MAX;
lim->atomic_write_hw_max = UINT_MAX;
}
EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -223,6 +224,27 @@ static void blk_atomic_writes_update_limits(struct queue_limits *lim)
lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
}
/*
* Test whether the atomic write boundary is aligned with the chunk size.
* Stacked devices store the stripe size in t->chunk_sectors.
*/
static bool blk_valid_atomic_writes_boundary(unsigned int chunk_sectors,
unsigned int boundary_sectors)
{
if (!chunk_sectors || !boundary_sectors)
return true;
if (boundary_sectors > chunk_sectors &&
boundary_sectors % chunk_sectors)
return false;
if (chunk_sectors > boundary_sectors &&
chunk_sectors % boundary_sectors)
return false;
return true;
}
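As a worked example: a 256-sector stripe with a 64-sector atomic write boundary passes (256 % 64 == 0), as does the reverse relationship, while a 96-sector boundary against the same 256-sector stripe fails (256 % 96 != 0) and the atomic write limits are marked unsupported.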
static void blk_validate_atomic_write_limits(struct queue_limits *lim)
{
unsigned int boundary_sectors;
@@ -232,6 +254,10 @@ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
if (!(lim->features & BLK_FEAT_ATOMIC_WRITES))
goto unsupported;
/* UINT_MAX indicates stacked limits in initial state */
if (lim->atomic_write_hw_max == UINT_MAX)
goto unsupported;
if (!lim->atomic_write_hw_max)
goto unsupported;
@@ -259,20 +285,9 @@ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
if (WARN_ON_ONCE(lim->atomic_write_hw_max >
lim->atomic_write_hw_boundary))
goto unsupported;
/*
* A feature of boundary support is that it disallows bios to
* be merged which would result in a merged request which
* crosses either a chunk sector or atomic write HW boundary,
* even though chunk sectors may be just set for performance.
* For simplicity, disallow atomic writes for a chunk sector
* which is non-zero and smaller than atomic write HW boundary.
* Furthermore, chunk sectors must be a multiple of atomic
* write HW boundary. Otherwise boundary support becomes
* complicated.
* Devices which do not conform to these rules can be dealt
* with if and when they show up.
*/
if (WARN_ON_ONCE(lim->chunk_sectors % boundary_sectors))
if (WARN_ON_ONCE(!blk_valid_atomic_writes_boundary(
lim->chunk_sectors, boundary_sectors)))
goto unsupported;
/*
@@ -639,25 +654,6 @@ static bool blk_stack_atomic_writes_tail(struct queue_limits *t,
return true;
}
/* Check for valid boundary of first bottom device */
static bool blk_stack_atomic_writes_boundary_head(struct queue_limits *t,
struct queue_limits *b)
{
/*
* Ensure atomic write boundary is aligned with chunk sectors. Stacked
* devices store chunk sectors in t->io_min.
*/
if (b->atomic_write_hw_boundary > t->io_min &&
b->atomic_write_hw_boundary % t->io_min)
return false;
if (t->io_min > b->atomic_write_hw_boundary &&
t->io_min % b->atomic_write_hw_boundary)
return false;
t->atomic_write_hw_boundary = b->atomic_write_hw_boundary;
return true;
}
static void blk_stack_atomic_writes_chunk_sectors(struct queue_limits *t)
{
unsigned int chunk_bytes;
@@ -695,13 +691,14 @@ static void blk_stack_atomic_writes_chunk_sectors(struct queue_limits *t)
static bool blk_stack_atomic_writes_head(struct queue_limits *t,
struct queue_limits *b)
{
if (b->atomic_write_hw_boundary &&
!blk_stack_atomic_writes_boundary_head(t, b))
if (!blk_valid_atomic_writes_boundary(t->chunk_sectors,
b->atomic_write_hw_boundary >> SECTOR_SHIFT))
return false;
t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
t->atomic_write_hw_unit_min = b->atomic_write_hw_unit_min;
t->atomic_write_hw_max = b->atomic_write_hw_max;
t->atomic_write_hw_boundary = b->atomic_write_hw_boundary;
return true;
}
@@ -717,18 +714,14 @@ static void blk_stack_atomic_writes_limits(struct queue_limits *t,
if (!blk_atomic_write_start_sect_aligned(start, b))
goto unsupported;
/*
* If atomic_write_hw_max is set, we have already stacked 1x bottom
* device, so check for compliance.
*/
if (t->atomic_write_hw_max) {
/* UINT_MAX indicates no stacking of bottom devices yet */
if (t->atomic_write_hw_max == UINT_MAX) {
if (!blk_stack_atomic_writes_head(t, b))
goto unsupported;
} else {
if (!blk_stack_atomic_writes_tail(t, b))
goto unsupported;
return;
}
if (!blk_stack_atomic_writes_head(t, b))
goto unsupported;
blk_stack_atomic_writes_chunk_sectors(t);
return;
@@ -763,7 +756,8 @@ unsupported:
int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
sector_t start)
{
unsigned int top, bottom, alignment, ret = 0;
unsigned int top, bottom, alignment;
int ret = 0;
t->features |= (b->features & BLK_FEAT_INHERIT_MASK);


@@ -64,28 +64,66 @@ static ssize_t queue_requests_show(struct gendisk *disk, char *page)
static ssize_t
queue_requests_store(struct gendisk *disk, const char *page, size_t count)
{
unsigned long nr;
int ret, err;
unsigned int memflags;
struct request_queue *q = disk->queue;
if (!queue_is_mq(q))
return -EINVAL;
struct blk_mq_tag_set *set = q->tag_set;
struct elevator_tags *et = NULL;
unsigned int memflags;
unsigned long nr;
int ret;
ret = queue_var_store(&nr, page, count);
if (ret < 0)
return ret;
memflags = blk_mq_freeze_queue(q);
mutex_lock(&q->elevator_lock);
/*
* Serialize updating nr_requests with concurrent queue_requests_store()
* and switching elevator.
*/
down_write(&set->update_nr_hwq_lock);
if (nr == q->nr_requests)
goto unlock;
if (nr < BLKDEV_MIN_RQ)
nr = BLKDEV_MIN_RQ;
err = blk_mq_update_nr_requests(disk->queue, nr);
if (err)
ret = err;
/*
* Switching elevator is protected by update_nr_hwq_lock:
* - read lock is held from elevator sysfs attribute;
* - write lock is held from updating nr_hw_queues;
* Hence it's safe to access q->elevator here with write lock held.
*/
if (nr <= set->reserved_tags ||
(q->elevator && nr > MAX_SCHED_RQ) ||
(!q->elevator && nr > set->queue_depth)) {
ret = -EINVAL;
goto unlock;
}
if (!blk_mq_is_shared_tags(set->flags) && q->elevator &&
nr > q->elevator->et->nr_requests) {
/*
* Tags will grow, allocate memory before freezing queue to
* prevent deadlock.
*/
et = blk_mq_alloc_sched_tags(set, q->nr_hw_queues, nr);
if (!et) {
ret = -ENOMEM;
goto unlock;
}
}
memflags = blk_mq_freeze_queue(q);
mutex_lock(&q->elevator_lock);
et = blk_mq_update_nr_requests(q, et, nr);
mutex_unlock(&q->elevator_lock);
blk_mq_unfreeze_queue(q, memflags);
if (et)
blk_mq_free_sched_tags(et, set);
unlock:
up_write(&set->update_nr_hwq_lock);
return ret;
}
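The reworked store path thus front-loads the work that may sleep or fail: take update_nr_hwq_lock to serialize against elevator switches and nr_hw_queues updates, validate the requested value against the reserved tags, MAX_SCHED_RQ and the set's queue depth, pre-allocate a larger elevator_tags set when scheduler tags would have to grow, and only then freeze the queue, apply the update, and free whatever superseded tag set blk_mq_update_nr_requests() hands back.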
@@ -620,6 +658,11 @@ static ssize_t queue_wb_lat_store(struct gendisk *disk, const char *page,
if (val < -1)
return -EINVAL;
/*
* Ensure that the queue is idled, in case the latency update
* ends up either enabling or disabling wbt completely. We can't
* have IO inflight if that happens.
*/
memflags = blk_mq_freeze_queue(q);
rqos = wbt_rq_qos(q);
@@ -638,11 +681,6 @@ static ssize_t queue_wb_lat_store(struct gendisk *disk, const char *page,
if (wbt_get_min_lat(q) == val)
goto out;
/*
* Ensure that the queue is idled, in case the latency update
* ends up either enabling or disabling wbt completely. We can't
* have IO inflight if that happens.
*/
blk_mq_quiesce_queue(q);
mutex_lock(&disk->rqos_state_mutex);


@@ -1224,7 +1224,7 @@ static void blk_throtl_dispatch_work_fn(struct work_struct *work)
if (!bio_list_empty(&bio_list_on_stack)) {
blk_start_plug(&plug);
while ((bio = bio_list_pop(&bio_list_on_stack)))
submit_bio_noacct_nocheck(bio);
submit_bio_noacct_nocheck(bio, false);
blk_finish_plug(&plug);
}
}
@@ -1327,17 +1327,13 @@ static int blk_throtl_init(struct gendisk *disk)
INIT_WORK(&td->dispatch_work, blk_throtl_dispatch_work_fn);
throtl_service_queue_init(&td->service_queue);
/*
* Freeze queue before activating policy, to synchronize with IO path,
* which is protected by 'q_usage_counter'.
*/
memflags = blk_mq_freeze_queue(disk->queue);
blk_mq_quiesce_queue(disk->queue);
q->td = td;
td->queue = q;
/* activate policy */
/* activate policy, blk_throtl_activated() will return true */
ret = blkcg_activate_policy(disk, &blkcg_policy_throtl);
if (ret) {
q->td = NULL;
@@ -1846,12 +1842,15 @@ void blk_throtl_exit(struct gendisk *disk)
{
struct request_queue *q = disk->queue;
if (!blk_throtl_activated(q))
/*
* blkg_destroy_all() has already deactivated the throtl policy, so just
* check for and free the throtl data.
*/
if (!q->td)
return;
timer_delete_sync(&q->td->service_queue.pending_timer);
throtl_shutdown_wq(q);
blkcg_deactivate_policy(disk, &blkcg_policy_throtl);
kfree(q->td);
}


@@ -156,7 +156,13 @@ void blk_throtl_cancel_bios(struct gendisk *disk);
static inline bool blk_throtl_activated(struct request_queue *q)
{
return q->td != NULL;
/*
* q->td guarantees that the blk-throttle module is already loaded,
* and the plid of blk-throttle is assigned.
* blkcg_policy_enabled() guarantees that the policy is activated
* in the request_queue.
*/
return q->td != NULL && blkcg_policy_enabled(q, &blkcg_policy_throtl);
}
static inline bool blk_should_throtl(struct bio *bio)
@@ -164,11 +170,6 @@ static inline bool blk_should_throtl(struct bio *bio)
struct throtl_grp *tg;
int rw = bio_data_dir(bio);
/*
* This is called under bio_queue_enter(), and it's synchronized with
* the activation of blk-throtl, which is protected by
* blk_mq_freeze_queue().
*/
if (!blk_throtl_activated(bio->bi_bdev->bd_queue))
return false;
@@ -194,7 +195,10 @@ static inline bool blk_should_throtl(struct bio *bio)
static inline bool blk_throtl_bio(struct bio *bio)
{
/*
* block throttling takes effect if the policy is activated
* in the bio's request_queue.
*/
if (!blk_should_throtl(bio))
return false;


@@ -41,6 +41,7 @@ struct blk_flush_queue {
struct list_head flush_queue[2];
unsigned long flush_data_in_flight;
struct request *flush_rq;
struct rcu_head rcu_head;
};
bool is_flush_rq(struct request *req);
@@ -54,7 +55,7 @@ bool blk_queue_start_drain(struct request_queue *q);
bool __blk_freeze_queue_start(struct request_queue *q,
struct task_struct *owner);
int __bio_queue_enter(struct request_queue *q, struct bio *bio);
void submit_bio_noacct_nocheck(struct bio *bio);
void submit_bio_noacct_nocheck(struct bio *bio, bool split);
void bio_await_chain(struct bio *bio);
static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
@@ -615,6 +616,7 @@ extern const struct address_space_operations def_blk_aops;
int disk_register_independent_access_ranges(struct gendisk *disk);
void disk_unregister_independent_access_ranges(struct gendisk *disk);
int should_fail_bio(struct bio *bio);
#ifdef CONFIG_FAIL_MAKE_REQUEST
bool should_fail_request(struct block_device *part, unsigned int bytes);
#else /* CONFIG_FAIL_MAKE_REQUEST */
@@ -680,48 +682,6 @@ static inline ktime_t blk_time_get(void)
return ns_to_ktime(blk_time_get_ns());
}
/*
* From most significant bit:
* 1 bit: reserved for other usage, see below
* 12 bits: original size of bio
* 51 bits: issue time of bio
*/
#define BIO_ISSUE_RES_BITS 1
#define BIO_ISSUE_SIZE_BITS 12
#define BIO_ISSUE_RES_SHIFT (64 - BIO_ISSUE_RES_BITS)
#define BIO_ISSUE_SIZE_SHIFT (BIO_ISSUE_RES_SHIFT - BIO_ISSUE_SIZE_BITS)
#define BIO_ISSUE_TIME_MASK ((1ULL << BIO_ISSUE_SIZE_SHIFT) - 1)
#define BIO_ISSUE_SIZE_MASK \
(((1ULL << BIO_ISSUE_SIZE_BITS) - 1) << BIO_ISSUE_SIZE_SHIFT)
#define BIO_ISSUE_RES_MASK (~((1ULL << BIO_ISSUE_RES_SHIFT) - 1))
/* Reserved bit for blk-throtl */
#define BIO_ISSUE_THROTL_SKIP_LATENCY (1ULL << 63)
static inline u64 __bio_issue_time(u64 time)
{
return time & BIO_ISSUE_TIME_MASK;
}
static inline u64 bio_issue_time(struct bio_issue *issue)
{
return __bio_issue_time(issue->value);
}
static inline sector_t bio_issue_size(struct bio_issue *issue)
{
return ((issue->value & BIO_ISSUE_SIZE_MASK) >> BIO_ISSUE_SIZE_SHIFT);
}
static inline void bio_issue_init(struct bio_issue *issue,
sector_t size)
{
size &= (1ULL << BIO_ISSUE_SIZE_BITS) - 1;
issue->value = ((issue->value & BIO_ISSUE_RES_MASK) |
(blk_time_get_ns() & BIO_ISSUE_TIME_MASK) |
((u64)size << BIO_ISSUE_SIZE_SHIFT));
}
void bdev_release(struct file *bdev_file);
int bdev_open(struct block_device *bdev, blk_mode_t mode, void *holder,
const struct blk_holder_ops *hops, struct file *bdev_file);


@@ -669,7 +669,8 @@ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx)
lockdep_assert_held(&set->update_nr_hwq_lock);
if (strncmp(ctx->name, "none", 4)) {
ctx->et = blk_mq_alloc_sched_tags(set, set->nr_hw_queues);
ctx->et = blk_mq_alloc_sched_tags(set, set->nr_hw_queues,
blk_mq_default_nr_requests(set));
if (!ctx->et)
return -ENOMEM;
}


@@ -37,7 +37,7 @@ struct elevator_mq_ops {
void (*exit_sched)(struct elevator_queue *);
int (*init_hctx)(struct blk_mq_hw_ctx *, unsigned int);
void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int);
void (*depth_updated)(struct blk_mq_hw_ctx *);
void (*depth_updated)(struct request_queue *);
bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int);


@@ -39,8 +39,8 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
static bool blkdev_dio_invalid(struct block_device *bdev, struct kiocb *iocb,
struct iov_iter *iter)
{
return iocb->ki_pos & (bdev_logical_block_size(bdev) - 1) ||
!bdev_iter_is_aligned(bdev, iter);
return (iocb->ki_pos | iov_iter_count(iter)) &
(bdev_logical_block_size(bdev) - 1);
}
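A brief illustration of the relaxed check, assuming a 512-byte logical block size: ki_pos = 4096 with an iovec whose segments total 8192 bytes passes, since only the starting offset and overall length are tested here; per-segment handling is left to the bdev-aware bio_iov_iter_get_bdev_pages() calls introduced below rather than the removed up-front bdev_iter_is_aligned() rejection.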
#define DIO_INLINE_BIO_VECS 4
@@ -78,7 +78,7 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
if (iocb->ki_flags & IOCB_ATOMIC)
bio.bi_opf |= REQ_ATOMIC;
ret = bio_iov_iter_get_pages(&bio, iter);
ret = bio_iov_iter_get_bdev_pages(&bio, iter, bdev);
if (unlikely(ret))
goto out;
ret = bio.bi_iter.bi_size;
@@ -212,7 +212,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
bio->bi_end_io = blkdev_bio_end_io;
bio->bi_ioprio = iocb->ki_ioprio;
ret = bio_iov_iter_get_pages(bio, iter);
ret = bio_iov_iter_get_bdev_pages(bio, iter, bdev);
if (unlikely(ret)) {
bio->bi_status = BLK_STS_IOERR;
bio_endio(bio);
@@ -348,7 +348,7 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
*/
bio_iov_bvec_set(bio, iter);
} else {
ret = bio_iov_iter_get_pages(bio, iter);
ret = bio_iov_iter_get_bdev_pages(bio, iter, bdev);
if (unlikely(ret))
goto out_bio_put;
}


@@ -481,7 +481,7 @@ static int blkdev_getgeo(struct block_device *bdev,
*/
memset(&geo, 0, sizeof(geo));
geo.start = get_start_sect(bdev);
ret = disk->fops->getgeo(bdev, &geo);
ret = disk->fops->getgeo(disk, &geo);
if (ret)
return ret;
if (copy_to_user(argp, &geo, sizeof(geo)))
@@ -515,7 +515,7 @@ static int compat_hdio_getgeo(struct block_device *bdev,
* want to override it.
*/
geo.start = get_start_sect(bdev);
ret = disk->fops->getgeo(bdev, &geo);
ret = disk->fops->getgeo(disk, &geo);
if (ret)
return ret;


@@ -399,6 +399,14 @@ err:
return ERR_PTR(ret);
}
static void kyber_depth_updated(struct request_queue *q)
{
struct kyber_queue_data *kqd = q->elevator->elevator_data;
kqd->async_depth = q->nr_requests * KYBER_ASYNC_PERCENT / 100U;
blk_mq_set_min_shallow_depth(q, kqd->async_depth);
}
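As a rough example, assuming KYBER_ASYNC_PERCENT is still 75 as in the existing driver: with q->nr_requests at 256, async_depth becomes 256 * 75 / 100 = 192, and blk_mq_set_min_shallow_depth() applies that minimum shallow depth across the queue's hardware contexts in place of the removed per-hctx sbitmap_queue_min_shallow_depth() call.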
static int kyber_init_sched(struct request_queue *q, struct elevator_queue *eq)
{
struct kyber_queue_data *kqd;
@@ -413,6 +421,7 @@ static int kyber_init_sched(struct request_queue *q, struct elevator_queue *eq)
eq->elevator_data = kqd;
q->elevator = eq;
kyber_depth_updated(q);
return 0;
}
@@ -440,15 +449,6 @@ static void kyber_ctx_queue_init(struct kyber_ctx_queue *kcq)
INIT_LIST_HEAD(&kcq->rq_list[i]);
}
static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx)
{
struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
struct blk_mq_tags *tags = hctx->sched_tags;
kqd->async_depth = hctx->queue->nr_requests * KYBER_ASYNC_PERCENT / 100U;
sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, kqd->async_depth);
}
static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
{
struct kyber_hctx_data *khd;
@@ -493,7 +493,6 @@ static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
khd->batching = 0;
hctx->sched_data = khd;
kyber_depth_updated(hctx);
return 0;


@@ -136,10 +136,6 @@ static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
struct rb_node *node = per_prio->sort_list[data_dir].rb_node;
struct request *rq, *res = NULL;
if (!node)
return NULL;
rq = rb_entry_rq(node);
while (node) {
rq = rb_entry_rq(node);
if (blk_rq_pos(rq) >= pos) {
@@ -507,22 +503,12 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
}
/* Called by blk_mq_update_nr_requests(). */
static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
static void dd_depth_updated(struct request_queue *q)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
struct blk_mq_tags *tags = hctx->sched_tags;
dd->async_depth = q->nr_requests;
sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
}
/* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */
static int dd_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
{
dd_depth_updated(hctx);
return 0;
blk_mq_set_min_shallow_depth(q, 1);
}
static void dd_exit_sched(struct elevator_queue *e)
@@ -587,6 +573,7 @@ static int dd_init_sched(struct request_queue *q, struct elevator_queue *eq)
blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
q->elevator = eq;
dd_depth_updated(q);
return 0;
}
@@ -1048,7 +1035,6 @@ static struct elevator_type mq_deadline = {
.has_work = dd_has_work,
.init_sched = dd_init_sched,
.exit_sched = dd_exit_sched,
.init_hctx = dd_init_hctx,
},
#ifdef CONFIG_BLK_DEBUG_FS


@@ -358,7 +358,7 @@ int ibm_partition(struct parsed_partitions *state)
goto out_nolab;
/* set start if not filled by getgeo function e.g. virtblk */
geo->start = get_start_sect(bdev);
if (disk->fops->getgeo(bdev, geo))
if (disk->fops->getgeo(disk, geo))
goto out_freeall;
if (!fn || fn(disk, info)) {
kfree(info);


@@ -351,7 +351,7 @@ EXPORT_SYMBOL_GPL(ata_common_sdev_groups);
/**
* ata_std_bios_param - generic bios head/sector/cylinder calculator used by sd.
* @sdev: SCSI device for which BIOS geometry is to be determined
* @bdev: block device associated with @sdev
* @unused: gendisk associated with @sdev
* @capacity: capacity of SCSI device
* @geom: location to which geometry will be output
*
@@ -366,7 +366,7 @@ EXPORT_SYMBOL_GPL(ata_common_sdev_groups);
* RETURNS:
* Zero.
*/
int ata_std_bios_param(struct scsi_device *sdev, struct block_device *bdev,
int ata_std_bios_param(struct scsi_device *sdev, struct gendisk *unused,
sector_t capacity, int geom[])
{
geom[0] = 255;


@@ -17,6 +17,7 @@ menuconfig BLK_DEV
if BLK_DEV
source "drivers/block/null_blk/Kconfig"
source "drivers/block/rnull/Kconfig"
config BLK_DEV_FD
tristate "Normal floppy disk support"
@@ -311,15 +312,6 @@ config VIRTIO_BLK
This is the virtual block driver for virtio. It can be used with
QEMU based VMMs (like KVM or Xen). Say Y or M.
config BLK_DEV_RUST_NULL
tristate "Rust null block driver (Experimental)"
depends on RUST
help
This is the Rust implementation of the null block driver. For now it
is only a minimal stub.
If unsure, say N.
config BLK_DEV_RBD
tristate "Rados block device (RBD)"
depends on INET && BLOCK


@@ -9,9 +9,6 @@
# needed for trace events
ccflags-y += -I$(src)
obj-$(CONFIG_BLK_DEV_RUST_NULL) += rnull_mod.o
rnull_mod-y := rnull.o
obj-$(CONFIG_MAC_FLOPPY) += swim3.o
obj-$(CONFIG_BLK_DEV_SWIM) += swim_mod.o
obj-$(CONFIG_BLK_DEV_FD) += floppy.o
@@ -38,6 +35,7 @@ obj-$(CONFIG_ZRAM) += zram/
obj-$(CONFIG_BLK_DEV_RNBD) += rnbd/
obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk/
obj-$(CONFIG_BLK_DEV_RUST_NULL) += rnull/
obj-$(CONFIG_BLK_DEV_UBLK) += ublk_drv.o
obj-$(CONFIG_BLK_DEV_ZONED_LOOP) += zloop.o


@@ -1523,13 +1523,13 @@ static blk_status_t amiflop_queue_rq(struct blk_mq_hw_ctx *hctx,
return BLK_STS_OK;
}
static int fd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int fd_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
int drive = MINOR(bdev->bd_dev) & 3;
struct amiga_floppy_struct *p = disk->private_data;
geo->heads = unit[drive].type->heads;
geo->sectors = unit[drive].dtype->sects * unit[drive].type->sect_mult;
geo->cylinders = unit[drive].type->tracks;
geo->heads = p->type->heads;
geo->sectors = p->dtype->sects * p->type->sect_mult;
geo->cylinders = p->type->tracks;
return 0;
}


@@ -269,9 +269,9 @@ static blk_status_t aoeblk_queue_rq(struct blk_mq_hw_ctx *hctx,
}
static int
aoeblk_getgeo(struct block_device *bdev, struct hd_geometry *geo)
aoeblk_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct aoedev *d = bdev->bd_disk->private_data;
struct aoedev *d = disk->private_data;
if ((d->flags & DEVFL_UP) == 0) {
printk(KERN_ERR "aoe: disk not up\n");


@@ -44,7 +44,7 @@ aoe_init(void)
{
int ret;
aoe_wq = alloc_workqueue("aoe_wq", 0, 0);
aoe_wq = alloc_workqueue("aoe_wq", WQ_PERCPU, 0);
if (!aoe_wq)
return -ENOMEM;


@@ -44,45 +44,74 @@ struct brd_device {
};
/*
* Look up and return a brd's page for a given sector.
* Look up and return a brd's page with reference grabbed for a given sector.
*/
static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
{
return xa_load(&brd->brd_pages, sector >> PAGE_SECTORS_SHIFT);
struct page *page;
XA_STATE(xas, &brd->brd_pages, sector >> PAGE_SECTORS_SHIFT);
rcu_read_lock();
repeat:
page = xas_load(&xas);
if (xas_retry(&xas, page)) {
xas_reset(&xas);
goto repeat;
}
if (!page)
goto out;
if (!get_page_unless_zero(page)) {
xas_reset(&xas);
goto repeat;
}
if (unlikely(page != xas_reload(&xas))) {
put_page(page);
xas_reset(&xas);
goto repeat;
}
out:
rcu_read_unlock();
return page;
}
/*
* Insert a new page for a given sector, if one does not already exist.
* A reference is taken on the returned page.
*/
static struct page *brd_insert_page(struct brd_device *brd, sector_t sector,
blk_opf_t opf)
__releases(rcu)
__acquires(rcu)
{
gfp_t gfp = (opf & REQ_NOWAIT) ? GFP_NOWAIT : GFP_NOIO;
struct page *page, *ret;
rcu_read_unlock();
page = alloc_page(gfp | __GFP_ZERO | __GFP_HIGHMEM);
if (!page) {
rcu_read_lock();
if (!page)
return ERR_PTR(-ENOMEM);
}
xa_lock(&brd->brd_pages);
ret = __xa_cmpxchg(&brd->brd_pages, sector >> PAGE_SECTORS_SHIFT, NULL,
page, gfp);
rcu_read_lock();
if (ret) {
if (!ret) {
brd->brd_nr_pages++;
get_page(page);
xa_unlock(&brd->brd_pages);
__free_page(page);
if (xa_is_err(ret))
return ERR_PTR(xa_err(ret));
return page;
}
if (!xa_is_err(ret)) {
get_page(ret);
xa_unlock(&brd->brd_pages);
put_page(page);
return ret;
}
brd->brd_nr_pages++;
xa_unlock(&brd->brd_pages);
return page;
put_page(page);
return ERR_PTR(xa_err(ret));
}
/*
@@ -95,7 +124,7 @@ static void brd_free_pages(struct brd_device *brd)
pgoff_t idx;
xa_for_each(&brd->brd_pages, idx, page) {
__free_page(page);
put_page(page);
cond_resched();
}
@@ -117,7 +146,6 @@ static bool brd_rw_bvec(struct brd_device *brd, struct bio *bio)
bv.bv_len = min_t(u32, bv.bv_len, PAGE_SIZE - offset);
rcu_read_lock();
page = brd_lookup_page(brd, sector);
if (!page && op_is_write(opf)) {
page = brd_insert_page(brd, sector, opf);
@@ -135,13 +163,13 @@ static bool brd_rw_bvec(struct brd_device *brd, struct bio *bio)
memset(kaddr, 0, bv.bv_len);
}
kunmap_local(kaddr);
rcu_read_unlock();
bio_advance_iter_single(bio, &bio->bi_iter, bv.bv_len);
if (page)
put_page(page);
return true;
out_error:
rcu_read_unlock();
if (PTR_ERR(page) == -ENOMEM && (opf & REQ_NOWAIT))
bio_wouldblock_error(bio);
else
@@ -149,13 +177,6 @@ out_error:
return false;
}
static void brd_free_one_page(struct rcu_head *head)
{
struct page *page = container_of(head, struct page, rcu_head);
__free_page(page);
}
static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
{
sector_t aligned_sector = round_up(sector, PAGE_SECTORS);
@@ -170,7 +191,7 @@ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
while (aligned_sector < aligned_end && aligned_sector < rd_size * 2) {
page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT);
if (page) {
call_rcu(&page->rcu_head, brd_free_one_page);
put_page(page);
brd->brd_nr_pages--;
}
aligned_sector += PAGE_SECTORS;
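The net effect is plain reference counting: each lookup takes its own page reference under a short rcu_read_lock() (get_page_unless_zero() plus the xas_reload() recheck), the xarray keeps one reference for as long as a page is inserted, and discard simply drops that reference with put_page(). A page is freed only when its last user releases it, so the per-page call_rcu() deferral (the removed brd_free_one_page()) is no longer needed.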


@@ -163,35 +163,35 @@
/* do print messages for unexpected interrupts */
static int print_unex = 1;
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/fdreg.h>
#include <linux/fd.h>
#include <linux/hdreg.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/bio.h>
#include <linux/string.h>
#include <linux/jiffies.h>
#include <linux/fcntl.h>
#include <linux/delay.h>
#include <linux/mc146818rtc.h> /* CMOS defines */
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/major.h>
#include <linux/platform_device.h>
#include <linux/mod_devicetable.h>
#include <linux/mutex.h>
#include <linux/io.h>
#include <linux/uaccess.h>
#include <linux/async.h>
#include <linux/bio.h>
#include <linux/compat.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/fcntl.h>
#include <linux/fd.h>
#include <linux/fdreg.h>
#include <linux/fs.h>
#include <linux/hdreg.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/major.h>
#include <linux/mc146818rtc.h> /* CMOS defines */
#include <linux/mm.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/timer.h>
#include <linux/uaccess.h>
#include <linux/workqueue.h>
/*
* PS/2 floppies have much slower step rates than regular floppies.
@@ -233,8 +233,6 @@ static unsigned short virtual_dma_port = 0x3f0;
irqreturn_t floppy_interrupt(int irq, void *dev_id);
static int set_dor(int fdc, char mask, char data);
#define K_64 0x10000 /* 64KB */
/* the following is the mask of allowed drives. By default units 2 and
* 3 of both floppy controllers are disabled, because switching on the
* motor of these drives causes system hangs on some PCI computers. drive
@@ -3092,16 +3090,13 @@ static int raw_cmd_copyin(int cmd, void __user *param,
*rcmd = NULL;
loop:
ptr = kmalloc(sizeof(struct floppy_raw_cmd), GFP_KERNEL);
if (!ptr)
return -ENOMEM;
ptr = memdup_user(param, sizeof(*ptr));
if (IS_ERR(ptr))
return PTR_ERR(ptr);
*rcmd = ptr;
ret = copy_from_user(ptr, param, sizeof(*ptr));
ptr->next = NULL;
ptr->buffer_length = 0;
ptr->kernel_data = NULL;
if (ret)
return -EFAULT;
param += sizeof(struct floppy_raw_cmd);
if (ptr->cmd_count > FD_RAW_CMD_FULLSIZE)
return -EINVAL;
@@ -3363,9 +3358,9 @@ static int get_floppy_geometry(int drive, int type, struct floppy_struct **g)
return 0;
}
static int fd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int fd_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
int drive = (long)bdev->bd_disk->private_data;
int drive = (long)disk->private_data;
int type = ITYPE(drive_state[drive].fd_device);
struct floppy_struct *g;
int ret;


@@ -3148,17 +3148,17 @@ static int mtip_block_compat_ioctl(struct block_device *dev,
* that each partition is also 4KB aligned. Non-aligned partitions adversely
* affect performance.
*
* @dev Pointer to the block_device structure.
* @disk Pointer to the gendisk structure.
* @geo Pointer to a hd_geometry structure.
*
* return value
* 0 Operation completed successfully.
* -ENOTTY An error occurred while reading the drive capacity.
*/
static int mtip_block_getgeo(struct block_device *dev,
static int mtip_block_getgeo(struct gendisk *disk,
struct hd_geometry *geo)
{
struct driver_data *dd = dev->bd_disk->private_data;
struct driver_data *dd = disk->private_data;
sector_t capacity;
if (!dd)


@@ -311,7 +311,7 @@ static void nbd_mark_nsock_dead(struct nbd_device *nbd, struct nbd_sock *nsock,
if (args) {
INIT_WORK(&args->work, nbd_dead_link_work);
args->index = nbd->index;
queue_work(system_wq, &args->work);
queue_work(system_percpu_wq, &args->work);
}
}
if (!nsock->dead) {
@@ -1217,6 +1217,14 @@ static struct socket *nbd_get_socket(struct nbd_device *nbd, unsigned long fd,
if (!sock)
return NULL;
if (!sk_is_tcp(sock->sk) &&
!sk_is_stream_unix(sock->sk)) {
dev_err(disk_to_dev(nbd->disk), "Unsupported socket: should be TCP or UNIX.\n");
*err = -EINVAL;
sockfd_put(sock);
return NULL;
}
if (sock->ops->shutdown == sock_no_shutdown) {
dev_err(disk_to_dev(nbd->disk), "Unsupported socket: shutdown callout must be supported.\n");
*err = -EINVAL;


@@ -223,7 +223,7 @@ MODULE_PARM_DESC(discard, "Support discard operations (requires memory-backed nu
static unsigned long g_cache_size;
module_param_named(cache_size, g_cache_size, ulong, 0444);
MODULE_PARM_DESC(mbps, "Cache size in MiB for memory-backed device. Default: 0 (none)");
MODULE_PARM_DESC(cache_size, "Cache size in MiB for memory-backed device. Default: 0 (none)");
static bool g_fua = true;
module_param_named(fua, g_fua, bool, 0444);


@@ -7389,7 +7389,7 @@ static int __init rbd_init(void)
* The number of active work items is limited by the number of
* rbd devices * queue depth, so leave @max_active at default.
*/
rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM, 0);
rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!rbd_wq) {
rc = -ENOMEM;
goto err_out_slab;


@@ -942,11 +942,11 @@ static void rnbd_client_release(struct gendisk *gen)
rnbd_clt_put_dev(dev);
}
static int rnbd_client_getgeo(struct block_device *block_device,
static int rnbd_client_getgeo(struct gendisk *disk,
struct hd_geometry *geo)
{
u64 size;
struct rnbd_clt_dev *dev = block_device->bd_disk->private_data;
struct rnbd_clt_dev *dev = disk->private_data;
struct queue_limits *limit = &dev->queue->limits;
size = dev->size * (limit->logical_block_size / SECTOR_SIZE);
@@ -1809,7 +1809,7 @@ static int __init rnbd_client_init(void)
unregister_blkdev(rnbd_client_major, "rnbd");
return err;
}
rnbd_clt_wq = alloc_workqueue("rnbd_clt_wq", 0, 0);
rnbd_clt_wq = alloc_workqueue("rnbd_clt_wq", WQ_PERCPU, 0);
if (!rnbd_clt_wq) {
pr_err("Failed to load module, alloc_workqueue failed.\n");
rnbd_clt_destroy_sysfs_files();


@@ -1,80 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
//! This is a Rust implementation of the C null block driver.
//!
//! Supported features:
//!
//! - blk-mq interface
//! - direct completion
//! - block size 4k
//!
//! The driver is not configurable.
use kernel::{
alloc::flags,
block::mq::{
self,
gen_disk::{self, GenDisk},
Operations, TagSet,
},
error::Result,
new_mutex, pr_info,
prelude::*,
sync::{Arc, Mutex},
types::ARef,
};
module! {
type: NullBlkModule,
name: "rnull_mod",
authors: ["Andreas Hindborg"],
description: "Rust implementation of the C null block driver",
license: "GPL v2",
}
#[pin_data]
struct NullBlkModule {
#[pin]
_disk: Mutex<GenDisk<NullBlkDevice>>,
}
impl kernel::InPlaceModule for NullBlkModule {
fn init(_module: &'static ThisModule) -> impl PinInit<Self, Error> {
pr_info!("Rust null_blk loaded\n");
// Use an immediately-called closure as a stable `try` block
let disk = /* try */ (|| {
let tagset = Arc::pin_init(TagSet::new(1, 256, 1), flags::GFP_KERNEL)?;
gen_disk::GenDiskBuilder::new()
.capacity_sectors(4096 << 11)
.logical_block_size(4096)?
.physical_block_size(4096)?
.rotational(false)
.build(fmt!("rnullb{}", 0), tagset)
})();
try_pin_init!(Self {
_disk <- new_mutex!(disk?, "nullb:disk"),
})
}
}
struct NullBlkDevice;
#[vtable]
impl Operations for NullBlkDevice {
#[inline(always)]
fn queue_rq(rq: ARef<mq::Request<Self>>, _is_last: bool) -> Result {
mq::Request::end_ok(rq)
.map_err(|_e| kernel::error::code::EIO)
// We take no refcounts on the request, so we expect to be able to
// end the request. The request reference must be unique at this
// point, and so `end_ok` cannot fail.
.expect("Fatal error - expected to be able to end request");
Ok(())
}
fn commit_rqs() {}
}


@@ -0,0 +1,13 @@
# SPDX-License-Identifier: GPL-2.0
#
# Rust null block device driver configuration
config BLK_DEV_RUST_NULL
tristate "Rust null block driver (Experimental)"
depends on RUST && CONFIGFS_FS
help
This is the Rust implementation of the null block driver. Like
the C version, the driver allows the user to create virtual block
devices that can be configured via various configuration options.
If unsure, say N.


@@ -0,0 +1,3 @@
obj-$(CONFIG_BLK_DEV_RUST_NULL) += rnull_mod.o
rnull_mod-y := rnull.o


@@ -0,0 +1,262 @@
// SPDX-License-Identifier: GPL-2.0
use super::{NullBlkDevice, THIS_MODULE};
use core::fmt::{Display, Write};
use kernel::{
block::mq::gen_disk::{GenDisk, GenDiskBuilder},
c_str,
configfs::{self, AttributeOperations},
configfs_attrs, new_mutex,
page::PAGE_SIZE,
prelude::*,
str::{kstrtobool_bytes, CString},
sync::Mutex,
};
use pin_init::PinInit;
pub(crate) fn subsystem() -> impl PinInit<kernel::configfs::Subsystem<Config>, Error> {
let item_type = configfs_attrs! {
container: configfs::Subsystem<Config>,
data: Config,
child: DeviceConfig,
attributes: [
features: 0,
],
};
kernel::configfs::Subsystem::new(c_str!("rnull"), item_type, try_pin_init!(Config {}))
}
#[pin_data]
pub(crate) struct Config {}
#[vtable]
impl AttributeOperations<0> for Config {
type Data = Config;
fn show(_this: &Config, page: &mut [u8; PAGE_SIZE]) -> Result<usize> {
let mut writer = kernel::str::Formatter::new(page);
writer.write_str("blocksize,size,rotational,irqmode\n")?;
Ok(writer.bytes_written())
}
}
#[vtable]
impl configfs::GroupOperations for Config {
type Child = DeviceConfig;
fn make_group(
&self,
name: &CStr,
) -> Result<impl PinInit<configfs::Group<DeviceConfig>, Error>> {
let item_type = configfs_attrs! {
container: configfs::Group<DeviceConfig>,
data: DeviceConfig,
attributes: [
// Named for compatibility with C null_blk
power: 0,
blocksize: 1,
rotational: 2,
size: 3,
irqmode: 4,
],
};
Ok(configfs::Group::new(
name.try_into()?,
item_type,
// TODO: cannot coerce new_mutex!() to impl PinInit<_, Error>, so put mutex inside
try_pin_init!( DeviceConfig {
data <- new_mutex!(DeviceConfigInner {
powered: false,
block_size: 4096,
rotational: false,
disk: None,
capacity_mib: 4096,
irq_mode: IRQMode::None,
name: name.try_into()?,
}),
}),
))
}
}
#[derive(Debug, Clone, Copy)]
pub(crate) enum IRQMode {
None,
Soft,
}
impl TryFrom<u8> for IRQMode {
type Error = kernel::error::Error;
fn try_from(value: u8) -> Result<Self> {
match value {
0 => Ok(Self::None),
1 => Ok(Self::Soft),
_ => Err(EINVAL),
}
}
}
impl Display for IRQMode {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
Self::None => f.write_str("0")?,
Self::Soft => f.write_str("1")?,
}
Ok(())
}
}
#[pin_data]
pub(crate) struct DeviceConfig {
#[pin]
data: Mutex<DeviceConfigInner>,
}
#[pin_data]
struct DeviceConfigInner {
powered: bool,
name: CString,
block_size: u32,
rotational: bool,
capacity_mib: u64,
irq_mode: IRQMode,
disk: Option<GenDisk<NullBlkDevice>>,
}
#[vtable]
impl configfs::AttributeOperations<0> for DeviceConfig {
type Data = DeviceConfig;
fn show(this: &DeviceConfig, page: &mut [u8; PAGE_SIZE]) -> Result<usize> {
let mut writer = kernel::str::Formatter::new(page);
if this.data.lock().powered {
writer.write_str("1\n")?;
} else {
writer.write_str("0\n")?;
}
Ok(writer.bytes_written())
}
fn store(this: &DeviceConfig, page: &[u8]) -> Result {
let power_op = kstrtobool_bytes(page)?;
let mut guard = this.data.lock();
if !guard.powered && power_op {
guard.disk = Some(NullBlkDevice::new(
&guard.name,
guard.block_size,
guard.rotational,
guard.capacity_mib,
guard.irq_mode,
)?);
guard.powered = true;
} else if guard.powered && !power_op {
drop(guard.disk.take());
guard.powered = false;
}
Ok(())
}
}
#[vtable]
impl configfs::AttributeOperations<1> for DeviceConfig {
type Data = DeviceConfig;
fn show(this: &DeviceConfig, page: &mut [u8; PAGE_SIZE]) -> Result<usize> {
let mut writer = kernel::str::Formatter::new(page);
writer.write_fmt(fmt!("{}\n", this.data.lock().block_size))?;
Ok(writer.bytes_written())
}
fn store(this: &DeviceConfig, page: &[u8]) -> Result {
if this.data.lock().powered {
return Err(EBUSY);
}
let text = core::str::from_utf8(page)?.trim();
let value = text.parse::<u32>().map_err(|_| EINVAL)?;
GenDiskBuilder::validate_block_size(value)?;
this.data.lock().block_size = value;
Ok(())
}
}
#[vtable]
impl configfs::AttributeOperations<2> for DeviceConfig {
type Data = DeviceConfig;
fn show(this: &DeviceConfig, page: &mut [u8; PAGE_SIZE]) -> Result<usize> {
let mut writer = kernel::str::Formatter::new(page);
if this.data.lock().rotational {
writer.write_str("1\n")?;
} else {
writer.write_str("0\n")?;
}
Ok(writer.bytes_written())
}
fn store(this: &DeviceConfig, page: &[u8]) -> Result {
if this.data.lock().powered {
return Err(EBUSY);
}
this.data.lock().rotational = kstrtobool_bytes(page)?;
Ok(())
}
}
#[vtable]
impl configfs::AttributeOperations<3> for DeviceConfig {
type Data = DeviceConfig;
fn show(this: &DeviceConfig, page: &mut [u8; PAGE_SIZE]) -> Result<usize> {
let mut writer = kernel::str::Formatter::new(page);
writer.write_fmt(fmt!("{}\n", this.data.lock().capacity_mib))?;
Ok(writer.bytes_written())
}
fn store(this: &DeviceConfig, page: &[u8]) -> Result {
if this.data.lock().powered {
return Err(EBUSY);
}
let text = core::str::from_utf8(page)?.trim();
let value = text.parse::<u64>().map_err(|_| EINVAL)?;
this.data.lock().capacity_mib = value;
Ok(())
}
}
#[vtable]
impl configfs::AttributeOperations<4> for DeviceConfig {
type Data = DeviceConfig;
fn show(this: &DeviceConfig, page: &mut [u8; PAGE_SIZE]) -> Result<usize> {
let mut writer = kernel::str::Formatter::new(page);
writer.write_fmt(fmt!("{}\n", this.data.lock().irq_mode))?;
Ok(writer.bytes_written())
}
fn store(this: &DeviceConfig, page: &[u8]) -> Result {
if this.data.lock().powered {
return Err(EBUSY);
}
let text = core::str::from_utf8(page)?.trim();
let value = text.parse::<u8>().map_err(|_| EINVAL)?;
this.data.lock().irq_mode = IRQMode::try_from(value)?;
Ok(())
}
}
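A usage sketch under the layout defined above, with paths assumed and configfs mounted at /sys/kernel/config: creating /sys/kernel/config/rnull/<name> via mkdir instantiates a DeviceConfig group; writing blocksize, size (MiB), rotational and irqmode adjusts DeviceConfigInner while the device is unpowered (stores return EBUSY once powered); writing 1 to power calls NullBlkDevice::new() and brings up the gendisk, and writing 0 tears it down again. As the comment notes, the attribute names mirror the C null_blk driver.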


@@ -0,0 +1,104 @@
// SPDX-License-Identifier: GPL-2.0
//! This is a Rust implementation of the C null block driver.
mod configfs;
use configfs::IRQMode;
use kernel::{
block::{
self,
mq::{
self,
gen_disk::{self, GenDisk},
Operations, TagSet,
},
},
error::Result,
pr_info,
prelude::*,
sync::Arc,
types::ARef,
};
use pin_init::PinInit;
module! {
type: NullBlkModule,
name: "rnull_mod",
authors: ["Andreas Hindborg"],
description: "Rust implementation of the C null block driver",
license: "GPL v2",
}
#[pin_data]
struct NullBlkModule {
#[pin]
configfs_subsystem: kernel::configfs::Subsystem<configfs::Config>,
}
impl kernel::InPlaceModule for NullBlkModule {
fn init(_module: &'static ThisModule) -> impl PinInit<Self, Error> {
pr_info!("Rust null_blk loaded\n");
try_pin_init!(Self {
configfs_subsystem <- configfs::subsystem(),
})
}
}
struct NullBlkDevice;
impl NullBlkDevice {
fn new(
name: &CStr,
block_size: u32,
rotational: bool,
capacity_mib: u64,
irq_mode: IRQMode,
) -> Result<GenDisk<Self>> {
let tagset = Arc::pin_init(TagSet::new(1, 256, 1), GFP_KERNEL)?;
let queue_data = Box::new(QueueData { irq_mode }, GFP_KERNEL)?;
gen_disk::GenDiskBuilder::new()
.capacity_sectors(capacity_mib << (20 - block::SECTOR_SHIFT))
.logical_block_size(block_size)?
.physical_block_size(block_size)?
.rotational(rotational)
.build(fmt!("{}", name.to_str()?), tagset, queue_data)
}
}
struct QueueData {
irq_mode: IRQMode,
}
#[vtable]
impl Operations for NullBlkDevice {
type QueueData = KBox<QueueData>;
#[inline(always)]
fn queue_rq(queue_data: &QueueData, rq: ARef<mq::Request<Self>>, _is_last: bool) -> Result {
match queue_data.irq_mode {
IRQMode::None => mq::Request::end_ok(rq)
.map_err(|_e| kernel::error::code::EIO)
// We take no refcounts on the request, so we expect to be able to
// end the request. The request reference must be unique at this
// point, and so `end_ok` cannot fail.
.expect("Fatal error - expected to be able to end request"),
IRQMode::Soft => mq::Request::complete(rq),
}
Ok(())
}
fn commit_rqs(_queue_data: &QueueData) {}
fn complete(rq: ARef<mq::Request<Self>>) {
mq::Request::end_ok(rq)
.map_err(|_e| kernel::error::code::EIO)
// We take no refcounts on the request, so we expect to be able to
// end the request. The request reference must be unique at this
// point, and so `end_ok` cannot fail.
.expect("Fatal error - expected to be able to end request");
}
}


@@ -119,9 +119,8 @@ static inline u32 vdc_tx_dring_avail(struct vio_dring_state *dr)
return vio_dring_avail(dr, VDC_TX_RING_SIZE);
}
static int vdc_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int vdc_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct gendisk *disk = bdev->bd_disk;
sector_t nsect = get_capacity(disk);
sector_t cylinders = nsect;
@@ -1189,7 +1188,7 @@ static void vdc_ldc_reset(struct vdc_port *port)
}
if (port->ldc_timeout)
mod_delayed_work(system_wq, &port->ldc_reset_timer_work,
mod_delayed_work(system_percpu_wq, &port->ldc_reset_timer_work,
round_jiffies(jiffies + HZ * port->ldc_timeout));
mod_timer(&port->vio.timer, round_jiffies(jiffies + HZ));
return;
@@ -1217,7 +1216,7 @@ static int __init vdc_init(void)
{
int err;
sunvdc_wq = alloc_workqueue("sunvdc", 0, 0);
sunvdc_wq = alloc_workqueue("sunvdc", WQ_PERCPU, 0);
if (!sunvdc_wq)
return -ENOMEM;


@@ -711,9 +711,9 @@ static int floppy_ioctl(struct block_device *bdev, blk_mode_t mode,
return -ENOTTY;
}
static int floppy_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int floppy_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct floppy_state *fs = bdev->bd_disk->private_data;
struct floppy_state *fs = disk->private_data;
struct floppy_struct *g;
int ret;


@@ -201,7 +201,6 @@ struct ublk_queue {
bool force_abort;
bool canceling;
bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */
unsigned short nr_io_ready; /* how many ios setup */
spinlock_t cancel_lock;
struct ublk_device *dev;
struct ublk_io ios[];
@@ -234,7 +233,7 @@ struct ublk_device {
struct ublk_params params;
struct completion completion;
unsigned int nr_queues_ready;
u32 nr_io_ready;
bool unprivileged_daemons;
struct mutex cancel_mutex;
bool canceling;
@@ -252,8 +251,7 @@ static void ublk_io_release(void *priv);
static void ublk_stop_dev_unlocked(struct ublk_device *ub);
static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq);
static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
const struct ublk_queue *ubq, struct ublk_io *io,
size_t offset);
u16 q_id, u16 tag, struct ublk_io *io, size_t offset);
static inline unsigned int ublk_req_build_flags(struct request *req);
static inline struct ublksrv_io_desc *
@@ -532,7 +530,8 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
#endif
static inline void __ublk_complete_rq(struct request *req);
static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io,
bool need_map);
static dev_t ublk_chr_devt;
static const struct class ublk_chr_class = {
@@ -664,22 +663,44 @@ static inline bool ublk_support_zero_copy(const struct ublk_queue *ubq)
return ubq->flags & UBLK_F_SUPPORT_ZERO_COPY;
}
static inline bool ublk_dev_support_zero_copy(const struct ublk_device *ub)
{
return ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY;
}
static inline bool ublk_support_auto_buf_reg(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_AUTO_BUF_REG;
}
static inline bool ublk_dev_support_auto_buf_reg(const struct ublk_device *ub)
{
return ub->dev_info.flags & UBLK_F_AUTO_BUF_REG;
}
static inline bool ublk_support_user_copy(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_USER_COPY;
}
static inline bool ublk_dev_support_user_copy(const struct ublk_device *ub)
{
return ub->dev_info.flags & UBLK_F_USER_COPY;
}
static inline bool ublk_need_map_io(const struct ublk_queue *ubq)
{
return !ublk_support_user_copy(ubq) && !ublk_support_zero_copy(ubq) &&
!ublk_support_auto_buf_reg(ubq);
}
static inline bool ublk_dev_need_map_io(const struct ublk_device *ub)
{
return !ublk_dev_support_user_copy(ub) &&
!ublk_dev_support_zero_copy(ub) &&
!ublk_dev_support_auto_buf_reg(ub);
}
static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
{
/*
@@ -697,6 +718,13 @@ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
ublk_support_auto_buf_reg(ubq);
}
static inline bool ublk_dev_need_req_ref(const struct ublk_device *ub)
{
return ublk_dev_support_user_copy(ub) ||
ublk_dev_support_zero_copy(ub) ||
ublk_dev_support_auto_buf_reg(ub);
}
static inline void ublk_init_req_ref(const struct ublk_queue *ubq,
struct ublk_io *io)
{
@@ -711,8 +739,11 @@ static inline bool ublk_get_req_ref(struct ublk_io *io)
static inline void ublk_put_req_ref(struct ublk_io *io, struct request *req)
{
if (refcount_dec_and_test(&io->ref))
__ublk_complete_rq(req);
if (!refcount_dec_and_test(&io->ref))
return;
/* ublk_need_map_io() and ublk_need_req_ref() are mutually exclusive */
__ublk_complete_rq(req, io, false);
}
static inline bool ublk_sub_req_ref(struct ublk_io *io)
@@ -728,6 +759,11 @@ static inline bool ublk_need_get_data(const struct ublk_queue *ubq)
return ubq->flags & UBLK_F_NEED_GET_DATA;
}
static inline bool ublk_dev_need_get_data(const struct ublk_device *ub)
{
return ub->dev_info.flags & UBLK_F_NEED_GET_DATA;
}
/* Called in slow path only, keep it noinline for trace purpose */
static noinline struct ublk_device *ublk_get_device(struct ublk_device *ub)
{
@@ -764,11 +800,9 @@ static inline int __ublk_queue_cmd_buf_size(int depth)
return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE);
}
static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub)
{
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
return __ublk_queue_cmd_buf_size(ubq->q_depth);
return __ublk_queue_cmd_buf_size(ub->dev_info.queue_depth);
}
static int ublk_max_cmd_buf_size(void)
@@ -1019,13 +1053,13 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
return rq_bytes;
}
static int ublk_unmap_io(const struct ublk_queue *ubq,
static int ublk_unmap_io(bool need_map,
const struct request *req,
const struct ublk_io *io)
{
const unsigned int rq_bytes = blk_rq_bytes(req);
if (!ublk_need_map_io(ubq))
if (!need_map)
return rq_bytes;
if (ublk_need_unmap_req(req)) {
@@ -1072,13 +1106,8 @@ static blk_status_t ublk_setup_iod(struct ublk_queue *ubq, struct request *req)
{
struct ublksrv_io_desc *iod = ublk_get_iod(ubq, req->tag);
struct ublk_io *io = &ubq->ios[req->tag];
enum req_op op = req_op(req);
u32 ublk_op;
if (!ublk_queue_is_zoned(ubq) &&
(op_is_zone_mgmt(op) || op == REQ_OP_ZONE_APPEND))
return BLK_STS_IOERR;
switch (req_op(req)) {
case REQ_OP_READ:
ublk_op = UBLK_IO_OP_READ;
@@ -1117,10 +1146,9 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu(
}
/* todo: handle partial completion */
static inline void __ublk_complete_rq(struct request *req)
static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io,
bool need_map)
{
struct ublk_queue *ubq = req->mq_hctx->driver_data;
struct ublk_io *io = &ubq->ios[req->tag];
unsigned int unmapped_bytes;
blk_status_t res = BLK_STS_OK;
@@ -1144,7 +1172,7 @@ static inline void __ublk_complete_rq(struct request *req)
goto exit;
/* for READ request, writing data in iod->addr to rq buffers */
unmapped_bytes = ublk_unmap_io(ubq, req, io);
unmapped_bytes = ublk_unmap_io(need_map, req, io);
/*
* Extremely impossible since we got data filled in just before
@@ -1500,9 +1528,6 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
{
int i;
/* All old ioucmds have to be completed */
ubq->nr_io_ready = 0;
for (i = 0; i < ubq->q_depth; i++) {
struct ublk_io *io = &ubq->ios[i];
@@ -1551,7 +1576,7 @@ static void ublk_reset_ch_dev(struct ublk_device *ub)
/* set to NULL, otherwise new tasks cannot mmap io_cmd_buf */
ub->mm = NULL;
ub->nr_queues_ready = 0;
ub->nr_io_ready = 0;
ub->unprivileged_daemons = false;
ub->ublksrv_tgid = -1;
}
@@ -1775,23 +1800,23 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
__func__, q_id, current->pid, vma->vm_start,
phys_off, (unsigned long)sz);
if (sz != ublk_queue_cmd_buf_size(ub, q_id))
if (sz != ublk_queue_cmd_buf_size(ub))
return -EINVAL;
pfn = virt_to_phys(ublk_queue_cmd_buf(ub, q_id)) >> PAGE_SHIFT;
return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}
static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
static void __ublk_fail_req(struct ublk_device *ub, struct ublk_io *io,
struct request *req)
{
WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
if (ublk_nosrv_should_reissue_outstanding(ub))
blk_mq_requeue_request(req, false);
else {
io->res = -EIO;
__ublk_complete_rq(req);
__ublk_complete_rq(req, io, ublk_dev_need_map_io(ub));
}
}
@@ -1811,7 +1836,7 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
struct ublk_io *io = &ubq->ios[i];
if (io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)
__ublk_fail_req(ubq, io, io->req);
__ublk_fail_req(ub, io, io->req);
}
}
@@ -1916,9 +1941,11 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
ublk_cancel_cmd(ubq, pdu->tag, issue_flags);
}
static inline bool ublk_queue_ready(struct ublk_queue *ubq)
static inline bool ublk_dev_ready(const struct ublk_device *ub)
{
return ubq->nr_io_ready == ubq->q_depth;
u32 total = (u32)ub->dev_info.nr_hw_queues * ub->dev_info.queue_depth;
return ub->nr_io_ready == total;
}
static void ublk_cancel_queue(struct ublk_queue *ubq)
@@ -2042,16 +2069,14 @@ static void ublk_reset_io_flags(struct ublk_device *ub)
}
/* device can only be started after all IOs are ready */
static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
static void ublk_mark_io_ready(struct ublk_device *ub)
__must_hold(&ub->mutex)
{
ubq->nr_io_ready++;
if (ublk_queue_ready(ubq))
ub->nr_queues_ready++;
if (!ub->unprivileged_daemons && !capable(CAP_SYS_ADMIN))
ub->unprivileged_daemons = true;
if (ub->nr_queues_ready == ub->dev_info.nr_hw_queues) {
ub->nr_io_ready++;
if (ublk_dev_ready(ub)) {
/* now we are ready for handling ublk io request */
ublk_reset_io_flags(ub);
complete_all(&ub->completion);
@@ -2122,11 +2147,11 @@ ublk_fill_io_cmd(struct ublk_io *io, struct io_uring_cmd *cmd)
}
static inline int
ublk_config_io_buf(const struct ublk_queue *ubq, struct ublk_io *io,
ublk_config_io_buf(const struct ublk_device *ub, struct ublk_io *io,
struct io_uring_cmd *cmd, unsigned long buf_addr,
u16 *buf_idx)
{
if (ublk_support_auto_buf_reg(ubq))
if (ublk_dev_support_auto_buf_reg(ub))
return ublk_handle_auto_buf_reg(io, cmd, buf_idx);
io->addr = buf_addr;
@@ -2165,18 +2190,18 @@ static void ublk_io_release(void *priv)
}
static int ublk_register_io_buf(struct io_uring_cmd *cmd,
const struct ublk_queue *ubq,
struct ublk_device *ub,
u16 q_id, u16 tag,
struct ublk_io *io,
unsigned int index, unsigned int issue_flags)
{
struct ublk_device *ub = cmd->file->private_data;
struct request *req;
int ret;
if (!ublk_support_zero_copy(ubq))
if (!ublk_dev_support_zero_copy(ub))
return -EINVAL;
req = __ublk_check_and_get_req(ub, ubq, io, 0);
req = __ublk_check_and_get_req(ub, q_id, tag, io, 0);
if (!req)
return -EINVAL;
@@ -2192,7 +2217,8 @@ static int ublk_register_io_buf(struct io_uring_cmd *cmd,
static int
ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
const struct ublk_queue *ubq, struct ublk_io *io,
struct ublk_device *ub,
u16 q_id, u16 tag, struct ublk_io *io,
unsigned index, unsigned issue_flags)
{
unsigned new_registered_buffers;
@@ -2205,9 +2231,10 @@ ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
*/
new_registered_buffers = io->task_registered_buffers + 1;
if (unlikely(new_registered_buffers >= UBLK_REFCOUNT_INIT))
return ublk_register_io_buf(cmd, ubq, io, index, issue_flags);
return ublk_register_io_buf(cmd, ub, q_id, tag, io, index,
issue_flags);
if (!ublk_support_zero_copy(ubq) || !ublk_rq_has_data(req))
if (!ublk_dev_support_zero_copy(ub) || !ublk_rq_has_data(req))
return -EINVAL;
ret = io_buffer_register_bvec(cmd, req, ublk_io_release, index,
@@ -2229,14 +2256,14 @@ static int ublk_unregister_io_buf(struct io_uring_cmd *cmd,
return io_buffer_unregister_bvec(cmd, index, issue_flags);
}
static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)
{
if (ublk_need_map_io(ubq)) {
if (ublk_dev_need_map_io(ub)) {
/*
* FETCH_RQ has to provide IO buffer if NEED GET
* DATA is not enabled
*/
if (!buf_addr && !ublk_need_get_data(ubq))
if (!buf_addr && !ublk_dev_need_get_data(ub))
return -EINVAL;
} else if (buf_addr) {
/* User copy requires addr to be unset */
@@ -2245,10 +2272,9 @@ static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
return 0;
}
static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
struct ublk_io *io, __u64 buf_addr)
{
struct ublk_device *ub = ubq->dev;
int ret = 0;
/*
@@ -2257,8 +2283,8 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
* FETCH, so it is fine even for IO_URING_F_NONBLOCK.
*/
mutex_lock(&ub->mutex);
/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
if (ublk_queue_ready(ubq)) {
/* UBLK_IO_FETCH_REQ is only allowed before dev is setup */
if (ublk_dev_ready(ub)) {
ret = -EBUSY;
goto out;
}
@@ -2272,28 +2298,28 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
ublk_fill_io_cmd(io, cmd);
ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
ret = ublk_config_io_buf(ub, io, cmd, buf_addr, NULL);
if (ret)
goto out;
WRITE_ONCE(io->task, get_task_struct(current));
ublk_mark_io_ready(ub, ubq);
ublk_mark_io_ready(ub);
out:
mutex_unlock(&ub->mutex);
return ret;
}
static int ublk_check_commit_and_fetch(const struct ublk_queue *ubq,
static int ublk_check_commit_and_fetch(const struct ublk_device *ub,
struct ublk_io *io, __u64 buf_addr)
{
struct request *req = io->req;
if (ublk_need_map_io(ubq)) {
if (ublk_dev_need_map_io(ub)) {
/*
* COMMIT_AND_FETCH_REQ has to provide IO buffer if
* NEED GET DATA is not enabled or it is Read IO.
*/
if (!buf_addr && (!ublk_need_get_data(ubq) ||
if (!buf_addr && (!ublk_dev_need_get_data(ub) ||
req_op(req) == REQ_OP_READ))
return -EINVAL;
} else if (req_op(req) != REQ_OP_ZONE_APPEND && buf_addr) {
@@ -2307,10 +2333,10 @@ static int ublk_check_commit_and_fetch(const struct ublk_queue *ubq,
return 0;
}
static bool ublk_need_complete_req(const struct ublk_queue *ubq,
static bool ublk_need_complete_req(const struct ublk_device *ub,
struct ublk_io *io)
{
if (ublk_need_req_ref(ubq))
if (ublk_dev_need_req_ref(ub))
return ublk_sub_req_ref(io);
return true;
}
@@ -2333,23 +2359,28 @@ static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io,
return ublk_start_io(ubq, req, io);
}
static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags,
const struct ublksrv_io_cmd *ub_cmd)
static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
/* May point to userspace-mapped memory */
const struct ublksrv_io_cmd *ub_src = io_uring_sqe_cmd(cmd->sqe);
u16 buf_idx = UBLK_INVALID_BUF_IDX;
struct ublk_device *ub = cmd->file->private_data;
struct ublk_queue *ubq;
struct ublk_io *io;
u32 cmd_op = cmd->cmd_op;
unsigned tag = ub_cmd->tag;
u16 q_id = READ_ONCE(ub_src->q_id);
u16 tag = READ_ONCE(ub_src->tag);
s32 result = READ_ONCE(ub_src->result);
u64 addr = READ_ONCE(ub_src->addr); /* unioned with zone_append_lba */
struct request *req;
int ret;
bool compl;
WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED);
pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n",
__func__, cmd->cmd_op, ub_cmd->q_id, tag,
ub_cmd->result);
__func__, cmd->cmd_op, q_id, tag, result);
ret = ublk_check_cmd_op(cmd_op);
if (ret)
@@ -2360,25 +2391,24 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
* so no need to validate the q_id, tag, or task
*/
if (_IOC_NR(cmd_op) == UBLK_IO_UNREGISTER_IO_BUF)
return ublk_unregister_io_buf(cmd, ub, ub_cmd->addr,
issue_flags);
return ublk_unregister_io_buf(cmd, ub, addr, issue_flags);
ret = -EINVAL;
if (ub_cmd->q_id >= ub->dev_info.nr_hw_queues)
if (q_id >= ub->dev_info.nr_hw_queues)
goto out;
ubq = ublk_get_queue(ub, ub_cmd->q_id);
ubq = ublk_get_queue(ub, q_id);
if (tag >= ubq->q_depth)
if (tag >= ub->dev_info.queue_depth)
goto out;
io = &ubq->ios[tag];
/* UBLK_IO_FETCH_REQ can be handled on any task, which sets io->task */
if (unlikely(_IOC_NR(cmd_op) == UBLK_IO_FETCH_REQ)) {
ret = ublk_check_fetch_buf(ubq, ub_cmd->addr);
ret = ublk_check_fetch_buf(ub, addr);
if (ret)
goto out;
ret = ublk_fetch(cmd, ubq, io, ub_cmd->addr);
ret = ublk_fetch(cmd, ub, io, addr);
if (ret)
goto out;
@@ -2392,8 +2422,8 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
* so can be handled on any task
*/
if (_IOC_NR(cmd_op) == UBLK_IO_REGISTER_IO_BUF)
return ublk_register_io_buf(cmd, ubq, io, ub_cmd->addr,
issue_flags);
return ublk_register_io_buf(cmd, ub, q_id, tag, io,
addr, issue_flags);
goto out;
}
@@ -2414,24 +2444,24 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
switch (_IOC_NR(cmd_op)) {
case UBLK_IO_REGISTER_IO_BUF:
return ublk_daemon_register_io_buf(cmd, ubq, io, ub_cmd->addr,
return ublk_daemon_register_io_buf(cmd, ub, q_id, tag, io, addr,
issue_flags);
case UBLK_IO_COMMIT_AND_FETCH_REQ:
ret = ublk_check_commit_and_fetch(ubq, io, ub_cmd->addr);
ret = ublk_check_commit_and_fetch(ub, io, addr);
if (ret)
goto out;
io->res = ub_cmd->result;
io->res = result;
req = ublk_fill_io_cmd(io, cmd);
ret = ublk_config_io_buf(ubq, io, cmd, ub_cmd->addr, &buf_idx);
compl = ublk_need_complete_req(ubq, io);
ret = ublk_config_io_buf(ub, io, cmd, addr, &buf_idx);
compl = ublk_need_complete_req(ub, io);
/* can't touch 'ublk_io' any more */
if (buf_idx != UBLK_INVALID_BUF_IDX)
io_buffer_unregister_bvec(cmd, buf_idx, issue_flags);
if (req_op(req) == REQ_OP_ZONE_APPEND)
req->__sector = ub_cmd->zone_append_lba;
req->__sector = addr;
if (compl)
__ublk_complete_rq(req);
__ublk_complete_rq(req, io, ublk_dev_need_map_io(ub));
if (ret)
goto out;
@@ -2443,7 +2473,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
* request
*/
req = ublk_fill_io_cmd(io, cmd);
ret = ublk_config_io_buf(ubq, io, cmd, ub_cmd->addr, NULL);
ret = ublk_config_io_buf(ub, io, cmd, addr, NULL);
WARN_ON_ONCE(ret);
if (likely(ublk_get_data(ubq, io, req))) {
__ublk_prep_compl_io_cmd(io, req);
@@ -2463,16 +2493,15 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
}
static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
const struct ublk_queue *ubq, struct ublk_io *io, size_t offset)
u16 q_id, u16 tag, struct ublk_io *io, size_t offset)
{
unsigned tag = io - ubq->ios;
struct request *req;
/*
* can't use io->req in case of concurrent UBLK_IO_COMMIT_AND_FETCH_REQ,
* which would overwrite it with io->cmd
*/
req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
if (!req)
return NULL;
@@ -2494,26 +2523,6 @@ fail_put:
return NULL;
}
static inline int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
/*
* Not necessary for async retry, but let's keep it simple and always
* copy the values to avoid any potential reuse.
*/
const struct ublksrv_io_cmd *ub_src = io_uring_sqe_cmd(cmd->sqe);
const struct ublksrv_io_cmd ub_cmd = {
.q_id = READ_ONCE(ub_src->q_id),
.tag = READ_ONCE(ub_src->tag),
.result = READ_ONCE(ub_src->result),
.addr = READ_ONCE(ub_src->addr)
};
WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED);
return __ublk_ch_uring_cmd(cmd, issue_flags, &ub_cmd);
}
static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
@@ -2583,17 +2592,14 @@ static struct request *ublk_check_and_get_req(struct kiocb *iocb,
return ERR_PTR(-EINVAL);
ubq = ublk_get_queue(ub, q_id);
if (!ubq)
return ERR_PTR(-EINVAL);
if (!ublk_support_user_copy(ubq))
if (!ublk_dev_support_user_copy(ub))
return ERR_PTR(-EACCES);
if (tag >= ubq->q_depth)
if (tag >= ub->dev_info.queue_depth)
return ERR_PTR(-EINVAL);
*io = &ubq->ios[tag];
req = __ublk_check_and_get_req(ub, ubq, *io, buf_off);
req = __ublk_check_and_get_req(ub, q_id, tag, *io, buf_off);
if (!req)
return ERR_PTR(-EINVAL);
@@ -2656,7 +2662,7 @@ static const struct file_operations ublk_ch_fops = {
static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
{
int size = ublk_queue_cmd_buf_size(ub, q_id);
int size = ublk_queue_cmd_buf_size(ub);
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
int i;
@@ -2683,7 +2689,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
ubq->flags = ub->dev_info.flags;
ubq->q_id = q_id;
ubq->q_depth = ub->dev_info.queue_depth;
size = ublk_queue_cmd_buf_size(ub, q_id);
size = ublk_queue_cmd_buf_size(ub);
ptr = (void *) __get_free_pages(gfp_flags, get_order(size));
if (!ptr)


@@ -829,9 +829,9 @@ out:
}
/* We provide getgeo only to please some old bootloader/partitioning tools */
static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
static int virtblk_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct virtio_blk *vblk = bd->bd_disk->private_data;
struct virtio_blk *vblk = disk->private_data;
int ret = 0;
mutex_lock(&vblk->vdev_mutex);
@@ -853,7 +853,7 @@ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
/* some standard values, similar to sd */
geo->heads = 1 << 6;
geo->sectors = 1 << 5;
geo->cylinders = get_capacity(bd->bd_disk) >> 11;
geo->cylinders = get_capacity(disk) >> 11;
}
out:
mutex_unlock(&vblk->vdev_mutex);
@@ -1682,7 +1682,7 @@ static int __init virtio_blk_init(void)
{
int error;
virtblk_wq = alloc_workqueue("virtio-blk", 0, 0);
virtblk_wq = alloc_workqueue("virtio-blk", WQ_PERCPU, 0);
if (!virtblk_wq)
return -ENOMEM;


@@ -493,11 +493,11 @@ static void blkif_restart_queue_callback(void *arg)
schedule_work(&rinfo->work);
}
static int blkif_getgeo(struct block_device *bd, struct hd_geometry *hg)
static int blkif_getgeo(struct gendisk *disk, struct hd_geometry *hg)
{
/* We don't have real geometry info, but let's at least return
values consistent with the size of the device */
sector_t nsect = get_capacity(bd->bd_disk);
sector_t nsect = get_capacity(disk);
sector_t cylinders = nsect;
hg->heads = 0xff;


@@ -1085,7 +1085,7 @@ static int read_from_bdev_sync(struct zram *zram, struct page *page,
work.entry = entry;
INIT_WORK_ONSTACK(&work.work, zram_sync_read);
queue_work(system_unbound_wq, &work.work);
queue_work(system_dfl_wq, &work.work);
flush_work(&work.work);
destroy_work_on_stack(&work.work);


@@ -37,6 +37,32 @@ config BLK_DEV_MD
If unsure, say N.
config MD_BITMAP
bool "MD RAID bitmap support"
default y
depends on BLK_DEV_MD
help
If you say Y here, support for the write intent bitmap will be
enabled. The bitmap can be used to optimize resync speed after a power
failure or after re-adding a disk, limiting the resync to the dirty sectors
recorded in the bitmap.
This feature can be added to an existing MD array, or an array can be
created with a bitmap, via mdadm(8).
If unsure, say Y.
config MD_LLBITMAP
bool "MD RAID lockless bitmap support"
depends on BLK_DEV_MD
help
If you say Y here, support for the lockless write intent bitmap will
be enabled.
Note, this is an experimental feature.
If unsure, say N.
config MD_AUTODETECT
bool "Autodetect RAID arrays during kernel boot"
depends on BLK_DEV_MD=y
@@ -54,6 +80,7 @@ config MD_AUTODETECT
config MD_BITMAP_FILE
bool "MD bitmap file support (deprecated)"
default y
depends on MD_BITMAP
help
If you say Y here, support for write intent bitmaps in files on an
external file system is enabled. This is an alternative to the internal
@@ -174,6 +201,7 @@ config MD_RAID456
config MD_CLUSTER
tristate "Cluster Support for MD"
select MD_BITMAP
depends on BLK_DEV_MD
depends on DLM
default n
@@ -393,6 +421,7 @@ config DM_RAID
select MD_RAID1
select MD_RAID10
select MD_RAID456
select MD_BITMAP
select BLK_DEV_MD
help
A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6 mappings


@@ -27,7 +27,9 @@ dm-clone-y += dm-clone-target.o dm-clone-metadata.o
dm-verity-y += dm-verity-target.o
dm-zoned-y += dm-zoned-target.o dm-zoned-metadata.o dm-zoned-reclaim.o
md-mod-y += md.o md-bitmap.o
md-mod-y += md.o
md-mod-$(CONFIG_MD_BITMAP) += md-bitmap.o
md-mod-$(CONFIG_MD_LLBITMAP) += md-llbitmap.o
raid456-y += raid5.o raid5-cache.o raid5-ppl.o
linear-y += md-linear.o


@@ -115,8 +115,7 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
check = bio_kmalloc(nr_segs, GFP_NOIO);
if (!check)
return;
bio_init(check, bio->bi_bdev, check->bi_inline_vecs, nr_segs,
REQ_OP_READ);
bio_init_inline(check, bio->bi_bdev, nr_segs, REQ_OP_READ);
check->bi_iter.bi_sector = bio->bi_iter.bi_sector;
check->bi_iter.bi_size = bio->bi_iter.bi_size;


@@ -26,8 +26,7 @@ struct bio *bch_bbio_alloc(struct cache_set *c)
struct bbio *b = mempool_alloc(&c->bio_meta, GFP_NOIO);
struct bio *bio = &b->bio;
bio_init(bio, NULL, bio->bi_inline_vecs,
meta_bucket_pages(&c->cache->sb), 0);
bio_init_inline(bio, NULL, meta_bucket_pages(&c->cache->sb), 0);
return bio;
}


@@ -615,7 +615,7 @@ static void do_journal_discard(struct cache *ca)
atomic_set(&ja->discard_in_flight, DISCARD_IN_FLIGHT);
bio_init(bio, ca->bdev, bio->bi_inline_vecs, 1, REQ_OP_DISCARD);
bio_init_inline(bio, ca->bdev, 1, REQ_OP_DISCARD);
bio->bi_iter.bi_sector = bucket_to_sector(ca->set,
ca->sb.d[ja->discard_idx]);
bio->bi_iter.bi_size = bucket_bytes(ca);


@@ -79,7 +79,7 @@ static void moving_init(struct moving_io *io)
{
struct bio *bio = &io->bio.bio;
bio_init(bio, NULL, bio->bi_inline_vecs,
bio_init_inline(bio, NULL,
DIV_ROUND_UP(KEY_SIZE(&io->w->key), PAGE_SECTORS), 0);
bio_get(bio);
bio->bi_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0);
@@ -145,9 +145,9 @@ static void read_moving(struct cache_set *c)
continue;
}
io = kzalloc(struct_size(io, bio.bio.bi_inline_vecs,
DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS)),
GFP_KERNEL);
io = kzalloc(sizeof(*io) + sizeof(struct bio_vec) *
DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS),
GFP_KERNEL);
if (!io)
goto err;


@@ -2236,7 +2236,7 @@ static int cache_alloc(struct cache *ca)
__module_get(THIS_MODULE);
kobject_init(&ca->kobj, &bch_cache_ktype);
bio_init(&ca->journal.bio, NULL, ca->journal.bio.bi_inline_vecs, 8, 0);
bio_init_inline(&ca->journal.bio, NULL, 8, 0);
/*
* When the cache disk is first registered, ca->sb.njournal_buckets


@@ -331,7 +331,7 @@ static void dirty_init(struct keybuf_key *w)
struct dirty_io *io = w->private;
struct bio *bio = &io->bio;
bio_init(bio, NULL, bio->bi_inline_vecs,
bio_init_inline(bio, NULL,
DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS), 0);
if (!io->dc->writeback_percent)
bio->bi_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0);
@@ -536,9 +536,9 @@ static void read_dirty(struct cached_dev *dc)
for (i = 0; i < nk; i++) {
w = keys[i];
io = kzalloc(struct_size(io, bio.bi_inline_vecs,
DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS)),
GFP_KERNEL);
io = kzalloc(sizeof(*io) + sizeof(struct bio_vec) *
DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS),
GFP_KERNEL);
if (!io)
goto err;


@@ -1342,7 +1342,7 @@ static void use_bio(struct dm_buffer *b, enum req_op op, sector_t sector,
use_dmio(b, op, sector, n_sectors, offset, ioprio);
return;
}
bio_init(bio, b->c->bdev, bio->bi_inline_vecs, 1, op);
bio_init_inline(bio, b->c->bdev, 1, op);
bio->bi_iter.bi_sector = sector;
bio->bi_end_io = bio_complete;
bio->bi_private = b;


@@ -441,7 +441,7 @@ static struct bio *clone_bio(struct dm_target *ti, struct flakey_c *fc, struct b
if (!clone)
return NULL;
bio_init(clone, fc->dev->bdev, clone->bi_inline_vecs, nr_iovecs, bio->bi_opf);
bio_init_inline(clone, fc->dev->bdev, nr_iovecs, bio->bi_opf);
clone->bi_iter.bi_sector = flakey_map_sector(ti, bio->bi_iter.bi_sector);
clone->bi_private = bio;


@@ -3955,9 +3955,11 @@ static int __load_dirty_region_bitmap(struct raid_set *rs)
!test_and_set_bit(RT_FLAG_RS_BITMAP_LOADED, &rs->runtime_flags)) {
struct mddev *mddev = &rs->md;
r = mddev->bitmap_ops->load(mddev);
if (r)
DMERR("Failed to load bitmap");
if (md_bitmap_enabled(mddev, false)) {
r = mddev->bitmap_ops->load(mddev);
if (r)
DMERR("Failed to load bitmap");
}
}
return r;
@@ -4072,10 +4074,12 @@ static int raid_preresume(struct dm_target *ti)
mddev->bitmap_info.chunksize != to_bytes(rs->requested_bitmap_chunk_sectors)))) {
int chunksize = to_bytes(rs->requested_bitmap_chunk_sectors) ?: mddev->bitmap_info.chunksize;
r = mddev->bitmap_ops->resize(mddev, mddev->dev_sectors,
chunksize, false);
if (r)
DMERR("Failed to resize bitmap");
if (md_bitmap_enabled(mddev, false)) {
r = mddev->bitmap_ops->resize(mddev, mddev->dev_sectors,
chunksize);
if (r)
DMERR("Failed to resize bitmap");
}
}
/* Check for any resize/reshape on @rs and adjust/initiate */


@@ -212,7 +212,7 @@ int vio_reset_bio_with_size(struct vio *vio, char *data, int size, bio_end_io_t
return VDO_SUCCESS;
bio->bi_ioprio = 0;
bio->bi_io_vec = bio->bi_inline_vecs;
bio->bi_io_vec = bio_inline_vecs(bio);
bio->bi_max_vecs = vio->block_count + 1;
if (VDO_ASSERT(size <= vio_size, "specified size %d is not greater than allocated %d",
size, vio_size) != VDO_SUCCESS)


@@ -403,9 +403,9 @@ static void do_deferred_remove(struct work_struct *w)
dm_deferred_remove();
}
static int dm_blk_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int dm_blk_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct mapped_device *md = bdev->bd_disk->private_data;
struct mapped_device *md = disk->private_data;
return dm_get_geometry(md, geo);
}


@@ -34,15 +34,6 @@
#include "md-bitmap.h"
#include "md-cluster.h"
#define BITMAP_MAJOR_LO 3
/* version 4 insists the bitmap is in little-endian order
* with version 3, it is host-endian which is non-portable
* Version 5 is currently set only for clustered devices
*/
#define BITMAP_MAJOR_HI 4
#define BITMAP_MAJOR_CLUSTERED 5
#define BITMAP_MAJOR_HOSTENDIAN 3
/*
* in-memory bitmap:
*
@@ -224,6 +215,8 @@ struct bitmap {
int cluster_slot;
};
static struct workqueue_struct *md_bitmap_wq;
static int __bitmap_resize(struct bitmap *bitmap, sector_t blocks,
int chunksize, bool init);
@@ -232,20 +225,19 @@ static inline char *bmname(struct bitmap *bitmap)
return bitmap->mddev ? mdname(bitmap->mddev) : "mdX";
}
static bool __bitmap_enabled(struct bitmap *bitmap)
static bool bitmap_enabled(void *data, bool flush)
{
return bitmap->storage.filemap &&
!test_bit(BITMAP_STALE, &bitmap->flags);
}
struct bitmap *bitmap = data;
static bool bitmap_enabled(struct mddev *mddev)
{
struct bitmap *bitmap = mddev->bitmap;
if (!flush)
return true;
if (!bitmap)
return false;
return __bitmap_enabled(bitmap);
/*
* If the caller wants to flush bitmap pages to the underlying disks, check
* whether there are cached pages in the filemap.
*/
return !test_bit(BITMAP_STALE, &bitmap->flags) &&
bitmap->storage.filemap != NULL;
}
/*
@@ -484,7 +476,8 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
return -EINVAL;
}
md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
md_write_metadata(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit),
page, 0);
return 0;
}
@@ -1244,7 +1237,7 @@ static void __bitmap_unplug(struct bitmap *bitmap)
int dirty, need_write;
int writing = 0;
if (!__bitmap_enabled(bitmap))
if (!bitmap_enabled(bitmap, true))
return;
/* look at each page to see if there are any set bits that need to be
@@ -1788,15 +1781,9 @@ static bool __bitmap_start_sync(struct bitmap *bitmap, sector_t offset,
sector_t *blocks, bool degraded)
{
bitmap_counter_t *bmc;
bool rv;
bool rv = false;
if (bitmap == NULL) {/* FIXME or bitmap set as 'failed' */
*blocks = 1024;
return true; /* always resync if no bitmap */
}
spin_lock_irq(&bitmap->counts.lock);
rv = false;
bmc = md_bitmap_get_counter(&bitmap->counts, offset, blocks, 0);
if (bmc) {
/* locked */
@@ -1845,10 +1832,6 @@ static void __bitmap_end_sync(struct bitmap *bitmap, sector_t offset,
bitmap_counter_t *bmc;
unsigned long flags;
if (bitmap == NULL) {
*blocks = 1024;
return;
}
spin_lock_irqsave(&bitmap->counts.lock, flags);
bmc = md_bitmap_get_counter(&bitmap->counts, offset, blocks, 0);
if (bmc == NULL)
@@ -2060,9 +2043,6 @@ static void bitmap_start_behind_write(struct mddev *mddev)
struct bitmap *bitmap = mddev->bitmap;
int bw;
if (!bitmap)
return;
atomic_inc(&bitmap->behind_writes);
bw = atomic_read(&bitmap->behind_writes);
if (bw > bitmap->behind_writes_used)
@@ -2076,9 +2056,6 @@ static void bitmap_end_behind_write(struct mddev *mddev)
{
struct bitmap *bitmap = mddev->bitmap;
if (!bitmap)
return;
if (atomic_dec_and_test(&bitmap->behind_writes))
wake_up(&bitmap->behind_wait);
pr_debug("dec write-behind count %d/%lu\n",
@@ -2593,15 +2570,14 @@ err:
return ret;
}
static int bitmap_resize(struct mddev *mddev, sector_t blocks, int chunksize,
bool init)
static int bitmap_resize(struct mddev *mddev, sector_t blocks, int chunksize)
{
struct bitmap *bitmap = mddev->bitmap;
if (!bitmap)
return 0;
return __bitmap_resize(bitmap, blocks, chunksize, init);
return __bitmap_resize(bitmap, blocks, chunksize, false);
}
static ssize_t
@@ -2990,12 +2966,19 @@ static struct attribute *md_bitmap_attrs[] = {
&max_backlog_used.attr,
NULL
};
const struct attribute_group md_bitmap_group = {
static struct attribute_group md_bitmap_group = {
.name = "bitmap",
.attrs = md_bitmap_attrs,
};
static struct bitmap_operations bitmap_ops = {
.head = {
.type = MD_BITMAP,
.id = ID_BITMAP,
.name = "bitmap",
},
.enabled = bitmap_enabled,
.create = bitmap_create,
.resize = bitmap_resize,
@@ -3013,6 +2996,9 @@ static struct bitmap_operations bitmap_ops = {
.start_write = bitmap_start_write,
.end_write = bitmap_end_write,
.start_discard = bitmap_start_write,
.end_discard = bitmap_end_write,
.start_sync = bitmap_start_sync,
.end_sync = bitmap_end_sync,
.cond_end_sync = bitmap_cond_end_sync,
@@ -3026,9 +3012,22 @@ static struct bitmap_operations bitmap_ops = {
.copy_from_slot = bitmap_copy_from_slot,
.set_pages = bitmap_set_pages,
.free = md_bitmap_free,
.group = &md_bitmap_group,
};
void mddev_set_bitmap_ops(struct mddev *mddev)
int md_bitmap_init(void)
{
mddev->bitmap_ops = &bitmap_ops;
md_bitmap_wq = alloc_workqueue("md_bitmap", WQ_MEM_RECLAIM | WQ_UNBOUND,
0);
if (!md_bitmap_wq)
return -ENOMEM;
return register_md_submodule(&bitmap_ops.head);
}
void md_bitmap_exit(void)
{
destroy_workqueue(md_bitmap_wq);
unregister_md_submodule(&bitmap_ops.head);
}


@@ -9,10 +9,26 @@
#define BITMAP_MAGIC 0x6d746962
/*
* version 3 is host-endian order; it is deprecated and not used for new
* arrays
*/
#define BITMAP_MAJOR_LO 3
#define BITMAP_MAJOR_HOSTENDIAN 3
/* version 4 is little-endian order, the default value */
#define BITMAP_MAJOR_HI 4
/* version 5 is only used for cluster */
#define BITMAP_MAJOR_CLUSTERED 5
/* version 6 is only used for lockless bitmap */
#define BITMAP_MAJOR_LOCKLESS 6
/* use these for bitmap->flags and bitmap->sb->state bit-fields */
enum bitmap_state {
BITMAP_STALE = 1, /* the bitmap file is out of date or had -EIO */
BITMAP_STALE = 1, /* the bitmap file is out of date or had -EIO */
BITMAP_WRITE_ERROR = 2, /* A write error has occurred */
BITMAP_FIRST_USE = 3, /* llbitmap is just created */
BITMAP_CLEAN = 4, /* llbitmap is created with assume_clean */
BITMAP_DAEMON_BUSY = 5, /* llbitmap daemon is not finished after daemon_sleep */
BITMAP_HOSTENDIAN =15,
};
@@ -61,11 +77,15 @@ struct md_bitmap_stats {
struct file *file;
};
typedef void (md_bitmap_fn)(struct mddev *mddev, sector_t offset,
unsigned long sectors);
struct bitmap_operations {
bool (*enabled)(struct mddev *mddev);
struct md_submodule_head head;
bool (*enabled)(void *data, bool flush);
int (*create)(struct mddev *mddev);
int (*resize)(struct mddev *mddev, sector_t blocks, int chunksize,
bool init);
int (*resize)(struct mddev *mddev, sector_t blocks, int chunksize);
int (*load)(struct mddev *mddev);
void (*destroy)(struct mddev *mddev);
@@ -80,10 +100,13 @@ struct bitmap_operations {
void (*end_behind_write)(struct mddev *mddev);
void (*wait_behind_writes)(struct mddev *mddev);
void (*start_write)(struct mddev *mddev, sector_t offset,
unsigned long sectors);
void (*end_write)(struct mddev *mddev, sector_t offset,
unsigned long sectors);
md_bitmap_fn *start_write;
md_bitmap_fn *end_write;
md_bitmap_fn *start_discard;
md_bitmap_fn *end_discard;
sector_t (*skip_sync_blocks)(struct mddev *mddev, sector_t offset);
bool (*blocks_synced)(struct mddev *mddev, sector_t offset);
bool (*start_sync)(struct mddev *mddev, sector_t offset,
sector_t *blocks, bool degraded);
void (*end_sync)(struct mddev *mddev, sector_t offset, sector_t *blocks);
@@ -101,9 +124,75 @@ struct bitmap_operations {
sector_t *hi, bool clear_bits);
void (*set_pages)(void *data, unsigned long pages);
void (*free)(void *data);
struct attribute_group *group;
};
/* the bitmap API */
void mddev_set_bitmap_ops(struct mddev *mddev);
static inline bool md_bitmap_registered(struct mddev *mddev)
{
return mddev->bitmap_ops != NULL;
}
static inline bool md_bitmap_enabled(struct mddev *mddev, bool flush)
{
/* bitmap_ops must be registered before creating bitmap. */
if (!md_bitmap_registered(mddev))
return false;
if (!mddev->bitmap)
return false;
return mddev->bitmap_ops->enabled(mddev->bitmap, flush);
}
static inline bool md_bitmap_start_sync(struct mddev *mddev, sector_t offset,
sector_t *blocks, bool degraded)
{
/* always resync if no bitmap */
if (!md_bitmap_enabled(mddev, false)) {
*blocks = 1024;
return true;
}
return mddev->bitmap_ops->start_sync(mddev, offset, blocks, degraded);
}
static inline void md_bitmap_end_sync(struct mddev *mddev, sector_t offset,
sector_t *blocks)
{
if (!md_bitmap_enabled(mddev, false)) {
*blocks = 1024;
return;
}
mddev->bitmap_ops->end_sync(mddev, offset, blocks);
}
#ifdef CONFIG_MD_BITMAP
int md_bitmap_init(void);
void md_bitmap_exit(void);
#else
static inline int md_bitmap_init(void)
{
return 0;
}
static inline void md_bitmap_exit(void)
{
}
#endif
#ifdef CONFIG_MD_LLBITMAP
int md_llbitmap_init(void);
void md_llbitmap_exit(void);
#else
static inline int md_llbitmap_init(void)
{
return 0;
}
static inline void md_llbitmap_exit(void)
{
}
#endif
#endif


@@ -630,7 +630,7 @@ static int process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
if (le64_to_cpu(msg->high) != mddev->pers->size(mddev, 0, 0))
ret = mddev->bitmap_ops->resize(mddev,
le64_to_cpu(msg->high),
0, false);
0);
break;
default:
ret = -1;


@@ -257,18 +257,10 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio)
if (unlikely(bio_end_sector(bio) > end_sector)) {
/* This bio crosses a device boundary, so we have to split it */
struct bio *split = bio_split(bio, end_sector - bio_sector,
GFP_NOIO, &mddev->bio_set);
if (IS_ERR(split)) {
bio->bi_status = errno_to_blk_status(PTR_ERR(split));
bio_endio(bio);
bio = bio_submit_split_bioset(bio, end_sector - bio_sector,
&mddev->bio_set);
if (!bio)
return true;
}
bio_chain(split, bio);
submit_bio_noacct(bio);
bio = split;
}
md_account_bio(mddev, &bio);

drivers/md/md-llbitmap.c: new file, 1626 lines (diff suppressed because it is too large)


@@ -94,7 +94,6 @@ static struct workqueue_struct *md_wq;
* workqueue with reconfig_mutex grabbed.
*/
static struct workqueue_struct *md_misc_wq;
struct workqueue_struct *md_bitmap_wq;
static int remove_and_add_spares(struct mddev *mddev,
struct md_rdev *this);
@@ -677,8 +676,64 @@ static void active_io_release(struct percpu_ref *ref)
static void no_op(struct percpu_ref *r) {}
static bool mddev_set_bitmap_ops(struct mddev *mddev)
{
struct bitmap_operations *old = mddev->bitmap_ops;
struct md_submodule_head *head;
if (mddev->bitmap_id == ID_BITMAP_NONE ||
(old && old->head.id == mddev->bitmap_id))
return true;
xa_lock(&md_submodule);
head = xa_load(&md_submodule, mddev->bitmap_id);
if (!head) {
pr_warn("md: can't find bitmap id %d\n", mddev->bitmap_id);
goto err;
}
if (head->type != MD_BITMAP) {
pr_warn("md: invalid bitmap id %d\n", mddev->bitmap_id);
goto err;
}
mddev->bitmap_ops = (void *)head;
xa_unlock(&md_submodule);
if (!mddev_is_dm(mddev) && mddev->bitmap_ops->group) {
if (sysfs_create_group(&mddev->kobj, mddev->bitmap_ops->group))
pr_warn("md: cannot register extra bitmap attributes for %s\n",
mdname(mddev));
else
/*
* Inform user with KOBJ_CHANGE about new bitmap
* attributes.
*/
kobject_uevent(&mddev->kobj, KOBJ_CHANGE);
}
return true;
err:
xa_unlock(&md_submodule);
return false;
}
static void mddev_clear_bitmap_ops(struct mddev *mddev)
{
if (!mddev_is_dm(mddev) && mddev->bitmap_ops &&
mddev->bitmap_ops->group)
sysfs_remove_group(&mddev->kobj, mddev->bitmap_ops->group);
mddev->bitmap_ops = NULL;
}
int mddev_init(struct mddev *mddev)
{
if (!IS_ENABLED(CONFIG_MD_BITMAP))
mddev->bitmap_id = ID_BITMAP_NONE;
else
mddev->bitmap_id = ID_BITMAP;
if (percpu_ref_init(&mddev->active_io, active_io_release,
PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
@@ -713,7 +768,6 @@ int mddev_init(struct mddev *mddev)
mddev->resync_min = 0;
mddev->resync_max = MaxSector;
mddev->level = LEVEL_NONE;
mddev_set_bitmap_ops(mddev);
INIT_WORK(&mddev->sync_work, md_start_sync);
INIT_WORK(&mddev->del_work, mddev_delayed_delete);
@@ -1020,15 +1074,26 @@ static void super_written(struct bio *bio)
wake_up(&mddev->sb_wait);
}
void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
sector_t sector, int size, struct page *page)
/**
* md_write_metadata - write metadata to underlying disk, including
* array superblock, badblocks, bitmap superblock and bitmap bits.
* @mddev: the array to write
* @rdev: the underlying disk to write
* @sector: the offset to @rdev
* @size: the length of the metadata
* @page: the metadata
* @offset: the offset to @page
*
* Write @size bytes of @page, starting at @offset, to @sector of @rdev. Increment
* mddev->pending_writes before returning and decrement it on completion, waking
* up sb_wait. The caller must call md_super_wait() after issuing IO to all rdevs.
* If an error occurs, md_error() will be called and the @rdev will be kicked out
* of @mddev.
*/
void md_write_metadata(struct mddev *mddev, struct md_rdev *rdev,
sector_t sector, int size, struct page *page,
unsigned int offset)
{
/* write first size bytes of page to sector of rdev
* Increment mddev->pending_writes before returning
* and decrement it on completion, waking up sb_wait
* if zero is reached.
* If an error occurred, call md_error
*/
struct bio *bio;
if (!page)
@@ -1046,7 +1111,7 @@ void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
atomic_inc(&rdev->nr_pending);
bio->bi_iter.bi_sector = sector;
__bio_add_page(bio, page, size, 0);
__bio_add_page(bio, page, size, offset);
bio->bi_private = rdev;
bio->bi_end_io = super_written;
@@ -1356,6 +1421,9 @@ static u64 md_bitmap_events_cleared(struct mddev *mddev)
struct md_bitmap_stats stats;
int err;
if (!md_bitmap_enabled(mddev, false))
return 0;
err = mddev->bitmap_ops->get_stats(mddev->bitmap, &stats);
if (err)
return 0;
@@ -1653,8 +1721,8 @@ super_90_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
if ((u64)num_sectors >= (2ULL << 32) && rdev->mddev->level >= 1)
num_sectors = (sector_t)(2ULL << 32) - 2;
do {
md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
rdev->sb_page);
md_write_metadata(rdev->mddev, rdev, rdev->sb_start,
rdev->sb_size, rdev->sb_page, 0);
} while (md_super_wait(rdev->mddev) < 0);
return num_sectors;
}
@@ -2302,8 +2370,8 @@ super_1_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
sb->super_offset = cpu_to_le64(rdev->sb_start);
sb->sb_csum = calc_sb_1_csum(sb);
do {
md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
rdev->sb_page);
md_write_metadata(rdev->mddev, rdev, rdev->sb_start,
rdev->sb_size, rdev->sb_page, 0);
} while (md_super_wait(rdev->mddev) < 0);
return num_sectors;
@@ -2313,13 +2381,15 @@ static int
super_1_allow_new_offset(struct md_rdev *rdev,
unsigned long long new_offset)
{
struct mddev *mddev = rdev->mddev;
/* All necessary checks on new >= old have been done */
if (new_offset >= rdev->data_offset)
return 1;
/* with 1.0 metadata, there is no metadata to tread on
* so we can always move back */
if (rdev->mddev->minor_version == 0)
if (mddev->minor_version == 0)
return 1;
/* otherwise we must be sure not to step on
@@ -2331,8 +2401,7 @@ super_1_allow_new_offset(struct md_rdev *rdev,
if (rdev->sb_start + (32+4)*2 > new_offset)
return 0;
if (!rdev->mddev->bitmap_info.file) {
struct mddev *mddev = rdev->mddev;
if (md_bitmap_registered(mddev) && !mddev->bitmap_info.file) {
struct md_bitmap_stats stats;
int err;
@@ -2804,24 +2873,24 @@ repeat:
mddev_add_trace_msg(mddev, "md md_update_sb");
rewrite:
mddev->bitmap_ops->update_sb(mddev->bitmap);
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->update_sb(mddev->bitmap);
rdev_for_each(rdev, mddev) {
if (rdev->sb_loaded != 1)
continue; /* no noise on spare devices */
if (!test_bit(Faulty, &rdev->flags)) {
md_super_write(mddev,rdev,
rdev->sb_start, rdev->sb_size,
rdev->sb_page);
md_write_metadata(mddev, rdev, rdev->sb_start,
rdev->sb_size, rdev->sb_page, 0);
pr_debug("md: (write) %pg's sb offset: %llu\n",
rdev->bdev,
(unsigned long long)rdev->sb_start);
rdev->sb_events = mddev->events;
if (rdev->badblocks.size) {
md_super_write(mddev, rdev,
rdev->badblocks.sector,
rdev->badblocks.size << 9,
rdev->bb_page);
md_write_metadata(mddev, rdev,
rdev->badblocks.sector,
rdev->badblocks.size << 9,
rdev->bb_page, 0);
rdev->badblocks.size = 0;
}
@@ -4149,6 +4218,86 @@ new_level_store(struct mddev *mddev, const char *buf, size_t len)
static struct md_sysfs_entry md_new_level =
__ATTR(new_level, 0664, new_level_show, new_level_store);
static ssize_t
bitmap_type_show(struct mddev *mddev, char *page)
{
struct md_submodule_head *head;
unsigned long i;
ssize_t len = 0;
if (mddev->bitmap_id == ID_BITMAP_NONE)
len += sprintf(page + len, "[none] ");
else
len += sprintf(page + len, "none ");
xa_lock(&md_submodule);
xa_for_each(&md_submodule, i, head) {
if (head->type != MD_BITMAP)
continue;
if (mddev->bitmap_id == head->id)
len += sprintf(page + len, "[%s] ", head->name);
else
len += sprintf(page + len, "%s ", head->name);
}
xa_unlock(&md_submodule);
len += sprintf(page + len, "\n");
return len;
}
static ssize_t
bitmap_type_store(struct mddev *mddev, const char *buf, size_t len)
{
struct md_submodule_head *head;
enum md_submodule_id id;
unsigned long i;
int err = 0;
xa_lock(&md_submodule);
if (mddev->bitmap_ops) {
err = -EBUSY;
goto out;
}
if (cmd_match(buf, "none")) {
mddev->bitmap_id = ID_BITMAP_NONE;
goto out;
}
xa_for_each(&md_submodule, i, head) {
if (head->type == MD_BITMAP && cmd_match(buf, head->name)) {
mddev->bitmap_id = head->id;
goto out;
}
}
err = kstrtoint(buf, 10, &id);
if (err)
goto out;
if (id == ID_BITMAP_NONE) {
mddev->bitmap_id = id;
goto out;
}
head = xa_load(&md_submodule, id);
if (head && head->type == MD_BITMAP) {
mddev->bitmap_id = id;
goto out;
}
err = -ENOENT;
out:
xa_unlock(&md_submodule);
return err ? err : len;
}
static struct md_sysfs_entry md_bitmap_type =
__ATTR(bitmap_type, 0664, bitmap_type_show, bitmap_type_store);
static ssize_t
layout_show(struct mddev *mddev, char *page)
{
@@ -4680,6 +4829,9 @@ bitmap_store(struct mddev *mddev, const char *buf, size_t len)
unsigned long chunk, end_chunk;
int err;
if (!md_bitmap_enabled(mddev, false))
return len;
err = mddev_lock(mddev);
if (err)
return err;
@@ -5752,6 +5904,7 @@ __ATTR(serialize_policy, S_IRUGO | S_IWUSR, serialize_policy_show,
static struct attribute *md_default_attrs[] = {
&md_level.attr,
&md_new_level.attr,
&md_bitmap_type.attr,
&md_layout.attr,
&md_raid_disks.attr,
&md_uuid.attr,
@@ -5801,7 +5954,6 @@ static const struct attribute_group md_redundancy_group = {
static const struct attribute_group *md_attr_groups[] = {
&md_default_group,
&md_bitmap_group,
NULL,
};
@@ -6133,6 +6285,26 @@ static void md_safemode_timeout(struct timer_list *t)
static int start_dirty_degraded;
static int md_bitmap_create(struct mddev *mddev)
{
if (mddev->bitmap_id == ID_BITMAP_NONE)
return -EINVAL;
if (!mddev_set_bitmap_ops(mddev))
return -ENOENT;
return mddev->bitmap_ops->create(mddev);
}
static void md_bitmap_destroy(struct mddev *mddev)
{
if (!md_bitmap_registered(mddev))
return;
mddev->bitmap_ops->destroy(mddev);
mddev_clear_bitmap_ops(mddev);
}
int md_run(struct mddev *mddev)
{
int err;
@@ -6299,7 +6471,7 @@ int md_run(struct mddev *mddev)
}
if (err == 0 && pers->sync_request &&
(mddev->bitmap_info.file || mddev->bitmap_info.offset)) {
err = mddev->bitmap_ops->create(mddev);
err = md_bitmap_create(mddev);
if (err)
pr_warn("%s: failed to create bitmap (%d)\n",
mdname(mddev), err);
@@ -6372,7 +6544,7 @@ bitmap_abort:
pers->free(mddev, mddev->private);
mddev->private = NULL;
put_pers(pers);
mddev->bitmap_ops->destroy(mddev);
md_bitmap_destroy(mddev);
abort:
bioset_exit(&mddev->io_clone_set);
exit_sync_set:
@@ -6392,10 +6564,12 @@ int do_md_run(struct mddev *mddev)
if (err)
goto out;
err = mddev->bitmap_ops->load(mddev);
if (err) {
mddev->bitmap_ops->destroy(mddev);
goto out;
if (md_bitmap_registered(mddev)) {
err = mddev->bitmap_ops->load(mddev);
if (err) {
md_bitmap_destroy(mddev);
goto out;
}
}
if (mddev_is_clustered(mddev))
@@ -6546,7 +6720,8 @@ static void __md_stop_writes(struct mddev *mddev)
mddev->pers->quiesce(mddev, 0);
}
mddev->bitmap_ops->flush(mddev);
if (md_bitmap_enabled(mddev, true))
mddev->bitmap_ops->flush(mddev);
if (md_is_rdwr(mddev) &&
((!mddev->in_sync && !mddev_is_clustered(mddev)) ||
@@ -6573,7 +6748,8 @@ EXPORT_SYMBOL_GPL(md_stop_writes);
static void mddev_detach(struct mddev *mddev)
{
mddev->bitmap_ops->wait_behind_writes(mddev);
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->wait_behind_writes(mddev);
if (mddev->pers && mddev->pers->quiesce && !is_md_suspended(mddev)) {
mddev->pers->quiesce(mddev, 1);
mddev->pers->quiesce(mddev, 0);
@@ -6589,7 +6765,7 @@ static void __md_stop(struct mddev *mddev)
{
struct md_personality *pers = mddev->pers;
mddev->bitmap_ops->destroy(mddev);
md_bitmap_destroy(mddev);
mddev_detach(mddev);
spin_lock(&mddev->lock);
mddev->pers = NULL;
@@ -7307,6 +7483,9 @@ static int set_bitmap_file(struct mddev *mddev, int fd)
{
int err = 0;
if (!md_bitmap_registered(mddev))
return -EINVAL;
if (mddev->pers) {
if (!mddev->pers->quiesce || !mddev->thread)
return -EBUSY;
@@ -7363,16 +7542,16 @@ static int set_bitmap_file(struct mddev *mddev, int fd)
err = 0;
if (mddev->pers) {
if (fd >= 0) {
err = mddev->bitmap_ops->create(mddev);
err = md_bitmap_create(mddev);
if (!err)
err = mddev->bitmap_ops->load(mddev);
if (err) {
mddev->bitmap_ops->destroy(mddev);
md_bitmap_destroy(mddev);
fd = -1;
}
} else if (fd < 0) {
mddev->bitmap_ops->destroy(mddev);
md_bitmap_destroy(mddev);
}
}
@@ -7679,12 +7858,12 @@ static int update_array_info(struct mddev *mddev, mdu_array_info_t *info)
mddev->bitmap_info.default_offset;
mddev->bitmap_info.space =
mddev->bitmap_info.default_space;
rv = mddev->bitmap_ops->create(mddev);
rv = md_bitmap_create(mddev);
if (!rv)
rv = mddev->bitmap_ops->load(mddev);
if (rv)
mddev->bitmap_ops->destroy(mddev);
md_bitmap_destroy(mddev);
} else {
struct md_bitmap_stats stats;
@@ -7710,7 +7889,7 @@ static int update_array_info(struct mddev *mddev, mdu_array_info_t *info)
put_cluster_ops(mddev);
mddev->safemode_delay = DEFAULT_SAFEMODE_DELAY;
}
mddev->bitmap_ops->destroy(mddev);
md_bitmap_destroy(mddev);
mddev->bitmap_info.offset = 0;
}
}
@@ -7747,9 +7926,9 @@ static int set_disk_faulty(struct mddev *mddev, dev_t dev)
* 4 sectors (with a BIG number of cylinders...). This drives
* dosfs just mad... ;-)
*/
static int md_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int md_getgeo(struct gendisk *disk, struct hd_geometry *geo)
{
struct mddev *mddev = bdev->bd_disk->private_data;
struct mddev *mddev = disk->private_data;
geo->heads = 2;
geo->sectors = 4;
@@ -8491,6 +8670,9 @@ static void md_bitmap_status(struct seq_file *seq, struct mddev *mddev)
unsigned long chunk_kb;
int err;
if (!md_bitmap_enabled(mddev, false))
return;
err = mddev->bitmap_ops->get_stats(mddev->bitmap, &stats);
if (err)
return;
@@ -8873,18 +9055,24 @@ EXPORT_SYMBOL_GPL(md_submit_discard_bio);
static void md_bitmap_start(struct mddev *mddev,
struct md_io_clone *md_io_clone)
{
md_bitmap_fn *fn = unlikely(md_io_clone->rw == STAT_DISCARD) ?
mddev->bitmap_ops->start_discard :
mddev->bitmap_ops->start_write;
if (mddev->pers->bitmap_sector)
mddev->pers->bitmap_sector(mddev, &md_io_clone->offset,
&md_io_clone->sectors);
mddev->bitmap_ops->start_write(mddev, md_io_clone->offset,
md_io_clone->sectors);
fn(mddev, md_io_clone->offset, md_io_clone->sectors);
}
static void md_bitmap_end(struct mddev *mddev, struct md_io_clone *md_io_clone)
{
mddev->bitmap_ops->end_write(mddev, md_io_clone->offset,
md_io_clone->sectors);
md_bitmap_fn *fn = unlikely(md_io_clone->rw == STAT_DISCARD) ?
mddev->bitmap_ops->end_discard :
mddev->bitmap_ops->end_write;
fn(mddev, md_io_clone->offset, md_io_clone->sectors);
}
static void md_end_clone_io(struct bio *bio)
@@ -8893,7 +9081,7 @@ static void md_end_clone_io(struct bio *bio)
struct bio *orig_bio = md_io_clone->orig_bio;
struct mddev *mddev = md_io_clone->mddev;
if (bio_data_dir(orig_bio) == WRITE && mddev->bitmap)
if (bio_data_dir(orig_bio) == WRITE && md_bitmap_enabled(mddev, false))
md_bitmap_end(mddev, md_io_clone);
if (bio->bi_status && !orig_bio->bi_status)
@@ -8920,9 +9108,10 @@ static void md_clone_bio(struct mddev *mddev, struct bio **bio)
if (blk_queue_io_stat(bdev->bd_disk->queue))
md_io_clone->start_time = bio_start_io_acct(*bio);
if (bio_data_dir(*bio) == WRITE && mddev->bitmap) {
if (bio_data_dir(*bio) == WRITE && md_bitmap_enabled(mddev, false)) {
md_io_clone->offset = (*bio)->bi_iter.bi_sector;
md_io_clone->sectors = bio_sectors(*bio);
md_io_clone->rw = op_stat_group(bio_op(*bio));
md_bitmap_start(mddev, md_io_clone);
}
@@ -8944,7 +9133,7 @@ void md_free_cloned_bio(struct bio *bio)
struct bio *orig_bio = md_io_clone->orig_bio;
struct mddev *mddev = md_io_clone->mddev;
if (bio_data_dir(orig_bio) == WRITE && mddev->bitmap)
if (bio_data_dir(orig_bio) == WRITE && md_bitmap_enabled(mddev, false))
md_bitmap_end(mddev, md_io_clone);
if (bio->bi_status && !orig_bio->bi_status)
@@ -9010,6 +9199,39 @@ static sector_t md_sync_max_sectors(struct mddev *mddev,
}
}
/*
* If lazy recovery is requested and all rdevs are in sync, select the rdev with
* the highest index to perform recovery to build the initial xor data; this is
* the same as the old bitmap.
*/
static bool mddev_select_lazy_recover_rdev(struct mddev *mddev)
{
struct md_rdev *recover_rdev = NULL;
struct md_rdev *rdev;
bool ret = false;
rcu_read_lock();
rdev_for_each_rcu(rdev, mddev) {
if (rdev->raid_disk < 0)
continue;
if (test_bit(Faulty, &rdev->flags) ||
!test_bit(In_sync, &rdev->flags))
break;
if (!recover_rdev || recover_rdev->raid_disk < rdev->raid_disk)
recover_rdev = rdev;
}
if (recover_rdev) {
clear_bit(In_sync, &recover_rdev->flags);
ret = true;
}
rcu_read_unlock();
return ret;
}
static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
{
sector_t start = 0;
@@ -9041,6 +9263,14 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
start = rdev->recovery_offset;
rcu_read_unlock();
/*
* If there are no spares and raid456 lazy initial recovery is
* requested, start the recovery from sector 0.
*/
if (test_bit(MD_RECOVERY_LAZY_RECOVER, &mddev->recovery) &&
start == MaxSector && mddev_select_lazy_recover_rdev(mddev))
start = 0;
/* If there is a bitmap, we need to make sure all
* writes that started before we added a spare
* complete before we start doing a recovery.
@@ -9061,19 +9291,12 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
static bool sync_io_within_limit(struct mddev *mddev)
{
int io_sectors;
/*
* For raid456, sync IO is stripe(4k) per IO, for other levels, it's
* RESYNC_PAGES(64k) per IO.
*/
if (mddev->level == 4 || mddev->level == 5 || mddev->level == 6)
io_sectors = 8;
else
io_sectors = 128;
return atomic_read(&mddev->recovery_active) <
io_sectors * sync_io_depth(mddev);
(raid_is_456(mddev) ? 8 : 128) * sync_io_depth(mddev);
}
#define SYNC_MARKS 10
@@ -9283,6 +9506,12 @@ void md_do_sync(struct md_thread *thread)
if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
break;
if (mddev->bitmap_ops && mddev->bitmap_ops->skip_sync_blocks) {
sectors = mddev->bitmap_ops->skip_sync_blocks(mddev, j);
if (sectors)
goto update;
}
sectors = mddev->pers->sync_request(mddev, j, max_sectors,
&skipped);
if (sectors == 0) {
@@ -9298,6 +9527,7 @@ void md_do_sync(struct md_thread *thread)
if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
break;
update:
j += sectors;
if (j > max_sectors)
/* when skipping, extra large numbers can be returned. */
@@ -9607,6 +9837,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
clear_bit(MD_RECOVERY_LAZY_RECOVER, &mddev->recovery);
return true;
}
@@ -9615,6 +9846,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
remove_spares(mddev, NULL);
set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
clear_bit(MD_RECOVERY_LAZY_RECOVER, &mddev->recovery);
return true;
}
@@ -9624,7 +9856,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
* re-add.
*/
*spares = remove_and_add_spares(mddev, NULL);
if (*spares) {
if (*spares || test_bit(MD_RECOVERY_LAZY_RECOVER, &mddev->recovery)) {
clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
@@ -9682,7 +9914,7 @@ static void md_start_sync(struct work_struct *ws)
* We are adding a device or devices to an array which has the bitmap
* stored on all devices. So make sure all bitmap pages get written.
*/
if (spares)
if (spares && md_bitmap_enabled(mddev, true))
mddev->bitmap_ops->write_all(mddev);
name = test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) ?
@@ -9770,7 +10002,7 @@ static void unregister_sync_thread(struct mddev *mddev)
*/
void md_check_recovery(struct mddev *mddev)
{
if (mddev->bitmap)
if (md_bitmap_enabled(mddev, false) && mddev->bitmap_ops->daemon_work)
mddev->bitmap_ops->daemon_work(mddev);
if (signal_pending(current)) {
@@ -9837,6 +10069,7 @@ void md_check_recovery(struct mddev *mddev)
}
clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
clear_bit(MD_RECOVERY_LAZY_RECOVER, &mddev->recovery);
clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
clear_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags);
@@ -9947,6 +10180,7 @@ void md_reap_sync_thread(struct mddev *mddev)
clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
clear_bit(MD_RECOVERY_LAZY_RECOVER, &mddev->recovery);
/*
* We call mddev->cluster_ops->update_size here because sync_size could
* be changed by md_update_sb, and MD_RECOVERY_RESHAPE is cleared,
@@ -10094,8 +10328,16 @@ static void md_geninit(void)
static int __init md_init(void)
{
int ret = -ENOMEM;
int ret = md_bitmap_init();
if (ret)
return ret;
ret = md_llbitmap_init();
if (ret)
goto err_bitmap;
ret = -ENOMEM;
md_wq = alloc_workqueue("md", WQ_MEM_RECLAIM, 0);
if (!md_wq)
goto err_wq;
@@ -10104,11 +10346,6 @@ static int __init md_init(void)
if (!md_misc_wq)
goto err_misc_wq;
md_bitmap_wq = alloc_workqueue("md_bitmap", WQ_MEM_RECLAIM | WQ_UNBOUND,
0);
if (!md_bitmap_wq)
goto err_bitmap_wq;
ret = __register_blkdev(MD_MAJOR, "md", md_probe);
if (ret < 0)
goto err_md;
@@ -10127,12 +10364,13 @@ static int __init md_init(void)
err_mdp:
unregister_blkdev(MD_MAJOR, "md");
err_md:
destroy_workqueue(md_bitmap_wq);
err_bitmap_wq:
destroy_workqueue(md_misc_wq);
err_misc_wq:
destroy_workqueue(md_wq);
err_wq:
md_llbitmap_exit();
err_bitmap:
md_bitmap_exit();
return ret;
}
@@ -10150,7 +10388,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
ret = mddev->pers->resize(mddev, le64_to_cpu(sb->size));
if (ret)
pr_info("md-cluster: resize failed\n");
else
else if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->update_sb(mddev->bitmap);
}
@@ -10438,8 +10676,8 @@ static __exit void md_exit(void)
spin_unlock(&all_mddevs_lock);
destroy_workqueue(md_misc_wq);
destroy_workqueue(md_bitmap_wq);
destroy_workqueue(md_wq);
md_bitmap_exit();
}
subsys_initcall(md_init);


@@ -26,7 +26,7 @@
enum md_submodule_type {
MD_PERSONALITY = 0,
MD_CLUSTER,
MD_BITMAP, /* TODO */
MD_BITMAP,
};
enum md_submodule_id {
@@ -38,8 +38,9 @@ enum md_submodule_id {
ID_RAID6 = 6,
ID_RAID10 = 10,
ID_CLUSTER,
ID_BITMAP, /* TODO */
ID_LLBITMAP, /* TODO */
ID_BITMAP,
ID_LLBITMAP,
ID_BITMAP_NONE,
};
struct md_submodule_head {
@@ -565,6 +566,7 @@ struct mddev {
struct percpu_ref writes_pending;
int sync_checkers; /* # of threads checking writes_pending */
enum md_submodule_id bitmap_id;
void *bitmap; /* the bitmap for the device */
struct bitmap_operations *bitmap_ops;
struct {
@@ -665,6 +667,8 @@ enum recovery_flags {
MD_RECOVERY_RESHAPE,
/* remote node is running resync thread */
MD_RESYNCING_REMOTE,
/* raid456 lazy initial recover */
MD_RECOVERY_LAZY_RECOVER,
};
enum md_ro_state {
@@ -796,7 +800,6 @@ struct md_sysfs_entry {
ssize_t (*show)(struct mddev *, char *);
ssize_t (*store)(struct mddev *, const char *, size_t);
};
extern const struct attribute_group md_bitmap_group;
static inline struct kernfs_node *sysfs_get_dirent_safe(struct kernfs_node *sd, char *name)
{
@@ -873,6 +876,7 @@ struct md_io_clone {
unsigned long start_time;
sector_t offset;
unsigned long sectors;
enum stat_group rw;
struct bio bio_clone;
};
@@ -909,8 +913,9 @@ void md_account_bio(struct mddev *mddev, struct bio **bio);
void md_free_cloned_bio(struct bio *bio);
extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
sector_t sector, int size, struct page *page);
void md_write_metadata(struct mddev *mddev, struct md_rdev *rdev,
sector_t sector, int size, struct page *page,
unsigned int offset);
extern int md_super_wait(struct mddev *mddev);
extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
struct page *page, blk_opf_t opf, bool metadata_op);
@@ -1013,7 +1018,6 @@ struct mdu_array_info_s;
struct mdu_disk_info_s;
extern int mdp_major;
extern struct workqueue_struct *md_bitmap_wq;
void md_autostart_arrays(int part);
int md_set_array_info(struct mddev *mddev, struct mdu_array_info_s *info);
int md_add_new_disk(struct mddev *mddev, struct mdu_disk_info_s *info);
@@ -1034,6 +1038,12 @@ static inline bool mddev_is_dm(struct mddev *mddev)
return !mddev->gendisk;
}
static inline bool raid_is_456(struct mddev *mddev)
{
return mddev->level == ID_RAID4 || mddev->level == ID_RAID5 ||
mddev->level == ID_RAID6;
}
static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
sector_t sector)
{


@@ -464,21 +464,16 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
zone = find_zone(conf, &start);
if (bio_end_sector(bio) > zone->zone_end) {
struct bio *split = bio_split(bio,
zone->zone_end - bio->bi_iter.bi_sector, GFP_NOIO,
&mddev->bio_set);
if (IS_ERR(split)) {
bio->bi_status = errno_to_blk_status(PTR_ERR(split));
bio_endio(bio);
bio = bio_submit_split_bioset(bio,
zone->zone_end - bio->bi_iter.bi_sector,
&mddev->bio_set);
if (!bio)
return;
}
bio_chain(split, bio);
submit_bio_noacct(bio);
bio = split;
end = zone->zone_end;
} else
} else {
end = bio_end_sector(bio);
}
orig_end = end;
if (zone != conf->strip_zone)
@@ -613,17 +608,10 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
: sector_div(sector, chunk_sects));
if (sectors < bio_sectors(bio)) {
struct bio *split = bio_split(bio, sectors, GFP_NOIO,
bio = bio_submit_split_bioset(bio, sectors,
&mddev->bio_set);
if (IS_ERR(split)) {
bio->bi_status = errno_to_blk_status(PTR_ERR(split));
bio_endio(bio);
if (!bio)
return true;
}
bio_chain(split, bio);
raid0_map_submit_bio(mddev, bio);
bio = split;
}
raid0_map_submit_bio(mddev, bio);


@@ -140,7 +140,7 @@ static inline bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
* If bitmap is not enabled, it's safe to submit the io directly, and
* this can get optimal performance.
*/
if (!mddev->bitmap_ops->enabled(mddev)) {
if (!md_bitmap_enabled(mddev, true)) {
raid1_submit_write(bio);
return true;
}


@@ -167,7 +167,7 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
bio = bio_kmalloc(RESYNC_PAGES, gfp_flags);
if (!bio)
goto out_free_bio;
bio_init(bio, NULL, bio->bi_inline_vecs, RESYNC_PAGES, 0);
bio_init_inline(bio, NULL, RESYNC_PAGES, 0);
r1_bio->bios[j] = bio;
}
/*
@@ -1317,7 +1317,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
struct raid1_info *mirror;
struct bio *read_bio;
int max_sectors;
int rdisk, error;
int rdisk;
bool r1bio_existed = !!r1_bio;
/*
@@ -1366,7 +1366,8 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
(unsigned long long)r1_bio->sector,
mirror->rdev->bdev);
if (test_bit(WriteMostly, &mirror->rdev->flags)) {
if (test_bit(WriteMostly, &mirror->rdev->flags) &&
md_bitmap_enabled(mddev, false)) {
/*
* Reading from a write-mostly device must take care not to
* over-take any writes that are 'behind'
@@ -1376,16 +1377,13 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
}
if (max_sectors < bio_sectors(bio)) {
struct bio *split = bio_split(bio, max_sectors,
gfp, &conf->bio_split);
if (IS_ERR(split)) {
error = PTR_ERR(split);
bio = bio_submit_split_bioset(bio, max_sectors,
&conf->bio_split);
if (!bio) {
set_bit(R1BIO_Returned, &r1_bio->state);
goto err_handle;
}
bio_chain(split, bio);
submit_bio_noacct(bio);
bio = split;
r1_bio->master_bio = bio;
r1_bio->sectors = max_sectors;
}
@@ -1413,8 +1411,6 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
err_handle:
atomic_dec(&mirror->rdev->nr_pending);
bio->bi_status = errno_to_blk_status(error);
set_bit(R1BIO_Uptodate, &r1_bio->state);
raid_end_bio_io(r1_bio);
}
@@ -1452,12 +1448,36 @@ retry:
return true;
}
static void raid1_start_write_behind(struct mddev *mddev, struct r1bio *r1_bio,
struct bio *bio)
{
unsigned long max_write_behind = mddev->bitmap_info.max_write_behind;
struct md_bitmap_stats stats;
int err;
/* behind writes rely on the bitmap, see bitmap_operations */
if (!md_bitmap_enabled(mddev, false))
return;
err = mddev->bitmap_ops->get_stats(mddev->bitmap, &stats);
if (err)
return;
/* Don't do behind IO if reader is waiting, or there are too many. */
if (!stats.behind_wait && stats.behind_writes < max_write_behind)
alloc_behind_master_bio(r1_bio, bio);
if (test_bit(R1BIO_BehindIO, &r1_bio->state))
mddev->bitmap_ops->start_behind_write(mddev);
}
static void raid1_write_request(struct mddev *mddev, struct bio *bio,
int max_write_sectors)
{
struct r1conf *conf = mddev->private;
struct r1bio *r1_bio;
int i, disks, k, error;
int i, disks, k;
unsigned long flags;
int first_clone;
int max_sectors;
@@ -1561,10 +1581,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
* complexity of supporting that is not worth
* the benefit.
*/
if (bio->bi_opf & REQ_ATOMIC) {
error = -EIO;
if (bio->bi_opf & REQ_ATOMIC)
goto err_handle;
}
good_sectors = first_bad - r1_bio->sector;
if (good_sectors < max_sectors)
@@ -1584,16 +1602,13 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
max_sectors = min_t(int, max_sectors,
BIO_MAX_VECS * (PAGE_SIZE >> 9));
if (max_sectors < bio_sectors(bio)) {
struct bio *split = bio_split(bio, max_sectors,
GFP_NOIO, &conf->bio_split);
if (IS_ERR(split)) {
error = PTR_ERR(split);
bio = bio_submit_split_bioset(bio, max_sectors,
&conf->bio_split);
if (!bio) {
set_bit(R1BIO_Returned, &r1_bio->state);
goto err_handle;
}
bio_chain(split, bio);
submit_bio_noacct(bio);
bio = split;
r1_bio->master_bio = bio;
r1_bio->sectors = max_sectors;
}
@@ -1612,22 +1627,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
continue;
if (first_clone) {
unsigned long max_write_behind =
mddev->bitmap_info.max_write_behind;
struct md_bitmap_stats stats;
int err;
/* do behind I/O ?
* Not if there are too many, or cannot
* allocate memory, or a reader on WriteMostly
* is waiting for behind writes to flush */
err = mddev->bitmap_ops->get_stats(mddev->bitmap, &stats);
if (!err && write_behind && !stats.behind_wait &&
stats.behind_writes < max_write_behind)
alloc_behind_master_bio(r1_bio, bio);
if (test_bit(R1BIO_BehindIO, &r1_bio->state))
mddev->bitmap_ops->start_behind_write(mddev);
if (write_behind)
raid1_start_write_behind(mddev, r1_bio, bio);
first_clone = 0;
}
@@ -1683,8 +1684,6 @@ err_handle:
}
}
bio->bi_status = errno_to_blk_status(error);
set_bit(R1BIO_Uptodate, &r1_bio->state);
raid_end_bio_io(r1_bio);
}
@@ -2057,7 +2056,7 @@ static void abort_sync_write(struct mddev *mddev, struct r1bio *r1_bio)
/* make sure these bits don't get cleared. */
do {
mddev->bitmap_ops->end_sync(mddev, s, &sync_blocks);
md_bitmap_end_sync(mddev, s, &sync_blocks);
s += sync_blocks;
sectors_to_go -= sync_blocks;
} while (sectors_to_go > 0);
@@ -2804,12 +2803,13 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
* We can find the current address in mddev->curr_resync
*/
if (mddev->curr_resync < max_sector) /* aborted */
mddev->bitmap_ops->end_sync(mddev, mddev->curr_resync,
&sync_blocks);
md_bitmap_end_sync(mddev, mddev->curr_resync,
&sync_blocks);
else /* completed sync */
conf->fullsync = 0;
mddev->bitmap_ops->close_sync(mddev);
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->close_sync(mddev);
close_sync(conf);
if (mddev_is_clustered(mddev)) {
@@ -2829,7 +2829,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
/* Before building a request, check if we can skip these blocks.
* This call to bitmap_start_sync doesn't actually record anything.
*/
if (!mddev->bitmap_ops->start_sync(mddev, sector_nr, &sync_blocks, true) &&
if (!md_bitmap_start_sync(mddev, sector_nr, &sync_blocks, true) &&
!conf->fullsync && !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
/* We can skip this block, and probably several more */
*skipped = 1;
@@ -2846,10 +2846,11 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
/* we are incrementing sector_nr below. To be safe, we check against
* sector_nr + two times RESYNC_SECTORS
*/
mddev->bitmap_ops->cond_end_sync(mddev, sector_nr,
mddev_is_clustered(mddev) &&
(sector_nr + 2 * RESYNC_SECTORS > conf->cluster_sync_high));
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->cond_end_sync(mddev, sector_nr,
mddev_is_clustered(mddev) &&
(sector_nr + 2 * RESYNC_SECTORS >
conf->cluster_sync_high));
if (raise_barrier(conf, sector_nr))
return 0;
@@ -3004,8 +3005,8 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
if (len == 0)
break;
if (sync_blocks == 0) {
if (!mddev->bitmap_ops->start_sync(mddev, sector_nr,
&sync_blocks, still_degraded) &&
if (!md_bitmap_start_sync(mddev, sector_nr,
&sync_blocks, still_degraded) &&
!conf->fullsync &&
!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
break;
@@ -3325,15 +3326,17 @@ static int raid1_resize(struct mddev *mddev, sector_t sectors)
* worth it.
*/
sector_t newsize = raid1_size(mddev, sectors, 0);
int ret;
if (mddev->external_size &&
mddev->array_sectors > newsize)
return -EINVAL;
ret = mddev->bitmap_ops->resize(mddev, newsize, 0, false);
if (ret)
return ret;
if (md_bitmap_enabled(mddev, false)) {
int ret = mddev->bitmap_ops->resize(mddev, newsize, 0);
if (ret)
return ret;
}
md_set_array_sectors(mddev, newsize);
if (sectors > mddev->dev_sectors &&


@@ -178,7 +178,9 @@ enum r1bio_state {
* any write was successful. Otherwise we call when
* any write-behind write succeeds, otherwise we call
* with failure when last write completes (and all failed).
* Record that bi_end_io was called with this flag...
*
* And for bio_split errors, record that bi_end_io was called
* with this flag...
*/
R1BIO_Returned,
/* If a write for this request means we can clear some


@@ -163,14 +163,14 @@ static void * r10buf_pool_alloc(gfp_t gfp_flags, void *data)
bio = bio_kmalloc(RESYNC_PAGES, gfp_flags);
if (!bio)
goto out_free_bio;
bio_init(bio, NULL, bio->bi_inline_vecs, RESYNC_PAGES, 0);
bio_init_inline(bio, NULL, RESYNC_PAGES, 0);
r10_bio->devs[j].bio = bio;
if (!conf->have_replacement)
continue;
bio = bio_kmalloc(RESYNC_PAGES, gfp_flags);
if (!bio)
goto out_free_bio;
bio_init(bio, NULL, bio->bi_inline_vecs, RESYNC_PAGES, 0);
bio_init_inline(bio, NULL, RESYNC_PAGES, 0);
r10_bio->devs[j].repl_bio = bio;
}
/*
@@ -322,10 +322,12 @@ static void raid_end_bio_io(struct r10bio *r10_bio)
struct bio *bio = r10_bio->master_bio;
struct r10conf *conf = r10_bio->mddev->private;
if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
bio->bi_status = BLK_STS_IOERR;
if (!test_and_set_bit(R10BIO_Returned, &r10_bio->state)) {
if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
bio->bi_status = BLK_STS_IOERR;
bio_endio(bio);
}
bio_endio(bio);
/*
* Wake up any possible resync thread that waits for the device
* to go idle.
@@ -1154,7 +1156,6 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
int slot = r10_bio->read_slot;
struct md_rdev *err_rdev = NULL;
gfp_t gfp = GFP_NOIO;
int error;
if (slot >= 0 && r10_bio->devs[slot].rdev) {
/*
@@ -1203,17 +1204,15 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
rdev->bdev,
(unsigned long long)r10_bio->sector);
if (max_sectors < bio_sectors(bio)) {
struct bio *split = bio_split(bio, max_sectors,
gfp, &conf->bio_split);
if (IS_ERR(split)) {
error = PTR_ERR(split);
allow_barrier(conf);
bio = bio_submit_split_bioset(bio, max_sectors,
&conf->bio_split);
wait_barrier(conf, false);
if (!bio) {
set_bit(R10BIO_Returned, &r10_bio->state);
goto err_handle;
}
bio_chain(split, bio);
allow_barrier(conf);
submit_bio_noacct(bio);
wait_barrier(conf, false);
bio = split;
r10_bio->master_bio = bio;
r10_bio->sectors = max_sectors;
}
@@ -1241,8 +1240,6 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
return;
err_handle:
atomic_dec(&rdev->nr_pending);
bio->bi_status = errno_to_blk_status(error);
set_bit(R10BIO_Uptodate, &r10_bio->state);
raid_end_bio_io(r10_bio);
}
@@ -1351,7 +1348,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
int i, k;
sector_t sectors;
int max_sectors;
int error;
if ((mddev_is_clustered(mddev) &&
mddev->cluster_ops->area_resyncing(mddev, WRITE,
@@ -1465,10 +1461,8 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
* complexity of supporting that is not worth
* the benefit.
*/
if (bio->bi_opf & REQ_ATOMIC) {
error = -EIO;
if (bio->bi_opf & REQ_ATOMIC)
goto err_handle;
}
good_sectors = first_bad - dev_sector;
if (good_sectors < max_sectors)
@@ -1489,17 +1483,15 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
r10_bio->sectors = max_sectors;
if (r10_bio->sectors < bio_sectors(bio)) {
struct bio *split = bio_split(bio, r10_bio->sectors,
GFP_NOIO, &conf->bio_split);
if (IS_ERR(split)) {
error = PTR_ERR(split);
allow_barrier(conf);
bio = bio_submit_split_bioset(bio, r10_bio->sectors,
&conf->bio_split);
wait_barrier(conf, false);
if (!bio) {
set_bit(R10BIO_Returned, &r10_bio->state);
goto err_handle;
}
bio_chain(split, bio);
allow_barrier(conf);
submit_bio_noacct(bio);
wait_barrier(conf, false);
bio = split;
r10_bio->master_bio = bio;
}
@@ -1531,8 +1523,6 @@ err_handle:
}
}
bio->bi_status = errno_to_blk_status(error);
set_bit(R10BIO_Uptodate, &r10_bio->state);
raid_end_bio_io(r10_bio);
}
@@ -1679,7 +1669,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
bio_endio(bio);
return 0;
}
bio_chain(split, bio);
trace_block_split(split, bio->bi_iter.bi_sector);
allow_barrier(conf);
/* Resend the first split part */
submit_bio_noacct(split);
@@ -1694,7 +1686,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
bio_endio(bio);
return 0;
}
bio_chain(split, bio);
trace_block_split(split, bio->bi_iter.bi_sector);
allow_barrier(conf);
/* Resend the second split part */
submit_bio_noacct(bio);
@@ -3221,15 +3215,13 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
if (mddev->curr_resync < max_sector) { /* aborted */
if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
mddev->bitmap_ops->end_sync(mddev,
mddev->curr_resync,
&sync_blocks);
md_bitmap_end_sync(mddev, mddev->curr_resync,
&sync_blocks);
else for (i = 0; i < conf->geo.raid_disks; i++) {
sector_t sect =
raid10_find_virt(conf, mddev->curr_resync, i);
mddev->bitmap_ops->end_sync(mddev, sect,
&sync_blocks);
md_bitmap_end_sync(mddev, sect, &sync_blocks);
}
} else {
/* completed sync */
@@ -3249,7 +3241,8 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
}
conf->fullsync = 0;
}
mddev->bitmap_ops->close_sync(mddev);
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->close_sync(mddev);
close_sync(conf);
*skipped = 1;
return sectors_skipped;
@@ -3351,9 +3344,8 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
* we only need to recover the block if it is set in
* the bitmap
*/
must_sync = mddev->bitmap_ops->start_sync(mddev, sect,
&sync_blocks,
true);
must_sync = md_bitmap_start_sync(mddev, sect,
&sync_blocks, true);
if (sync_blocks < max_sync)
max_sync = sync_blocks;
if (!must_sync &&
@@ -3396,9 +3388,8 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
}
}
must_sync = mddev->bitmap_ops->start_sync(mddev, sect,
&sync_blocks, still_degraded);
md_bitmap_start_sync(mddev, sect, &sync_blocks,
still_degraded);
any_working = 0;
for (j=0; j<conf->copies;j++) {
int k;
@@ -3570,13 +3561,13 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
* safety reason, which ensures curr_resync_completed is
* updated in bitmap_cond_end_sync.
*/
mddev->bitmap_ops->cond_end_sync(mddev, sector_nr,
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->cond_end_sync(mddev, sector_nr,
mddev_is_clustered(mddev) &&
(sector_nr + 2 * RESYNC_SECTORS > conf->cluster_sync_high));
if (!mddev->bitmap_ops->start_sync(mddev, sector_nr,
&sync_blocks,
mddev->degraded) &&
if (!md_bitmap_start_sync(mddev, sector_nr, &sync_blocks,
mddev->degraded) &&
!conf->fullsync && !test_bit(MD_RECOVERY_REQUESTED,
&mddev->recovery)) {
/* We can skip this block */
@@ -4226,7 +4217,6 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
*/
struct r10conf *conf = mddev->private;
sector_t oldsize, size;
int ret;
if (mddev->reshape_position != MaxSector)
return -EBUSY;
@@ -4240,9 +4230,12 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
mddev->array_sectors > size)
return -EINVAL;
ret = mddev->bitmap_ops->resize(mddev, size, 0, false);
if (ret)
return ret;
if (md_bitmap_enabled(mddev, false)) {
int ret = mddev->bitmap_ops->resize(mddev, size, 0);
if (ret)
return ret;
}
md_set_array_sectors(mddev, size);
if (sectors > mddev->dev_sectors &&
@@ -4508,8 +4501,9 @@ static int raid10_start_reshape(struct mddev *mddev)
oldsize = raid10_size(mddev, 0, 0);
newsize = raid10_size(mddev, 0, conf->geo.raid_disks);
if (!mddev_is_clustered(mddev)) {
ret = mddev->bitmap_ops->resize(mddev, newsize, 0, false);
if (!mddev_is_clustered(mddev) &&
md_bitmap_enabled(mddev, false)) {
ret = mddev->bitmap_ops->resize(mddev, newsize, 0);
if (ret)
goto abort;
else
@@ -4531,13 +4525,14 @@ static int raid10_start_reshape(struct mddev *mddev)
MD_FEATURE_RESHAPE_ACTIVE)) || (oldsize == newsize))
goto out;
ret = mddev->bitmap_ops->resize(mddev, newsize, 0, false);
/* cluster can't be setup without bitmap */
ret = mddev->bitmap_ops->resize(mddev, newsize, 0);
if (ret)
goto abort;
ret = mddev->cluster_ops->resize_bitmaps(mddev, newsize, oldsize);
if (ret) {
mddev->bitmap_ops->resize(mddev, oldsize, 0, false);
mddev->bitmap_ops->resize(mddev, oldsize, 0);
goto abort;
}
}


@@ -165,6 +165,8 @@ enum r10bio_state {
* so that raid10d knows what to do with them.
*/
R10BIO_ReadError,
/* For bio_split errors, record that bi_end_io was called. */
R10BIO_Returned,
/* If a write for this request means we can clear some
* known-bad-block records, we set this flag.
*/


@@ -4097,7 +4097,8 @@ static int handle_stripe_dirtying(struct r5conf *conf,
int disks)
{
int rmw = 0, rcw = 0, i;
sector_t resync_offset = conf->mddev->resync_offset;
struct mddev *mddev = conf->mddev;
sector_t resync_offset = mddev->resync_offset;
/* Check whether resync is now happening or should start.
* If yes, then the array is dirty (after unclean shutdown or
@@ -4116,6 +4117,12 @@ static int handle_stripe_dirtying(struct r5conf *conf,
pr_debug("force RCW rmw_level=%u, resync_offset=%llu sh->sector=%llu\n",
conf->rmw_level, (unsigned long long)resync_offset,
(unsigned long long)sh->sector);
} else if (mddev->bitmap_ops && mddev->bitmap_ops->blocks_synced &&
!mddev->bitmap_ops->blocks_synced(mddev, sh->sector)) {
/* The initial recovery is not done, must read everything */
rcw = 1; rmw = 2;
pr_debug("force RCW by lazy recovery, sh->sector=%llu\n",
sh->sector);
} else for (i = disks; i--; ) {
/* would I have to read this buffer for read_modify_write */
struct r5dev *dev = &sh->dev[i];
@@ -4148,7 +4155,7 @@ static int handle_stripe_dirtying(struct r5conf *conf,
set_bit(STRIPE_HANDLE, &sh->state);
if ((rmw < rcw || (rmw == rcw && conf->rmw_level == PARITY_PREFER_RMW)) && rmw > 0) {
/* prefer read-modify-write, but need to get some data */
mddev_add_trace_msg(conf->mddev, "raid5 rmw %llu %d",
mddev_add_trace_msg(mddev, "raid5 rmw %llu %d",
sh->sector, rmw);
for (i = disks; i--; ) {
@@ -4227,8 +4234,8 @@ static int handle_stripe_dirtying(struct r5conf *conf,
set_bit(STRIPE_DELAYED, &sh->state);
}
}
if (rcw && !mddev_is_dm(conf->mddev))
blk_add_trace_msg(conf->mddev->gendisk->queue,
if (rcw && !mddev_is_dm(mddev))
blk_add_trace_msg(mddev->gendisk->queue,
"raid5 rcw %llu %d %d %d",
(unsigned long long)sh->sector, rcw, qread,
test_bit(STRIPE_DELAYED, &sh->state));
@@ -4698,10 +4705,21 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
}
} else if (test_bit(In_sync, &rdev->flags))
set_bit(R5_Insync, &dev->flags);
else if (sh->sector + RAID5_STRIPE_SECTORS(conf) <= rdev->recovery_offset)
/* in sync if before recovery_offset */
set_bit(R5_Insync, &dev->flags);
else if (test_bit(R5_UPTODATE, &dev->flags) &&
else if (sh->sector + RAID5_STRIPE_SECTORS(conf) <=
rdev->recovery_offset) {
/*
* in sync if:
* - normal IO, or
* - resync IO that is not lazy recovery
*
* For lazy recovery, we have to mark the rdev without
* In_sync as failed, to build initial xor data.
*/
if (!test_bit(STRIPE_SYNCING, &sh->state) ||
!test_bit(MD_RECOVERY_LAZY_RECOVER,
&conf->mddev->recovery))
set_bit(R5_Insync, &dev->flags);
} else if (test_bit(R5_UPTODATE, &dev->flags) &&
test_bit(R5_Expanded, &dev->flags))
/* If we've reshaped into here, we assume it is Insync.
* We will shortly update recovery_offset to make
@@ -5468,17 +5486,17 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
static struct bio *chunk_aligned_read(struct mddev *mddev, struct bio *raid_bio)
{
struct bio *split;
sector_t sector = raid_bio->bi_iter.bi_sector;
unsigned chunk_sects = mddev->chunk_sectors;
unsigned sectors = chunk_sects - (sector & (chunk_sects-1));
if (sectors < bio_sectors(raid_bio)) {
struct r5conf *conf = mddev->private;
split = bio_split(raid_bio, sectors, GFP_NOIO, &conf->bio_split);
bio_chain(split, raid_bio);
submit_bio_noacct(raid_bio);
raid_bio = split;
raid_bio = bio_submit_split_bioset(raid_bio, sectors,
&conf->bio_split);
if (!raid_bio)
return NULL;
}
if (!raid5_read_one_chunk(mddev, raid_bio))
@@ -6492,11 +6510,12 @@ static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_n
}
if (mddev->curr_resync < max_sector) /* aborted */
mddev->bitmap_ops->end_sync(mddev, mddev->curr_resync,
&sync_blocks);
md_bitmap_end_sync(mddev, mddev->curr_resync,
&sync_blocks);
else /* completed sync */
conf->fullsync = 0;
mddev->bitmap_ops->close_sync(mddev);
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->close_sync(mddev);
return 0;
}
@@ -6525,8 +6544,7 @@ static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_n
}
if (!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
!conf->fullsync &&
!mddev->bitmap_ops->start_sync(mddev, sector_nr, &sync_blocks,
true) &&
!md_bitmap_start_sync(mddev, sector_nr, &sync_blocks, true) &&
sync_blocks >= RAID5_STRIPE_SECTORS(conf)) {
/* we can skip this block, and probably more */
do_div(sync_blocks, RAID5_STRIPE_SECTORS(conf));
@@ -6535,7 +6553,8 @@ static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_n
return sync_blocks * RAID5_STRIPE_SECTORS(conf);
}
mddev->bitmap_ops->cond_end_sync(mddev, sector_nr, false);
if (md_bitmap_enabled(mddev, false))
mddev->bitmap_ops->cond_end_sync(mddev, sector_nr, false);
sh = raid5_get_active_stripe(conf, NULL, sector_nr,
R5_GAS_NOBLOCK);
@@ -6557,9 +6576,7 @@ static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_n
still_degraded = true;
}
mddev->bitmap_ops->start_sync(mddev, sector_nr, &sync_blocks,
still_degraded);
md_bitmap_start_sync(mddev, sector_nr, &sync_blocks, still_degraded);
set_bit(STRIPE_SYNC_REQUESTED, &sh->state);
set_bit(STRIPE_HANDLE, &sh->state);
@@ -6763,7 +6780,8 @@ static void raid5d(struct md_thread *thread)
/* Now is a good time to flush some bitmap updates */
conf->seq_flush++;
spin_unlock_irq(&conf->device_lock);
mddev->bitmap_ops->unplug(mddev, true);
if (md_bitmap_enabled(mddev, true))
mddev->bitmap_ops->unplug(mddev, true);
spin_lock_irq(&conf->device_lock);
conf->seq_write = conf->seq_flush;
activate_bit_delay(conf, conf->temp_inactive_list);
@@ -8313,7 +8331,6 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
*/
sector_t newsize;
struct r5conf *conf = mddev->private;
int ret;
if (raid5_has_log(conf) || raid5_has_ppl(conf))
return -EINVAL;
@@ -8323,9 +8340,12 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
mddev->array_sectors > newsize)
return -EINVAL;
ret = mddev->bitmap_ops->resize(mddev, sectors, 0, false);
if (ret)
return ret;
if (md_bitmap_enabled(mddev, false)) {
int ret = mddev->bitmap_ops->resize(mddev, sectors, 0);
if (ret)
return ret;
}
md_set_array_sectors(mddev, newsize);
if (sectors > mddev->dev_sectors &&


@@ -1953,10 +1953,10 @@ static void msb_data_clear(struct msb_data *msb)
msb->card = NULL;
}
static int msb_bd_getgeo(struct block_device *bdev,
static int msb_bd_getgeo(struct gendisk *disk,
struct hd_geometry *geo)
{
struct msb_data *msb = bdev->bd_disk->private_data;
struct msb_data *msb = disk->private_data;
*geo = msb->geometry;
return 0;
}


@@ -189,10 +189,10 @@ static void mspro_block_bd_free_disk(struct gendisk *disk)
kfree(msb);
}
static int mspro_block_bd_getgeo(struct block_device *bdev,
static int mspro_block_bd_getgeo(struct gendisk *disk,
struct hd_geometry *geo)
{
struct mspro_block_data *msb = bdev->bd_disk->private_data;
struct mspro_block_data *msb = disk->private_data;
geo->heads = msb->heads;
geo->sectors = msb->sectors_per_track;

Some files were not shown because too many files have changed in this diff.