Merge tag 'rcu.release.v6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux

Pull RCU updates from Frederic Weisbecker:
 "SRCU:

   - Properly handle SRCU readers within IRQ disabled sections in tiny
     SRCU

   - Preparation to reimplement RCU Tasks Trace on top of SRCU-fast:

      - Introduce an API to expedite a grace period and test it
        through rcutorture

      - Split SRCU-fast into two flavours: SRCU-fast and SRCU-fast-updown.

        Both are still targeted toward faster readers (without full
        barriers on LOCK and UNLOCK) at the expense of a heavier
        write side (using full RCU grace-period ordering instead of
        simple full ordering), as compared to "traditional" non-fast
        SRCU. But the two SRCU-fast flavours are going to be
        optimized in two different ways:

          - SRCU-fast will become the basis for reimplementing
            RCU Tasks Trace, for consolidation. Since RCU Tasks
            Trace must be NMI-safe, SRCU-fast must be as well.

          - SRCU-fast-updown will be needed by the uretprobes code
            in order to get rid of the read-side memory barriers
            while still allowing a reader to be entered at task
            level and exited in a timer handler. It is considered
            semaphore-like in that LOCK and UNLOCK can have
            different owners. However, it is not NMI-safe.

        The actual optimizations are work in progress for the next
        cycle. Only the new interfaces are added for now, along with
        related torture and scalability test code, as sketched below.
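        A rough usage sketch of the two flavours (the srcu_struct
        names are made up; the APIs are the ones added by this
        series):

            /* Sketch only: the two new flavours side by side. */
            DEFINE_STATIC_SRCU_FAST(my_fast_srcu);      /* NMI-safe readers */
            DEFINE_STATIC_SRCU_FAST_UPDOWN(my_ud_srcu); /* semaphore-like */

            static void readers(void)
            {
                    struct srcu_ctr __percpu *a, *b;

                    a = srcu_read_lock_fast(&my_fast_srcu);
                    b = srcu_read_lock_fast_updown(&my_ud_srcu);
                    /* ... read-side critical sections, no smp_mb() ... */
                    srcu_read_unlock_fast_updown(&my_ud_srcu, b);
                    srcu_read_unlock_fast(&my_fast_srcu, a);
            }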

   - Create/document/debug/torture proper new initializers for
     SRCU-fast: DEFINE_SRCU_FAST() and init_srcu_struct_fast()

     This allows the proper write-side ordering (either full
     ordering or full RCU grace-period ordering) to be used right
     away, without waiting for the read side to indicate which to
     use.

     This also optimizes the read side by moving flavour debug
     checks under a debug config option and by removing a costly RmW
     operation from the readers' first call.
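     For example (a sketch; the domain names are made up):

         /* Build-time initialization of an SRCU-fast domain. */
         DEFINE_STATIC_SRCU_FAST(my_fast_domain);

         /* Run-time initialization, for example for dynamically
          * allocated domains. */
         static struct srcu_struct my_dyn_domain;

         static int __init my_init(void)
         {
                 return init_srcu_struct_fast(&my_dyn_domain);
         }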

   - Make some diagnostic functions tracing safe

  Refscale:

   - Add performance testing for common context synchronizations
     (preemption, IRQ, softirq) and per-CPU increments. These are
     relevant baselines for comparison against the SRCU-fast
     read-side APIs, especially as those are planned to synchronize
     further tracing fast-path code

  Miscellaneous:

   - In order to prepare the layout for deferring nohz_full work to
     user exit, the context-tracking state must shrink the counter
     of transitions to/from RCU not-watching. The only possible
     hazard is triggering wrap-around more easily, which slightly
     delays grace periods when it happens. This should be a rare
     event, though. Debugging and torture code is added to test that
     assumption

   - Fix a memory leak in the locktorture module

   - Annotate accesses in rculist_nulls.h to prevent KCSAN
     warnings. In recent discussions, we also concluded that all
     those WRITE_ONCE() and READ_ONCE() calls on list APIs deserve
     appropriate comments. Something to expect in the next cycle

   - Provide a script to apply several configs to several commits with
     torture

   - Allow torture to reuse a build directory in order to save needless
     rebuild time

   - Various cleanups"

* tag 'rcu.release.v6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux: (29 commits)
  refscale: Add SRCU-fast-updown readers
  refscale: Exercise DEFINE_STATIC_SRCU_FAST() and init_srcu_struct_fast()
  rcutorture: Make srcu{,d}_torture_init() announce the SRCU type
  srcu: Create an SRCU-fast-updown API
  refscale: Do not disable interrupts for tests involving local_bh_enable()
  refscale: Add non-atomic per-CPU increment readers
  refscale: Add this_cpu_inc() readers
  refscale: Add preempt_disable() readers
  refscale: Add local_bh_disable() readers
  refscale: Add local_irq_disable() and local_irq_save() readers
  torture: Permit negative kvm.sh --kconfig numeric arguments
  srcu: Add SRCU_READ_FLAVOR_FAST_UPDOWN CPP macro
  rcu: Mark diagnostic functions as notrace
  rcutorture: Make TREE04 use CONFIG_RCU_DYNTICKS_TORTURE
  rcutorture: Remove redundant rcutorture_one_extend() from rcu_torture_one_read()
  rcutorture: Permit kvm-again.sh to re-use the build directory
  torture: Add kvm-series.sh to test commit/scenario combination
  rcu: use WRITE_ONCE() for ->next and ->pprev of hlist_nulls
  locktorture: Fix memory leak in param_set_cpumask()
  doc: Update for SRCU-fast definitions and initialization
  ...
Committed by Linus Torvalds on 2025-12-03 12:18:07 -08:00.
21 changed files with 1040 additions and 141 deletions.

View File

@@ -2637,15 +2637,16 @@ synchronize_srcu() for some other domain ``ss1``, and if an
that was held across an ``ss``-domain synchronize_srcu(), deadlock
would again be possible. Such a deadlock cycle could extend across an
arbitrarily large number of different SRCU domains. Again, with great
power comes great responsibility.
power comes great responsibility, though lockdep is now able to detect
this sort of deadlock.
Unlike the other RCU flavors, SRCU read-side critical sections can run
on idle and even offline CPUs. This ability requires that
srcu_read_lock() and srcu_read_unlock() contain memory barriers,
which means that SRCU readers will run a bit slower than would RCU
readers. It also motivates the smp_mb__after_srcu_read_unlock() API,
which, in combination with srcu_read_unlock(), guarantees a full
memory barrier.
Unlike the other RCU flavors, SRCU read-side critical sections can run on
idle and even offline CPUs, with the exception of srcu_read_lock_fast()
and friends. This ability requires that srcu_read_lock() and
srcu_read_unlock() contain memory barriers, which means that SRCU
readers will run a bit slower than would RCU readers. It also motivates
the smp_mb__after_srcu_read_unlock() API, which, in combination with
srcu_read_unlock(), guarantees a full memory barrier.
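For illustration, the combination mentioned above looks roughly like
this (a sketch, not part of this change)::

    int idx;

    idx = srcu_read_lock(&my_srcu);
    /* ... read-side critical section ... */
    srcu_read_unlock(&my_srcu, idx);
    smp_mb__after_srcu_read_unlock();  /* full memory barrier guaranteed */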
Also unlike other RCU flavors, synchronize_srcu() may **not** be
invoked from CPU-hotplug notifiers, due to the fact that SRCU grace
@@ -2681,15 +2682,15 @@ run some tests first. SRCU just might need a few adjustment to deal with
that sort of load. Of course, your mileage may vary based on the speed
of your CPUs and the size of your memory.
The `SRCU
API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
The `SRCU API
<https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
includes srcu_read_lock(), srcu_read_unlock(),
srcu_dereference(), srcu_dereference_check(),
synchronize_srcu(), synchronize_srcu_expedited(),
call_srcu(), srcu_barrier(), and srcu_read_lock_held(). It
also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and
init_srcu_struct() APIs for defining and initializing
``srcu_struct`` structures.
srcu_dereference(), srcu_dereference_check(), synchronize_srcu(),
synchronize_srcu_expedited(), call_srcu(), srcu_barrier(),
and srcu_read_lock_held(). It also includes DEFINE_SRCU(),
DEFINE_STATIC_SRCU(), DEFINE_SRCU_FAST(), DEFINE_STATIC_SRCU_FAST(),
init_srcu_struct(), and init_srcu_struct_fast() APIs for defining and
initializing ``srcu_struct`` structures.
More recently, the SRCU API has added polling interfaces:

View File

@@ -417,11 +417,13 @@ over a rather long period of time, but improvements are always welcome!
you should be using RCU rather than SRCU, because RCU is almost
always faster and easier to use than is SRCU.
Also unlike other forms of RCU, explicit initialization and
cleanup is required either at build time via DEFINE_SRCU()
or DEFINE_STATIC_SRCU() or at runtime via init_srcu_struct()
and cleanup_srcu_struct(). These last two are passed a
"struct srcu_struct" that defines the scope of a given
Also unlike other forms of RCU, explicit initialization
and cleanup is required either at build time via
DEFINE_SRCU(), DEFINE_STATIC_SRCU(), DEFINE_SRCU_FAST(),
or DEFINE_STATIC_SRCU_FAST() or at runtime via either
init_srcu_struct() or init_srcu_struct_fast() and
cleanup_srcu_struct(). These last three are passed a
`struct srcu_struct` that defines the scope of a given
SRCU domain. Once initialized, the srcu_struct is passed
to srcu_read_lock(), srcu_read_unlock(), synchronize_srcu(),
synchronize_srcu_expedited(), and call_srcu(). A given

View File

@@ -1227,7 +1227,10 @@ SRCU: Initialization/cleanup/ordering::
DEFINE_SRCU
DEFINE_STATIC_SRCU
DEFINE_SRCU_FAST // for srcu_read_lock_fast() and friends
DEFINE_STATIC_SRCU_FAST // for srcu_read_lock_fast() and friends
init_srcu_struct
init_srcu_struct_fast
cleanup_srcu_struct
smp_mb__after_srcu_read_unlock

View File

@@ -18,12 +18,6 @@ enum ctx_state {
CT_STATE_MAX = 4,
};
/* Odd value for watching, else even. */
#define CT_RCU_WATCHING CT_STATE_MAX
#define CT_STATE_MASK (CT_STATE_MAX - 1)
#define CT_RCU_WATCHING_MASK (~CT_STATE_MASK)
struct context_tracking {
#ifdef CONFIG_CONTEXT_TRACKING_USER
/*
@@ -44,9 +38,45 @@ struct context_tracking {
#endif
};
/*
* We cram two different things within the same atomic variable:
*
* CT_RCU_WATCHING_START CT_STATE_START
* | |
* v v
* MSB [ RCU watching counter ][ context_state ] LSB
* ^ ^
* | |
* CT_RCU_WATCHING_END CT_STATE_END
*
* Bits are used from the LSB upwards, so unused bits (if any) will always be in
* upper bits of the variable.
*/
#ifdef CONFIG_CONTEXT_TRACKING
#define CT_SIZE (sizeof(((struct context_tracking *)0)->state) * BITS_PER_BYTE)
#define CT_STATE_WIDTH bits_per(CT_STATE_MAX - 1)
#define CT_STATE_START 0
#define CT_STATE_END (CT_STATE_START + CT_STATE_WIDTH - 1)
#define CT_RCU_WATCHING_MAX_WIDTH (CT_SIZE - CT_STATE_WIDTH)
#define CT_RCU_WATCHING_WIDTH (IS_ENABLED(CONFIG_RCU_DYNTICKS_TORTURE) ? 2 : CT_RCU_WATCHING_MAX_WIDTH)
#define CT_RCU_WATCHING_START (CT_STATE_END + 1)
#define CT_RCU_WATCHING_END (CT_RCU_WATCHING_START + CT_RCU_WATCHING_WIDTH - 1)
#define CT_RCU_WATCHING BIT(CT_RCU_WATCHING_START)
#define CT_STATE_MASK GENMASK(CT_STATE_END, CT_STATE_START)
#define CT_RCU_WATCHING_MASK GENMASK(CT_RCU_WATCHING_END, CT_RCU_WATCHING_START)
#define CT_UNUSED_WIDTH (CT_RCU_WATCHING_MAX_WIDTH - CT_RCU_WATCHING_WIDTH)
static_assert(CT_STATE_WIDTH +
CT_RCU_WATCHING_WIDTH +
CT_UNUSED_WIDTH ==
CT_SIZE);
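/*
 * Worked example (a sketch, assuming the ->state field is 32 bits, so
 * CT_SIZE == 32, and CT_STATE_MAX == 4):
 *
 *   CT_STATE_WIDTH            == bits_per(3) == 2, occupying bits [1:0]
 *   CT_RCU_WATCHING_MAX_WIDTH == 32 - 2      == 30
 *
 * With CONFIG_RCU_DYNTICKS_TORTURE=y, the watching counter shrinks to
 * two bits, occupying bits [3:2], so CT_RCU_WATCHING == BIT(2) == 0x4,
 * CT_STATE_MASK == 0x3, CT_RCU_WATCHING_MASK == 0xc, and
 * CT_UNUSED_WIDTH == 28; the static_assert() above still sums to 32.
 */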
DECLARE_PER_CPU(struct context_tracking, context_tracking);
#endif
#endif /* CONFIG_CONTEXT_TRACKING */
#ifdef CONFIG_CONTEXT_TRACKING_USER
static __always_inline int __ct_state(void)

View File

@@ -109,7 +109,7 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
.mutex = __MUTEX_INITIALIZER(name.mutex), \
.head = NULL, \
.srcuu = __SRCU_USAGE_INIT(name.srcuu), \
.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \
.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu, 0), \
}
#define ATOMIC_NOTIFIER_HEAD(name) \

View File

@@ -138,7 +138,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
if (last) {
WRITE_ONCE(n->next, last->next);
n->pprev = &last->next;
WRITE_ONCE(n->pprev, &last->next);
rcu_assign_pointer(hlist_nulls_next_rcu(last), n);
} else {
hlist_nulls_add_head_rcu(n, h);
@@ -148,8 +148,8 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
/* after that hlist_nulls_del will work */
static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
{
n->pprev = &n->next;
n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
WRITE_ONCE(n->pprev, &n->next);
WRITE_ONCE(n->next, (struct hlist_nulls_node *)NULLS_MARKER(NULL));
}
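/*
 * Sketch (not part of this patch): the WRITE_ONCE() annotations above
 * pair with READ_ONCE() in lockless traversal, roughly as follows,
 * where h is the hlist_nulls_head:
 *
 *	struct hlist_nulls_node *n = READ_ONCE(h->first);
 *
 *	while (!is_a_nulls(n)) {
 *		// ... examine the entry under RCU protection ...
 *		n = READ_ONCE(n->next);
 *	}
 */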
/**

View File

@@ -25,8 +25,12 @@ struct srcu_struct;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
int __init_srcu_struct(struct srcu_struct *ssp, const char *name,
struct lock_class_key *key);
int __init_srcu_struct(struct srcu_struct *ssp, const char *name, struct lock_class_key *key);
#ifndef CONFIG_TINY_SRCU
int __init_srcu_struct_fast(struct srcu_struct *ssp, const char *name, struct lock_class_key *key);
int __init_srcu_struct_fast_updown(struct srcu_struct *ssp, const char *name,
struct lock_class_key *key);
#endif // #ifndef CONFIG_TINY_SRCU
#define init_srcu_struct(ssp) \
({ \
@@ -35,22 +39,42 @@ int __init_srcu_struct(struct srcu_struct *ssp, const char *name,
__init_srcu_struct((ssp), #ssp, &__srcu_key); \
})
#define init_srcu_struct_fast(ssp) \
({ \
static struct lock_class_key __srcu_key; \
\
__init_srcu_struct_fast((ssp), #ssp, &__srcu_key); \
})
#define init_srcu_struct_fast_updown(ssp) \
({ \
static struct lock_class_key __srcu_key; \
\
__init_srcu_struct_fast_updown((ssp), #ssp, &__srcu_key); \
})
#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name },
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
int init_srcu_struct(struct srcu_struct *ssp);
#ifndef CONFIG_TINY_SRCU
int init_srcu_struct_fast(struct srcu_struct *ssp);
int init_srcu_struct_fast_updown(struct srcu_struct *ssp);
#endif // #ifndef CONFIG_TINY_SRCU
#define __SRCU_DEP_MAP_INIT(srcu_name)
#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
/* Values for SRCU Tree srcu_data ->srcu_reader_flavor, but also used by rcutorture. */
#define SRCU_READ_FLAVOR_NORMAL 0x1 // srcu_read_lock().
#define SRCU_READ_FLAVOR_NMI 0x2 // srcu_read_lock_nmisafe().
// 0x4 // SRCU-lite is no longer with us.
#define SRCU_READ_FLAVOR_FAST 0x8 // srcu_read_lock_fast().
#define SRCU_READ_FLAVOR_ALL (SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_NMI | \
SRCU_READ_FLAVOR_FAST) // All of the above.
#define SRCU_READ_FLAVOR_SLOWGP SRCU_READ_FLAVOR_FAST
#define SRCU_READ_FLAVOR_NORMAL 0x1 // srcu_read_lock().
#define SRCU_READ_FLAVOR_NMI 0x2 // srcu_read_lock_nmisafe().
// 0x4 // SRCU-lite is no longer with us.
#define SRCU_READ_FLAVOR_FAST 0x4 // srcu_read_lock_fast().
#define SRCU_READ_FLAVOR_FAST_UPDOWN 0x8 // srcu_read_lock_fast().
#define SRCU_READ_FLAVOR_ALL (SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_NMI | \
SRCU_READ_FLAVOR_FAST | SRCU_READ_FLAVOR_FAST_UPDOWN)
// All of the above.
#define SRCU_READ_FLAVOR_SLOWGP (SRCU_READ_FLAVOR_FAST | SRCU_READ_FLAVOR_FAST_UPDOWN)
// Flavors requiring synchronize_rcu()
// instead of smp_mb().
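/*
 * Conceptually, SRCU_READ_FLAVOR_SLOWGP drives the update side's choice
 * of ordering, roughly as follows (a sketch, not the exact tree-SRCU
 * code):
 *
 *	if (flavor & SRCU_READ_FLAVOR_SLOWGP)
 *		synchronize_rcu();  // _fast readers provide no smp_mb().
 *	else
 *		smp_mb();           // Traditional readers supply barriers.
 */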
void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
@@ -259,29 +283,78 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
* @ssp: srcu_struct in which to register the new reader.
*
* Enter an SRCU read-side critical section, but for a light-weight
* smp_mb()-free reader. See srcu_read_lock() for more information.
* smp_mb()-free reader. See srcu_read_lock() for more information. This
* function is NMI-safe, in a manner similar to srcu_read_lock_nmisafe().
*
* If srcu_read_lock_fast() is ever used on an srcu_struct structure,
* then none of the other flavors may be used, whether before, during,
* or after. Note that grace-period auto-expediting is disabled for _fast
* srcu_struct structures because auto-expedited grace periods invoke
* synchronize_rcu_expedited(), IPIs and all.
* For srcu_read_lock_fast() to be used on an srcu_struct structure,
* that structure must have been defined using either DEFINE_SRCU_FAST()
* or DEFINE_STATIC_SRCU_FAST() on the one hand or initialized with
* init_srcu_struct_fast() on the other. Such an srcu_struct structure
* cannot be passed to any non-fast variant of srcu_read_{,un}lock() or
* srcu_{down,up}_read(). In kernels built with CONFIG_PROVE_RCU=y,
* __srcu_check_read_flavor() will complain bitterly if you ignore this
* restriction.
*
* Note that srcu_read_lock_fast() can be invoked only from those contexts
* where RCU is watching, that is, from contexts where it would be legal
* to invoke rcu_read_lock(). Otherwise, lockdep will complain.
* Grace-period auto-expediting is disabled for SRCU-fast srcu_struct
* structures because SRCU-fast expedited grace periods invoke
* synchronize_rcu_expedited(), IPIs and all. If you need expedited
* SRCU-fast grace periods, use synchronize_srcu_expedited().
*
* The srcu_read_lock_fast() function can be invoked only from those
* contexts where RCU is watching, that is, from contexts where it would
* be legal to invoke rcu_read_lock(). Otherwise, lockdep will complain.
*/
static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp)
{
struct srcu_ctr __percpu *retval;
RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_fast().");
srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST);
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
retval = __srcu_read_lock_fast(ssp);
rcu_try_lock_acquire(&ssp->dep_map);
return retval;
}
/**
* srcu_read_lock_fast_updown - register a new reader for an SRCU-fast-updown structure.
* @ssp: srcu_struct in which to register the new reader.
*
* Enter an SRCU read-side critical section, but for a light-weight
* smp_mb()-free reader. See srcu_read_lock() for more information.
* This function is compatible with srcu_down_read_fast(), but is not
* NMI-safe.
*
* For srcu_read_lock_fast_updown() to be used on an srcu_struct
* structure, that structure must have been defined using either
* DEFINE_SRCU_FAST_UPDOWN() or DEFINE_STATIC_SRCU_FAST_UPDOWN() on the one
* hand or initialized with init_srcu_struct_fast_updown() on the other.
* Such an srcu_struct structure cannot be passed to any non-fast-updown
* variant of srcu_read_{,un}lock() or srcu_{down,up}_read(). In kernels
* built with CONFIG_PROVE_RCU=y, __srcu_check_read_flavor() will complain
bitterly if you ignore this restriction.
*
* Grace-period auto-expediting is disabled for SRCU-fast-updown
* srcu_struct structures because SRCU-fast-updown expedited grace periods
* invoke synchronize_rcu_expedited(), IPIs and all. If you need expedited
* SRCU-fast-updown grace periods, use synchronize_srcu_expedited().
*
* The srcu_read_lock_fast_updown() function can be invoked only from
* those contexts where RCU is watching, that is, from contexts where
* it would be legal to invoke rcu_read_lock(). Otherwise, lockdep will
* complain.
*/
static inline struct srcu_ctr __percpu *srcu_read_lock_fast_updown(struct srcu_struct *ssp)
__acquires(ssp)
{
struct srcu_ctr __percpu *retval;
RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_fast_updown().");
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
retval = __srcu_read_lock_fast_updown(ssp);
rcu_try_lock_acquire(&ssp->dep_map);
return retval;
}
/*
* Used by tracing, cannot be traced and cannot call lockdep.
* See srcu_read_lock_fast() for more information.
@@ -291,7 +364,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_
{
struct srcu_ctr __percpu *retval;
srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST);
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
retval = __srcu_read_lock_fast(ssp);
return retval;
}
@@ -305,14 +378,15 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_
* srcu_down_read() for more information.
*
* The same srcu_struct may be used concurrently by srcu_down_read_fast()
* and srcu_read_lock_fast().
* and srcu_read_lock_fast(). However, the same definition/initialization
requirements called out for srcu_read_lock_fast() apply.
*/
static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires(ssp)
{
WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_down_read_fast().");
srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST);
return __srcu_read_lock_fast(ssp);
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
return __srcu_read_lock_fast_updown(ssp);
}
/**
@@ -408,6 +482,23 @@ static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ct
RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_unlock_fast().");
}
/**
* srcu_read_unlock_fast_updown - unregister an old reader from an SRCU-fast-updown structure.
* @ssp: srcu_struct in which to unregister the old reader.
* @scp: return value from corresponding srcu_read_lock_fast_updown().
*
* Exit an SRCU-fast-updown read-side critical section.
*/
static inline void
srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases(ssp)
{
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
srcu_lock_release(&ssp->dep_map);
__srcu_read_unlock_fast_updown(ssp, scp);
RCU_LOCKDEP_WARN(!rcu_is_watching(),
"RCU must be watching srcu_read_unlock_fast_updown().");
}
/*
* Used by tracing, cannot be traced and cannot call lockdep.
* See srcu_read_unlock_fast() for more information.
@@ -431,9 +522,9 @@ static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __
__releases(ssp)
{
WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
__srcu_read_unlock_fast(ssp, scp);
RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_up_read_fast().");
srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
__srcu_read_unlock_fast_updown(ssp, scp);
RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_up_read_fast_updown().");
}
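/*
 * Sketch (made-up names): because srcu_down_read_fast() and
 * srcu_up_read_fast() now map onto the updown flavour, a critical
 * section can be handed off between contexts, for example entered in
 * task context and exited in a timer handler. Assumes a domain defined
 * via DEFINE_STATIC_SRCU_FAST_UPDOWN(my_ud_srcu) and a timer
 * initialized elsewhere:
 *
 *	static struct srcu_ctr __percpu *my_scp;
 *
 *	static void my_timer_fn(struct timer_list *t)
 *	{
 *		// ... last use of the protected data ...
 *		srcu_up_read_fast(&my_ud_srcu, my_scp);  // Different "owner".
 *	}
 *
 *	static void start_read(void)
 *	{
 *		my_scp = srcu_down_read_fast(&my_ud_srcu);  // Task context.
 *		mod_timer(&my_timer, jiffies + HZ);
 *	}
 */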
/**

View File

@@ -31,7 +31,7 @@ struct srcu_struct {
void srcu_drive_gp(struct work_struct *wp);
#define __SRCU_STRUCT_INIT(name, __ignored, ___ignored) \
#define __SRCU_STRUCT_INIT(name, __ignored, ___ignored, ____ignored) \
{ \
.srcu_wq = __SWAIT_QUEUE_HEAD_INITIALIZER(name.srcu_wq), \
.srcu_cb_tail = &name.srcu_cb_head, \
@@ -44,13 +44,25 @@ void srcu_drive_gp(struct work_struct *wp);
* Tree SRCU, which needs some per-CPU data.
*/
#define DEFINE_SRCU(name) \
struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name)
struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name, name)
#define DEFINE_STATIC_SRCU(name) \
static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name)
static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name, name)
#define DEFINE_SRCU_FAST(name) DEFINE_SRCU(name)
#define DEFINE_STATIC_SRCU_FAST(name) \
static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name, name)
#define DEFINE_SRCU_FAST_UPDOWN(name) DEFINE_SRCU(name)
#define DEFINE_STATIC_SRCU_FAST_UPDOWN(name) \
static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name, name)
// Dummy structure for srcu_notifier_head.
struct srcu_usage { };
#define __SRCU_USAGE_INIT(name) { }
#define __init_srcu_struct_fast __init_srcu_struct
#define __init_srcu_struct_fast_updown __init_srcu_struct
#ifndef CONFIG_DEBUG_LOCK_ALLOC
#define init_srcu_struct_fast init_srcu_struct
#define init_srcu_struct_fast_updown init_srcu_struct
#endif // #ifndef CONFIG_DEBUG_LOCK_ALLOC
void synchronize_srcu(struct srcu_struct *ssp);
@@ -93,6 +105,17 @@ static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_
__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
}
static inline struct srcu_ctr __percpu *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
{
return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
}
static inline
void __srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
{
__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
}
static inline void synchronize_srcu_expedited(struct srcu_struct *ssp)
{
synchronize_srcu(ssp);
@@ -103,8 +126,8 @@ static inline void srcu_barrier(struct srcu_struct *ssp)
synchronize_srcu(ssp);
}
static inline void srcu_expedite_current(struct srcu_struct *ssp) { }
#define srcu_check_read_flavor(ssp, read_flavor) do { } while (0)
#define srcu_check_read_flavor_force(ssp, read_flavor) do { } while (0)
/* Defined here to avoid size increase for non-torture kernels. */
static inline void srcu_torture_stats_print(struct srcu_struct *ssp,

View File

@@ -42,6 +42,8 @@ struct srcu_data {
struct timer_list delay_work; /* Delay for CB invoking */
struct work_struct work; /* Context for CB invoking. */
struct rcu_head srcu_barrier_head; /* For srcu_barrier() use. */
struct rcu_head srcu_ec_head; /* For srcu_expedite_current() use. */
int srcu_ec_state; /* State for srcu_expedite_current(). */
struct srcu_node *mynode; /* Leaf srcu_node. */
unsigned long grpmask; /* Mask for leaf srcu_node */
/* ->srcu_data_have_cbs[]. */
@@ -102,6 +104,7 @@ struct srcu_usage {
struct srcu_struct {
struct srcu_ctr __percpu *srcu_ctrp;
struct srcu_data __percpu *sda; /* Per-CPU srcu_data array. */
u8 srcu_reader_flavor;
struct lockdep_map dep_map;
struct srcu_usage *srcu_sup; /* Update-side data. */
};
@@ -135,6 +138,11 @@ struct srcu_struct {
#define SRCU_STATE_SCAN1 1
#define SRCU_STATE_SCAN2 2
/* Values for srcu_expedite_current() state (->srcu_ec_state). */
#define SRCU_EC_IDLE 0
#define SRCU_EC_PENDING 1
#define SRCU_EC_REPOST 2
/*
* Values for initializing gp sequence fields. Higher values allow wrap arounds to
* occur earlier.
@@ -155,20 +163,21 @@ struct srcu_struct {
.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0), \
}
#define __SRCU_STRUCT_INIT_COMMON(name, usage_name) \
#define __SRCU_STRUCT_INIT_COMMON(name, usage_name, fast) \
.srcu_sup = &usage_name, \
.srcu_reader_flavor = fast, \
__SRCU_DEP_MAP_INIT(name)
#define __SRCU_STRUCT_INIT_MODULE(name, usage_name) \
#define __SRCU_STRUCT_INIT_MODULE(name, usage_name, fast) \
{ \
__SRCU_STRUCT_INIT_COMMON(name, usage_name) \
__SRCU_STRUCT_INIT_COMMON(name, usage_name, fast) \
}
#define __SRCU_STRUCT_INIT(name, usage_name, pcpu_name) \
#define __SRCU_STRUCT_INIT(name, usage_name, pcpu_name, fast) \
{ \
.sda = &pcpu_name, \
.srcu_ctrp = &pcpu_name.srcu_ctrs[0], \
__SRCU_STRUCT_INIT_COMMON(name, usage_name) \
__SRCU_STRUCT_INIT_COMMON(name, usage_name, fast) \
}
/*
@@ -189,27 +198,45 @@ struct srcu_struct {
* init_srcu_struct(&my_srcu);
*
* See include/linux/percpu-defs.h for the rules on per-CPU variables.
*
* DEFINE_SRCU_FAST() and DEFINE_STATIC_SRCU_FAST create an srcu_struct
* and associated structures whose readers must be of the SRCU-fast variety.
* DEFINE_SRCU_FAST_UPDOWN() and DEFINE_STATIC_SRCU_FAST_UPDOWN() create
* an srcu_struct and associated structures whose readers must be of the
* SRCU-fast-updown variety. The key point (aside from error checking) with
* both varieties is that the grace periods must use synchronize_rcu()
* instead of smp_mb(), and given that the first (for example)
* srcu_read_lock_fast() might race with the first synchronize_srcu(),
this difference must be specified at initialization time.
*/
#ifdef MODULE
# define __DEFINE_SRCU(name, is_static) \
# define __DEFINE_SRCU(name, fast, is_static) \
static struct srcu_usage name##_srcu_usage = __SRCU_USAGE_INIT(name##_srcu_usage); \
is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name, name##_srcu_usage); \
is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name, name##_srcu_usage, \
fast); \
extern struct srcu_struct * const __srcu_struct_##name; \
struct srcu_struct * const __srcu_struct_##name \
__section("___srcu_struct_ptrs") = &name
#else
# define __DEFINE_SRCU(name, is_static) \
# define __DEFINE_SRCU(name, fast, is_static) \
static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data); \
static struct srcu_usage name##_srcu_usage = __SRCU_USAGE_INIT(name##_srcu_usage); \
is_static struct srcu_struct name = \
__SRCU_STRUCT_INIT(name, name##_srcu_usage, name##_srcu_data)
__SRCU_STRUCT_INIT(name, name##_srcu_usage, name##_srcu_data, fast)
#endif
#define DEFINE_SRCU(name) __DEFINE_SRCU(name, /* not static */)
#define DEFINE_STATIC_SRCU(name) __DEFINE_SRCU(name, static)
#define DEFINE_SRCU(name) __DEFINE_SRCU(name, 0, /* not static */)
#define DEFINE_STATIC_SRCU(name) __DEFINE_SRCU(name, 0, static)
#define DEFINE_SRCU_FAST(name) __DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST, /* not static */)
#define DEFINE_STATIC_SRCU_FAST(name) __DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST, static)
#define DEFINE_SRCU_FAST_UPDOWN(name) __DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST_UPDOWN, \
/* not static */)
#define DEFINE_STATIC_SRCU_FAST_UPDOWN(name) \
__DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST_UPDOWN, static)
int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
void synchronize_srcu_expedited(struct srcu_struct *ssp);
void srcu_barrier(struct srcu_struct *ssp);
void srcu_expedite_current(struct srcu_struct *ssp);
void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
// Converts a per-CPU pointer to an ->srcu_ctrs[] array element to that
@@ -289,23 +316,49 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks)); // Z, and implicit RCU reader.
}
void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor);
// Record reader usage even for CONFIG_PROVE_RCU=n kernels. This is
// needed only for flavors that require grace-period smp_mb() calls to be
// promoted to synchronize_rcu().
static inline void srcu_check_read_flavor_force(struct srcu_struct *ssp, int read_flavor)
/*
* Counts the new reader in the appropriate per-CPU element of the
* srcu_struct. Returns a pointer that must be passed to the matching
* srcu_read_unlock_fast_updown(). This type of reader is compatible
* with srcu_down_read_fast() and srcu_up_read_fast().
*
* See the __srcu_read_lock_fast() comment for more details.
*/
static inline
struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
{
struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
if (likely(READ_ONCE(sdp->srcu_reader_flavor) & read_flavor))
return;
// Note that the cmpxchg() in __srcu_check_read_flavor() is fully ordered.
__srcu_check_read_flavor(ssp, read_flavor);
if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
this_cpu_inc(scp->srcu_locks.counter); // Y, and implicit RCU reader.
else
atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); // Y, and implicit RCU reader.
barrier(); /* Avoid leaking the critical section. */
return scp;
}
// Record non-_lite() usage only for CONFIG_PROVE_RCU=y kernels.
/*
* Removes the count for the old reader from the appropriate
* per-CPU element of the srcu_struct. Note that this may well be a
* different CPU than that which was incremented by the corresponding
* srcu_read_lock_fast(), but it must be within the same task.
*
* Please see the __srcu_read_lock_fast() function's header comment for
* information on implicit RCU readers and NMI safety.
*/
static inline void notrace
__srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
{
barrier(); /* Avoid leaking the critical section. */
if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
this_cpu_inc(scp->srcu_unlocks.counter); // Z, and implicit RCU reader.
else
atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks)); // Z, and implicit RCU reader.
}
void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor);
// Record SRCU-reader usage type only for CONFIG_PROVE_RCU=y kernels.
static inline void srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor)
{
if (IS_ENABLED(CONFIG_PROVE_RCU))

View File

@@ -103,8 +103,8 @@ static const struct kernel_param_ops lt_bind_ops = {
.get = param_get_cpumask,
};
module_param_cb(bind_readers, &lt_bind_ops, &bind_readers, 0644);
module_param_cb(bind_writers, &lt_bind_ops, &bind_writers, 0644);
module_param_cb(bind_readers, &lt_bind_ops, &bind_readers, 0444);
module_param_cb(bind_writers, &lt_bind_ops, &bind_writers, 0444);
long torture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask, bool dowarn);
@@ -1211,6 +1211,10 @@ end:
cxt.cur_ops->exit();
cxt.init_called = false;
}
free_cpumask_var(bind_readers);
free_cpumask_var(bind_writers);
torture_cleanup_end();
}

View File

@@ -213,4 +213,19 @@ config RCU_STRICT_GRACE_PERIOD
when looking for certain types of RCU usage bugs, for example,
too-short RCU read-side critical sections.
config RCU_DYNTICKS_TORTURE
bool "Minimize RCU dynticks counter size"
depends on RCU_EXPERT && !COMPILE_TEST
default n
help
This option sets the width of the dynticks counter to its
minimum usable value. This minimum width greatly increases
the probability of flushing out bugs involving counter wrap,
but it also increases the probability of extending grace period
durations. This Kconfig option should therefore be avoided in
production due to the consequent increased probability of OOMs.
This has no value for production and is only for testing.
endmenu # "RCU Debugging"

View File

@@ -389,6 +389,7 @@ struct rcu_torture_ops {
void (*deferred_free)(struct rcu_torture *p);
void (*sync)(void);
void (*exp_sync)(void);
void (*exp_current)(void);
unsigned long (*get_gp_state_exp)(void);
unsigned long (*start_gp_poll_exp)(void);
void (*start_gp_poll_exp_full)(struct rcu_gp_oldstate *rgosp);
@@ -691,10 +692,29 @@ static struct rcu_torture_ops rcu_busted_ops = {
*/
DEFINE_STATIC_SRCU(srcu_ctl);
DEFINE_STATIC_SRCU_FAST(srcu_ctlf);
DEFINE_STATIC_SRCU_FAST_UPDOWN(srcu_ctlfud);
static struct srcu_struct srcu_ctld;
static struct srcu_struct *srcu_ctlp = &srcu_ctl;
static struct rcu_torture_ops srcud_ops;
static void srcu_torture_init(void)
{
rcu_sync_torture_init();
if (!reader_flavor || (reader_flavor & SRCU_READ_FLAVOR_NORMAL))
VERBOSE_TOROUT_STRING("srcu_torture_init normal SRCU");
if (reader_flavor & SRCU_READ_FLAVOR_NMI)
VERBOSE_TOROUT_STRING("srcu_torture_init NMI-safe SRCU");
if (reader_flavor & SRCU_READ_FLAVOR_FAST) {
srcu_ctlp = &srcu_ctlf;
VERBOSE_TOROUT_STRING("srcu_torture_init fast SRCU");
}
if (reader_flavor & SRCU_READ_FLAVOR_FAST_UPDOWN) {
srcu_ctlp = &srcu_ctlfud;
VERBOSE_TOROUT_STRING("srcu_torture_init fast-up/down SRCU");
}
}
static void srcu_get_gp_data(int *flags, unsigned long *gp_seq)
{
srcutorture_get_gp_data(srcu_ctlp, flags, gp_seq);
@@ -722,6 +742,12 @@ static int srcu_torture_read_lock(void)
scp = srcu_read_lock_fast(srcu_ctlp);
idx = __srcu_ptr_to_ctr(srcu_ctlp, scp);
WARN_ON_ONCE(idx & ~0x1);
ret += idx << 2;
}
if (reader_flavor & SRCU_READ_FLAVOR_FAST_UPDOWN) {
scp = srcu_read_lock_fast_updown(srcu_ctlp);
idx = __srcu_ptr_to_ctr(srcu_ctlp, scp);
WARN_ON_ONCE(idx & ~0x1);
ret += idx << 3;
}
return ret;
@@ -749,8 +775,11 @@ srcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
static void srcu_torture_read_unlock(int idx)
{
WARN_ON_ONCE((reader_flavor && (idx & ~reader_flavor)) || (!reader_flavor && (idx & ~0x1)));
if (reader_flavor & SRCU_READ_FLAVOR_FAST_UPDOWN)
srcu_read_unlock_fast_updown(srcu_ctlp,
__srcu_ctr_to_ptr(srcu_ctlp, (idx & 0x8) >> 3));
if (reader_flavor & SRCU_READ_FLAVOR_FAST)
srcu_read_unlock_fast(srcu_ctlp, __srcu_ctr_to_ptr(srcu_ctlp, (idx & 0x8) >> 3));
srcu_read_unlock_fast(srcu_ctlp, __srcu_ctr_to_ptr(srcu_ctlp, (idx & 0x4) >> 2));
if (reader_flavor & SRCU_READ_FLAVOR_NMI)
srcu_read_unlock_nmisafe(srcu_ctlp, (idx & 0x2) >> 1);
if ((reader_flavor & SRCU_READ_FLAVOR_NORMAL) || !(reader_flavor & SRCU_READ_FLAVOR_ALL))
@@ -784,7 +813,7 @@ static int srcu_torture_down_read(void)
WARN_ON_ONCE(idx & ~0x1);
return idx;
}
if (reader_flavor & SRCU_READ_FLAVOR_FAST) {
if (reader_flavor & SRCU_READ_FLAVOR_FAST_UPDOWN) {
scp = srcu_down_read_fast(srcu_ctlp);
idx = __srcu_ptr_to_ctr(srcu_ctlp, scp);
WARN_ON_ONCE(idx & ~0x1);
@@ -797,7 +826,7 @@ static int srcu_torture_down_read(void)
static void srcu_torture_up_read(int idx)
{
WARN_ON_ONCE((reader_flavor && (idx & ~reader_flavor)) || (!reader_flavor && (idx & ~0x1)));
if (reader_flavor & SRCU_READ_FLAVOR_FAST)
if (reader_flavor & SRCU_READ_FLAVOR_FAST_UPDOWN)
srcu_up_read_fast(srcu_ctlp, __srcu_ctr_to_ptr(srcu_ctlp, (idx & 0x8) >> 3));
else if ((reader_flavor & SRCU_READ_FLAVOR_NORMAL) ||
!(reader_flavor & SRCU_READ_FLAVOR_ALL))
@@ -857,9 +886,14 @@ static void srcu_torture_synchronize_expedited(void)
synchronize_srcu_expedited(srcu_ctlp);
}
static void srcu_torture_expedite_current(void)
{
srcu_expedite_current(srcu_ctlp);
}
static struct rcu_torture_ops srcu_ops = {
.ttype = SRCU_FLAVOR,
.init = rcu_sync_torture_init,
.init = srcu_torture_init,
.readlock = srcu_torture_read_lock,
.read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock,
@@ -871,6 +905,7 @@ static struct rcu_torture_ops srcu_ops = {
.deferred_free = srcu_torture_deferred_free,
.sync = srcu_torture_synchronize,
.exp_sync = srcu_torture_synchronize_expedited,
.exp_current = srcu_torture_expedite_current,
.same_gp_state = same_state_synchronize_srcu,
.get_comp_state = get_completed_synchronize_srcu,
.get_gp_state = srcu_torture_get_gp_state,
@@ -886,14 +921,28 @@ static struct rcu_torture_ops srcu_ops = {
.no_pi_lock = IS_ENABLED(CONFIG_TINY_SRCU),
.debug_objects = 1,
.have_up_down = IS_ENABLED(CONFIG_TINY_SRCU)
? 0 : SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_FAST,
? 0 : SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_FAST_UPDOWN,
.name = "srcu"
};
static void srcu_torture_init(void)
static void srcud_torture_init(void)
{
rcu_sync_torture_init();
WARN_ON(init_srcu_struct(&srcu_ctld));
if (!reader_flavor || (reader_flavor & SRCU_READ_FLAVOR_NORMAL)) {
WARN_ON(init_srcu_struct(&srcu_ctld));
VERBOSE_TOROUT_STRING("srcud_torture_init normal SRCU");
} else if (reader_flavor & SRCU_READ_FLAVOR_NMI) {
WARN_ON(init_srcu_struct(&srcu_ctld));
VERBOSE_TOROUT_STRING("srcud_torture_init NMI-safe SRCU");
} else if (reader_flavor & SRCU_READ_FLAVOR_FAST) {
WARN_ON(init_srcu_struct_fast(&srcu_ctld));
VERBOSE_TOROUT_STRING("srcud_torture_init fast SRCU");
} else if (reader_flavor & SRCU_READ_FLAVOR_FAST_UPDOWN) {
WARN_ON(init_srcu_struct_fast_updown(&srcu_ctld));
VERBOSE_TOROUT_STRING("srcud_torture_init fast-up/down SRCU");
} else {
WARN_ON(init_srcu_struct(&srcu_ctld));
}
srcu_ctlp = &srcu_ctld;
}
@@ -906,7 +955,7 @@ static void srcu_torture_cleanup(void)
/* As above, but dynamically allocated. */
static struct rcu_torture_ops srcud_ops = {
.ttype = SRCU_FLAVOR,
.init = srcu_torture_init,
.init = srcud_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock,
.read_delay = srcu_read_delay,
@@ -919,6 +968,7 @@ static struct rcu_torture_ops srcud_ops = {
.deferred_free = srcu_torture_deferred_free,
.sync = srcu_torture_synchronize,
.exp_sync = srcu_torture_synchronize_expedited,
.exp_current = srcu_torture_expedite_current,
.same_gp_state = same_state_synchronize_srcu,
.get_comp_state = get_completed_synchronize_srcu,
.get_gp_state = srcu_torture_get_gp_state,
@@ -934,7 +984,7 @@ static struct rcu_torture_ops srcud_ops = {
.no_pi_lock = IS_ENABLED(CONFIG_TINY_SRCU),
.debug_objects = 1,
.have_up_down = IS_ENABLED(CONFIG_TINY_SRCU)
? 0 : SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_FAST,
? 0 : SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_FAST_UPDOWN,
.name = "srcud"
};
@@ -1700,6 +1750,8 @@ rcu_torture_writer(void *arg)
ulo[i] = cur_ops->get_comp_state();
gp_snap = cur_ops->start_gp_poll();
rcu_torture_writer_state = RTWS_POLL_WAIT;
if (cur_ops->exp_current && !(torture_random(&rand) % 0xff))
cur_ops->exp_current();
while (!cur_ops->poll_gp_state(gp_snap)) {
gp_snap1 = cur_ops->get_gp_state();
for (i = 0; i < ulo_size; i++)
@@ -1720,6 +1772,8 @@ rcu_torture_writer(void *arg)
cur_ops->get_comp_state_full(&rgo[i]);
cur_ops->start_gp_poll_full(&gp_snap_full);
rcu_torture_writer_state = RTWS_POLL_WAIT_FULL;
if (cur_ops->exp_current && !(torture_random(&rand) % 0xff))
cur_ops->exp_current();
while (!cur_ops->poll_gp_state_full(&gp_snap_full)) {
cur_ops->get_gp_state_full(&gp_snap1_full);
for (i = 0; i < rgo_size; i++)
@@ -2384,10 +2438,8 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid)
newstate = rcutorture_extend_mask(rtors.readstate, trsp);
WARN_ON_ONCE(newstate & RCUTORTURE_RDR_UPDOWN);
rcutorture_one_extend(&rtors.readstate, newstate, trsp, rtors.rtrsp++);
if (!rcu_torture_one_read_start(&rtors, trsp, myid)) {
rcutorture_one_extend(&rtors.readstate, 0, trsp, rtors.rtrsp);
if (!rcu_torture_one_read_start(&rtors, trsp, myid))
return false;
}
rtors.rtrsp = rcutorture_loop_extend(&rtors.readstate, trsp, rtors.rtrsp);
rcu_torture_one_read_end(&rtors, trsp);
return true;

View File

@@ -136,6 +136,7 @@ struct ref_scale_ops {
void (*cleanup)(void);
void (*readsection)(const int nloops);
void (*delaysection)(const int nloops, const int udl, const int ndl);
bool enable_irqs;
const char *name;
};
@@ -184,6 +185,8 @@ static const struct ref_scale_ops rcu_ops = {
// Definitions for SRCU ref scale testing.
DEFINE_STATIC_SRCU(srcu_refctl_scale);
DEFINE_STATIC_SRCU_FAST(srcu_fast_refctl_scale);
DEFINE_STATIC_SRCU_FAST_UPDOWN(srcu_fast_updown_refctl_scale);
static struct srcu_struct *srcu_ctlp = &srcu_refctl_scale;
static void srcu_ref_scale_read_section(const int nloops)
@@ -216,6 +219,12 @@ static const struct ref_scale_ops srcu_ops = {
.name = "srcu"
};
static bool srcu_fast_sync_scale_init(void)
{
srcu_ctlp = &srcu_fast_refctl_scale;
return true;
}
static void srcu_fast_ref_scale_read_section(const int nloops)
{
int i;
@@ -240,12 +249,48 @@ static void srcu_fast_ref_scale_delay_section(const int nloops, const int udl, c
}
static const struct ref_scale_ops srcu_fast_ops = {
.init = rcu_sync_scale_init,
.init = srcu_fast_sync_scale_init,
.readsection = srcu_fast_ref_scale_read_section,
.delaysection = srcu_fast_ref_scale_delay_section,
.name = "srcu-fast"
};
static bool srcu_fast_updown_sync_scale_init(void)
{
srcu_ctlp = &srcu_fast_updown_refctl_scale;
return true;
}
static void srcu_fast_updown_ref_scale_read_section(const int nloops)
{
int i;
struct srcu_ctr __percpu *scp;
for (i = nloops; i >= 0; i--) {
scp = srcu_read_lock_fast_updown(srcu_ctlp);
srcu_read_unlock_fast_updown(srcu_ctlp, scp);
}
}
static void srcu_fast_updown_ref_scale_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
struct srcu_ctr __percpu *scp;
for (i = nloops; i >= 0; i--) {
scp = srcu_read_lock_fast_updown(srcu_ctlp);
un_delay(udl, ndl);
srcu_read_unlock_fast_updown(srcu_ctlp, scp);
}
}
static const struct ref_scale_ops srcu_fast_updown_ops = {
.init = srcu_fast_updown_sync_scale_init,
.readsection = srcu_fast_updown_ref_scale_read_section,
.delaysection = srcu_fast_updown_ref_scale_delay_section,
.name = "srcu-fast-updown"
};
#ifdef CONFIG_TASKS_RCU
// Definitions for RCU Tasks ref scale testing: Empty read markers.
@@ -323,6 +368,9 @@ static const struct ref_scale_ops rcu_trace_ops = {
// Definitions for reference count
static atomic_t refcnt;
// Definitions acquire-release.
static DEFINE_PER_CPU(unsigned long, test_acqrel);
static void ref_refcnt_section(const int nloops)
{
int i;
@@ -351,6 +399,184 @@ static const struct ref_scale_ops refcnt_ops = {
.name = "refcnt"
};
static void ref_percpuinc_section(const int nloops)
{
int i;
for (i = nloops; i >= 0; i--) {
this_cpu_inc(test_acqrel);
this_cpu_dec(test_acqrel);
}
}
static void ref_percpuinc_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
for (i = nloops; i >= 0; i--) {
this_cpu_inc(test_acqrel);
un_delay(udl, ndl);
this_cpu_dec(test_acqrel);
}
}
static const struct ref_scale_ops percpuinc_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_percpuinc_section,
.delaysection = ref_percpuinc_delay_section,
.name = "percpuinc"
};
// Note that this can lose counts in preemptible kernels.
static void ref_incpercpu_section(const int nloops)
{
int i;
for (i = nloops; i >= 0; i--) {
unsigned long *tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
}
}
static void ref_incpercpu_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
for (i = nloops; i >= 0; i--) {
unsigned long *tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
un_delay(udl, ndl);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
}
}
static const struct ref_scale_ops incpercpu_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_incpercpu_section,
.delaysection = ref_incpercpu_delay_section,
.name = "incpercpu"
};
static void ref_incpercpupreempt_section(const int nloops)
{
int i;
for (i = nloops; i >= 0; i--) {
unsigned long *tap;
preempt_disable();
tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
preempt_enable();
}
}
static void ref_incpercpupreempt_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
for (i = nloops; i >= 0; i--) {
unsigned long *tap;
preempt_disable();
tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
un_delay(udl, ndl);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
preempt_enable();
}
}
static const struct ref_scale_ops incpercpupreempt_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_incpercpupreempt_section,
.delaysection = ref_incpercpupreempt_delay_section,
.name = "incpercpupreempt"
};
static void ref_incpercpubh_section(const int nloops)
{
int i;
for (i = nloops; i >= 0; i--) {
unsigned long *tap;
local_bh_disable();
tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
local_bh_enable();
}
}
static void ref_incpercpubh_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
for (i = nloops; i >= 0; i--) {
unsigned long *tap;
local_bh_disable();
tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
un_delay(udl, ndl);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
local_bh_enable();
}
}
static const struct ref_scale_ops incpercpubh_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_incpercpubh_section,
.delaysection = ref_incpercpubh_delay_section,
.enable_irqs = true,
.name = "incpercpubh"
};
static void ref_incpercpuirqsave_section(const int nloops)
{
int i;
unsigned long flags;
for (i = nloops; i >= 0; i--) {
unsigned long *tap;
local_irq_save(flags);
tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
local_irq_restore(flags);
}
}
static void ref_incpercpuirqsave_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
unsigned long flags;
for (i = nloops; i >= 0; i--) {
unsigned long *tap;
local_irq_save(flags);
tap = this_cpu_ptr(&test_acqrel);
WRITE_ONCE(*tap, READ_ONCE(*tap) + 1);
un_delay(udl, ndl);
WRITE_ONCE(*tap, READ_ONCE(*tap) - 1);
local_irq_restore(flags);
}
}
static const struct ref_scale_ops incpercpuirqsave_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_incpercpuirqsave_section,
.delaysection = ref_incpercpuirqsave_delay_section,
.name = "incpercpuirqsave"
};
// Definitions for rwlock
static rwlock_t test_rwlock;
@@ -494,9 +720,6 @@ static const struct ref_scale_ops lock_irq_ops = {
.name = "lock-irq"
};
// Definitions acquire-release.
static DEFINE_PER_CPU(unsigned long, test_acqrel);
static void ref_acqrel_section(const int nloops)
{
unsigned long x;
@@ -629,6 +852,133 @@ static const struct ref_scale_ops jiffies_ops = {
.name = "jiffies"
};
static void ref_preempt_section(const int nloops)
{
int i;
migrate_disable();
for (i = nloops; i >= 0; i--) {
preempt_disable();
preempt_enable();
}
migrate_enable();
}
static void ref_preempt_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
migrate_disable();
for (i = nloops; i >= 0; i--) {
preempt_disable();
un_delay(udl, ndl);
preempt_enable();
}
migrate_enable();
}
static const struct ref_scale_ops preempt_ops = {
.readsection = ref_preempt_section,
.delaysection = ref_preempt_delay_section,
.name = "preempt"
};
static void ref_bh_section(const int nloops)
{
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
local_bh_disable();
local_bh_enable();
}
preempt_enable();
}
static void ref_bh_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
local_bh_disable();
un_delay(udl, ndl);
local_bh_enable();
}
preempt_enable();
}
static const struct ref_scale_ops bh_ops = {
.readsection = ref_bh_section,
.delaysection = ref_bh_delay_section,
.enable_irqs = true,
.name = "bh"
};
static void ref_irq_section(const int nloops)
{
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
local_irq_disable();
local_irq_enable();
}
preempt_enable();
}
static void ref_irq_delay_section(const int nloops, const int udl, const int ndl)
{
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
local_irq_disable();
un_delay(udl, ndl);
local_irq_enable();
}
preempt_enable();
}
static const struct ref_scale_ops irq_ops = {
.readsection = ref_irq_section,
.delaysection = ref_irq_delay_section,
.name = "irq"
};
static void ref_irqsave_section(const int nloops)
{
unsigned long flags;
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
local_irq_save(flags);
local_irq_restore(flags);
}
preempt_enable();
}
static void ref_irqsave_delay_section(const int nloops, const int udl, const int ndl)
{
unsigned long flags;
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
local_irq_save(flags);
un_delay(udl, ndl);
local_irq_restore(flags);
}
preempt_enable();
}
static const struct ref_scale_ops irqsave_ops = {
.readsection = ref_irqsave_section,
.delaysection = ref_irqsave_delay_section,
.name = "irqsave"
};
////////////////////////////////////////////////////////////////////////
//
// Methods leveraging SLAB_TYPESAFE_BY_RCU.
@@ -924,15 +1274,18 @@ repeat:
if (!atomic_dec_return(&n_warmedup))
while (atomic_read_acquire(&n_warmedup))
rcu_scale_one_reader();
// Also keep interrupts disabled. This also has the effect
// of preventing entries into slow path for rcu_read_unlock().
local_irq_save(flags);
// Also keep interrupts disabled when it is safe to do so, which
// it is not for local_bh_enable(). This also has the effect of
// preventing entries into slow path for rcu_read_unlock().
if (!cur_ops->enable_irqs)
local_irq_save(flags);
start = ktime_get_mono_fast_ns();
rcu_scale_one_reader();
duration = ktime_get_mono_fast_ns() - start;
local_irq_restore(flags);
if (!cur_ops->enable_irqs)
local_irq_restore(flags);
rt->last_duration_ns = WARN_ON_ONCE(duration < 0) ? 0 : duration;
// To reduce runtime-skew noise, do maintain-load invocations until
@@ -1163,9 +1516,13 @@ ref_scale_init(void)
long i;
int firsterr = 0;
static const struct ref_scale_ops *scale_ops[] = {
&rcu_ops, &srcu_ops, &srcu_fast_ops, RCU_TRACE_OPS RCU_TASKS_OPS
&refcnt_ops, &rwlock_ops, &rwsem_ops, &lock_ops, &lock_irq_ops,
&acqrel_ops, &sched_clock_ops, &clock_ops, &jiffies_ops,
&rcu_ops, &srcu_ops, &srcu_fast_ops, &srcu_fast_updown_ops,
RCU_TRACE_OPS RCU_TASKS_OPS
&refcnt_ops, &percpuinc_ops, &incpercpu_ops, &incpercpupreempt_ops,
&incpercpubh_ops, &incpercpuirqsave_ops,
&rwlock_ops, &rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops,
&sched_clock_ops, &clock_ops, &jiffies_ops,
&preempt_ops, &bh_ops, &irq_ops, &irqsave_ops,
&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops,
};

View File

@@ -106,15 +106,15 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
newval = READ_ONCE(ssp->srcu_lock_nesting[idx]) - 1;
WRITE_ONCE(ssp->srcu_lock_nesting[idx], newval);
preempt_enable();
if (!newval && READ_ONCE(ssp->srcu_gp_waiting) && in_task())
if (!newval && READ_ONCE(ssp->srcu_gp_waiting) && in_task() && !irqs_disabled())
swake_up_one(&ssp->srcu_wq);
}
EXPORT_SYMBOL_GPL(__srcu_read_unlock);
/*
* Workqueue handler to drive one grace period and invoke any callbacks
* that become ready as a result. Single-CPU and !PREEMPTION operation
* means that we get away with murder on synchronization. ;-)
* that become ready as a result. Single-CPU operation and preemption
* disabling mean that we get away with murder on synchronization. ;-)
*/
void srcu_drive_gp(struct work_struct *wp)
{
@@ -141,7 +141,12 @@ void srcu_drive_gp(struct work_struct *wp)
WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
WRITE_ONCE(ssp->srcu_gp_waiting, true); /* srcu_read_unlock() wakes! */
preempt_enable();
swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
do {
// Deadlock issues prevent __srcu_read_unlock() from
// doing an unconditional wakeup, so polling is required.
swait_event_timeout_exclusive(ssp->srcu_wq,
!READ_ONCE(ssp->srcu_lock_nesting[idx]), HZ / 10);
} while (READ_ONCE(ssp->srcu_lock_nesting[idx]));
preempt_disable(); // Needed for PREEMPT_LAZY
WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);

View File

@@ -286,32 +286,92 @@ err_free_sup:
#ifdef CONFIG_DEBUG_LOCK_ALLOC
int __init_srcu_struct(struct srcu_struct *ssp, const char *name,
struct lock_class_key *key)
static int
__init_srcu_struct_common(struct srcu_struct *ssp, const char *name, struct lock_class_key *key)
{
/* Don't re-initialize a lock while it is held. */
debug_check_no_locks_freed((void *)ssp, sizeof(*ssp));
lockdep_init_map(&ssp->dep_map, name, key, 0);
return init_srcu_struct_fields(ssp, false);
}
int __init_srcu_struct(struct srcu_struct *ssp, const char *name, struct lock_class_key *key)
{
ssp->srcu_reader_flavor = 0;
return __init_srcu_struct_common(ssp, name, key);
}
EXPORT_SYMBOL_GPL(__init_srcu_struct);
int __init_srcu_struct_fast(struct srcu_struct *ssp, const char *name, struct lock_class_key *key)
{
ssp->srcu_reader_flavor = SRCU_READ_FLAVOR_FAST;
return __init_srcu_struct_common(ssp, name, key);
}
EXPORT_SYMBOL_GPL(__init_srcu_struct_fast);
int __init_srcu_struct_fast_updown(struct srcu_struct *ssp, const char *name,
struct lock_class_key *key)
{
ssp->srcu_reader_flavor = SRCU_READ_FLAVOR_FAST_UPDOWN;
return __init_srcu_struct_common(ssp, name, key);
}
EXPORT_SYMBOL_GPL(__init_srcu_struct_fast_updown);
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
/**
* init_srcu_struct - initialize a sleep-RCU structure
* @ssp: structure to initialize.
*
* Must invoke this on a given srcu_struct before passing that srcu_struct
* Use this in place of DEFINE_SRCU() and DEFINE_STATIC_SRCU()
* for non-static srcu_struct structures that are to be passed to
* srcu_read_lock(), srcu_read_lock_nmisafe(), and friends. It is necessary
* to invoke this on a given srcu_struct before passing that srcu_struct
* to any other function. Each srcu_struct represents a separate domain
* of SRCU protection.
*/
int init_srcu_struct(struct srcu_struct *ssp)
{
ssp->srcu_reader_flavor = 0;
return init_srcu_struct_fields(ssp, false);
}
EXPORT_SYMBOL_GPL(init_srcu_struct);
/**
* init_srcu_struct_fast - initialize a fast-reader sleep-RCU structure
* @ssp: structure to initialize.
*
* Use this in place of DEFINE_SRCU_FAST() and DEFINE_STATIC_SRCU_FAST()
* for non-static srcu_struct structures that are to be passed to
* srcu_read_lock_fast() and friends. It is necessary to invoke this on a
* given srcu_struct before passing that srcu_struct to any other function.
* Each srcu_struct represents a separate domain of SRCU protection.
*/
int init_srcu_struct_fast(struct srcu_struct *ssp)
{
ssp->srcu_reader_flavor = SRCU_READ_FLAVOR_FAST;
return init_srcu_struct_fields(ssp, false);
}
EXPORT_SYMBOL_GPL(init_srcu_struct_fast);
/**
* init_srcu_struct_fast_updown - initialize a fast-reader up/down sleep-RCU structure
* @ssp: structure to initialize.
*
* Use this function in place of DEFINE_SRCU_FAST_UPDOWN() and
* DEFINE_STATIC_SRCU_FAST_UPDOWN() for non-static srcu_struct
* structures that are to be passed to srcu_read_lock_fast_updown(),
* srcu_down_read_fast(), and friends. It is necessary to invoke this on a
* given srcu_struct before passing that srcu_struct to any other function.
* Each srcu_struct represents a separate domain of SRCU protection.
*/
int init_srcu_struct_fast_updown(struct srcu_struct *ssp)
{
ssp->srcu_reader_flavor = SRCU_READ_FLAVOR_FAST_UPDOWN;
return init_srcu_struct_fields(ssp, false);
}
EXPORT_SYMBOL_GPL(init_srcu_struct_fast_updown);
#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
/*
@@ -461,7 +521,7 @@ static bool srcu_readers_lock_idx(struct srcu_struct *ssp, int idx, bool gp, uns
static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx, unsigned long *rdm)
{
int cpu;
unsigned long mask = 0;
unsigned long mask = ssp->srcu_reader_flavor;
unsigned long sum = 0;
for_each_possible_cpu(cpu) {
@@ -734,6 +794,10 @@ void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor)
sdp = raw_cpu_ptr(ssp->sda);
old_read_flavor = READ_ONCE(sdp->srcu_reader_flavor);
WARN_ON_ONCE(ssp->srcu_reader_flavor && read_flavor != ssp->srcu_reader_flavor);
WARN_ON_ONCE(old_read_flavor && ssp->srcu_reader_flavor &&
old_read_flavor != ssp->srcu_reader_flavor);
WARN_ON_ONCE(read_flavor == SRCU_READ_FLAVOR_FAST && !ssp->srcu_reader_flavor);
if (!old_read_flavor) {
old_read_flavor = cmpxchg(&sdp->srcu_reader_flavor, 0, read_flavor);
if (!old_read_flavor)
@@ -1688,6 +1752,64 @@ void srcu_barrier(struct srcu_struct *ssp)
}
EXPORT_SYMBOL_GPL(srcu_barrier);
/* Callback for srcu_expedite_current() usage. */
static void srcu_expedite_current_cb(struct rcu_head *rhp)
{
	unsigned long flags;
	bool needcb = false;
	struct srcu_data *sdp = container_of(rhp, struct srcu_data, srcu_ec_head);

	spin_lock_irqsave_sdp_contention(sdp, &flags);
	if (sdp->srcu_ec_state == SRCU_EC_IDLE) {
		WARN_ON_ONCE(1);
	} else if (sdp->srcu_ec_state == SRCU_EC_PENDING) {
		sdp->srcu_ec_state = SRCU_EC_IDLE;
	} else {
		WARN_ON_ONCE(sdp->srcu_ec_state != SRCU_EC_REPOST);
		sdp->srcu_ec_state = SRCU_EC_PENDING;
		needcb = true;
	}
	spin_unlock_irqrestore_rcu_node(sdp, flags);
	// If needed, requeue ourselves as an expedited SRCU callback.
	if (needcb)
		__call_srcu(sdp->ssp, &sdp->srcu_ec_head, srcu_expedite_current_cb, false);
}
/**
* srcu_expedite_current - Expedite the current SRCU grace period
* @ssp: srcu_struct to expedite.
*
* Cause the current SRCU grace period to become expedited. The grace
* period following the current one might also be expedited. If there is
* no current grace period, one might be created. If the current grace
* period is sleeping, that sleep will complete before the expediting
* takes effect.
*/
void srcu_expedite_current(struct srcu_struct *ssp)
{
	unsigned long flags;
	bool needcb = false;
	struct srcu_data *sdp;

	migrate_disable();
	sdp = this_cpu_ptr(ssp->sda);
	spin_lock_irqsave_sdp_contention(sdp, &flags);
	if (sdp->srcu_ec_state == SRCU_EC_IDLE) {
		sdp->srcu_ec_state = SRCU_EC_PENDING;
		needcb = true;
	} else if (sdp->srcu_ec_state == SRCU_EC_PENDING) {
		sdp->srcu_ec_state = SRCU_EC_REPOST;
	} else {
		WARN_ON_ONCE(sdp->srcu_ec_state != SRCU_EC_REPOST);
	}
	spin_unlock_irqrestore_rcu_node(sdp, flags);
	// If needed, queue an expedited SRCU callback.
	if (needcb)
		__call_srcu(ssp, &sdp->srcu_ec_head, srcu_expedite_current_cb, false);
	migrate_enable();
}
EXPORT_SYMBOL_GPL(srcu_expedite_current);
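
An illustrative usage sketch, not from this patch: a caller that has just queued an SRCU callback and wants the grace period it depends on to be processed more urgently might do something like the following (my_srcu, my_rhp, and my_cb are hypothetical).

	call_srcu(&my_srcu, &my_rhp, my_cb);	/* hypothetical callback */
	srcu_expedite_current(&my_srcu);	/* hurry the current GP along */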
/**
* srcu_batches_completed - return batches completed.
* @ssp: srcu_struct on which to report batch completion.


@@ -4017,7 +4017,7 @@ bool rcu_cpu_online(int cpu)
* RCU on an offline processor during initial boot, hence the check for
* rcu_scheduler_fully_active.
*/
-bool rcu_lockdep_current_cpu_online(void)
+bool notrace rcu_lockdep_current_cpu_online(void)
{
struct rcu_data *rdp;
bool ret = false;


@@ -117,7 +117,7 @@ static bool rcu_read_lock_held_common(bool *ret)
return false;
}
-int rcu_read_lock_sched_held(void)
+int notrace rcu_read_lock_sched_held(void)
{
bool ret;
@@ -342,7 +342,7 @@ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
* Note that rcu_read_lock() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
*/
-int rcu_read_lock_held(void)
+int notrace rcu_read_lock_held(void)
{
bool ret;
@@ -367,7 +367,7 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_held);
* Note that rcu_read_lock_bh() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
*/
-int rcu_read_lock_bh_held(void)
+int notrace rcu_read_lock_bh_held(void)
{
bool ret;
@@ -377,7 +377,7 @@ int rcu_read_lock_bh_held(void)
}
EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
-int rcu_read_lock_any_held(void)
+int notrace rcu_read_lock_any_held(void)
{
bool ret;


@@ -31,7 +31,7 @@ fi
if ! cp "$oldrun/scenarios" $T/scenarios.oldrun
then
	# Later on, can reconstitute this from console.log files.
-	echo Prior run batches file does not exist: $oldrun/batches
+	echo Prior run scenarios file does not exist: $oldrun/scenarios
	exit 1
fi
@@ -68,7 +68,7 @@ usage () {
echo " --datestamp string"
echo " --dryrun"
echo " --duration minutes | <seconds>s | <hours>h | <days>d"
echo " --link hard|soft|copy"
echo " --link hard|soft|copy|inplace|inplace-force"
echo " --remote"
echo " --rundir /new/res/path"
echo "Command line: $scriptname $args"
@@ -121,7 +121,7 @@ do
shift
;;
--link)
checkarg --link "hard|soft|copy" "$#" "$2" 'hard\|soft\|copy' '^--'
checkarg --link "hard|soft|copy|inplace|inplace-force" "$#" "$2" 'hard\|soft\|copy\|inplace\|inplace-force' '^--'
case "$2" in
copy)
arg_link="cp -R"
@@ -132,6 +132,14 @@ do
soft)
arg_link="cp -Rs"
;;
+		inplace)
+			arg_link="inplace"
+			rundir="$oldrun"
+			;;
+		inplace-force)
+			arg_link="inplace-force"
+			rundir="$oldrun"
+			;;
esac
shift
;;
@@ -172,21 +180,37 @@ fi
echo ---- Re-run results directory: $rundir
-# Copy old run directory tree over and adjust.
-mkdir -p "`dirname "$rundir"`"
-if ! $arg_link "$oldrun" "$rundir"
+if test "$oldrun" != "$rundir"
 then
-	echo "Cannot copy from $oldrun to $rundir."
-	usage
-fi
-rm -f "$rundir"/*/{console.log,console.log.diags,qemu_pid,qemu-pid,qemu-retval,Warnings,kvm-test-1-run.sh.out,kvm-test-1-run-qemu.sh.out,vmlinux} "$rundir"/log
-touch "$rundir/log"
-echo $scriptname $args | tee -a "$rundir/log"
-echo $oldrun > "$rundir/re-run"
-if ! test -d "$rundir/../../bin"
-then
-	$arg_link "$oldrun/../../bin" "$rundir/../.."
+	# Copy old run directory tree over and adjust.
+	mkdir -p "`dirname "$rundir"`"
+	if ! $arg_link "$oldrun" "$rundir"
+	then
+		echo "Cannot copy from $oldrun to $rundir."
+		usage
+	fi
+	rm -f "$rundir"/*/{console.log,console.log.diags,qemu_pid,qemu-pid,qemu-retval,Warnings,kvm-test-1-run.sh.out,kvm-test-1-run-qemu.sh.out,vmlinux} "$rundir"/log
+	touch "$rundir/log"
+	echo $scriptname $args | tee -a "$rundir/log"
+	echo $oldrun > "$rundir/re-run"
+	if ! test -d "$rundir/../../bin"
+	then
+		$arg_link "$oldrun/../../bin" "$rundir/../.."
+	fi
+else
+	# Check for a run having already happened.
+	find "$rundir" -name console.log -print > $T/oldrun-console.log
+	if test -s $T/oldrun-console.log
+	then
+		echo Run already took place in $rundir
+		if test "$arg_link" = inplace
+		then
+			usage
+		fi
+	fi
 fi
# Find runs to be done based on their qemu-cmd files.
for i in $rundir/*/qemu-cmd
do
cp "$i" $T


@@ -0,0 +1,116 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
#
# Usage: kvm-series.sh config-list commit-id-list [ kvm.sh parameters ]
#
# Tests the specified list of unadorned configs ("TREE01 SRCU-P" but not
# "CFLIST" or "3*TRACE01") against the specified set of commits, running
# each commit through each listed config using kvm.sh.
# The runs are grouped into a -series/config/commit directory tree.
# Each run defaults to a duration of one minute.
#
# Run in top-level Linux source directory. Please note that this is in
# no way a replacement for "git bisect"!!!
#
# This script is intended to replace kvm-check-branches.sh by providing
# ease of use and faster execution.
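#
# Example invocation (the commit range and trailing kvm.sh arguments are
# illustrative):
#	kvm-series.sh "TREE01 SRCU-P" HEAD~3.. --kconfig "CONFIG_NR_CPUS=4"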
T="`mktemp -d ${TMPDIR-/tmp}/kvm-series.sh.XXXXXX`"
trap 'rm -rf $T' 0
scriptname=$0
args="$*"
config_list="${1}"
if test -z "${config_list}"
then
echo "$0: Need a quoted list of --config arguments for first argument."
exit 1
fi
if test -z "${config_list}" || echo "${config_list}" | grep -q '\*'
then
echo "$0: Repetition ('*') not allowed in config list."
exit 1
fi
commit_list="${2}"
if test -z "${commit_list}"
then
echo "$0: Need a list of commits (e.g., HEAD^^^..) for second argument."
exit 2
fi
git log --pretty=format:"%h" "${commit_list}" > $T/commits
ret=$?
if test "${ret}" -ne 0
then
echo "$0: Invalid commit list ('${commit_list}')."
exit 2
fi
sha1_list=`cat $T/commits`
shift
shift
RCUTORTURE="`pwd`/tools/testing/selftests/rcutorture"; export RCUTORTURE
PATH=${RCUTORTURE}/bin:$PATH; export PATH
. functions.sh
ret=0
nfail=0
nsuccess=0
faillist=
successlist=
cursha1="`git rev-parse --abbrev-ref HEAD`"
ds="`date +%Y.%m.%d-%H.%M.%S`-series"
startdate="`date`"
starttime="`get_starttime`"
echo " --- " $scriptname $args | tee -a $T/log
echo " --- Results directory: " $ds | tee -a $T/log
for config in ${config_list}
do
	sha_n=0
	for sha in ${sha1_list}
	do
		sha1=${sha_n}.${sha} # Enable "sort -k1nr" to list commits in order.
		echo Starting ${config}/${sha1} at `date` | tee -a $T/log
		git checkout "${sha}"
		time tools/testing/selftests/rcutorture/bin/kvm.sh --configs "$config" --datestamp "$ds/${config}/${sha1}" --duration 1 "$@"
		curret=$?
		if test "${curret}" -ne 0
		then
			nfail=$((nfail+1))
			faillist="$faillist ${config}/${sha1}(${curret})"
		else
			nsuccess=$((nsuccess+1))
			successlist="$successlist ${config}/${sha1}"
			# Successful run, so remove large files.
			rm -f ${RCUTORTURE}/$ds/${config}/${sha1}/{vmlinux,bzImage,System.map,Module.symvers}
		fi
		if test "${ret}" -eq 0
		then
			ret=${curret}
		fi
		sha_n=$((sha_n+1))
	done
done
git checkout "${cursha1}"
echo ${nsuccess} SUCCESSES: | tee -a $T/log
echo ${successlist} | fmt | tee -a $T/log
echo | tee -a $T/log
echo ${nfail} FAILURES: | tee -a $T/log
echo ${faillist} | fmt | tee -a $T/log
if test -n "${faillist}"
then
echo | tee -a $T/log
echo Failures across commits: | tee -a $T/log
echo ${faillist} | tr ' ' '\012' | sed -e 's,^[^/]*/,,' -e 's/([0-9]*)//' |
sort | uniq -c | sort -k2n | tee -a $T/log
fi
echo Started at $startdate, ended at `date`, duration `get_starttime_duration $starttime`. | tee -a $T/log
echo Summary: Successes: ${nsuccess} Failures: ${nfail} | tee -a $T/log
cp $T/log tools/testing/selftests/rcutorture/res/${ds}
exit "${ret}"


@@ -199,7 +199,7 @@ do
fi
;;
--kconfig|--kconfigs)
-		checkarg --kconfig "(Kconfig options)" $# "$2" '^\(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\( \+\(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\)* *$' '^error$'
+		checkarg --kconfig "(Kconfig options)" $# "$2" '^\(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|-\?[0-9]\+\|"[^"]*"\)\( \+\(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|-\?[0-9]\+\|"[^"]*"\)\)* *$' '^error$'
TORTURE_KCONFIG_ARG="`echo "$TORTURE_KCONFIG_ARG $2" | sed -e 's/^ *//' -e 's/ *$//'`"
shift
;;


@@ -16,3 +16,4 @@ CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
CONFIG_RCU_EQS_DEBUG=y
CONFIG_RCU_LAZY=y
CONFIG_RCU_DYNTICKS_TORTURE=y