| author | jhb <jhb@FreeBSD.org> | 2005-04-04 21:53:56 +0000 |
|---|---|---|
| committer | jhb <jhb@FreeBSD.org> | 2005-04-04 21:53:56 +0000 |
| commit | 41cadaa11ed081720fe75d25094f73a53d9bf55c (patch) | |
| tree | d48c8aa642d31e026486326f9d281f4d5eff0bdb /sys/sys/mutex.h | |
| parent | 7d745ab8664182cd756c16538844d587ed753eb9 (diff) | |
| download | FreeBSD-src-41cadaa11ed081720fe75d25094f73a53d9bf55c.zip, FreeBSD-src-41cadaa11ed081720fe75d25094f73a53d9bf55c.tar.gz | |
Divorce critical sections from spinlocks. Critical sections as denoted by
critical_enter() and critical_exit() are now solely a mechanism for
deferring kernel preemptions. They no longer have any effect on
interrupts. This means that standalone critical sections are now very
cheap as they are simply unlocked integer increments and decrements for the
common case.
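
In practice, that common-case fast path looks roughly like the sketch below. This is a minimal illustration of the described behavior, using the kernel's td_critnest per-thread nesting count; it is not the verbatim source, and the deferred-preemption slow path is elided:

```c
#include <sys/param.h>
#include <sys/proc.h>

/*
 * Sketch of the common-case fast path after this change: a bare
 * critical section is just an unlocked per-thread nesting count,
 * with no interrupt masking at all.
 */
void
critical_enter(void)
{

	curthread->td_critnest++;	/* plain, unlocked increment */
}

void
critical_exit(void)
{

	curthread->td_critnest--;	/* plain, unlocked decrement */
	/*
	 * On the outermost exit the real code also performs any
	 * preemption that was deferred while the count was nonzero;
	 * that slow path is elided here.
	 */
}
```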
Spin mutexes now use a separate KPI implemented in MD code: spinlock_enter()
and spinlock_exit(). This KPI is responsible for providing whatever MD
guarantees are needed to ensure that a thread holding a spin lock won't
be preempted by any other code that will try to lock the same lock. For
now all archs continue to block interrupts in a "spinlock section" as they
did formerly in all critical sections. Note that I've also taken this
opportunity to push a few things into MD code rather than MI. For example,
critical_fork_exit() no longer exists. Instead, MD code ensures that new
threads have the correct state when they are created. Also, we no longer
try to fix up the idle threads for APs in MI code. Instead, each arch sets
the initial curthread and adjusts the state of the idle thread it borrows
in order to perform the initial context switch.
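
As a sketch of what one such MD implementation might look like, assuming illustrative md_spinlock_count/md_saved_flags fields in the thread's MD area and intr_disable()/intr_restore() as stand-ins for whatever the arch provides to save and restore its interrupt state:

```c
#include <sys/param.h>
#include <sys/proc.h>
#include <machine/cpufunc.h>

/*
 * Illustrative MD implementation of the new KPI: block interrupts on
 * the outermost spinlock_enter() and restore the saved state on the
 * matching outermost spinlock_exit().  Field and helper names are
 * stand-ins, not a verbatim arch file.
 */
void
spinlock_enter(void)
{
	struct thread *td = curthread;

	if (td->td_md.md_spinlock_count == 0)
		td->td_md.md_saved_flags = intr_disable();
	td->td_md.md_spinlock_count++;
	critical_enter();	/* spin locks still defer preemption */
}

void
spinlock_exit(void)
{
	struct thread *td = curthread;

	critical_exit();
	td->td_md.md_spinlock_count--;
	if (td->td_md.md_spinlock_count == 0)
		intr_restore(td->td_md.md_saved_flags);
}
```

Counting nested acquisitions per thread means only the first spin lock pays for masking interrupts; further nested spin locks are cheap.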
This change is largely a big NOP, but the cleaner separation it provides
will allow for more efficient alternative locking schemes in other parts
of the kernel (bare critical sections rather than per-CPU spin mutexes
for per-CPU data for example).
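
For example, one could imagine protecting a per-CPU counter like this. The names are hypothetical, and the scheme is only safe if the data is never touched from interrupt context, since a bare critical section no longer masks interrupts:

```c
#include <sys/param.h>
#include <sys/pcpu.h>

/*
 * Hypothetical per-CPU counter touched only from thread context.
 * The critical section keeps the thread from being preempted (and
 * thus from migrating), so no spin mutex is needed.
 */
static u_long	foo_count[MAXCPU];

static void
foo_count_inc(void)
{

	critical_enter();	/* no preemption: we stay on this CPU */
	foo_count[curcpu]++;	/* only this CPU touches its slot */
	critical_exit();
}
```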
Reviewed by: grehan, cognet, arch@, others
Tested on: i386, alpha, sparc64, powerpc, arm, possibly more
Diffstat (limited to 'sys/sys/mutex.h')
| mode | path | lines |
|---|---|---|
| -rw-r--r-- | sys/sys/mutex.h | 12 |
1 file changed, 6 insertions, 6 deletions
```diff
diff --git a/sys/sys/mutex.h b/sys/sys/mutex.h
index 5fe31eb..45eb504 100644
--- a/sys/sys/mutex.h
+++ b/sys/sys/mutex.h
@@ -167,7 +167,7 @@ void	_mtx_assert(struct mtx *m, int what, const char *file, int line);
 #define _get_spin_lock(mp, tid, opts, file, line) do {	\
 	struct thread *_tid = (tid);			\
 							\
-	critical_enter();				\
+	spinlock_enter();				\
 	if (!_obtain_lock((mp), _tid)) {		\
 		if ((mp)->mtx_lock == (uintptr_t)_tid)	\
 			(mp)->mtx_recurse++;		\
@@ -179,7 +179,7 @@ void	_mtx_assert(struct mtx *m, int what, const char *file, int line);
 #define _get_spin_lock(mp, tid, opts, file, line) do {	\
 	struct thread *_tid = (tid);			\
 							\
-	critical_enter();				\
+	spinlock_enter();				\
 	if ((mp)->mtx_lock == (uintptr_t)_tid)		\
 		(mp)->mtx_recurse++;			\
 	else {						\
@@ -207,8 +207,8 @@ void	_mtx_assert(struct mtx *m, int what, const char *file, int line);
  * Since spin locks are not _too_ common, inlining this code is not too big
  * a deal.
  *
- * Since we always perform a critical_enter() when attempting to acquire a
- * spin lock, we need to always perform a matching critical_exit() when
+ * Since we always perform a spinlock_enter() when attempting to acquire a
+ * spin lock, we need to always perform a matching spinlock_exit() when
  * releasing a spin lock.  This includes the recursion cases.
  */
 #ifndef _rel_spin_lock
@@ -218,7 +218,7 @@ void	_mtx_assert(struct mtx *m, int what, const char *file, int line);
 		(mp)->mtx_recurse--;			\
 	else						\
 		_release_lock_quick((mp));		\
-	critical_exit();				\
+	spinlock_exit();				\
 } while (0)
 #else /* SMP */
 #define _rel_spin_lock(mp) do {			\
@@ -226,7 +226,7 @@ void	_mtx_assert(struct mtx *m, int what, const char *file, int line);
 		(mp)->mtx_recurse--;			\
 	else						\
 		(mp)->mtx_lock = MTX_UNOWNED;		\
-	critical_exit();				\
+	spinlock_exit();				\
 } while (0)
 #endif /* SMP */
 #endif
```
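
For reference, consumers of the spin mutex API are unaffected by this change, since the enter/exit pairing is internal to the macros patched above. A hypothetical caller still looks like this:

```c
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/*
 * Hypothetical consumer: callers still just pair mtx_lock_spin()
 * with mtx_unlock_spin(); the spinlock_enter()/spinlock_exit()
 * pairing happens inside _get_spin_lock()/_rel_spin_lock().
 */
static struct mtx foo_mtx;

static void
foo_init(void)
{

	mtx_init(&foo_mtx, "foo spin", NULL, MTX_SPIN);
}

static void
foo_poke(void)
{

	mtx_lock_spin(&foo_mtx);	/* spinlock_enter() under the hood */
	/* ... touch state shared with an interrupt handler ... */
	mtx_unlock_spin(&foo_mtx);	/* matching spinlock_exit() */
}
```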