Commit messages
When the RTC is adjusted, reevaluate absolute sleep times based on the RTC

POSIX 2008 says this about clock_settime(2):

    If the value of the CLOCK_REALTIME clock is set via clock_settime(),
    the new value of the clock shall be used to determine the time
    of expiration for absolute time services based upon the
    CLOCK_REALTIME clock.  This applies to the time at which armed
    absolute timers expire.  If the absolute time requested at the
    invocation of such a time service is before the new value of
    the clock, the time service shall expire immediately as if the
    clock had reached the requested time normally.

    Setting the value of the CLOCK_REALTIME clock via clock_settime()
    shall have no effect on threads that are blocked waiting for
    a relative time service based upon this clock, including the
    nanosleep() function; nor on the expiration of relative timers
    based upon this clock.  Consequently, these time services shall
    expire when the requested relative interval elapses, independently
    of the new or old value of the clock.

When the real-time clock is adjusted, such as by clock_settime(2),
wake any threads sleeping until an absolute real-time clock time.
Such a sleep is indicated by a non-zero td_rtcgen.  The sleep functions
will set that field to zero and return zero to tell the caller
to reevaluate its sleep duration based on the new value of the clock.

At present, this affects the following functions:

    pthread_cond_timedwait(3)
    pthread_mutex_timedlock(3)
    pthread_rwlock_timedrdlock(3)
    pthread_rwlock_timedwrlock(3)
    sem_timedwait(3)
    sem_clockwait_np(3)

I'm working on adding clock_nanosleep(2), which will also be affected.

Reported by: Sebastian Huber <sebastian.huber@embedded-brains.de>
Relnotes: yes
Sponsored by: Dell EMC
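As a userspace illustration of the semantics quoted above (this sketch is not part of the commit; the function name and the 5-second deadline are arbitrary), an absolute-time condition wait on CLOCK_REALTIME is one of the services that must now notice a step of the clock:

    #include <pthread.h>
    #include <time.h>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
    static int done;    /* predicate protected by mtx */

    int
    wait_until_deadline(void)
    {
        struct timespec abstime;
        int error = 0;

        /* Absolute deadline: now + 5 seconds on CLOCK_REALTIME. */
        clock_gettime(CLOCK_REALTIME, &abstime);
        abstime.tv_sec += 5;

        pthread_mutex_lock(&mtx);
        while (!done && error == 0) {
            /*
             * If the clock is stepped past 'abstime', e.g. by
             * clock_settime(2), POSIX requires this wait to expire
             * immediately; the change above wakes such sleepers so the
             * absolute deadline can be reevaluated against the new time.
             */
            error = pthread_cond_timedwait(&cv, &mtx, &abstime);
        }
        pthread_mutex_unlock(&mtx);
        return (error); /* 0 if signaled, ETIMEDOUT once the deadline passes */
    }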
Use atop() instead of OFF_TO_IDX() for conversion of addresses or
address offsets, as intended.
MFC r315580 (by alc):
Simplify the logic for clipping the range returned by the pager to fit
within the map entry.
Use atop() rather than OFF_TO_IDX() on addresses.
r315412:
Don't clear p_ptevents on normal SIGKILL delivery
The ptrace() user has the option of discarding the signal. In such a
case, p_ptevents should not be modified. If the ptrace() user decides to
send a SIGKILL, ptevents will be cleared in ptracestop(). procfs events
do not have the capability to discard the signal, so continue to clear
the mask in that case.
r314852:
don't stop in issignal() if P_SINGLE_EXIT is set
Suppose a traced process is stopped in ptracestop() due to receipt of a
SIGSTOP signal, and is awaiting orders from the tracing process on how
to handle the signal. Before sending any such orders, the tracing
process exits. This should kill the traced process. But suppose a second
thread handles the SIGKILL and proceeds to exit1(), calling
thread_single(). The first thread will now awaken and will have a chance
to check once more if it should go to sleep due to the SIGSTOP. It must
not sleep after P_SINGLE_EXIT has been set; this would prevent the
SIGKILL from taking effect, leaving a stopped orphan behind after the
tracing process dies.
Also add new tests for this condition.
Sponsored by: Dell EMC
r315484:
ptrace_test: eliminate assumption about thread scheduling
A couple of the ptrace tests make assumptions about which thread in a
multithreaded process will run after a halt. This makes the tests less
portable across branches, and susceptible to future breakage. Instead,
twiddle thread scheduling and priorities to match the tests'
expectation.
r314118:
Actually fix buildworlds other than i386/amd64/sparc64 after r313992
Disable offending test for platforms without a userspace visible
breakpoint().
r314075:
Fix world build for archs where __builtin_debugtrap() does not work.
The offending code was introduced in r313992.
r313992:
Defer ptracestop() signals that cannot be delivered immediately
When a thread is stopped in ptracestop(), the ptrace(2) user may request
a signal be delivered upon resumption of the thread. Heretofore, those signals
were discarded unless ptracestop()'s caller was issignal(). Fix this by
modifying ptracestop() to queue up signals requested by the ptrace user that
will be delivered when possible. Take special care when the signal is SIGKILL
(usually generated from a PT_KILL request); no new stop events should be
triggered after a PT_KILL.
Add a number of tests for the new functionality. Several tests were authored
by jhb.
PR: 212607
Sponsored by: Dell EMC
When clearing altsigstack settings on exec, do it to the right thread.
nanosleep: plug a kernel memory disclosure
nanosleep() updates rmtp on EINVAL. In that case, kern_nanosleep()
has not updated rmt, so sys_nanosleep() updates the user-space rmtp
by copying garbage from its stack frame. This is not only a kernel
memory disclosure, it's also not POSIX-compliant. Fix it to update
rmtp only on EINTR.
Security: possibly
Sponsored by: Dell EMC
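For context (an illustrative userspace sketch, not code from the commit), the POSIX contract the fix restores is that rmtp is meaningful only when nanosleep(2) fails with EINTR; a caller relying on that looks like this:

    #include <errno.h>
    #include <time.h>

    /*
     * Sleep for the full interval, restarting after signal interruptions.
     * 'rmt' is only examined when nanosleep() fails with EINTR; with the fix
     * above the kernel likewise only writes to rmtp in that case.
     */
    int
    sleep_full(struct timespec req)
    {
        struct timespec rmt;

        while (nanosleep(&req, &rmt) == -1) {
            if (errno != EINTR)
                return (-1);    /* e.g. EINVAL: rmt was not updated */
            req = rmt;          /* continue with the remaining time */
        }
        return (0);
    }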
Consistently handle negative or wrapping offsets in the mmap(2) syscalls.
MFC r315158:
Fix two missed places where vm_object offset to index calculation
should use unsigned shift.
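To illustrate the unsigned-shift point (a standalone sketch, not code from the commit; a PAGE_SHIFT of 12 is assumed): right-shifting a negative signed offset fills the high bits of the resulting index with ones on typical implementations, whereas converting to an unsigned type first shifts in zeroes and yields the index the subsequent range checks expect.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* assume 4 KB pages for the example */

    int
    main(void)
    {
        int64_t off = -1;   /* a negative (invalid) file offset */

        /* Signed shift sign-extends, producing an all-ones index. */
        uint64_t bad_idx = (uint64_t)(off >> PAGE_SHIFT);

        /* Converting to unsigned first shifts in zero bits. */
        uint64_t good_idx = (uint64_t)off >> PAGE_SHIFT;

        printf("signed shift:   %#llx\n", (unsigned long long)bad_idx);
        printf("unsigned shift: %#llx\n", (unsigned long long)good_idx);
        return (0);
    }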
Avoid reusing p_ksi while it is on queue.
Accept the linkers' representation for ELF segments with zero on-disk length.
Style.
Add kern_cpuset_getaffinity() and kern_cpuset_setaffinity(),
and use them in compats instead of their sys_*() counterparts.
Sponsored by: DARPA, AFRL
Add kern_cpuset_getid() and kern_cpuset_setid(), and use them
in compat32 instead of their sys_*() counterparts.
Sponsored by: DARPA, AFRL
Add kern_pread() and kern_pwrite(), and use them in compats instead
of their sys_*() counterparts. The svr4 is left unchanged.
Sponsored by: DARPA, AFRL
Replace calls to sys_truncate() with kern_truncate().
Sponsored by: DARPA, AFRL
Add kern_lseek() and use it instead of sys_lseek() in various compats.
I didn't touch svr4/, there's no point.
Sponsored by: DARPA, AFRL
Add kern_listen(), kern_shutdown(), and kern_socket(), and use them
instead of their sys_*() counterparts in various compats. The svr4
is left untouched, because there's no point.
Sponsored by: DARPA, AFRL
Replace sys_ftruncate() with kern_ftruncate() in various compats.
Sponsored by: DARPA, AFRL
Make root_mount_hold() work after boot. This is important for two
reasons. The first is rerooting into a USB-mounted device that happens
not to be enumerated yet. The second is mounting a (non-root) filesystem
on a USB device on a hub that's enumerated later than the root
mount: the rc scripts explicitly wait for the root mount holds to be
released, but each USB bus takes its hold asynchronously, and if that
happens after the root mount, the hold would just get ignored.
In r290196 the root mount hold mechanism was changed to make it not wait
for mount hold release if the root device already exists. So, unless your
rootdev is on USB - i.e., in the usual case - the root mount won't wait
for USB. However, the old behaviour was sometimes used as "wait until USB
is fully enumerated", and r290196 broke that.
This commit adds a vfs.root_mount_always_wait tunable to force the kernel
to always wait for root mount holds, even if the root is already there.
Relnotes: yes
Fix bug that would result in a kernel crash in some cases involving
a symlink and an autofs mount request. The crash was caused by namei()
calling bcopy() with a negative length, due to numeric underflow:
in lookup(), in the relookup path, ni_pathlen was decremented too
many times. The bug was introduced in r296715.
Big thanks to Alex Deiter for his help with debugging this.
Style and punctuation fixes.
Simplify the control flow and tidy up a comment in map_insert.
Relax the locking requirements for vm_object_page_noreuse(). While
reviewing all uses of OFF_TO_IDX(), I observed that
vm_object_page_noreuse() is requiring an exclusive lock on the object
when, in fact, a shared lock suffices.
Use designated initializers for kevent_copyops.
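As a generic illustration of the technique (the struct below is hypothetical; the real kevent_copyops has different members), designated initializers name each member explicitly, so adding or reordering fields cannot silently misbind the callbacks, and omitted members are zero-initialized:

    #include <stddef.h>

    /* A hypothetical ops table; the real kevent_copyops differs. */
    struct copy_ops {
        void *arg;
        int (*k_copyout)(void *arg, void *dst, size_t len);
        int (*k_copyin)(void *arg, void *src, size_t len);
    };

    static int my_copyout(void *arg, void *dst, size_t len) { return (0); }
    static int my_copyin(void *arg, void *src, size_t len) { return (0); }

    /* Members are bound by name, not by position. */
    static struct copy_ops ops = {
        .arg = NULL,
        .k_copyout = my_copyout,
        .k_copyin = my_copyin,
    };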
Ktracing kevent(2) calls with unusual arguments might lead to
overly large allocation requests.
PR: 217435
MFC r315237:
Hide kev_iovlen() definition under #ifdef KTRACE.
Fix NULL pointer dereference and panic with shm file pread/pwrite.
Approved by: dchagin
Use inet_ntoa_r() instead of inet_ntoa() throughout the kernel.
inet_ntoa() cannot be used safely in a multithreaded environment
because it uses a static local buffer. Instead, use inet_ntoa_r()
with a buffer on the caller's stack, except for KTR messages.
KTR can correctly log the immediate integral values passed to it,
as well as constant strings, but not non-constant strings,
since they might change by the time ktrdump retrieves them.
Therefore, use hex notation in KTR messages.
Sponsored by: Dell EMC
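The same caller-supplied-buffer pattern in userspace code, shown here with the standard inet_ntop(3) since inet_ntoa_r() is kernel-internal (an illustrative sketch, not code from the commit):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>

    /*
     * Thread-safe formatting: the caller supplies the buffer, so no static
     * storage is shared between threads (the problem with inet_ntoa()).
     */
    static void
    log_peer(struct in_addr ina)
    {
        char buf[INET_ADDRSTRLEN];

        if (inet_ntop(AF_INET, &ina, buf, sizeof(buf)) != NULL)
            printf("peer address: %s\n", buf);
    }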
vfs: refactor _vn_lock
Stop testing for LK_RETRY and error multiple times. Also postpone the
VI_DOOMED check until after LK_RETRY is seen, as it reads from the vnode.
No functional changes.
==
vfs: fix whitespace damage in r312600
While here wrap the previously overly long line so that it fits 80 chars.
==
vfs: __predict_false the need to handle F_HASLOCK
Also reorder the check with DTYPE_VNODE. Passed files are vnodes the vast
majority of the time, so it is typically true.
==
vfs: fix LK_RETRY logic braino in r312600
==
More style cleanup. Use ANSI C definition for vn_closefile(). Switch
to VNASSERT in _vn_lock(), simplify messages.
vfs: provide fake locking primitives for the crossmp vnode
Since the vnode is only expected to be shared locked, we can save a
little overhead by only pretending we are locking in the first place.
==
Provide fallback VOP methods for crossmp vnode.
In particular, crossmp vnode might leak into rename code.
==
vfs: hide the getvnode NULL mp message behind DIAGNOSTIC
Since the crossmp vnode changes, the message was being printed on each boot.
==
Improve debugging printf.
proc: perform a lockless check in sys_issetugid
cache: annotate with __read_mostly and __exclusive_cache_line
cache: use vrefact for '.' lookups and refing the rdir in fullpath
fd: sprinkle __read_mostly and __exclusive_cache_line
Use a consistent snapshot of the lock state in owner_mtx().
==
Return a non-NULL owner only if the lock is exclusively held in owner_sx().
Fix some whitespace bugs while here.
============
While here the correct commit message for the previous commit:
MFC r313855 r313865 r313875 r313877 r313878 r313901 r313908 r313928 r313944 r314185 r314476 r314187
locks: let primitives for modules unlock without always going to the slow path
It is only needed if LOCK_PROFILING is enabled. It has to always check if
the lock is about to be released, which requires an avoidable read if the option
is not specified.
==
sx: fix compilation on UP kernels after r313855
sx primitives use inlines as opposed to macros. Change the tested condition
to LOCK_DEBUG which covers the case, but is slightly overzealous.
==
mtx: microoptimize lockstat handling in __mtx_lock_sleep
This saves a function call and multiple branches after the lock is acquired.
==
mtx: restrict r313875 to kernels without LOCK_PROFILING
==
mtx: get rid of file/line args from slow paths if they are unused
This denotes changes which went in by accident in r313877.
On most production kernels both said parameters are zeroed and have nothing
reading them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this change
stops passing them for internal consumers where this is the case.
Kernel modules use _flags variants which are not affected kbi-wise.
==
sx: fix mips build after r313855
The namespace in this file really needs cleaning up. In the meantime
let inline primitives be defined as long as LOCK_DEBUG is not enabled.
==
mtx: plug the 'opts' argument when not used
==
locks: clean up trylock primitives
In particular this reduces accesses of the lock itself.
==
locks: make trylock routines check for 'unowned' value
Since fcmpset can fail without lock contention e.g. on arm, it was possible
to get spurious failures when the caller was expecting the primitive to succeed.
==
mtx: microoptimize lockstat handling in spin mutexes and thread lock
While here make the code compilable on kernels with LOCK_PROFILING but without
KDTRACE_HOOKS.
==
KDTRACE_HOOKS isn't guaranteed to be defined. Change to check to see
if it is defined or not rather than if it is non-zero.
==
locks: fix compilation with KTR without KTR_LOCKS
While here wrap the overly long line.
r314185,r314476,r314187

locks: let primitives for modules unlock without always going to the slow path

It is only needed if LOCK_PROFILING is enabled. It has to always check if
the lock is about to be released, which requires an avoidable read if the option
is not specified.
==
sx: fix compilation on UP kernels after r313855

sx primitives use inlines as opposed to macros. Change the tested condition
to LOCK_DEBUG which covers the case, but is slightly overzealous.

commit a39b839d16cd72b1df284ccfe6706fcdf362706e
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Sat Feb 18 22:06:03 2017 +0000

    locks: clean up trylock primitives

    In particular this reduces accesses of the lock itself.

commit 013560e742a5a276b0deef039bc18078d51d6eb0
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Sat Feb 18 01:52:10 2017 +0000

    mtx: plug the 'opts' argument when not used

commit 9a507901162fb476b9809da2919905735cd605af
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 22:09:55 2017 +0000

    sx: fix mips build after r313855

    The namespace in this file really needs cleaning up. In the meantime
    let inline primitives be defined as long as LOCK_DEBUG is not enabled.

    Reported by: kib

commit aa6243a5124b9ceb3b1683ea4dbb0a133ce70095
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 15:40:24 2017 +0000

    mtx: get rid of file/line args from slow paths if they are unused

    This denotes changes which went in by accident in r313877.

    On most production kernels both said parameters are zeroed and have nothing
    reading them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this change
    stops passing them for internal consumers where this is the case.

    Kernel modules use _flags variants which are not affected kbi-wise.

commit 688545a6af7ed0972653d6e2c6ca406ac511f39d
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 15:34:40 2017 +0000

    mtx: restrict r313875 to kernels without LOCK_PROFILING

commit bbe6477138713da2d503f93cb5dd602e14152a08
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 14:55:59 2017 +0000

    mtx: microoptimize lockstat handling in __mtx_lock_sleep

    This saves a function call and multiple branches after the lock is acquired.
The runlock slow path would update the wrong variable before restarting the
loop, in effect corrupting the state.
Something was botched in the previous mfc attempt in r315380.
locks: remove SCHEDULER_STOPPED checks from primitives for modules
They all fall back to the slow path if necessary and the check is there.
This means a panicked kernel executing code from modules will be able to
succeed doing actual lock/unlock, but this was already the case for core code
which has said primitives inlined.
==
Introduce SCHEDULER_STOPPED_TD for use when the thread pointer was already read
Sprinkle it in a few places.
locks: tidy up unlock fallback paths
Update comments to note these functions are reachable if lockstat is
enabled.
Check if the lock has any bits set before attempting unlock, which saves
an unnecessary atomic operation.
sx: implement slock/sunlock fast path
See r313454.
rwlock: implement rlock/runlock fast path
This improves singlethreaded throughput on my test machine from ~247 mln
ops/s to ~328 mln.
It is mostly about avoiding the setup cost of lockstat.
==
rwlock: fix r313454
The runlock slow path would update the wrong variable before restarting the
loop, in effect corrupting the state.
rwlock: implement RW_LOCK_WRITER_RECURSED bit
This moves recursion handling out of the inlined wunlock path and in
particular saves a read and a branch.
==
rwlock: tidy up r313392
While a new bit was added and thread alignment got shifted to accommodate it,
RW_READERS_SHIFT was not modified accordingly and clashed with the new flag.
This was surprisingly harmless. If the lock was taken for writing, other flags
were tested. If the lock was taken for reading, it would correctly work for
readers > 1 and this was the only relevant test performed.
mtx: move lockstat handling out of inline primitives
Lockstat requires checking if it is enabled and if so, calling a 6 argument
function. Further, determining whether to call it on unlock requires
pre-reading the lock value.
This is problematic in at least 3 ways:
- more branches in the hot path than necessary
- additional cacheline ping pong under contention
- bigger code
Instead, check first if lockstat handling is necessary and if so, just fall
back to regular locking routines. For this purpose a new macro is introduced
(LOCKSTAT_PROFILE_ENABLED).
LOCK_PROFILING uninlines all primitives. Fold the current inline lock
variant into _mtx_lock_flags to retain the support. With this change
the inline variants are not used when LOCK_PROFILING is defined and thus
can ignore its existence.
This results in:
text data bss dec hex filename
22259667 1303208 4994976 28557851 1b3c21b kernel.orig
21797315 1303208 4994976 28095499 1acb40b kernel.patched
i.e. about 3% reduction in text size.
A remaining action is to remove spurious arguments for internal kernel
consumers.
==
sx: move lockstat handling out of inline primitives
See r313275 for details.
==
rwlock: move lockstat handling out of inline primitives
See r313275 for details.
One difference here is that recursion handling was removed from the fallback
routine. As it is it was never supposed to see a recursed lock in the first
place. Future changes will move it out of inline variants, but right now
there is no easy way to test if the lock is recursed without reading
additional words.
==
locks: fix recursion support after recent changes
When a relevant lockstat probe is enabled the fallback primitive is called with
a constant signifying a free lock. This works fine for typical cases but breaks
with recursion, since it checks if the passed value is that of the executing
thread.
Read the value if necessary.
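A schematic of the approach (a toy sketch with a plain flag and a trivial spinlock; the kernel uses the new LOCKSTAT_PROFILE_ENABLED() check and its real mutex slow paths):

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool profiling_enabled;  /* stand-in for the lockstat check */

    /* Uninlined slow path: in the kernel this is where the probes fire. */
    static void
    lock_slow(_Atomic int *lk)
    {
        int expected;

        do {
            expected = 0;
        } while (!atomic_compare_exchange_weak(lk, &expected, 1));
    }

    /*
     * Inlined fast path: if profiling is on, punt to the slow path right
     * away so the inline code never carries the probe logic; otherwise a
     * single atomic acquire attempt suffices.
     */
    static inline void
    lock(_Atomic int *lk)
    {
        int expected = 0;

        if (profiling_enabled ||
            !atomic_compare_exchange_strong(lk, &expected, 1))
            lock_slow(lk);
    }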
mtx: switch to fcmpset
The found value is passed to locking routines in order to reduce cacheline
accesses.
mtx_unlock grows an explicit check for regular unlock. On ll/sc architectures
the routine can fail even if the lock could have been handled by the inline
primitive.
==
rwlock: switch to fcmpset
==
sx: switch to fcmpset
==
sx: uninline slock/sunlock
Shared locking routines explicitly read the value and test it. If the
change attempt fails, they fall back to a regular function which would
retry in a loop.
The problem is that with many concurrent readers the risk of failure is pretty
high and even the value returned by fcmpset is very likely going to be stale
by the time the loop in the fallback routine is reached.
Uninline said primitives. It gives a throughput increase when doing concurrent
slocks/sunlocks with 80 hardware threads from ~50 mln/s to ~56 mln/s.
Interestingly, rwlock primitives are already not inlined.
==
sx: add witness support missed in r313272
==
mtx: fix up _mtx_obtain_lock_fetch usage in thread lock
Since _mtx_obtain_lock_fetch no longer sets the argument to MTX_UNOWNED,
callers have to do it on their own.
==
mtx: fixup r313278, the assignment was supposed to go inside the loop
==
mtx: fix spin mutexes interaction with failed fcmpset
While doing so move recursion support down to the fallback routine.
==
locks: ensure proper barriers are used with atomic ops when necessary
Unclear how, but the locking routine for mutexes was using the *release*
barrier instead of acquire. This must have been either a copy-pasto or bad
completion.
Going through other uses of atomics shows no barriers in:
- upgrade routines (addressed in this patch)
- sections protected with turnstile locks - this should be fine as necessary
barriers are in the worst case provided by turnstile unlock
I would like to thank Mark Millard and andreast@ for reporting the problem and
testing previous patches before the issue got identified.
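A simplified illustration of the fcmpset-style loop (a standalone sketch using C11 atomics, not the kernel code): like atomic_fcmpset(9), compare_exchange_weak writes the observed value back into the expected variable on failure, so the caller can hand that value to the slow path instead of re-reading the lock word:

    #include <stdatomic.h>
    #include <stdint.h>

    #define UNOWNED ((uintptr_t)0)

    static void
    lock_acquire(_Atomic uintptr_t *lockword, uintptr_t self)
    {
        uintptr_t owner = UNOWNED; /* expected: lock is free */

        for (;;) {
            if (atomic_compare_exchange_weak_explicit(lockword, &owner,
                self, memory_order_acquire, memory_order_relaxed))
                return;         /* acquired */
            if (owner == UNOWNED)
                continue;       /* spurious failure; 'owner' is already fresh */
            /*
             * Contended: a real implementation would spin, back off, or
             * block here, passing 'owner' along rather than re-reading.
             */
            owner = UNOWNED;    /* retry, again expecting a free lock */
        }
    }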
lockmgr: implement fast path
The main lockmgr routine takes 8 arguments, which makes it impossible for
the intermediate vop_stdlock/unlock routines to tail-call it.
The routine itself starts with an if-forest and reads from the lock itself
several times.
This slows things down both single- and multi-threaded. With the patch
single-threaded fstats go 4% up and multithreaded up to ~27%.
Note that there is still a lot of room for improvement.
Bump struct thread alignment to 32.
This gives additional bits to use in locking primitives which store
the lock thread pointer in the lock value.
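To illustrate why the alignment bump matters (a generic sketch of the idea, not the kernel's actual lock word layout): a 32-byte-aligned thread pointer always has its low five bits clear, so those bits can carry flags alongside the owner in a single word:

    #include <stdint.h>

    struct thread;  /* opaque; assumed aligned to 32 bytes as in the commit */

    #define THREAD_ALIGN  32
    #define FLAG_MASK     ((uintptr_t)(THREAD_ALIGN - 1))  /* low 5 bits free */
    #define LOCK_RECURSED ((uintptr_t)0x08)                /* example flag bit */

    /* Pack an owner pointer and flag bits into one lock word. */
    static inline uintptr_t
    lock_word(struct thread *owner, uintptr_t flags)
    {
        return ((uintptr_t)owner | (flags & FLAG_MASK));
    }

    /* Recover the owner by masking the flag bits back off. */
    static inline struct thread *
    lock_owner(uintptr_t word)
    {
        return ((struct thread *)(word & ~FLAG_MASK));
    }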
vfs: use atomic_fcmpset in vfs_refcount_*
fd: switch fget_unlocked to atomic_fcmpset
mtx: reduce lock accesses
Instead of spuriously re-reading the lock value, read it once.
This change also has a side effect of fixing a performance bug:
on failed _mtx_obtain_lock, it was possible that the re-read would find
the lock unowned, but in this case the primitive would still make a trip
through the turnstile code.
This is diff reduction to a variant which uses atomic_fcmpset.
==
Reduce lock accesses in thread lock similarly to r311172
==
mtx: plug open-coded mtx_lock access missed in r311172
==
rwlock: reduce lock accesses similarly to r311172
==
sx: reduce lock accesses similarly to r311172
locks: add backoff for spin mutexes and thread lock