path: root/kernel/locking/qspinlock.c
Commit message                                                               Author                  Date        Files  Lines
* locking/qspinlock: Use smp_cond_acquire() in pending code                  Waiman Long             2016-02-29  1      -4/+3
* locking/pvqspinlock: Queue node adaptive spinning                          Waiman Long             2015-12-04  1      -2/+3
* locking/pvqspinlock: Allow limited lock stealing                           Waiman Long             2015-12-04  1      -6/+20
* locking, sched: Introduce smp_cond_acquire() and use it                    Peter Zijlstra          2015-12-04  1      -2/+1
* locking/qspinlock: Avoid redundant read of next pointer                    Waiman Long             2015-11-23  1      -3/+6
* locking/qspinlock: Prefetch the next node cacheline                        Waiman Long             2015-11-23  1      -0/+10
* locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()  Waiman Long             2015-11-23  1      -5/+24
* locking/qspinlock/x86: Fix performance regression under unaccelerated VMs  Peter Zijlstra          2015-09-11  1      -1/+1
* locking/pvqspinlock: Only kick CPU at unlock time                          Waiman Long             2015-08-03  1      -3/+3
* locking/pvqspinlock: Implement simple paravirt support for the qspinlock   Waiman Long             2015-05-08  1      -1/+67
* locking/qspinlock: Revert to test-and-set on hypervisors                   Peter Zijlstra (Intel)  2015-05-08  1      -0/+3
* locking/qspinlock: Use a simple write to grab the lock                     Waiman Long             2015-05-08  1      -16/+50
* locking/qspinlock: Optimize for smaller NR_CPUS                            Peter Zijlstra (Intel)  2015-05-08  1      -1/+68
* locking/qspinlock: Extract out code snippets for the next patch            Waiman Long             2015-05-08  1      -31/+48
* locking/qspinlock: Add pending bit                                         Peter Zijlstra (Intel)  2015-05-08  1      -21/+98
* locking/qspinlock: Introduce a simple generic 4-byte queued spinlock       Waiman Long             2015-05-08  1      -0/+209
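The log above traces the qspinlock's design: a 4-byte lock word whose low byte is the lock itself, plus a pending bit for the first contender ("Add pending bit"), taken with acquire/release atomics and, once uncontended, a plain store ("Use a simple write to grab the lock"). The sketch below illustrates that fast path and pending-bit path in portable C11 atomics. It is a simplified, illustrative model only, not the kernel's implementation: it assumes at most one pending waiter and omits the MCS queue of waiting nodes entirely; the `_Q_*` constant names merely echo the kernel's naming convention.

```c
#include <stdatomic.h>

/* Simplified 4-byte lock word, loosely modeled on the commits above:
 * bit 0 is the "locked" flag, bit 8 is the "pending" flag claimed by
 * the first contender. The kernel's real version also encodes an MCS
 * queue tail in the upper bits, which this sketch omits. */
#define _Q_LOCKED_VAL   1u
#define _Q_PENDING_VAL  (1u << 8)

typedef struct { _Atomic unsigned int val; } qspinlock_t;

/* Fast path: grab an uncontended lock with a single acquire-cmpxchg. */
static int qspin_trylock(qspinlock_t *l)
{
    unsigned int expected = 0;
    return atomic_compare_exchange_strong_explicit(
        &l->val, &expected, _Q_LOCKED_VAL,
        memory_order_acquire, memory_order_relaxed);
}

/* Contended path sketch: claim the pending bit, spin until the owner
 * clears the locked flag (the kernel spins with smp_cond_acquire()),
 * then take the lock. Assumes this is the only waiter, so one store
 * that sets LOCKED and clears PENDING is enough. */
static void qspin_lock(qspinlock_t *l)
{
    if (qspin_trylock(l))
        return;
    atomic_fetch_or_explicit(&l->val, _Q_PENDING_VAL, memory_order_relaxed);
    while (atomic_load_explicit(&l->val, memory_order_acquire) & _Q_LOCKED_VAL)
        ;  /* busy-wait; acquire load orders the critical section */
    atomic_store_explicit(&l->val, _Q_LOCKED_VAL, memory_order_relaxed);
}

/* Release: clear only the locked flag so a pending waiter keeps its bit. */
static void qspin_unlock(qspinlock_t *l)
{
    atomic_fetch_and_explicit(&l->val, ~_Q_LOCKED_VAL, memory_order_release);
}
```

With a lock initialized to zero, `qspin_trylock()` succeeds exactly when the word is 0, and `qspin_unlock()` returns it to 0; the pending-bit path only matters once two CPUs race for the same word.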