path: root/kernel/profile.c
author     Ingo Molnar <mingo@elte.hu>  2005-06-25 14:57:38 -0700
committer  Linus Torvalds <torvalds@ppc970.osdl.org>  2005-06-25 16:24:45 -0700
commit     f704f56af95bec3c1ca719d64d0becef74d40899 (patch)
tree       f024287878246703cf28f738d2c553c476c53c34  /kernel/profile.c
parent     cc19ca86a023fcd552c78e77a7be6ce271f92a28 (diff)
download   op-kernel-dev-f704f56af95bec3c1ca719d64d0becef74d40899.zip
           op-kernel-dev-f704f56af95bec3c1ca719d64d0becef74d40899.tar.gz
[PATCH] enable PREEMPT_BKL on !PREEMPT+SMP too
The only sane way to clean up the current 3 lock_kernel() variants seems to be to remove the spinlock-based BKL implementations altogether, and to keep the semaphore-based one only. If we don't want to do that for whatever reason then I'm afraid we have to live with the current complexity. (But I'm open to other cleanup suggestions as well.)

To explore this possibility we'll (at a minimum) have to know whether the semaphore-based BKL works fine on plain SMP too. The patch below enables this.

The patch may make sense in isolation as well, as it might bring performance benefits: code that would formerly spin on the BKL spinlock will now schedule away and give up the CPU. It might introduce performance regressions as well, if any performance-critical code uses the BKL heavily and gets overscheduled due to the semaphore. I very much hope there is no such performance-critical codepath left, though.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
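[Editor's note: the sketch below is a minimal userspace analogy of the spinning-vs-sleeping trade-off the commit message describes; it is not the kernel's lib/kernel_lock.c and the names (spin_bkl, sleep_bkl, the worker functions) are made up for illustration. A pthread spinlock stands in for the spinlock-based BKL and a pthread mutex for the semaphore-based one: a contended spinlock waiter burns a CPU, while a contended mutex waiter blocks and lets the scheduler run something else.]

/*
 * Illustrative userspace analogy only -- NOT the kernel's BKL code.
 * Build on Linux with: cc -pthread bkl_analogy.c -o bkl_analogy
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_spinlock_t spin_bkl;                            /* spinning lock  */
static pthread_mutex_t sleep_bkl = PTHREAD_MUTEX_INITIALIZER;  /* sleeping lock  */

static void *spin_worker(void *arg)
{
	pthread_spin_lock(&spin_bkl);    /* while contended, this busy-spins on a CPU */
	puts("spin worker got the lock");
	pthread_spin_unlock(&spin_bkl);
	return NULL;
}

static void *sleep_worker(void *arg)
{
	pthread_mutex_lock(&sleep_bkl); /* while contended, this blocks and yields the CPU */
	puts("sleeping worker got the lock");
	pthread_mutex_unlock(&sleep_bkl);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_spin_init(&spin_bkl, PTHREAD_PROCESS_PRIVATE);

	/* Hold both locks so the workers contend on them. */
	pthread_spin_lock(&spin_bkl);
	pthread_mutex_lock(&sleep_bkl);

	pthread_create(&t1, NULL, spin_worker, NULL);
	pthread_create(&t2, NULL, sleep_worker, NULL);

	sleep(1); /* spin_worker burns a CPU for this second; sleep_worker is asleep */

	pthread_spin_unlock(&spin_bkl);
	pthread_mutex_unlock(&sleep_bkl);

	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

[Editor's note: under this analogy, the patch's effect corresponds to moving contended BKL waiters from the spin_worker path to the sleep_worker path on SMP kernels without PREEMPT, which is where both the hoped-for benefit (freed CPU time) and the possible regression (extra scheduling) come from.]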
Diffstat (limited to 'kernel/profile.c')
0 files changed, 0 insertions, 0 deletions