From 12adc443d67286deeee69e764d979c963403497d Mon Sep 17 00:00:00 2001
From: jeff
Date: Sat, 15 Dec 2007 23:13:31 +0000
Subject: - Re-implement lock profiling in such a way that it no longer breaks
 the ABI when enabled. There is no longer an embedded lock_profile_object in
 each lock. Instead a list of lock_profile_objects is kept per-thread for
 each lock it may own. The cnt_hold statistic is now always 0 to facilitate
 this.
 - Support shared locking by tracking individual lock instances and
   statistics in the per-thread per-instance lock_profile_object.
 - Make the lock profiling hash table a per-cpu singly linked list with a
   per-cpu static lock_prof allocator. This removes the need for an array
   of spinlocks and reduces cache contention between cores.
 - Use a separate hash for spinlocks and other locks so that only a
   critical_enter() is required and not a spinlock_enter() to modify the
   per-cpu tables.
 - Count time spent spinning in the lock statistics.
 - Remove the LOCK_PROFILE_SHARED option as it is always supported now.
 - Specifically drop and release the scheduler locks in both schedulers
   since we track owners now.

In collaboration with:	Kip Macy
Sponsored by:	Nokia
---
 sys/kern/kern_thread.c | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'sys/kern/kern_thread.c')

diff --git a/sys/kern/kern_thread.c b/sys/kern/kern_thread.c
index e176b87..93ff5a7 100644
--- a/sys/kern/kern_thread.c
+++ b/sys/kern/kern_thread.c
@@ -555,6 +555,8 @@ thread_link(struct thread *td, struct proc *p)
 	td->td_flags = TDF_INMEM;

 	LIST_INIT(&td->td_contested);
+	LIST_INIT(&td->td_lprof[0]);
+	LIST_INIT(&td->td_lprof[1]);
 	sigqueue_init(&td->td_sigqueue, p);
 	callout_init(&td->td_slpcallout, CALLOUT_MPSAFE);
 	TAILQ_INSERT_HEAD(&p->p_threads, td, td_plist);
--
cgit v1.1
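
For illustration, a minimal C sketch of the per-thread, two-bucket profiling
lists that the hunk above initializes. Only td_lprof and the LIST_INIT()
calls appear in the patch itself; the lock_profile_object layout, the lpo_*
field names, and the spin/other split across the two indices are assumptions
drawn from the commit message, not the actual FreeBSD definitions.

#include <stdint.h>
#include <sys/queue.h>

/*
 * Hypothetical per-lock profiling record kept on a per-thread list,
 * as the commit message describes. Field names are illustrative.
 */
struct lock_profile_object {
	LIST_ENTRY(lock_profile_object) lpo_link; /* per-thread list linkage */
	const void *lpo_obj;	/* the lock instance being tracked */
	uint64_t lpo_acqtime;	/* timestamp of acquisition */
	uint64_t lpo_waittime;	/* time spent waiting/spinning */
	int lpo_ref;		/* references for shared/recursive holds */
};

/* Stand-in for the relevant part of struct thread. */
struct thread_sketch {
	/*
	 * Two buckets, assumed to mirror the separate spinlock hash:
	 * index 0 for spinlocks, index 1 for all other lock types.
	 */
	LIST_HEAD(lpohead, lock_profile_object) td_lprof[2];
};

static void
thread_link_sketch(struct thread_sketch *td)
{
	/* Mirrors the two lines added to thread_link() above. */
	LIST_INIT(&td->td_lprof[0]);
	LIST_INIT(&td->td_lprof[1]);
}

Keeping the records on the thread rather than embedding one in every lock is
what preserves the ABI: struct mtx and friends keep their size whether or not
profiling is compiled in.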
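Similarly, the per-cpu table described in the message (a per-cpu singly
linked hash with a per-cpu static lock_prof allocator) might look roughly
like the kernel-style sketch below. critical_enter()/critical_exit() and
curcpu are real FreeBSD kernel primitives; every other name, the table
sizes, and the hash function are invented here for illustration.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pcpu.h>
#include <sys/queue.h>

#define	LPROF_HASH_SIZE		1024	/* illustrative sizes only */
#define	LPROF_CACHE_SIZE	1024

struct lock_prof_sketch {
	SLIST_ENTRY(lock_prof_sketch) link;
	const char *file;	/* acquisition point being profiled */
	int line;
	uint64_t cnt_tot;	/* accumulated hold/spin time */
};

struct lock_prof_cpu_sketch {
	SLIST_HEAD(, lock_prof_sketch) hash[LPROF_HASH_SIZE];
	struct lock_prof_sketch pool[LPROF_CACHE_SIZE];	/* static allocator */
	int free_idx;
};

static struct lock_prof_cpu_sketch lp_cpu_sketch[MAXCPU];

/*
 * Record a sample in the current CPU's table. critical_enter() pins
 * the thread to this CPU, so the per-CPU list needs no spinlock.
 */
static void
lprof_record_sketch(const char *file, int line, uint64_t held)
{
	struct lock_prof_cpu_sketch *lpc;
	struct lock_prof_sketch *lp;
	u_int h;

	critical_enter();
	lpc = &lp_cpu_sketch[curcpu];
	h = ((u_int)(uintptr_t)file + (u_int)line) & (LPROF_HASH_SIZE - 1);
	SLIST_FOREACH(lp, &lpc->hash[h], link)
		if (lp->file == file && lp->line == line)
			break;
	if (lp == NULL && lpc->free_idx < LPROF_CACHE_SIZE) {
		lp = &lpc->pool[lpc->free_idx++];	/* static allocation */
		lp->file = file;
		lp->line = line;
		SLIST_INSERT_HEAD(&lpc->hash[h], lp, link);
	}
	if (lp != NULL)
		lp->cnt_tot += held;
	critical_exit();
}

Because critical_enter() only disables preemption on the local CPU, it is
cheaper than a spinlock; keeping spinlocks in their own hash, as the message
notes, is what lets the common path avoid spinlock_enter() entirely.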