path: root/sys/i386/include/mutex.h
Commit log, newest first. Each entry ends with [author, date, files changed, lines -deleted/+added].
* Move clock_lock prototype into <machine/clock.h>, where it is more appropriate.  [sobomax, 2006-05-19, 1 file, -11/+2]
    Discussed with: jhb
* GC the #if 0'd assembly mutex micro-operations. If someone wants to bring these back later, they can get them from the attic. Also GC some stale macros for acquiring and releasing sleep mutexes in assembly.  [jhb, 2002-03-28, 1 file, -208/+0]
* Make MPLOCKED work again in asm files and stringify it explicitly where necessary.  [bmilekic, 2002-02-28, 1 file, -6/+6]
    Reviewed by: jake
* Modify the critical section API as follows:  [jhb, 2001-12-18, 1 file, -42/+8]
    - The MD functions critical_enter/exit are renamed to start with a cpu_ prefix.
    - MI wrapper functions critical_enter/exit maintain a per-thread nesting count and a per-thread critical section saved state, set when entering a critical section while at nesting level 0 and restored when exiting to nesting level 0. This moves the saved state out of spin mutexes so that interlocking spin mutexes works properly. (See the sketch after this entry.)
    - Most low-level MD code that used critical_enter/exit now uses cpu_critical_enter/exit. MI code such as device drivers and spin mutexes uses the MI wrappers. Note that since the MI wrappers store the state in the current thread, they do not have any return values or arguments.
    - mtx_intr_enable() is replaced with a constant CRITICAL_FORK, which is assigned to curthread->td_savecrit during fork_exit().
    Tested on: i386, alpha
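    A minimal sketch of the MI wrapper layer described above. td_savecrit, cpu_critical_enter/exit, and CRITICAL_FORK are named in the commit message; the nesting-count field name td_critnest is an assumption:

        void
        critical_enter(void)
        {
                struct thread *td = curthread;

                if (td->td_critnest == 0)       /* outermost entry: */
                        td->td_savecrit = cpu_critical_enter(); /* save MD state */
                td->td_critnest++;
        }

        void
        critical_exit(void)
        {
                struct thread *td = curthread;

                if (td->td_critnest == 1) {     /* outermost exit: */
                        td->td_critnest = 0;
                        cpu_critical_exit(td->td_savecrit);     /* restore MD state */
                } else
                        td->td_critnest--;
        }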
* KSE Milestone 2  [julian, 2001-09-12, 1 file, -1/+1]
    Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current, except that there is a thread associated with each process.
    Sorry John! (Your next MFC will be a doozy!)
    Reviewed by: peter@freebsd.org, dillon@freebsd.org
    X-MFC after: ha ha ha ha
* Properly wrap the mtx_intr_enable() macro in "do { ... } while (0)". (A sketch of the idiom follows.)  [phk, 2001-06-02, 1 file, -1/+1]
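    For reference, the do { ... } while (0) wrapper makes a macro expand to exactly one statement that demands a trailing semicolon, so it nests safely under an unbraced if/else. A generic sketch; the macro body shown is illustrative, not the actual one:

        #define mtx_intr_enable(mp) do {                               \
                (mp)->mtx_savecrit |= PSL_I;    /* illustrative body */ \
        } while (0)

        /* safe even here; an unwrapped multi-statement macro would
         * break the if/else pairing: */
        if (cold)
                mtx_intr_enable(mp);
        else
                foo();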
* - Switch from using save/disable/restore_intr to using critical_enter/exit, and change the u_int mtx_saveintr member of struct mtx to a critical_t mtx_savecrit.  [jhb, 2001-03-28, 1 file, -5/+5]
    - On the alpha we no longer need a custom _get_spin_lock() macro to avoid an extra PAL call, so remove it.
    - Partially fix using mutexes with WITNESS in modules. Change all of the _mtx_{un,}lock_{spin,}_flags() macros to accept explicit file and line parameters and rename them to use a prefix of two underscores. Inside of kern_mutex.c, generate wrapper functions for _mtx_{un,}lock_{spin,}_flags() (using a prefix of only one underscore) that are called from modules. The macros mtx_{un,}lock_{spin,}_flags() are mapped to the __mtx_* macros inside of the kernel, to inline the usual case of mutex operations, and map to the internal _mtx_* functions in the module case, so that modules will use WITNESS and KTR logging if the kernel is compiled with support for it. (A sketch of this mapping follows the entry.)
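    A sketch of the kernel-versus-module mapping described above; the double/single underscore naming is from the commit message, while the guard macro and exact argument lists are assumptions:

        #ifndef KLD_MODULE      /* compiled into the kernel: inline body */
        #define mtx_lock_flags(m, opts)                         \
                __mtx_lock_flags((m), (opts), __FILE__, __LINE__)
        #else                   /* compiled as a module: call the kernel's
                                 * function, picking up whatever WITNESS/KTR
                                 * support was built into it */
        #define mtx_lock_flags(m, opts)                         \
                _mtx_lock_flags((m), (opts), __FILE__, __LINE__)
        #endif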
* Fix mtx_legal2block. The only time that it is bad to block on a mutex is if we hold a spin mutex, since we can trivially get into deadlocks if we start switching out of processes that hold spin locks.  [jhb, 2001-03-09, 1 file, -1/+0]
    Checking to see if interrupts were disabled was a cheap way of doing this, since most of the time interrupts were only disabled when holding a spin lock, at least on the i386. To fix this properly, use a per-process counter p_spinlocks that counts the number of spin locks currently held, and instead of checking to see if interrupts are disabled in the witness code, check to see if we hold any spin locks. (A sketch follows.) Since child processes always start up with the sched lock magically held in fork_exit(), we initialize p_spinlocks to 1 for child processes. Note that proc0 doesn't go through fork_exit(), so it starts with no spin locks held.
    Consulting from: cp
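    A sketch of the change in the witness check; only p_spinlocks is from the commit message, the surrounding code is assumed:

        /* old, indirect test (i386-specific): treat "interrupts are
         * disabled" as a proxy for "we hold a spin lock" */
        if ((read_eflags() & PSL_I) == 0)
                panic("blocking with interrupts disabled");

        /* new, direct test: blocking is illegal iff spin locks are held */
        if (curproc->p_spinlocks > 0)
                panic("blocking while holding %d spin lock(s)",
                    curproc->p_spinlocks);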
* GC unused and now-obsolete assertion macros.  [jhb, 2001-02-22, 1 file, -8/+0]
* Add a macro mtx_intr_enable() to alter a spin lock such that interrupts will be enabled when it is released.  [jhb, 2001-02-10, 1 file, -0/+1]
* Change and clean the mutex lock interface.  [bmilekic, 2001-02-09, 1 file, -23/+66]
    mtx_enter(lock, type) becomes:
        mtx_lock(lock)       for sleep locks (MTX_DEF-initialized locks)
        mtx_lock_spin(lock)  for spin locks (MTX_SPIN-initialized)
    Similarly, for releasing a lock, we now have mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN. We change the caller interface for the two different types of locks because the semantics are entirely different for each case; this makes it explicitly clear and, at the same time, rids us of the extra `type' argument. The enter->lock and exit->unlock change has been made with the idea that we're "locking data" and not "entering locked code" in mind. (A before/after sketch follows this entry.)
    Further, remove all additional "flags" previously passed to the lock acquire/release routines, with the exception of two: MTX_QUIET and MTX_NOSWITCH. The functionality of these flags is preserved and they can be passed to the lock/unlock routines by calling the corresponding wrappers: mtx_{lock,unlock}_flags(lock, flag(s)) and mtx_{lock,unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN locks, respectively.
    Re-inline some lock acq/rel code; in the sleep lock case, we only inline the _obtain_lock()s in order to ensure that the inlined code fits into a cache line. In the spin lock case, we inline recursion and actually only perform a function call if we need to spin. This change has been made with the idea that we generally tend to avoid spin locks, and that the spin locks we do have and use heavily (i.e. sched_lock) do recurse; therefore, in an effort to reduce function call overhead for some architectures (such as alpha), we inline recursion for this case.
    Create a new malloc type for the witness code and retire from using the M_DEV type. The new type is called M_WITNESS and is only declared if WITNESS is enabled.
    Begin cleaning up some machdep/mutex.h code - specifically, update the "optimized" inlined code in alpha/mutex.h and write MTX_LOCK_SPIN and MTX_UNLOCK_SPIN asm macros for i386/mutex.h, as we presently need those.
    Finally, catch up to the interface changes in all sys code.
    Contributors: jake, jhb, jasone (in no particular order)
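    A before/after sketch of the call-site change (the lock names are illustrative):

        /* old interface: the lock type is repeated at every call site */
        mtx_enter(&sched_lock, MTX_SPIN);
        mtx_exit(&sched_lock, MTX_SPIN);
        mtx_enter(&Giant, MTX_DEF);
        mtx_exit(&Giant, MTX_DEF);

        /* new interface: the kind of lock is fixed at mtx_init() time,
         * so acquire/release name only the lock */
        mtx_lock_spin(&sched_lock);
        mtx_unlock_spin(&sched_lock);
        mtx_lock(&Giant);
        mtx_unlock(&Giant);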
* Move most of sys/mutex.h into kern/kern_mutex.c, thereby making the mutex inline functions non-inlined.  [jasone, 2001-01-21, 1 file, -0/+6]
    Hide parts of the mutex implementation that should not be exposed. Make sure that WITNESS code is not executed during boot until the mutexes are fully initialized by SI_SUB_MUTEX (the original motivation for this commit).
    Submitted by: peter
* Simplify the i386 asm MTX_{ENTER,EXIT} macros to just call the appropriate function, rather than doing a horse-and-buggy acquire. They now take the mutex type as an arg and can be used with sleep as well as spin mutexes.  [jake, 2001-01-20, 1 file, -102/+16]
* Argh, disable the micro-ops again. I didn't test these adequately and managed to lock up one of my machines in world again.  [jhb, 2001-01-16, 1 file, -1/+2]
    Pointy-hat to: me
* - Use "+a" instead of "=&a" for several constraints. This should fixjhb2001-01-161-25/+19
| | | | | | | | | | compiling errors where gcc would run out of registers. - Add "cc" to the list of clobbers for micro-ops where we perform instructions that alter %eflags. - Use xchgl instead of cmpxchgl to release a spin lock. This could allow for more efficient register allocation as we no longer mandate that %eax be used. - Reenable the optimized mutex micro-ops in the non-i386 case.
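    A sketch of the xchgl-based release, assuming MTX_UNOWNED is the lock word's "unlocked" value: cmpxchgl hardwires its expected old value into %eax, while xchgl (implicitly locked when one operand is memory) lets gcc pick any register:

        static __inline void
        spin_unlock_sketch(volatile u_int *lockp)
        {
                u_int unowned = MTX_UNOWNED;

                /* swap the unlocked cookie into the lock word; the
                 * constraints impose no %eax requirement */
                __asm __volatile("xchgl %1, %0"
                    : "+m" (*lockp), "+r" (unowned)
                    :
                    : "memory");
        }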
* Revert the previous revision now that atomic_store_rel_ptr() actually works.  [jhb, 2001-01-14, 1 file, -4/+0]
* Work around the broken atomic_store_rel_ptr() on the i386 arch by just using atomic_cmpset_rel_ptr() instead for _release_lock_quick(). When atomic_store_rel_ptr() is functional and MP-safe, this can be reverted. (A sketch follows.)  [jhb, 2001-01-14, 1 file, -0/+4]
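    A sketch of the workaround; the exact argument types of atomic_cmpset_rel_ptr() and the CURTHD spelling for the current owner are assumptions:

        /* release by compare-and-set instead of a release store: swap
         * the owner back to MTX_UNOWNED only if we still own the lock,
         * which always holds on the fast unlock path */
        #define _release_lock_quick(mp)                         \
                atomic_cmpset_rel_ptr(&(mp)->mtx_lock,          \
                    (void *)CURTHD, (void *)MTX_UNOWNED)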
* Fix the assembly mutex macros to call the appropriate witness functions if the witness code is compiled in. Without this, the witness code doesn't notice that sched_lock is released by fork_trampoline() and thus gets all confused about spin lock order later on.  [jhb, 2000-12-12, 1 file, -3/+38]
* Fix a jump to the wrong label, <sigh>. Put a period at the end of a sentence in a comment.  [jake, 2000-12-08, 1 file, -2/+2]
    Submitted by: bde
* Argh, revert the clobber changes. Since %ecx and %edx aren't call-safe, calling the C functions mtx_enter_hard() and mtx_exit_hard() clobbers them. Note that %eax is also not call-safe, but it is already clobbered due to cmpxchg. However, now we are back to not compiling again, so these macros are still left disabled for now.  [jhb, 2000-12-08, 1 file, -9/+9]
* Change the calling conventions of the MTX_ENTER macro to match those of MTX_EXIT. Don't assume that the reg parameter to MTX_ENTER holds curproc; load it explicitly. Put semicolons at the end of the macros, to be more consistent and so it's harder to forget them when these change.  [jake, 2000-12-08, 1 file, -11/+13]
* Well, the previous commit wasn't entirely correct either. For now, just disable the optimized mutex micro-operations for the non-I386_CPU case and fall back to the C stubs that call the atomic_foo() inlines.  [jhb, 2000-12-08, 1 file, -1/+2]
* Fix broken register constraints that needlessly clobbered registers %ecx and %edx, resulting in gcc not having enough registers left to work with.  [jhb, 2000-12-07, 1 file, -13/+13]
* (1) Allow a stray lock prefix to be compiled out with the MPLOCKED macro (see the sketch after this list).  [jake, 2000-12-04, 1 file, -16/+24]
    (2) Use decimal 12 rather than hex 0xc in an addl.
    (3) Implement MTX_ENTER for the I386_CPU case.
    (4) Use semicolons between instructions to allow MTX_ENTER and MTX_ENTER_WITH_RECURSION to be assembled.
    (5) Use incl instead of incw to increment the recursion count.
    (6) 10 is not a valid label; use 7, 8, and 9 rather than 8, 9, and 10.
    (7) Sort numeric labels.
    Submitted by: bde (2, 4, and 5)
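    A sketch of the MPLOCKED convention from item (1), assuming the usual FreeBSD form: the lock prefix is only needed on SMP kernels, so UP kernels compile it away:

        #ifdef SMP
        #define MPLOCKED        lock ;
        #else
        #define MPLOCKED
        #endif

        /* illustrative use in assembly: expands to "lock ; incl ..."
         * on SMP kernels and a plain "incl ..." on UP kernels */
                MPLOCKED        incl    (%edx)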
* Fix a bug in the handling of the saved interrupt state for spin mutexes in the MTX_EXIT_WITH_RECURSION() assembly macro (currently unused).  [jhb, 2000-11-13, 1 file, -2/+2]
    Submitted by: bde
* Define the mtx_legal2block() macro, used in the witness code, that managed to get lost during the MI mutex conversion.  [jhb, 2000-10-20, 1 file, -0/+2]
    Reported by: Steve Kargl <sgk@troutmask.apl.washington.edu>
* - Make the mutex code almost completely machine-independent. This greatly reduces the maintenance load for the mutex code. The only MD portions of the mutex code are now in machine/mutex.h, which includes the assembly macros for handling mutexes as well as optionally overriding the mutex micro-operations. For example, we use optimized micro-ops on the x86 platform #ifndef I386_CPU.  [jhb, 2000-10-20, 1 file, -512/+23]
    - Change the behavior of the SMP_DEBUG kernel option. In the new code, mtx_assert() only depends on INVARIANTS, allowing other kernel developers to have working mutex assertions without having to include all of the mutex debugging code. The SMP_DEBUG kernel option has been renamed to MUTEX_DEBUG and now just controls extra mutex debugging code.
    - Abolish the ugly mtx_f hack. Instead, dynamically allocate separate mtx_debug structures on the fly in mtx_init(), except for mutexes that are initialized very early in the boot process. These mutexes are declared using a special MUTEX_DECLARE() macro and use a new flag, MTX_COLD, when calling mtx_init(). This is still somewhat hackish, but it is less evil than the mtx_f filler struct, and the mtx struct is now the same size with and without mutex debugging code.
    - Add some micro-micro-operation macros for doing the actual atomic operations on the mutex mtx_lock field, to make it easier for other archs to override/optimize mutex ops if needed. These new tiny ops also clean up the code in some places by replacing long atomic operation function calls that spanned 2-3 lines with a short one-line macro call. (A sketch follows this entry.)
    - Don't call mi_switch() from mtx_enter_hard() when we block while trying to obtain a sleep mutex. Calling mi_switch() would bogusly release Giant before switching to the next process. Instead, inline most of the code from mi_switch() in the mtx_enter_hard() function. Note that when we finally kill Giant, we can back this out and go back to calling mi_switch().
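    A sketch of the micro-micro-operation macros described above; the names mirror the convention in the mutex code of that era, but treat the exact spellings and argument types as assumptions:

        /* grab the lock: succeeds iff mtx_lock still holds the
         * MTX_UNOWNED cookie, installing the owner with acquire
         * semantics */
        #define _obtain_lock(mp, tid)                           \
                atomic_cmpset_acq_ptr(&(mp)->mtx_lock,          \
                    (void *)MTX_UNOWNED, (void *)(tid))

        /* drop the lock with release semantics */
        #define _release_lock(mp, tid)                          \
                atomic_cmpset_rel_ptr(&(mp)->mtx_lock,          \
                    (void *)(tid), (void *)MTX_UNOWNED)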
* - Change fast interrupts on x86 to push a full interrupt frame and to return through doreti to handle ASTs. This is necessary for the clock interrupts to work properly.  [jhb, 2000-10-06, 1 file, -0/+1]
    - Change the clock interrupts on the x86 to be fast instead of threaded. This is needed because both hardclock() and statclock() need to run in the context of the current process, not in a separate thread context.
    - Kill the prevproc hack, as it is no longer needed.
    - We really need Giant when we call psignal(), but we don't want to block during the clock interrupt. Instead, use two p_flag's in the proc struct to mark the current process as having a pending SIGVTALRM or SIGPROF, and let them be delivered during ast() when hardclock() has finished running. (A sketch follows this entry.)
    - Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyway. It was broken on the x86 if it was turned on, since cpl is gone. Its only use was to bogusly run softclock() directly during hardclock() rather than scheduling an SWI.
    - Remove the COM_LOCK simplelock and replace it with a clock_lock spin mutex. Since the spin mutex already handles disabling/restoring interrupts appropriately, this also lets us axe all the *_intr() fu.
    - Back out the hacks in the APIC_IO x86 cpu_initclocks() code that used temporary fast interrupts for the APIC trial.
    - Add two new process flags, P_ALRMPEND and P_PROFPEND, to mark the pending signals in hardclock() that are to be delivered in ast().
    Submitted by: jakeb (making statclock safe in a fast interrupt)
    Submitted by: cp (concept of delaying signals until ast())
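    A sketch of the delayed-signal scheme; the flag and signal names are from the commit message, the surrounding control flow is assumed:

        /* in hardclock(), running as a fast interrupt where psignal()
         * (which needs Giant) must not be called: just mark the proc */
        if (itimer_fired)                       /* assumed condition */
                p->p_flag |= P_ALRMPEND;

        /* later, in ast(), where taking Giant is safe: */
        if (p->p_flag & P_ALRMPEND) {
                p->p_flag &= ~P_ALRMPEND;
                psignal(p, SIGVTALRM);
        }
        if (p->p_flag & P_PROFPEND) {
                p->p_flag &= ~P_PROFPEND;
                psignal(p, SIGPROF);
        }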
* Reduce userland namespace pollution.  [jasone, 2000-10-04, 1 file, -1/+4]
* Fix the assembly mutex macros to handle saving/restoring interrupt state properly. Fix the recursive mutex macros to actually compile. At the moment we only use MTX_EXIT anyway.  [jhb, 2000-09-24, 1 file, -7/+21]
* #include <sys/proc.h> in order to get curproc. This seems to be the lesser of two evils; the greater evil is requiring sys/proc.h to be included before machine/mutex.h.  [jasone, 2000-09-23, 1 file, -3/+2]
* Teach MTX_EXIT_RECURSE that the recursion count is a 32-bit integer, not a 16-bit one.  [jhb, 2000-09-22, 1 file, -3/+3]
* Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just use struct mtx, struct witness, and struct witness_blessed.  [jhb, 2000-09-14, 1 file, -25/+23]
    Requested by: bde
* Style cleanups. No functional changes.  [jasone, 2000-09-09, 1 file, -42/+37]
* Add file and line arguments to WITNESS_ENTER() and WITNESS_EXIT(), since __FILE__ and __LINE__ don't get expanded usefully in inline functions. Add const to all witness*() arguments that are filenames.  [jasone, 2000-09-09, 1 file, -14/+14]
* Rename mtx_enter(), mtx_try_enter(), and mtx_exit(), and wrap them with cpp macros that expand to pass filename and line number information. This is necessary since we're using inline functions instead of macros now. (A sketch follows.)  [jasone, 2000-09-08, 1 file, -68/+80]
    Add const to the filename pointers passed throughout the mtx and witness code.
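    A sketch of the wrapping described above: inside an inline function, __FILE__ and __LINE__ would expand to the header's own coordinates, so a macro layer injects the caller's location instead. The renamed function's exact signature is an assumption:

        /* renamed function takes explicit file/line for witness/KTR */
        void    _mtx_enter(struct mtx *mtxp, int type,
                    const char *file, int line);

        /* cpp wrapper keeps the old spelling at call sites */
        #define mtx_enter(mtxp, type)                           \
                _mtx_enter((mtxp), (type), __FILE__, __LINE__)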
* Major update to the way synchronization is done in the kernel. Highlights include:  [jasone, 2000-09-07, 1 file, -0/+786]
    - Mutual exclusion is used instead of spl*(). See mutex(9). (Note: the alpha port is still in transition and currently uses both.) (A usage sketch follows this entry.)
    - Per-CPU idle processes.
    - Interrupts are run in their own separate kernel threads and can be preempted (i386 only).
    Partially contributed by: BSDi (BSD/OS)
    Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
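    A minimal usage sketch of the mutex(9) interface this commit introduced, as it looked at the time (see the 2001-02-09 entry above for the later mtx_lock/mtx_unlock renaming); lock name and placement are illustrative:

        static struct mtx foo_mtx;

        mtx_init(&foo_mtx, "foo", MTX_DEF);     /* once, at setup time */

        mtx_enter(&foo_mtx, MTX_DEF);           /* replaces s = splfoo(); */
        /* ... touch data protected by foo_mtx ... */
        mtx_exit(&foo_mtx, MTX_DEF);            /* replaces splx(s); */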