path: root/sys/i386/include/globaldata.h
Commit message (Author, Date, Files, Lines)
* Major update to the way synchronization is done in the kernel.  (jasone, 2000-09-07, 1 file, -0/+33)

    Highlights include:
    * Mutual exclusion is used instead of spl*().  See mutex(9).
      (Note: The alpha port is still in transition and currently uses both.)
    * Per-CPU idle processes.
    * Interrupts are run in their own separate kernel threads and can be
      preempted (i386 only).

    Partially contributed by:	BSDi (BSD/OS)
    Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
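To make the spl*()-to-mutex(9) transition concrete, here is a minimal sketch; the foo_* names are hypothetical, and the mtx_init() call uses the later four-argument form, which may differ from what this revision's headers provided:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx;              /* protects foo_count */
    static int foo_count;

    static void
    foo_init(void)
    {
            mtx_init(&foo_mtx, "foo lock", NULL, MTX_DEF);
    }

    static void
    foo_bump(void)
    {
            /* Previously: s = splhigh(); foo_count++; splx(s); */
            mtx_lock(&foo_mtx);
            foo_count++;
            mtx_unlock(&foo_mtx);
    }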
* Commit major SMP cleanups and move the BGL (big giant lock) in the syscall path inward.  (dillon, 2000-03-28, 1 file, -0/+1)

    A system call may select whether it needs the MP lock or not (the
    default being that it does need it).

    A great deal of conditional SMP code for various deadended experiments
    has been removed.  'cil' and 'cml' have been removed entirely, and the
    locking around the cpl has been removed.

    The conditional separately-locked fast-interrupt code has been removed,
    meaning that interrupts must hold the CPL now (but they pretty much had
    to anyway).  Another reason for doing this is that the original
    separate-lock for interrupts just doesn't apply to the interrupt thread
    mechanism being contemplated.

    Modifications to the cpl may now ONLY occur while holding the MP lock.
    For example, if an otherwise MP safe syscall needs to mess with the
    cpl, it must hold the MP lock for the duration and must (as usual)
    save/restore the cpl in a nested fashion.

    This is precursor work for the real meat coming later: avoiding having
    to hold the MP lock for common syscalls and I/O's and interrupt
    threads.  It is expected that the spl mechanisms and new interrupt
    threading mechanisms will be able to run in tandem, allowing a slow
    piecemeal transition to occur.

    This patch should result in a moderate performance improvement due to
    the considerable amount of code that has been removed from the critical
    path, especially the simplification of the spl*() calls.  The real
    performance gains will come later.

    Approved by:	jkh
    Reviewed by:	current, bde (exception.s)
    Some work taken from:	luoqi's patch
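The "cpl only under the MP lock" rule can be sketched as follows; this is an illustrative reconstruction using the era's i386 interfaces, not code from this commit:

    /* Historical i386 interfaces (prototypes abridged for the sketch). */
    extern void     get_mplock(void);
    extern void     rel_mplock(void);
    extern int      splnet(void);
    extern void     splx(int);

    static void
    poke_network_state(void)
    {
            int s;

            get_mplock();   /* the cpl may only change under the MP lock */
            s = splnet();   /* raise the cpl, saving the previous value */
            /* ... touch state also used by network interrupts ... */
            splx(s);        /* restore the saved cpl, nested as usual */
            rel_mplock();
    }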
* $Id$ -> $FreeBSD$  (peter, 1999-08-28, 1 file, -1/+1)
* Unifdef VM86.  (jlemon, 1999-06-01, 1 file, -3/+1)

    Reviewed by:	silence on -current
* Unbreak VESA on SMP.  (luoqi, 1999-05-12, 1 file, -1/+2)
* Enable vmspace sharing on SMP.  (luoqi, 1999-04-28, 1 file, -38/+28)

    Major changes are:
    - %fs register is added to trapframe and saved/restored upon kernel
      entry/exit.
    - Per-cpu pages are no longer mapped at the same virtual address.
    - Each cpu now has a separate gdt selector table.  A new segment
      selector is added to point to per-cpu pages; per-cpu global variables
      are now accessed through this new selector (%fs).  The selectors in
      the gdt table are rearranged for cache line optimization.
    - fast_vfork is now on by default for both UP and SMP.
    - Some aio code cleanup.

    Reviewed by:	Alan Cox <alc@cs.rice.edu>
			John Dyson <dyson@iquest.net>
			Julian Elischer <julian@whistel.com>
			Bruce Evans <bde@zeta.org.au>
			David Greenman <dg@root.com>
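The %fs trick is easiest to see in a sketch: each CPU's GDT gives %fs a different base (that CPU's private page), so one instruction at one offset reads a different variable on every CPU. The offset below is hypothetical; the real ones come from the structure layout in this header:

    static __inline int
    cpuid_self(void)
    {
            int id;

            /*
             * 32 is a hypothetical byte offset of the gd_cpuid field within
             * the per-cpu page; the same instruction yields a different
             * instance on each CPU because each CPU's %fs base differs.
             */
            __asm __volatile("movl %%fs:32, %0" : "=r" (id));
            return (id);
    }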
* Added a per-cpu variable `switchticks' for use in scheduling.  (bde, 1999-02-22, 1 file, -1/+2)
* Presently there is only one `currentldt' variable for all cpus in an SMP system.  (msmith, 1998-08-18, 1 file, -1/+4)

    Unexpected things could happen if each cpu has a different ldt setting
    and one cpu tries to use the value of currentldt set by another cpu.
    The fix is to move currentldt to the per-cpu area.  It includes patches
    I filed in PR i386/6219, which are also user ldt related.

    PR:		i386/7591, i386/6219
    Submitted by:	Luoqi Chen <luoqi@watermarkgroup.com>
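The shape of the fix, sketched; the field name and type follow this header's gd_ prefix convention but are illustrative and may not match the commit exactly:

    /* Before: one global, silently shared by every CPU. */
    /* int currentldt; */

    /* After: one instance per CPU, reached through the per-cpu area. */
    struct globaldata {
            /* ... other per-cpu fields ... */
            int     gd_currentldt;  /* LDT selector active on this CPU */
    };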
* Some cleanups related to timecounters and weird ifdefs in <sys/time.h>.  (phk, 1998-05-28, 1 file, -1/+2)

    Clean up (or if antipodic: down) some of the msgbuf stuff.

    Use an inline function rather than a macro for timecounter delta.

    Maintain process "on-cpu" time as 64 bits of microseconds to avoid
    needless second rollover overhead.

    Avoid calling microuptime the second time in mi_switch() if we do not
    pass through _idle in cpu_switch().  This should reduce our
    context-switch overhead a bit, in particular on pre-P5 and SMP systems.

    WARNING: Programs which muck about with struct proc in userland will
    have to be fixed.

    Reviewed, but found imperfect by:	bde
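The 64-bit microsecond representation trades a struct timeval, which must be renormalized whenever tv_usec reaches a million, for a single add per context switch. A sketch with hypothetical names:

    #include <sys/types.h>

    /* Hypothetical stand-in for the per-process on-cpu accumulator. */
    static u_int64_t p_runtime_us;

    static void
    account_slice(u_int64_t slice_us)
    {
            p_runtime_us += slice_us;       /* one add, no tv_usec rollover */
    }

    /* Split into seconds/microseconds only when something asks for it. */
    #define RT_SEC(rt)      ((rt) / 1000000)
    #define RT_USEC(rt)     ((rt) % 1000000)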
* Change simple lock handling to not depend upon having a local apic available.  (tegge, 1998-05-17, 1 file, -2/+2)

    The per-cpu variable ss_tpr has been replaced by ss_eflags.  This
    reduced the number of interrupts sent to the wrong CPU, due to the cpu
    holding the global lock being inside a critical region.

    Remove some unneeded manipulation of the tpr register in mplock.s.

    Adjust code in mplock.s to be aware of variables on the stack being
    destroyed by MPgetlock if GRAB_LOPRIO is defined.
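The substitution amounts to saving the interrupt-enable state from EFLAGS instead of raising the local APIC task priority; a sketch with hypothetical helper names:

    static __inline unsigned int
    save_eflags_cli(void)
    {
            unsigned int ef;

            /* Works on every i386 CPU, local APIC or not. */
            __asm __volatile("pushfl; popl %0; cli" : "=r" (ef) : : "memory");
            return (ef);    /* stored in the per-cpu ss_eflags slot */
    }

    static __inline void
    restore_eflags(unsigned int ef)
    {
            __asm __volatile("pushl %0; popfl" : : "r" (ef) : "memory");
    }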
* For SMP, use prv_PPAGE1/prv_PMAP1 instead of PADDR1/PMAP1.  (tegge, 1998-05-17, 1 file, -6/+8)

    get_ptbase and pmap_pte_quick no longer generate IPIs.  This should
    reduce the number of IPIs during heavy paging.
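Why the IPIs disappear: PMAP1/PADDR1 were a single globally shared "window" PTE, so remapping it required telling every CPU to flush the stale translation. With a private window per CPU, the invalidation stays local. A self-contained sketch, with constants and names simplified:

    typedef unsigned int pt_entry_t;

    /* One private window per CPU, living in that CPU's per-cpu pages. */
    static pt_entry_t *prv_PMAP1;   /* PTE that backs the window */
    static char *prv_PPAGE1;        /* VA where the window appears */

    #define PG_FRAME        0xfffff000
    #define PG_V_RW         0x003           /* valid + writable */

    static char *
    map_ptpage_local(pt_entry_t frame)
    {
            if ((*prv_PMAP1 & PG_FRAME) != (frame & PG_FRAME)) {
                    *prv_PMAP1 = (frame & PG_FRAME) | PG_V_RW;
                    /* Local flush only: no other CPU maps this VA. */
                    __asm __volatile("invlpg %0" : : "m" (*prv_PPAGE1));
            }
            return (prv_PPAGE1);
    }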
* Fix VM86 compiles.  (peter, 1998-04-06, 1 file, -5/+2)

    A #include "opt_vm86.h" was missing, and the my_tr variable was needed
    in the non-SMP case.

    Submitted by:	Jonathan Lemon <jlemon@americantv.com>
* A pair of C structures used for laying out the SMP per-cpu data space.  (peter, 1998-04-06, 1 file, -0/+101)
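For orientation, an abridged sketch of what such a per-cpu layout looks like; the field names follow this header's gd_ convention but are illustrative, not the literal 101 lines:

    struct proc;
    struct pcb;

    struct globaldata {
            struct proc     *gd_curproc;    /* process running on this CPU */
            struct proc     *gd_npxproc;    /* owner of the FPU state */
            struct pcb      *gd_curpcb;     /* pcb of gd_curproc */
            int             gd_cpuid;       /* logical id of this CPU */
            unsigned int    gd_other_cpus;  /* bitmask of the other CPUs */
            /* ... TSS, private page table entries, etc. ... */
    };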