path: root/sys/i386/isa/ipl.s
Commit messages, with author, date, files changed, and lines -removed/+added noted per entry.
* Introduce a new, potentially cleaner interface for accessing per-cpu
  variables from i386 assembly language.  [jake, 2000-12-13; 1 file, -4/+4]

  The syntax is PCPU(member), where member is the capitalized name of the
  per-cpu variable, without the gd_ prefix.  Example: movl %eax,PCPU(CURPROC).
  The capitalization is due to using the offsets generated by genassym
  rather than the symbols provided by linking with globals.o.  asmacros.h is
  the wrong place for this, but it seemed as good a place as any for now.
  The old implementation in asnames.h has not been removed because it is
  still used to de-mangle the symbols used by the C variables for the UP
  case.
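  A minimal sketch of how such an accessor could be spelled, assuming that
  genassym emits GD_<MEMBER> byte offsets into the per-cpu structure and
  that %fs maps the per-cpu region on SMP (see the 1999-04-28 entry below);
  this is illustrative, not the committed asmacros.h text:

      /* hypothetical PCPU() accessor for .s files; not the real header */
      #ifdef SMP
      #define PCPU(member)    %fs:GD_ ## member           /* per-cpu via %fs */
      #else
      #define PCPU(member)    globaldata + GD_ ## member  /* single copy */
      #endif

      /* usage in assembly:  movl %eax, PCPU(CURPROC) */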
* Remove the last of the MD netisr code.  It is now all MI.  [jake, 2000-12-05; 1 file, -26/+0]

  Remove spending, which was unused now that all software interrupts have
  their own thread.  Make the legacy schednetisr use an atomic op for
  setting bits in the netisr mask.

  Reviewed by:  jhb
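  A plausible one-line rendering of that last change, assuming the netisr
  mask is a plain u_int and using the atomic(9) primitives; the exact
  committed form may differ:

      #include <sys/types.h>
      #include <machine/atomic.h>     /* atomic_set_int(), see atomic(9) */

      extern volatile u_int netisr;   /* MI mask of pending legacy netisrs */

      /*
       * Set the bit with a locked read-modify-write so that concurrent
       * callers on other CPUs cannot lose updates, as a plain
       * "netisr |= bit" could.
       */
      #define schednetisr(num)        atomic_set_int(&netisr, 1 << (num))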
* Change doreti to take a trapframe instead of an intrframe.  [jake, 2000-12-01; 1 file, -1/+2]

  Remove the associated pushes of dummy units to convert the frame.

  Reviewed by:  jhb
* Overhaul the software interrupt code to use interrupt threads for each
  type of software interrupt.  [jhb, 2000-10-25; 1 file, -33/+1]

  - Roughly, what used to be a bit in spending now maps to a swi thread.
    Each thread can have multiple handlers, just like a hardware interrupt
    thread.
  - Instead of using a bitmask of pending interrupts, we schedule the
    specific software interrupt thread to run, so spending, NSWI, and the
    shandlers array are no longer needed.  We can now have an arbitrary
    number of software interrupt threads.  When you register a software
    interrupt thread via sinthand_add(), you get back a struct intrhand
    that you pass to sched_swi() when you wish to schedule your swi thread
    to run (see the sketch after this entry).
  - Convert the name of 'struct intrec' to 'struct intrhand' as it is a
    bit more intuitive.  Also, prefix all the members of struct intrhand
    with 'ih_'.
  - Make swi_net() a MI function since there is now no point in it being
    MD.

  Submitted by:  cp
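  A hedged sketch of the registration/scheduling pattern this describes;
  sinthand_add() and sched_swi() are named above, but the argument lists
  and header below are illustrative assumptions, not the 2000-era
  prototypes:

      #include <sys/interrupt.h>        /* assumed home of struct intrhand */

      static struct intrhand *mydrv_ih; /* cookie returned at registration */

      static void
      mydrv_swi(void *arg)
      {
              /* deferred work runs here, in the swi thread's context */
      }

      static void
      mydrv_attach(void)
      {
              /* register the handler; keep the struct intrhand cookie */
              mydrv_ih = sinthand_add("mydrv", mydrv_swi, NULL);
      }

      static void
      mydrv_hardintr(void)
      {
              /* from the interrupt path: schedule our swi thread to run */
              sched_swi(mydrv_ih);
      }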
* Heavyweight interrupt threads on the alpha for device I/O
  interrupts.  [jhb, 2000-10-05; 1 file, -8/+10]

  - Make softinterrupts (SWI's) almost completely MI, and divorce them
    completely from the x86 hardware interrupt code.
  - The ihandlers array is now gone.  Instead, there is a MI shandlers
    array that just contains SWI handlers.
  - Most of the former machine/ipl.h files have moved to a new sys/ipl.h.
  - Stub out all the spl*() functions on all architectures.

  Submitted by:  dfr
* Major update to the way synchronization is done in the
  kernel.  [jasone, 2000-09-07; 1 file, -128/+21]

  Highlights include:

  * Mutual exclusion is used instead of spl*().  See mutex(9); a brief
    sketch follows this entry.  (Note: The alpha port is still in
    transition and currently uses both.)
  * Per-CPU idle processes.
  * Interrupts are run in their own separate kernel threads and can be
    preempted (i386 only).

  Partially contributed by:  BSDi (BSD/OS)
  Submissions by (at least):  cp, dfr, dillon, grog, jake, jhb, sheldonh
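  A minimal before/after sketch of the spl-to-mutex conversion from the
  first highlight; mutex(9) is the documented interface, though the
  2000-era spelling was mtx_enter()/mtx_exit() rather than the
  mtx_lock()/mtx_unlock() shown here:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>            /* see mutex(9) */

      static struct mtx foo_mtx;

      static void
      foo_init(void)
      {
              mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);
      }

      /* before: s = splfoo(); ...; splx(s);  -- masks local interrupts */
      /* after: a mutex excludes all CPUs, not just the local one */
      static void
      foo_update(void)
      {
              mtx_lock(&foo_mtx);
              /* ... touch data formerly guarded by an spl level ... */
              mtx_unlock(&foo_mtx);
      }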
* AT&T asm syntax requires a leading '*' in front of the operand for
  indirect calls and jumps.  [obrien, 2000-05-10; 1 file, -2/+2]
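  Concretely, the fix is the difference between these two spellings of an
  indirect call; a small hypothetical illustration, not the changed lines
  themselves:

      /* An indirect call through a function pointer.  AT&T syntax wants
       * "call *%eax": the '*' marks the operand as the address to call
       * through (register or memory) rather than a direct target, so a
       * plain "call %eax" is wrong and gas warns about or rejects it. */
      static void (*swi_handler)(void); /* hypothetical pointer */

      static void
      dispatch(void)
      {
              (*swi_handler)(); /* AT&T: call *%eax    Intel: call eax */
      }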
* This should fix the lockups people have been
  experiencing.  [dillon, 2000-03-29; 1 file, -1/+1]

  I muffed up when giving astpending two flag bits: a cmpl $0 had to turn
  into a bit test.

  Many thanks to:  Alain Thivillon <Alain.Thivillon@hsc.fr>
* The SMP cleanup commit broke need_resched; this fixes
  that.  [dillon, 2000-03-29; 1 file, -3/+3]

  Also removed unnecessary MPLOCKED and 'lock' prefixes from the interrupt
  nesting level, since (A) the MP lock is held at the time, and (B) the
  nesting level is restored prior to return, so any interrupted code will
  see a consistent value.
* Commit major SMP cleanups and move the BGL (big giant lock) in the
  syscall path inward.  [dillon, 2000-03-28; 1 file, -91/+28]

  A system call may select whether it needs the MP lock or not (the
  default being that it does need it).

  A great deal of conditional SMP code for various dead-ended experiments
  has been removed.  'cil' and 'cml' have been removed entirely, and the
  locking around the cpl has been removed.

  The conditional separately-locked fast-interrupt code has been removed,
  meaning that interrupts must hold the CPL now (but they pretty much had
  to anyway).  Another reason for doing this is that the original separate
  lock for interrupts just doesn't apply to the interrupt thread mechanism
  being contemplated.

  Modifications to the cpl may now ONLY occur while holding the MP lock.
  For example, if an otherwise MP-safe syscall needs to mess with the cpl,
  it must hold the MP lock for the duration and must (as usual)
  save/restore the cpl in a nested fashion (sketched below).

  This is precursor work for the real meat coming later: avoiding having
  to hold the MP lock for common syscalls and I/Os and interrupt threads.
  It is expected that the spl mechanisms and the new interrupt threading
  mechanisms will be able to run in tandem, allowing a slow piecemeal
  transition to occur.

  This patch should result in a moderate performance improvement due to
  the considerable amount of code that has been removed from the critical
  path, especially the simplification of the spl*() calls.  The real
  performance gains will come later.

  Approved by:  jkh
  Reviewed by:  current, bde (exception.s)
  Some work taken from:  luoqi's patch
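  A minimal sketch of that nested save/restore rule, assuming the classic
  get_mplock()/rel_mplock() entry points; illustrative only:

      /*
       * An otherwise MP-safe syscall that must touch the cpl: per the
       * rule above, it holds the MP lock for the duration and nests the
       * spl save/restore as usual.
       */
      static void
      touch_cpl_example(void)
      {
              unsigned int s;

              get_mplock();     /* cpl may only be modified under the BGL */
              s = splnet();     /* raise cpl, saving the previous mask */
              /* ... work that must exclude net software interrupts ... */
              splx(s);          /* restore the saved mask (nested) */
              rel_mplock();
      }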
* Optimize two cases in the MP locking code.  [dillon, 1999-11-19; 1 file, -3/+2]

  First, it is not necessary to use a locked cmpxchg when unlocking a lock
  that we already hold, since nobody else can touch the lock while we hold
  it.  Second, it is not necessary to use a locked cmpxchg when locking a
  lock that we already hold, for the same reason.  These changes will
  allow MP locks to be used recursively without impacting performance
  (see the C sketch below).

  Modify two procedures that are called only by assembly and are already
  NOPROF entries to pass a critical argument in %edx instead of on the
  stack, removing a significant amount of code from the critical path as a
  consequence.

  Reviewed by:  Alfred Perlstein <bright@wintelcom.net>,
                Peter Wemm <peter@netplex.com.au>
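  A C rendering of the idea, under the assumption of an owner/depth pair;
  the real code was i386 assembly, so this is only a sketch:

      #include <sys/types.h>
      #include <machine/atomic.h>

      struct mplock_sketch {
              volatile u_int  owner;  /* owning CPU id + 1, or 0 if free */
              int             depth;  /* recursion count */
      };

      static void
      mplock_get(struct mplock_sketch *m, u_int cpuid)
      {
              if (m->owner == cpuid + 1) {
                      m->depth++;     /* already ours: no locked op needed */
                      return;
              }
              /* contended path: ownership can change, so cmpxchg it is */
              while (atomic_cmpset_int(&m->owner, 0, cpuid + 1) == 0)
                      ;               /* spin until the lock is free */
              m->depth = 1;
      }

      static void
      mplock_rel(struct mplock_sketch *m)
      {
              if (--m->depth == 0)
                      m->owner = 0;   /* plain store: we still hold it */
      }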
* $Id$ -> $FreeBSD$  [peter, 1999-08-28; 1 file, -1/+1]
* Go back to the old (icu.s rev.1.7 1993) way of keeping the AST-pending
  bit separate from ipending, since this is simpler and/or necessary for
  SMP and may even be better for UP.  [bde, 1999-07-10; 1 file, -64/+20]

  Reviewed by:  alc, luoqi, tegge
* An SMP-specific change: add the lock prefix to RMW operations on
  ipending.  [alc, 1999-07-03; 1 file, -2/+3]
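  The kind of change involved, sketched as inline asm on a hypothetical
  mask variable (not the actual ipl.s lines):

      /* Without "lock", orl's read-modify-write is not atomic across
       * CPUs, and concurrent updates to the mask can be lost. */
      static volatile unsigned int ipending_sketch;   /* hypothetical */

      static __inline void
      setbit_ipending(unsigned int irq)
      {
              __asm __volatile("lock; orl %1,%0"
                  : "+m" (ipending_sketch)
                  : "ir" (1u << irq)
                  : "cc");
      }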
* Unifdef VM86.  [jlemon, 1999-06-01; 1 file, -7/+1]

  Reviewed by:  silence on -current
* Fixed profiling of elf kernels.  [bde, 1999-05-06; 1 file, -1/+3]

  Made high resolution profiling compile for elf kernels (it is broken for
  all kernels due to lack of egcs support).  Renaming of many assembler
  labels is avoided by declaring the labels that need to be visible to
  gprof as having type "function", and by depending on the elf version of
  gprof being zealous about discarding the others.  A few type
  declarations are still missing, mainly for SMP.

  PR:  9413
  Submitted by:  Assar Westerlund <assar@sics.se> (initial parts)
* Enable vmspace sharing on SMP.  [luoqi, 1999-04-28; 1 file, -6/+12]

  Major changes are:

  - The %fs register is added to the trapframe and saved/restored upon
    kernel entry/exit.
  - Per-cpu pages are no longer mapped at the same virtual address.
  - Each cpu now has a separate gdt selector table.  A new segment
    selector is added to point to per-cpu pages, and per-cpu global
    variables are now accessed through this new selector (%fs).  The
    selectors in the gdt table are rearranged for cache line optimization.
  - fast_vfork is now on by default for both UP and SMP.
  - Some aio code cleanup.

  Reviewed by:  Alan Cox <alc@cs.rice.edu>,
                John Dyson <dyson@iquest.net>,
                Julian Elischer <julian@whistel.com>,
                Bruce Evans <bde@zeta.org.au>,
                David Greenman <dg@root.com>
* Move initialization of SWI's in the tty|net|bio masks from isa.c into
  the static initializers in ipl.s.  [peter, 1999-04-11; 1 file, -5/+5]
* Register tty software interrupt handlers at run time using
  register_swi() instead of at compile time using
  ifdefs.  [bde, 1998-08-11; 1 file, -28/+1]

  Use _swi_null instead of dummycamisr.  CAM and dpt should call
  register_swi() instead of hacking on ihandlers[] directly.
* Implemented dynamic registration of software interrupt handlers.  Not
  used yet.  [bde, 1998-08-11; 1 file, -17/+26]

  Use dummy SWI handlers to avoid some checks for null pointers.
* Extend cpl workaround so that it applies when we are returning to
  user-mode as well as vm86 mode.  [jlemon, 1998-07-27; 1 file, -3/+5]
* Add the ability to make real-mode BIOS calls from the
  kernel.  [jlemon, 1998-03-23; 1 file, -1/+8]

  Currently, everything is contained inside #ifdef VM86, so this option
  must be present in the config file to use this functionality.

  Thanks to Tor Egge, these changes should work on SMP machines.  However,
  they may not be thoroughly SMP-safe.

  Currently, the only BIOS calls made are memory-sizing routines at
  bootup; these replace reading the RTC values.
* When entering the apic version of the slow interrupt handler, level
  interrupts are masked, and EOI is sent iff the corresponding ISR bit is
  set in the local apic.  [tegge, 1998-03-03; 1 file, -1/+25]

  If the CPU cannot obtain the interrupt service lock (currently the
  global kernel lock), the interrupt is forwarded to the CPU holding that
  lock.  Clock interrupts now have higher priority than other slow
  interrupts.
* Add support for low resolution SMP kernel
  profiling.  [tegge, 1997-12-15; 1 file, -3/+7]

  - A nonprofiling version of s_lock (called s_lock_np) is used by mcount.
  - When profiling is active, more registers are clobbered in seemingly
    simple assembly routines.  This means that some callers needed to
    save/restore extra registers.
  - The stack pointer must have space for a 'fake' return address in idle,
    to avoid stack underflow.
* Disable the TEST_CIL code till I can commit the complete
  solution.  [fsmp, 1997-10-13; 1 file, -1/+5]

  Noticed by:  Peter Wemm <peter@netplex.com.au>
* Fixed a foobar on my part that broke non-SMP kernels.  (I need some
  sleep...)  [fsmp, 1997-09-29; 1 file, -11/+24]
* Screwed up the debug for the cil deadlock; another 3 hours down the
  tubes...  [fsmp, 1997-09-29; 1 file, -2/+2]
* Added a couple of short-term debugs and a fix to the SPIN_MAX
  variable.  [fsmp, 1997-09-28; 1 file, -2/+17]

  The debugs are an attempt to ferret out the PUSHDOWN_LEVEL_3 deadlock.
* aha1542.c aic6360.c cy.c fd.c ft.c if_ie.c if_wl.c if_zp.c isa.c
  isa_device.h labpc.c mcd.c ncr5380.c scd.c seagate.c si.c sio.c tw.c
  ultra14f.c wcd.c wd.c:
        Update for changes in the callout interface.

  apic_vector.s icu_vector.s ipl.s ipl_funcs.c:
        Add CAM software/hardware interrupt support.

  [gibbs, 1997-09-21; 1 file, -3/+10]
* General cleanup of the lock pushdown code.  [fsmp, 1997-09-07; 1 file, -27/+21]

  They are grouped and enabled from machine/smptests.h:

        #define PUSHDOWN_LEVEL_1
        #define PUSHDOWN_LEVEL_2
        #define PUSHDOWN_LEVEL_3
        #define PUSHDOWN_LEVEL_4_NOT
* Support for the new FAST_HI algorithm,
  enabled.  [fsmp, 1997-08-29; 1 file, -7/+40]

  Preliminary support for the INTR_SIMPLELOCK algorithm, disabled.  Note
  that this code is NOT ready.
* The last of the encapsulation of cpl/spl/ipending things into a
  critical region protected by the simplelock
  'cpl_lock'.  [fsmp, 1997-08-24; 1 file, -79/+59]

  Notes:

  - This code is currently controlled on a section-by-section basis with
    defines in machine/param.h.  All sections are currently enabled.
  - This code is not as clean as I would like, but that can wait till
    later.
  - The "giant lock" still surrounds most instances of this "cpl region".
    I still have to do the code that arbitrates setting cpl between the
    top and bottom halves of the kernel.
  - The possibility of deadlock exists.  I am committing the code at this
    point so as to exercise it and detect any such cases before the
    "giant lock" is removed.
* Another boo-boo: this file defines cil.  [fsmp, 1997-08-21; 1 file, -1/+5]
* Preparation for moving cpl into critical region
  access.  [fsmp, 1997-08-20; 1 file, -7/+4]

  Several new fine-grained locks.  New FAST_INTR() methods:

  - separate simplelock for FAST_INTR, no more giant lock.
  - FAST_INTR()s no longer check ipending on the way out of the ISR.

  sio made MP-safe (I hope).
* Oops, fix breakage to UP kernel.  [fsmp, 1997-08-10; 1 file, -1/+3]
* Added trap-specific lock calls: get_fpu_lock,
  etc.  [fsmp, 1997-08-10; 1 file, -7/+8]

  All resolve to the GIANT_LOCK at this time; it is purely a logical
  partitioning.
* VM86 kernel support.  [dyson, 1997-08-09; 1 file, -1/+27]

  Work done by BSDI, Jonathan Lemon <jlemon@americantv.com>, Mike Smith
  <msmith@gsoft.com.au>, Sean Eric Fagan <sef@kithrup.com>, and probably
  a lot of others.

  Submitted by:  Jonathan Lemon <jlemon@americantv.com>
* Converted the TEST_LOPRIO code to
  default.  [fsmp, 1997-07-31; 1 file, -1/+5]

  - removed PEND_INTS
  - 1st try: direct call to MPtrylock
* New simple_lock code in asm (usage sketched
  below):  [fsmp, 1997-07-23; 1 file, -1/+3]

  - s_lock_init()
  - s_lock()
  - s_lock_try()
  - s_unlock()

  Created a lock for the IO APIC and apic_imen (the SMP version of imen):

  - imen_lock

  Code to use imen_lock for access from apic_ipl.s and apic_vector.s.
  Moved this code *outside* of mp_lock.  It seems to work!!!
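  A hedged usage sketch of those primitives guarding imen, with the header
  location assumed:

      #include <sys/types.h>
      #include <machine/smp.h>  /* assumed home of the s_lock prototypes */

      extern struct simplelock imen_lock; /* guards apic_imen / IO APIC */

      static void
      imen_setup(void)
      {
              s_lock_init(&imen_lock);    /* once, at startup */
      }

      static void
      imen_example(u_int mask)
      {
              s_lock(&imen_lock);         /* spin until the lock is ours */
              /* ... update apic_imen and reprogram the IO APIC ... */
              s_unlock(&imen_lock);
      }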
* Store SWI_MASK in a variable so that LKMs can use it
  portably.  [bde, 1997-07-21; 1 file, -1/+3]
* Store the macro values for SWI_TTY_MASK and SWI_NET_MASK in variables so
  that lkm's can use them for fiddling the masks without being dependent
  on which mode the kernel is compiled in (SMP or
  UP).  [peter, 1997-05-31; 1 file, -1/+6]

  This is particularly for ppp_tty.c, which has some domain crossing
  between the net and tty subsystems.  The values are not used in the spl
  code; they are for reference only (i.e., the compiled code uses
  immediate values rather than an indirect 32-bit address and 32-bit data
  fetch).
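  The idea, sketched with hypothetical variable names: the lkm loads the
  masks indirectly at run time instead of baking in compile-time
  immediates that differ between UP and SMP kernels:

      #include <sys/types.h>
      #include <machine/ipl.h>          /* assumed home of SWI_*_MASK */

      /* kernel side: publish the compile-time values (names invented) */
      const u_int swi_tty_mask_var = SWI_TTY_MASK;
      const u_int swi_net_mask_var = SWI_NET_MASK;

      /* lkm side (e.g. ppp_tty.c, crossing net and tty): combine at load */
      static u_int ppp_swi_mask;

      static void
      ppp_lkm_init(void)
      {
              ppp_swi_mask = swi_tty_mask_var | swi_net_mask_var;
      }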
* Split vector.s into UP and SMP specific
  files:  [fsmp, 1997-05-26; 1 file, -0/+320]

  - vector.s      <- stub called by i386/exception.s
  - icu_vector.s  <- UP
  - apic_vector.s <- SMP

  Split icu.s into UP and SMP specific files:

  - ipl.s         <- stub called by i386/exception.s (formerly icu.s)
  - icu_ipl.s     <- UP
  - apic_ipl.s    <- SMP

  This was done in preparation for massive changes to the SMP INTerrupt
  mechanisms.  More fine tuning, such as merging ipl.s into exception.s,
  may be appropriate.