author     dg <dg@FreeBSD.org>  1994-04-02 07:00:53 +0000
committer  dg <dg@FreeBSD.org>  1994-04-02 07:00:53 +0000
commit     5717a39a7a1699799ee10f5e9dc5f1efb4e9f265 (patch)
tree       31dfe4c27cd3574a0535c7c111b7843438b6f050 /sys/i386
parent     ebb3d9b98b2038710570fef688cabc157921c42f (diff)
New interrupt code from Bruce Evans. In addition to Bruce's attached
list of changes, I've made the following additional changes:

1) i386/include/ipl.h renamed to spl.h as the name conflicts with the
   file of the same name in i386/isa/ipl.h.
2) changed all use of *mask (i.e. netmask, biomask, ttymask, etc) to
   *_imask (net_imask, etc).
3) changed vestige of splnet use in if_is to splimp.
4) got rid of "impmask" completely (Bruce had gotten rid of netmask),
   and are now using net_imask instead.
5) dozens of minor cruft to glue in Bruce's changes.

These require changes I made to config(8) as well, and thus it must be
rebuilt.

-DG

from Bruce Evans:

sio:
o No diff is supplied. Remove the define of setsofttty(). I hope that
  is enough.

*.s:
o i386/isa/debug.h no longer exists. The event counters became too
  much trouble to maintain. All function call entry and exception
  entry counters can be recovered by using a profiling kernel (the new
  profiling supports all entry points; however, it is too slow to
  leave enabled all the time). Only BDBTRAP() from debug.h is now
  used. That is moved to exception.s. It might be worth preserving
  SHOW_BITS() and calling it from _mcount() (if enabled).
o T_ASTFLT is now only set just before calling trap().
o All exception handlers set SWI_AST_MASK in cpl as soon as possible
  after entry and arrange for _doreti to restore it atomically with
  exiting. It is not possible to set it atomically with entering the
  kernel, so it must be checked against the user mode bits in the trap
  frame before committing to using it. There is no place to store the
  old value of cpl for syscalls or traps, so there are some
  complications restoring it.

Profiling stuff (mostly in *.s):
o Changes to kern/subr_mcount.c, gcc and gprof are not supplied yet.
o All interesting labels `foo' are renamed `_foo' and all
  uninteresting labels `_bar' are renamed `bar'. A small change to
  gprof allows ignoring labels not starting with underscores.
o MCOUNT_LABEL() is to provide names for counters for times spent in
  exception handlers.
o FAKE_MCOUNT() is a version of MCOUNT() suitable for exception
  handlers. Its arg is the pc where the exception occurred. The new
  mcount() pretends that this was a call from that pc to a suitable
  MCOUNT_LABEL().
o MEXITCOUNT is to turn off any timer started by MCOUNT().

/usr/src/sys/i386/i386/exception.s:
o The non-BDB BPTTRAP() macros were doing a sti even when interrupts
  were disabled when the trap occurred. The (fixed) sti is actually a
  no-op unless you have my changes to machdep.c that make the debugger
  trap gates interrupt gates, but fixing that would make the ifdefs
  messier. ddb seems to be unharmed by both interrupts always disabled
  and always enabled (I had the branch in the fix back to front for
  some time :-().
o There is no known pushal bug.
o tf_err can be left as garbage for syscalls.

/usr/src/sys/i386/i386/locore.s:
o Fix and update BDE_DEBUGGER support.
o ENTRY(btext) before initialization was dangerous.
o Warm boot shot was longer than intended.

/usr/src/sys/i386/i386/machdep.c:
o DON'T APPLY ALL OF THIS DIFF. It's what I'm using, but may require
  other changes.
  Use the following:
  o Remove aston() and setsoftclock().
  Maybe use the following:
  o No netisr.h.
  o Spelling fix.
  o Delay to read the Rebooting message.
  o Fix for vm system unmapping a reduced area of memory after
    bounds_check_with_label() reduces the size of a physical i/o for a
    partition boundary. A similar fix is required in kern_physio.c.
  o Correct use of __CONCAT. It never worked here for non-ANSI cpp's.
    Is it time to drop support for non-ANSI?
  o gdt_segs init. 0xffffffffUL is bogus because ssd_limit is not 32
    bits. The replacement may have the same value :-), but is more
    natural.
  o physmem was one page too low. Confusing variable names.
  Don't use the following:
  o Better numbers of buffers. Each 8K page requires up to 16 buffer
    headers. On my system, this results in 5576 buffers containing
    [up to] 2854912 bytes of memory.
    The usual allocation of about 384 buffers only holds 192K of disk
    if you use it on an fs with a block size of 512.
o gdt changes for bdb.
o *TGT -> *IDT changes for bdb.
o #ifdefed changes for bdb.

/usr/src/sys/i386/i386/microtime.s:
o Use the correct asm macros. I think asm.h was copied from Mach just
  for microtime and isn't used now. It certainly doesn't belong in
  <sys>. Various macros are also duplicated in sys/i386/boot.h and
  libc/i386/*.h.
o Don't switch to and from the IRR; it is guaranteed to be selected
  (default after ICU init and explicitly selected in isa.c too, and
  never changed until the old microtime clobbered it).

/usr/src/sys/i386/i386/support.s:
o Non-essential changes (none related to spls or profiling).
o Removed slow loads of %gs again. The LDT support may require not
  relying on %gs, but loading it is not the way to fix it! Some places
  (copyin ...) forgot to load it. Loading it clobbers the user %gs.
  trap() still loads it after certain types of faults so that fuword()
  etc can rely on it without loading it explicitly. Exception handlers
  don't restore it. If we want to preserve the user %gs, then the
  fastest method is to not touch it except for context switches.
  Comparing with VM_MAXUSER_ADDRESS and branching takes only 2 or 4
  cycles on a 486, while loading %gs takes 9 cycles and using it takes
  another.
o Fixed a signed branch to unsigned.

/usr/src/sys/i386/i386/swtch.s:
o Move spl0() outside of idle loop.
o Remove cli/sti from idle loop. sw1 does a cli, and in the unlikely
  event of an interrupt occurring and whichqs becoming zero, sw1 will
  just jump back to _idle.
o There's no spl0() function in asm any more, so use splz().
o swtch() doesn't need to be superaligned, at least with the new
  mcounting.
o Fixed a signed branch to unsigned.
o Removed astoff().

/usr/src/sys/i386/i386/trap.c:
o The decentralized extern decls were inconsistent, of course.
o Fixed typo MATH_EMULTATE in comments.
o Removed unused variables.
o Old netmask is now impmask; print it instead. Perhaps we should
  print some of the new masks.
o BTW, trap() should not print anything for normal debugger traps.

/usr/src/sys/i386/include/asmacros.h:
o DON'T APPLY ALL OF THIS DIFF. Just use some of the null macros as
  necessary.

/usr/src/sys/i386/include/cpu.h:
o CLKF_BASEPRI() changes since cpl == SWI_AST_MASK is now normal while
  the kernel is running.
o Don't use var++ to set boolean variables. It fails after a mere 4G
  times :-) and is slower than storing a constant on [3-4]86s.

/usr/src/sys/i386/include/cpufunc.h:
o DON'T APPLY ALL OF THIS DIFF. You need mainly the include of
  <machine/ipl.h>. Unfortunately, <machine/ipl.h> is needed by almost
  everything for the inlines.

/usr/src/sys/i386/include/ipl.h:
o New file. Defines spl inlines and SWI macros and declares most
  variables related to hard and soft interrupt masks.

/usr/src/sys/i386/isa/icu.h:
o Moved definitions to <machine/ipl.h>.

/usr/src/sys/i386/isa/icu.s:
o Software interrupts (SWIs) and delayed hardware interrupts (HWIs)
  are now handled uniformly, and dispatching them from splx() is more
  like dispatching them from _doreti. The dispatcher is essentially
  (*handler[ffs(ipending & ~cpl)])().
o More care (not quite enough) is taken to avoid unbounded nesting of
  interrupts.
o The interface to softclock() is changed so that a trap frame is not
  required.
o Fast interrupt handlers are now handled more uniformly.
  Configuration is still too early (new handlers would require bits in
  <machine/ipl.h> and functions in vector.s).
o splnnn() and splx() are no longer here; they are inline functions
  (could be macros for other compilers). splz() is the nontrivial part
  of the old splx().

/usr/src/sys/i386/isa/ipl.h:
o New file. Supposed to have only bus-dependent stuff. Perhaps the h/w
  masks should be declared here.

/usr/src/sys/i386/isa/isa.c:
o DON'T APPLY ALL OF THIS DIFF. You need only things involving *mask
  and *MASK and comments about them.
  netmask is now a pure software mask. It works like the softclock
  mask.

/usr/src/sys/i386/isa/vector.s:
o Reorganize AUTO_EOI* macros.
o Option FAST_INTR_HANDLER_USERS_ES for people who don't trust
  fastintr handlers.
o fastintr handlers need to metamorphose into ordinary interrupt
  handlers if their SWI bit has become set. Previously, sio had
  unintended latency for handling output completions and input of SLIP
  framing characters because this was not done.

/usr/src/sys/net/netisr.h:
o The machine-dependent stuff is now imported from <machine/ipl.h>.

/usr/src/sys/sys/systm.h:
o DON'T APPLY ALL OF THIS DIFF. You need mainly the different splx()
  prototype. The spl*() prototypes are duplicated as inlines in
  <machine/ipl.h> but they need to be duplicated here in case there
  are no inlines. I sent systm.h and cpufunc.h to Garrett. We agree
  that spl0 should be replaced by splnone and not the other way around
  like I've done.

/usr/src/sys/kern/kern_clock.c:
o splsoftclock() now lowers cpl so the direct call to softclock()
  works as intended.
o softclock() interface changed to avoid passing the whole frame (some
  machines may need another change for profile_tick()).
o profiling renamed _profiling to avoid ANSI namespace pollution. (I
  had to improve the mcount() interface and may as well fix it.) The
  GUPROF variant doesn't actually reference profiling here, but the
  'U' in GUPROF should mean to select the microtimer mcount() and not
  change the interface.
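As background for the mask renaming above (netmask -> net_imask, etc.), the
spl scheme can be sketched in portable C. The mask values and function
bodies here are illustrative only, not the kernel's actual definitions:

```c
#include <assert.h>

/* Illustrative spl sketch -- not the kernel's definitions. */
static unsigned net_imask = 0x0003;	/* assumed: IRQ bits owned by net drivers */
static unsigned cpl;			/* current priority level: blocked sources */

/* Raise the priority level by blocking extra sources; return the old level. */
static unsigned
splraise(unsigned mask)
{
	unsigned old = cpl;

	cpl |= mask;		/* the real inline does this with interrupts off */
	return (old);
}

/*
 * splx(): restore a saved level.  The nontrivial part of the old splx()
 * -- dispatching interrupts that became pending while blocked -- is what
 * this commit splits out into splz(), omitted here.
 */
static void
splx(unsigned old)
{
	cpl = old;
}
```

A driver brackets its critical section with `s = splraise(net_imask); ...
splx(s);`, which is why each subsystem's bits must live in a named mask.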
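The icu.s note describes the new dispatcher as essentially
(*handler[ffs(ipending & ~cpl)])(). A minimal C model of that loop, with
hypothetical handler names and an assumed number of SWI bits:

```c
#include <assert.h>
#include <strings.h>	/* ffs() */

#define NSWI	8	/* assumed number of soft interrupt bits */

static unsigned ipending, cpl;
static int ran[NSWI + 1];

static void swi_clock(void) { ran[1] = 1; }	/* hypothetical handlers */
static void swi_net(void)   { ran[2] = 1; }
static void swi_null(void)  { }

/* handler[0] is unused: ffs() numbers the lowest set bit as 1 */
static void (*handler[NSWI + 1])(void) = {
	swi_null, swi_clock, swi_net, swi_null, swi_null,
	swi_null, swi_null, swi_null, swi_null,
};

/* splz(): run every pending interrupt not masked by cpl, lowest bit first */
static void
splz(void)
{
	unsigned mask;

	while ((mask = ipending & ~cpl) != 0) {
		int bit = ffs(mask);

		ipending &= ~(1u << (bit - 1));	/* claim it before running */
		(*handler[bit])();
	}
}
```

Bits still masked by cpl stay in ipending, so a later splx() back to a
lower level gets another chance to dispatch them.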
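The exception.s note about having "no place to store the old value of cpl"
is resolved by the recovery trick in _alltraps, which can be sketched in C
(the SWI_AST_MASK value here is an illustrative bit, not the real one):

```c
#include <assert.h>

#define SWI_AST_MASK	0x80000000u	/* illustrative bit, not the real value */
#define SEL_RPL_MASK	3		/* low bits of a saved %cs: requester ring */

static unsigned cpl;

/*
 * Recover the pre-trap cpl without having saved it: traps from user mode
 * always entered at cpl == 0, and for traps from kernel mode ORing
 * SWI_AST_MASK into cpl didn't change it (it was already set), so the
 * current cpl is already the value to restore.
 */
static unsigned
recover_cpl(unsigned trapframe_cs)
{
	if (trapframe_cs & SEL_RPL_MASK)	/* trapped from user mode */
		return (0);
	return (cpl);				/* trapped from kernel mode */
}
```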
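The machdep.c item about __CONCAT never working for non-ANSI cpp's comes
down to the two token-pasting idioms; a self-contained sketch of the usual
cdefs-style definition:

```c
#include <assert.h>

/*
 * ANSI cpp concatenates with `##'.  Traditional (Reiser) cpp's relied on
 * an empty comment between the two tokens; an ANSI preprocessor replaces
 * that comment with a space, yielding two separate tokens instead.
 */
#if defined(__STDC__)
#define __CONCAT(a,b)	a ## b
#else
#define __CONCAT(a,b)	a/**/b
#endif

#define IDTVEC(name)	__CONCAT(X,name)

int IDTVEC(div) = 42;	/* expands to: int Xdiv = 42; */
```

Writing `__CONCAT(X, name)` with a space after the comma is what broke the
traditional form, which is the "correct use" fix in the diff.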
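Both support.s and swtch.s "fixed a signed branch to unsigned" (jl/jge ->
jb/jae). A C illustration of why that matters for address-like values; the
function names are invented for the example:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Addresses and byte counts are unsigned.  A value in the upper half of
 * the 32-bit space (e.g. a kernel address above KERNBASE) is negative
 * under a signed comparison, so a signed branch takes the wrong arm.
 */
static int
below_signed(uint32_t pc, uint32_t off)
{
	return ((int32_t)pc < (int32_t)off);	/* buggy: signed jl */
}

static int
below_unsigned(uint32_t pc, uint32_t off)
{
	return (pc < off);			/* fixed: unsigned jb */
}
```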
Diffstat (limited to 'sys/i386')
-rw-r--r--  sys/i386/conf/Makefile.i386     |    6
-rw-r--r--  sys/i386/i386/exception.s       |  138
-rw-r--r--  sys/i386/i386/locore.s          |   26
-rw-r--r--  sys/i386/i386/machdep.c         |   33
-rw-r--r--  sys/i386/i386/microtime.s       |    8
-rw-r--r--  sys/i386/i386/support.s         |   39
-rw-r--r--  sys/i386/i386/swtch.s           |   42
-rw-r--r--  sys/i386/i386/trap.c            |   17
-rw-r--r--  sys/i386/include/asmacros.h     |    6
-rw-r--r--  sys/i386/include/cpu.h          |   13
-rw-r--r--  sys/i386/include/cpufunc.h      |    4
-rw-r--r--  sys/i386/include/ipl.h          |    7
-rw-r--r--  sys/i386/include/spl.h          |  104
-rw-r--r--  sys/i386/isa/icu.h              |   12
-rw-r--r--  sys/i386/isa/icu.s              |  492
-rw-r--r--  sys/i386/isa/if_is.c            |    4
-rw-r--r--  sys/i386/isa/isa.c              |   50
-rw-r--r--  sys/i386/isa/npx.c              |   16
-rw-r--r--  sys/i386/isa/sio.c              |    3
-rw-r--r--  sys/i386/isa/sound/soundcard.c  |   18
-rw-r--r--  sys/i386/isa/vector.s           |  196
21 files changed, 627 insertions, 607 deletions
diff --git a/sys/i386/conf/Makefile.i386 b/sys/i386/conf/Makefile.i386
index ac375cd..db28a34 100644
--- a/sys/i386/conf/Makefile.i386
+++ b/sys/i386/conf/Makefile.i386
@@ -1,6 +1,6 @@
# Copyright 1990 W. Jolitz
# from: @(#)Makefile.i386 7.1 5/10/91
-# $Id: Makefile.i386,v 1.22 1994/02/17 06:51:15 rgrimes Exp $
+# $Id: Makefile.i386,v 1.23 1994/03/21 20:48:47 ats Exp $
#
# Makefile for FreeBSD
#
@@ -90,7 +90,7 @@ symbols.sort: ${I386}/i386/symbols.raw
locore.o: assym.s ${I386}/i386/locore.s machine/trap.h machine/psl.h \
machine/pte.h ${I386}/isa/vector.s ${I386}/isa/icu.s \
- $S/sys/errno.h machine/specialreg.h ${I386}/isa/debug.h \
+ $S/sys/errno.h machine/specialreg.h \
${I386}/isa/icu.h ${I386}/isa/isa.h vector.h $S/net/netisr.h \
machine/asmacros.h
${CPP} -I. -DLOCORE ${COPTS} ${I386}/i386/locore.s | \
@@ -104,7 +104,7 @@ exception.o: assym.s ${I386}/i386/exception.s machine/trap.h \
${AS} ${ASFLAGS} -o exception.o
swtch.o: assym.s ${I386}/i386/swtch.s \
- $S/sys/errno.h ${I386}/isa/debug.h machine/asmacros.h
+ $S/sys/errno.h machine/asmacros.h
${CPP} -I. ${COPTS} ${I386}/i386/swtch.s | \
${AS} ${ASFLAGS} -o swtch.o
diff --git a/sys/i386/i386/exception.s b/sys/i386/i386/exception.s
index 93aed94..30bc164 100644
--- a/sys/i386/i386/exception.s
+++ b/sys/i386/i386/exception.s
@@ -30,7 +30,7 @@
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
- * $Id: exception.s,v 1.1 1993/11/13 02:24:57 davidg Exp $
+ * $Id: exception.s,v 1.2 1994/01/03 07:55:20 davidg Exp $
*/
#include "npx.h" /* NNPX */
@@ -39,7 +39,9 @@
#include "errno.h" /* error return codes */
-#include "i386/isa/debug.h" /* BDE debugging macros */
+#include "machine/spl.h" /* SWI_AST_MASK ... */
+
+#include "machine/psl.h" /* PSL_I */
#include "machine/trap.h" /* trap codes */
#include "syscall.h" /* syscall numbers */
@@ -57,31 +59,49 @@
/*****************************************************************************/
/*
* Trap and fault vector routines
- *
+ */
+#define IDTVEC(name) ALIGN_TEXT ; .globl _X/**/name ; _X/**/name:
+#define TRAP(a) pushl $(a) ; jmp _alltraps
+
+/*
* XXX - debugger traps are now interrupt gates so at least bdb doesn't lose
* control. The sti's give the standard losing behaviour for ddb and kgdb.
*/
-#define IDTVEC(name) ALIGN_TEXT; .globl _X/**/name; _X/**/name:
-#define TRAP(a) pushl $(a) ; jmp alltraps
+#ifdef BDE_DEBUGGER
+#define BDBTRAP(name) \
+ ss ; \
+ cmpb $0,_bdb_exists ; \
+ je 1f ; \
+ testb $SEL_RPL_MASK,4(%esp) ; \
+ jne 1f ; \
+ ss ; \
+ .globl bdb_/**/name/**/_ljmp ; \
+bdb_/**/name/**/_ljmp: ; \
+ ljmp $0,$0 ; \
+1:
+#else
+#define BDBTRAP(name)
+#endif
+
#ifdef KGDB
-# define BPTTRAP(a) sti; pushl $(a) ; jmp bpttraps
+# define BPTTRAP(a) testl $PSL_I,4+8(%esp) ; je 1f ; sti ; 1: ; \
+ pushl $(a) ; jmp _bpttraps
#else
-# define BPTTRAP(a) sti; TRAP(a)
+# define BPTTRAP(a) testl $PSL_I,4+8(%esp) ; je 1f ; sti ; 1: ; TRAP(a)
#endif
+MCOUNT_LABEL(user)
+MCOUNT_LABEL(btrap)
+
IDTVEC(div)
pushl $0; TRAP(T_DIVIDE)
IDTVEC(dbg)
-#if defined(BDE_DEBUGGER) && defined(BDBTRAP)
BDBTRAP(dbg)
-#endif
pushl $0; BPTTRAP(T_TRCTRAP)
IDTVEC(nmi)
pushl $0; TRAP(T_NMI)
IDTVEC(bpt)
-#if defined(BDE_DEBUGGER) && defined(BDBTRAP)
BDBTRAP(bpt)
-#endif
pushl $0; BPTTRAP(T_BPTFLT)
IDTVEC(ofl)
pushl $0; TRAP(T_OFLOW)
@@ -114,22 +134,24 @@ IDTVEC(fpu)
* error. It would be better to handle npx interrupts as traps but
* this is difficult for nested interrupts.
*/
- pushl $0 /* dummy error code */
- pushl $T_ASTFLT
+ pushl $0 /* dumby error code */
+ pushl $0 /* dumby trap type */
pushal
- nop /* silly, the bug is for popal and it only
- * bites when the next instruction has a
- * complicated address mode */
pushl %ds
pushl %es /* now the stack frame is a trap frame */
movl $KDSEL,%eax
movl %ax,%ds
movl %ax,%es
- pushl _cpl
+ FAKE_MCOUNT(12*4(%esp))
+ movl _cpl,%eax
+ pushl %eax
pushl $0 /* dummy unit to finish building intr frame */
incl _cnt+V_TRAP
+ orl $SWI_AST_MASK,%eax
+ movl %eax,_cpl
call _npxintr
- jmp doreti
+ MEXITCOUNT
+ jmp _doreti
#else /* NNPX > 0 */
pushl $0; TRAP(T_ARITHTRAP)
#endif /* NNPX > 0 */
@@ -166,25 +188,37 @@ IDTVEC(rsvd14)
pushl $0; TRAP(31)
SUPERALIGN_TEXT
-alltraps:
+_alltraps:
pushal
- nop
pushl %ds
pushl %es
movl $KDSEL,%eax
movl %ax,%ds
movl %ax,%es
+ FAKE_MCOUNT(12*4(%esp))
calltrap:
+ FAKE_MCOUNT(_btrap) /* init "from" _btrap -> calltrap */
incl _cnt+V_TRAP
+ orl $SWI_AST_MASK,_cpl
call _trap
/*
- * Return through doreti to handle ASTs. Have to change trap frame
+ * There was no place to save the cpl so we have to recover it
+ * indirectly. For traps from user mode it was 0, and for traps
+ * from kernel mode Oring SWI_AST_MASK into it didn't change it.
+ */
+ subl %eax,%eax
+ testb $SEL_RPL_MASK,TRAPF_CS_OFF(%esp)
+ jne 1f
+ movl _cpl,%eax
+1:
+ /*
+ * Return via _doreti to handle ASTs. Have to change trap frame
* to interrupt frame.
*/
- movl $T_ASTFLT,TF_TRAPNO(%esp) /* new trap type (err code not used) */
- pushl _cpl
- pushl $0 /* dummy unit */
- jmp doreti
+ pushl %eax
+ subl $4,%esp
+ MEXITCOUNT
+ jmp _doreti
#ifdef KGDB
/*
@@ -192,17 +226,18 @@ calltrap:
* to the regular trap code.
*/
SUPERALIGN_TEXT
-bpttraps:
+_bpttraps:
pushal
- nop
pushl %ds
pushl %es
movl $KDSEL,%eax
movl %ax,%ds
movl %ax,%es
+ FAKE_MCOUNT(12*4(%esp))
testb $SEL_RPL_MASK,TRAPF_CS_OFF(%esp) /* non-kernel mode? */
jne calltrap /* yes */
call _kgdb_trap_glue
+ MEXITCOUNT
jmp calltrap
#endif
@@ -214,7 +249,6 @@ IDTVEC(syscall)
pushfl /* Room for tf_err */
pushfl /* Room for tf_trapno */
pushal
- nop
pushl %ds
pushl %es
movl $KDSEL,%eax /* switch to kernel segments */
@@ -222,51 +256,17 @@ IDTVEC(syscall)
movl %ax,%es
movl TF_ERR(%esp),%eax /* copy eflags from tf_err to fs_eflags */
movl %eax,TF_EFLAGS(%esp)
- movl $0,TF_ERR(%esp) /* zero tf_err */
+ FAKE_MCOUNT(12*4(%esp))
incl _cnt+V_SYSCALL
+ movl $SWI_AST_MASK,_cpl
call _syscall
/*
- * Return through doreti to handle ASTs.
+ * Return via _doreti to handle ASTs.
*/
- movl $T_ASTFLT,TF_TRAPNO(%esp) /* new trap type (err code not used) */
- pushl _cpl
- pushl $0
- jmp doreti
-
-#ifdef SHOW_A_LOT
-/*
- * 'show_bits' was too big when defined as a macro. The line length for some
- * enclosing macro was too big for gas. Perhaps the code would have blown
- * the cache anyway.
- */
- ALIGN_TEXT
-show_bits:
- pushl %eax
- SHOW_BIT(0)
- SHOW_BIT(1)
- SHOW_BIT(2)
- SHOW_BIT(3)
- SHOW_BIT(4)
- SHOW_BIT(5)
- SHOW_BIT(6)
- SHOW_BIT(7)
- SHOW_BIT(8)
- SHOW_BIT(9)
- SHOW_BIT(10)
- SHOW_BIT(11)
- SHOW_BIT(12)
- SHOW_BIT(13)
- SHOW_BIT(14)
- SHOW_BIT(15)
- popl %eax
- ret
-
- .data
-bit_colors:
- .byte GREEN,RED,0,0
- .text
-
-#endif /* SHOW_A_LOT */
+ pushl $0 /* cpl to restore */
+ subl $4,%esp
+ MEXITCOUNT
+ jmp _doreti
/*
* include generated interrupt vectors and ISA intr code
diff --git a/sys/i386/i386/locore.s b/sys/i386/i386/locore.s
index f488503..8da8438 100644
--- a/sys/i386/i386/locore.s
+++ b/sys/i386/i386/locore.s
@@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* from: @(#)locore.s 7.3 (Berkeley) 5/13/91
- * $Id: locore.s,v 1.14 1994/01/31 04:39:37 davidg Exp $
+ * $Id: locore.s,v 1.15 1994/02/01 04:08:54 davidg Exp $
*/
/*
@@ -51,7 +51,6 @@
#include "machine/pte.h" /* page table entry definitions */
#include "errno.h" /* error return codes */
#include "machine/specialreg.h" /* x86 special registers */
-#include "i386/isa/debug.h" /* BDE debugging macros */
#include "machine/cputypes.h" /* x86 cpu type definitions */
#include "syscall.h" /* system call numbers */
#include "machine/asmacros.h" /* miscellaneous asm macros */
@@ -123,7 +122,7 @@ _proc0paddr: .long 0 /* address of proc 0 address space */
#ifdef BDE_DEBUGGER
.globl _bdb_exists /* flag to indicate BDE debugger is available */
-_bde_exists: .long 0
+_bdb_exists: .long 0
#endif
.globl tmpstk
@@ -140,10 +139,10 @@ tmpstk:
* btext: beginning of text section.
* Also the entry point (jumped to directly from the boot blocks).
*/
-ENTRY(btext)
+NON_GPROF_ENTRY(btext)
movw $0x1234,0x472 /* warm boot */
jmp 1f
- .space 0x500 /* skip over warm boot shit */
+ .org 0x500 /* space for BIOS variables */
/*
* pass parameters on stack (howto, bootdev, unit, cyloffset, esym)
@@ -182,7 +181,7 @@ ENTRY(btext)
andl $1,%eax
push %ecx
popfl
-
+
cmpl $0,%eax
jne 1f
movl $CPU_386,_cpu-KERNBASE
@@ -217,7 +216,7 @@ ENTRY(btext)
movl $_end-KERNBASE,%ecx
addl $NBPG-1,%ecx /* page align up */
andl $~(NBPG-1),%ecx
- movl %ecx,%esi /* esi=start of tables */
+ movl %ecx,%esi /* esi = start of free memory */
movl %ecx,_KERNend-KERNBASE /* save end of kernel */
/* clear bss */
@@ -296,7 +295,7 @@ ENTRY(btext)
shrl $PGSHIFT,%ecx
orl $PG_V|PG_KW,%eax /* valid, kernel read/write */
fillkpt
-#endif
+#endif /* KGDB || BDE_DEBUGGER */
/* now initialize the page dir, upages, p0stack PT, and page tables */
@@ -309,7 +308,7 @@ ENTRY(btext)
addl %esi,%ebx /* address of page directory */
addl $((1+UPAGES+1)*NBPG),%ebx /* offset to kernel page tables */
fillkpt
-
+
/* map I/O memory map */
movl _KPTphys-KERNBASE,%ebx /* base of kernel page tables */
@@ -397,7 +396,7 @@ ENTRY(btext)
addl $2*6,%esp
popal
-#endif
+#endif /* BDE_DEBUGGER */
/* load base of page directory and enable mapping */
movl %esi,%eax /* phys address of ptd in proc 0 */
@@ -436,7 +435,7 @@ begin: /* now running relocated at KERNBASE where the system is linked to run */
movl $_gdt+8*9,%eax /* adjust slots 9-17 */
movl $9,%ecx
reloc_gdt:
- movb $0xfe,7(%eax) /* top byte of base addresses, was 0, */
+ movb $KERNBASE>>24,7(%eax) /* top byte of base addresses, was 0, */
addl $8,%eax /* now KERNBASE>>24 */
loop reloc_gdt
@@ -444,7 +443,7 @@ reloc_gdt:
je 1f
int $3
1:
-#endif
+#endif /* BDE_DEBUGGER */
/*
* Skip over the page tables and the kernel stack
@@ -494,7 +493,7 @@ lretmsg1:
.asciz "lret: toinit\n"
-#define LCALL(x,y) .byte 0x9a ; .long y; .word x
+#define LCALL(x,y) .byte 0x9a ; .long y ; .word x
/*
* Icode is copied out to process 1 and executed in user mode:
* execve("/sbin/init", argv, envp); exit(0);
@@ -551,4 +550,3 @@ NON_GPROF_ENTRY(sigcode)
.globl _szsigcode
_szsigcode:
.long _szsigcode-_sigcode
-
diff --git a/sys/i386/i386/machdep.c b/sys/i386/i386/machdep.c
index a5224b5..eab1075 100644
--- a/sys/i386/i386/machdep.c
+++ b/sys/i386/i386/machdep.c
@@ -35,7 +35,7 @@
* SUCH DAMAGE.
*
* from: @(#)machdep.c 7.4 (Berkeley) 6/3/91
- * $Id: machdep.c,v 1.40 1994/03/23 09:15:03 davidg Exp $
+ * $Id: machdep.c,v 1.41 1994/03/30 02:31:11 davidg Exp $
*/
#include "npx.h"
@@ -58,7 +58,6 @@
#include "malloc.h"
#include "mbuf.h"
#include "msgbuf.h"
-#include "net/netisr.h"
#ifdef SYSVSHM
#include "sys/shm.h"
@@ -130,12 +129,9 @@ int _udatasel, _ucodesel;
/*
* Machine-dependent startup code
*/
-int boothowto = 0, Maxmem = 0, maxmem = 0, badpages = 0, physmem = 0;
+int boothowto = 0, Maxmem = 0, badpages = 0, physmem = 0;
long dumplo;
extern int bootdev;
-#ifdef SMALL
-extern int forcemaxmem;
-#endif
int biosmem;
vm_offset_t phys_avail[6];
@@ -272,6 +268,7 @@ again:
panic("startup: no room for tables");
goto again;
}
+
/*
* End of second pass, addresses have been assigned
*/
@@ -528,7 +525,7 @@ sendsig(catcher, sig, mask, code)
* Return to previous pc and psl as specified by
* context left by sendsig. Check carefully to
* make sure that the user has not modified the
- * psl to gain improper priviledges or to cause
+ * psl to gain improper privileges or to cause
* a machine fault.
*/
struct sigreturn_args {
@@ -734,7 +731,7 @@ boot(arghowto)
#endif
die:
printf("Rebooting...\n");
- DELAY (100000); /* wait 100ms for printf's to complete */
+ DELAY(1000000); /* wait 1 sec for printf's to complete and be read */
cpu_reset();
for(;;) ;
/* NOTREACHED */
@@ -996,7 +993,7 @@ setidt(idx, func, typ, dpl)
ip->gd_hioffset = ((int)func)>>16 ;
}
-#define IDTVEC(name) __CONCAT(X, name)
+#define IDTVEC(name) __CONCAT(X,name)
typedef void idtvec_t();
extern idtvec_t
@@ -1039,8 +1036,9 @@ init386(first)
* the address space
*/
gdt_segs[GCODE_SEL].ssd_limit = i386_btop(i386_round_page(&etext)) - 1;
- gdt_segs[GDATA_SEL].ssd_limit = 0xffffffffUL; /* XXX constant? */
+ gdt_segs[GDATA_SEL].ssd_limit = i386_btop(0) - 1;
for (x=0; x < NGDT; x++) ssdtosd(gdt_segs+x, gdt+x);
+
/* make ldt memory segments */
/*
* The data segment limit must not cover the user area because we
@@ -1242,9 +1240,9 @@ init386(first)
}
}
printf("done.\n");
-
- maxmem = Maxmem - 1; /* highest page of usable memory */
- avail_end = (maxmem << PAGE_SHIFT) - i386_round_page(sizeof(struct msgbuf));
+
+ avail_end = (Maxmem << PAGE_SHIFT)
+ - i386_round_page(sizeof(struct msgbuf));
/*
* Initialize pointers to the two chunks of memory; for use
@@ -1310,15 +1308,6 @@ test_page(address, pattern)
return(0);
}
-/*aston() {
- schednetisr(NETISR_AST);
-}*/
-
-void
-setsoftclock() {
- schednetisr(NETISR_SCLK);
-}
-
/*
* insert an element into a queue
*/
diff --git a/sys/i386/i386/microtime.s b/sys/i386/i386/microtime.s
index 99f8601..06c2475 100644
--- a/sys/i386/i386/microtime.s
+++ b/sys/i386/i386/microtime.s
@@ -31,10 +31,10 @@
* SUCH DAMAGE.
*
* from: Steve McCanne's microtime code
- * $Id$
+ * $Id: microtime.s,v 1.2 1993/10/16 14:15:08 rgrimes Exp $
*/
-#include "asm.h"
+#include "machine/asmacros.h"
#include "../isa/isa.h"
#include "../isa/timerreg.h"
@@ -99,16 +99,20 @@ ENTRY(microtime)
#
cmpl $11890,%ebx
jle 2f
+#if 0 /* rest of kernel guarantees to keep IRR selected */
movl $0x0a,%eax # tell ICU we want IRR
outb %al,$IO_ICU1
+#endif
inb $IO_ICU1,%al # read IRR in ICU
testb $1,%al # is a timer interrupt pending?
je 1f
addl $-11932,%ebx # yes, subtract one clock period
1:
+#if 0 /* rest of kernel doesn't expect ISR */
movl $0x0b,%eax # tell ICU we want ISR
outb %al,$IO_ICU1 # (rest of kernel expects this)
+#endif
2:
sti # enable interrupts
diff --git a/sys/i386/i386/support.s b/sys/i386/i386/support.s
index 190b835..1a2b9cb 100644
--- a/sys/i386/i386/support.s
+++ b/sys/i386/i386/support.s
@@ -30,7 +30,7 @@
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
- * $Id: support.s,v 1.4 1994/02/01 04:09:07 davidg Exp $
+ * $Id: support.s,v 1.5 1994/03/07 11:47:32 davidg Exp $
*/
#include "assym.s" /* system definitions */
@@ -278,7 +278,6 @@ ENTRY(fillw)
/* filli(pat, base, cnt) */
ENTRY(filli)
-filli:
pushl %edi
movl 8(%esp),%eax
movl 12(%esp),%edi
@@ -365,7 +364,7 @@ ENTRY(bcopyx)
cmpl $2,%eax
je bcopyw /* not _bcopyw, to avoid multiple mcounts */
cmpl $4,%eax
- je bcopy
+ je bcopy /* XXX the shared ret's break mexitcount */
jmp bcopyb
/*
@@ -491,6 +490,12 @@ ENTRY(copyout) /* copyout(from_kernel, to_user, len) */
movl %edi,%eax
addl %ebx,%eax
jc copyout_fault
+/*
+ * XXX STOP USING VM_MAXUSER_ADDRESS.
+ * It is an end address, not a max, so every time it is used correctly it
+ * looks like there is an off by one error, and of course it caused an off
+ * by one error in several places.
+ */
cmpl $VM_MAXUSER_ADDRESS,%eax
ja copyout_fault
@@ -551,7 +556,7 @@ ENTRY(copyout) /* copyout(from_kernel, to_user, len) */
rep
movsl
movb %bl,%cl
- andb $3,%cl /* XXX can we trust the rest of %ecx on clones? */
+ andb $3,%cl
rep
movsb
@@ -613,12 +618,10 @@ copyin_fault:
ret
/*
- * fu{byte,sword,word} : fetch a byte(sword, word) from user memory
+ * fu{byte,sword,word} : fetch a byte (sword, word) from user memory
*/
ALTENTRY(fuiword)
ENTRY(fuword)
- movl __udatasel,%ax
- movl %ax,%gs
movl _curpcb,%ecx
movl $fusufault,PCB_ONFAULT(%ecx)
movl 4(%esp),%edx
@@ -628,8 +631,6 @@ ENTRY(fuword)
ret
ENTRY(fusword)
- movl __udatasel,%ax
- movl %ax,%gs
movl _curpcb,%ecx
movl $fusufault,PCB_ONFAULT(%ecx)
movl 4(%esp),%edx
@@ -640,8 +641,6 @@ ENTRY(fusword)
ALTENTRY(fuibyte)
ENTRY(fubyte)
- movl __udatasel,%ax
- movl %ax,%gs
movl _curpcb,%ecx
movl $fusufault,PCB_ONFAULT(%ecx)
movl 4(%esp),%edx
@@ -659,15 +658,10 @@ fusufault:
ret
/*
- * su{byte,sword,word}: write a byte(word, longword) to user memory
- */
-/*
- * we only have to set the right segment selector.
+ * su{byte,sword,word}: write a byte (word, longword) to user memory
*/
ALTENTRY(suiword)
ENTRY(suword)
- movl __udatasel,%ax
- movl %ax,%gs
movl _curpcb,%ecx
movl $fusufault,PCB_ONFAULT(%ecx)
movl 4(%esp),%edx
@@ -676,9 +670,10 @@ ENTRY(suword)
#if defined(I486_CPU) || defined(I586_CPU)
cmpl $CPUCLASS_386,_cpu_class
- jne 2f
+ jne 2f /* we only have to set the right segment selector */
#endif /* I486_CPU || I586_CPU */
+ /* XXX - page boundary crossing is still not handled */
movl %edx,%eax
shrl $IDXSHIFT,%edx
andb $0xfc,%dl
@@ -707,8 +702,6 @@ ENTRY(suword)
ret
ENTRY(susword)
- movl __udatasel,%eax
- movl %ax,%gs
movl _curpcb,%ecx
movl $fusufault,PCB_ONFAULT(%ecx)
movl 4(%esp),%edx
@@ -720,6 +713,7 @@ ENTRY(susword)
jne 2f
#endif /* I486_CPU || I586_CPU */
+ /* XXX - page boundary crossing is still not handled */
movl %edx,%eax
shrl $IDXSHIFT,%edx
andb $0xfc,%dl
@@ -843,7 +837,7 @@ ENTRY(copyoutstr)
movl $NBPG,%ecx
subl %eax,%ecx /* ecx = NBPG - (src % NBPG) */
cmpl %ecx,%edx
- jge 3f
+ jae 3f
movl %edx,%ecx /* ecx = min(ecx, edx) */
3:
orl %ecx,%ecx
@@ -916,8 +910,6 @@ ENTRY(copyinstr)
movl 12(%esp),%esi /* %esi = from */
movl 16(%esp),%edi /* %edi = to */
movl 20(%esp),%edx /* %edx = maxlen */
- movl __udatasel,%eax
- movl %ax,%gs
incl %edx
1:
@@ -1133,4 +1125,3 @@ ENTRY(longjmp)
xorl %eax,%eax /* return(1); */
incl %eax
ret
-
diff --git a/sys/i386/i386/swtch.s b/sys/i386/i386/swtch.s
index ab5cb7a..17f246c 100644
--- a/sys/i386/i386/swtch.s
+++ b/sys/i386/i386/swtch.s
@@ -33,15 +33,17 @@
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
- * $Id: swtch.s,v 1.3 1994/01/17 09:32:27 davidg Exp $
+ * $Id: swtch.s,v 1.4 1994/01/31 10:26:59 davidg Exp $
*/
#include "npx.h" /* for NNPX */
#include "assym.s" /* for preprocessor defines */
#include "errno.h" /* for error codes */
-#include "i386/isa/debug.h" /* for SHOW macros */
#include "machine/asmacros.h" /* for miscellaneous assembly macros */
+#define LOCORE /* XXX inhibit C declarations */
+#include "machine/spl.h" /* for SWI_AST_MASK ... */
+
/*****************************************************************************/
/* Scheduling */
@@ -132,24 +134,30 @@ rem3: .asciz "remrq"
sw0: .asciz "swtch"
/*
- * When no processes are on the runq, Swtch branches to idle
+ * When no processes are on the runq, swtch() branches to _idle
* to wait for something to come ready.
*/
ALIGN_TEXT
-Idle:
+_idle:
+ MCOUNT
movl _IdlePTD,%ecx
movl %ecx,%cr3
movl $tmpstk-4,%esp
sti
- SHOW_STI
+
+ /*
+ * XXX callers of swtch() do a bogus splclock(). Locking should
+ * be left to swtch().
+ */
+ movl $SWI_AST_MASK,_cpl
+ testl $~SWI_AST_MASK,_ipending
+ je idle_loop
+ call _splz
ALIGN_TEXT
idle_loop:
- call _spl0
- cli
cmpl $0,_whichqs
jne sw1
- sti
hlt /* wait for interrupt */
jmp idle_loop
@@ -161,9 +169,7 @@ badsw:
/*
* Swtch()
*/
- SUPERALIGN_TEXT /* so profiling doesn't lump Idle with swtch().. */
ENTRY(swtch)
-
incl _cnt+V_SWTCH
/* switch to new process. first, save context as needed */
@@ -208,14 +214,13 @@ ENTRY(swtch)
/* save is done, now choose a new process or idle */
sw1:
cli
- SHOW_CLI
movl _whichqs,%edi
2:
/* XXX - bsf is sloow */
bsfl %edi,%eax /* find a full q */
- je Idle /* if none, idle */
+ je _idle /* if none, idle */
+
/* XX update whichqs? */
-swfnd:
btrl %eax,%edi /* clear q full status */
jnb 2b /* if it was clear, look for another */
movl %eax,%ebx /* save which one we are using */
@@ -296,7 +301,6 @@ swfnd:
*/
pushl PCB_IML(%edx)
sti
- SHOW_STI
#if 0
call _splx
#endif
@@ -312,7 +316,7 @@ ENTRY(mvesp)
movl %esp,%eax
ret
/*
- * struct proc *swtch_to_inactive(p) ; struct proc *p;
+ * struct proc *swtch_to_inactive(struct proc *p);
*
* At exit of a process, move off the address space of the
* process and onto a "safe" one. Then, on a temporary stack
@@ -327,6 +331,7 @@ ENTRY(swtch_to_inactive)
movl %ecx,%cr3 /* good bye address space */
#write buffer?
movl $tmpstk-4,%esp /* temporary stack, compensated for call */
+ MEXITCOUNT
jmp %edx /* return, execute remainder of cleanup */
/*
@@ -418,7 +423,7 @@ ENTRY(addupc)
movl 8(%ebp),%eax /* pc */
subl PR_OFF(%edx),%eax /* pc -= up->pr_off */
- jl L1 /* if (pc < 0) return */
+ jb L1 /* if (pc was < off) return */
shrl $1,%eax /* praddr = pc >> 1 */
imull PR_SCALE(%edx),%eax /* praddr *= up->pr_scale */
@@ -448,8 +453,3 @@ proffault:
movl $0,PR_SCALE(%ecx) /* up->pr_scale = 0 */
leave
ret
-
-/* To be done: */
-ENTRY(astoff)
- ret
-
diff --git a/sys/i386/i386/trap.c b/sys/i386/i386/trap.c
index dad751a..e372ac0 100644
--- a/sys/i386/i386/trap.c
+++ b/sys/i386/i386/trap.c
@@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* from: @(#)trap.c 7.4 (Berkeley) 5/13/91
- * $Id: trap.c,v 1.19 1994/03/14 21:54:03 davidg Exp $
+ * $Id: trap.c,v 1.20 1994/03/24 23:12:34 davidg Exp $
*/
/*
@@ -88,8 +88,6 @@ extern int grow(struct proc *,int);
struct sysent sysent[];
int nsysent;
-extern unsigned cpl;
-extern unsigned netmask, ttymask, biomask;
#define MAX_TRAP_MSG 27
char *trap_msg[] = {
@@ -226,9 +224,9 @@ skiptoswitch:
#ifdef MATH_EMULATE
i = math_emulate(&frame);
if (i == 0) return;
-#else /* MATH_EMULTATE */
+#else /* MATH_EMULATE */
panic("trap: math emulation necessary!");
-#endif /* MATH_EMULTATE */
+#endif /* MATH_EMULATE */
ucode = FPE_FPU_NP_TRAP;
break;
@@ -261,7 +259,7 @@ skiptoswitch:
vm_map_t map = 0;
int rv = 0, oldflags;
vm_prot_t ftype;
- unsigned nss, v;
+ unsigned v;
extern vm_map_t kernel_map;
va = trunc_page((vm_offset_t)eva);
@@ -435,11 +433,11 @@ nogo:
printf("Idle\n");
}
printf("interrupt mask = ");
- if ((cpl & netmask) == netmask)
+ if ((cpl & net_imask) == net_imask)
printf("net ");
- if ((cpl & ttymask) == ttymask)
+ if ((cpl & tty_imask) == tty_imask)
printf("tty ");
- if ((cpl & biomask) == biomask)
+ if ((cpl & bio_imask) == bio_imask)
printf("bio ");
if (cpl == 0)
printf("none");
@@ -514,7 +512,6 @@ out:
int trapwrite(addr)
unsigned addr;
{
- unsigned nss;
struct proc *p;
vm_offset_t va, v;
struct vmspace *vm;
diff --git a/sys/i386/include/asmacros.h b/sys/i386/include/asmacros.h
index f0f2c01..4af0b97 100644
--- a/sys/i386/include/asmacros.h
+++ b/sys/i386/include/asmacros.h
@@ -5,6 +5,11 @@
#define GEN_ENTRY(name) ALIGN_TEXT; .globl name; name:
#define NON_GPROF_ENTRY(name) GEN_ENTRY(_/**/name)
+/* These three are placeholders for future changes to the profiling code */
+#define MCOUNT_LABEL(name)
+#define MEXITCOUNT
+#define FAKE_MCOUNT(caller)
+
#ifdef GPROF
/*
* ALTENTRY() must be before a corresponding ENTRY() so that it can jump
@@ -30,6 +35,7 @@
*/
#define ALTENTRY(name) GEN_ENTRY(_/**/name)
#define ENTRY(name) GEN_ENTRY(_/**/name)
+#define MCOUNT
#endif
diff --git a/sys/i386/include/cpu.h b/sys/i386/include/cpu.h
index 3fe003f..a2df023 100644
--- a/sys/i386/include/cpu.h
+++ b/sys/i386/include/cpu.h
@@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* from: @(#)cpu.h 5.4 (Berkeley) 5/9/91
- * $Id: cpu.h,v 1.3 1993/10/08 20:50:57 rgrimes Exp $
+ * $Id: cpu.h,v 1.4 1993/11/07 17:42:46 wollman Exp $
*/
#ifndef _MACHINE_CPU_H_
@@ -58,18 +58,21 @@
* Arguments to hardclock, softclock and gatherstats
* encapsulate the previous machine state in an opaque
* clockframe; for now, use generic intrframe.
+ * XXX softclock() has been fixed. It never needed a
+ * whole frame, only a usermode flag, at least on this
+ * machine. Fix the rest.
*/
typedef struct intrframe clockframe;
#define CLKF_USERMODE(framep) (ISPL((framep)->if_cs) == SEL_UPL)
-#define CLKF_BASEPRI(framep) ((framep)->if_ppl == 0)
+#define CLKF_BASEPRI(framep) (((framep)->if_ppl & ~SWI_AST_MASK) == 0)
#define CLKF_PC(framep) ((framep)->if_eip)
/*
* Preempt the current process if in interrupt from user mode,
* or after the current trap/syscall if in system mode.
*/
-#define need_resched() { want_resched++; aston(); }
+#define need_resched() { want_resched = 1; aston(); }
/*
* Give a profiling tick to the current process from the softclock
@@ -84,7 +87,8 @@ typedef struct intrframe clockframe;
*/
#define signotify(p) aston()
-#define aston() (astpending++)
+#define aston() setsoftast()
+#define astoff()
/*
* pull in #defines for kinds of processors
@@ -97,7 +101,6 @@ struct cpu_nameclass {
};
#ifdef KERNEL
-extern int astpending; /* want a trap before returning to user mode */
extern int want_resched; /* resched was called */
extern int cpu;
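The cpu.h hunks replace the `astpending` counter with a pending bit: `aston()` now just sets SWI_AST in `ipending`, and `want_resched` becomes a flag rather than a count. A minimal sketch of the new semantics (the SWI constants mirror spl.h; the rest is a user-space model):

```c
#include <assert.h>

#define SWI_AST		31
#define SWI_AST_PENDING	(1U << SWI_AST)

static unsigned ipending;	/* pending interrupts, masked by cpl */
static int want_resched;

/* Setting a bit is idempotent, so repeated aston()/need_resched() calls
 * cannot leave a stale nonzero count the way astpending++ could. */
#define aston()		(ipending |= SWI_AST_PENDING)
#define astoff()	/* nothing: the bit is cleared when the AST is taken */
#define need_resched()	do { want_resched = 1; aston(); } while (0)
```

Calling `need_resched()` twice leaves exactly one pending bit and `want_resched == 1`, with no counter to rewind.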
diff --git a/sys/i386/include/cpufunc.h b/sys/i386/include/cpufunc.h
index d4b3b0d..3c2dcc9 100644
--- a/sys/i386/include/cpufunc.h
+++ b/sys/i386/include/cpufunc.h
@@ -2,7 +2,7 @@
* Functions to provide access to special i386 instructions.
* XXX - bezillions more are defined in locore.s but are not declared anywhere.
*
- * $Id: cpufunc.h,v 1.8 1994/01/31 04:18:45 davidg Exp $
+ * $Id: cpufunc.h,v 1.9 1994/01/31 23:48:23 davidg Exp $
*/
#ifndef _MACHINE_CPUFUNC_H_
@@ -11,6 +11,8 @@
#include <sys/cdefs.h>
#include <sys/types.h>
+#include "machine/spl.h"
+
#ifdef __GNUC__
static inline int bdb(void)
diff --git a/sys/i386/include/ipl.h b/sys/i386/include/ipl.h
new file mode 100644
index 0000000..248ca56
--- /dev/null
+++ b/sys/i386/include/ipl.h
@@ -0,0 +1,7 @@
+#ifndef _ISA_IPL_H_
+#define _ISA_IPL_H_
+
+#define NHWI 16 /* number of h/w interrupts */
+#define HWI_MASK 0xffff /* bits corresponding to h/w interrupts */
+
+#endif /* _ISA_IPL_H_ */
diff --git a/sys/i386/include/spl.h b/sys/i386/include/spl.h
new file mode 100644
index 0000000..0be9364
--- /dev/null
+++ b/sys/i386/include/spl.h
@@ -0,0 +1,104 @@
+#ifndef _MACHINE_IPL_H_
+#define _MACHINE_IPL_H_
+
+#include "machine/../isa/ipl.h" /* XXX "machine" means cpu for i386 */
+
+/*
+ * Software interrupt bit numbers in priority order. The priority only
+ * determines which swi will be dispatched next; a higher priority swi
+ * may be dispatched when a nested h/w interrupt handler returns.
+ */
+#define SWI_TTY (NHWI + 0)
+#define SWI_NET (NHWI + 1)
+#define SWI_CLOCK 30
+#define SWI_AST 31
+
+/*
+ * Corresponding interrupt-pending bits for ipending.
+ */
+#define SWI_TTY_PENDING (1 << SWI_TTY)
+#define SWI_NET_PENDING (1 << SWI_NET)
+#define SWI_CLOCK_PENDING (1 << SWI_CLOCK)
+#define SWI_AST_PENDING (1 << SWI_AST)
+
+/*
+ * Corresponding interrupt-disable masks for cpl. The ordering is now by
+ * inclusion (where each mask is considered as a set of bits). Everything
+ * except SWI_AST_MASK includes SWI_CLOCK_MASK so that softclock() doesn't
+ * run while other swi handlers are running and timeout routines can call
+ * swi handlers. Everything includes SWI_AST_MASK so that AST's are masked
+ * until just before return to user mode.
+ */
+#define SWI_TTY_MASK (SWI_TTY_PENDING | SWI_CLOCK_MASK)
+#define SWI_NET_MASK (SWI_NET_PENDING | SWI_CLOCK_MASK)
+#define SWI_CLOCK_MASK (SWI_CLOCK_PENDING | SWI_AST_MASK)
+#define SWI_AST_MASK SWI_AST_PENDING
+#define SWI_MASK (~HWI_MASK)
+
+#ifndef LOCORE
+
+extern unsigned bio_imask; /* group of interrupts masked with splbio() */
+extern unsigned cpl; /* current priority level mask */
+extern unsigned high_imask; /* group of interrupts masked with splhigh() */
+extern unsigned net_imask; /* group of interrupts masked with splimp() */
+extern volatile unsigned ipending; /* active interrupts masked by cpl */
+extern volatile unsigned netisr;
+extern unsigned tty_imask; /* group of interrupts masked with spltty() */
+
+/*
+ * ipending has to be volatile so that it is read every time it is accessed
+ * in splx() and spl0(), but we don't want it to be read nonatomically when
+ * it is changed. Pretending that ipending is a plain int happens to give
+ * suitable atomic code for "ipending |= constant;".
+ */
+#define setsoftast() (*(unsigned *)&ipending |= SWI_AST_PENDING)
+#define setsoftclock() (*(unsigned *)&ipending |= SWI_CLOCK_PENDING)
+#define setsoftnet() (*(unsigned *)&ipending |= SWI_NET_PENDING)
+#define setsofttty() (*(unsigned *)&ipending |= SWI_TTY_PENDING)
+
+void unpend_V __P((void));
+
+#ifdef __GNUC__
+
+void splz __P((void));
+
+#define GENSPL(name, set_cpl) \
+static __inline int name(void) \
+{ \
+ unsigned x; \
+ \
+ x = cpl; \
+ set_cpl; \
+ return (x); \
+}
+
+GENSPL(splbio, cpl |= bio_imask)
+GENSPL(splclock, cpl = HWI_MASK | SWI_MASK)
+GENSPL(splhigh, cpl = HWI_MASK | SWI_MASK)
+GENSPL(splimp, cpl |= net_imask)
+GENSPL(splnet, cpl |= SWI_NET_MASK)
+GENSPL(splsoftclock, cpl = SWI_CLOCK_MASK)
+GENSPL(splsofttty, cpl |= SWI_TTY_MASK)
+GENSPL(spltty, cpl |= tty_imask)
+
+static __inline void
+spl0(void)
+{
+ cpl = SWI_AST_MASK;
+ if (ipending & ~SWI_AST_MASK)
+ splz();
+}
+
+static __inline void
+splx(int ipl)
+{
+ cpl = ipl;
+ if (ipending & ~ipl)
+ splz();
+}
+
+#endif /* __GNUC__ */
+
+#endif /* LOCORE */
+
+#endif /* _MACHINE_IPL_H_ */
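The scheme in spl.h can be modeled in a few lines of user-space C: raising the priority only ORs bits into `cpl`; an "interrupt" arriving while masked just sets its bit in `ipending`; and `splx()` dispatches the deferred work through `splz()` once the mask drops. Only the softclock bit is modeled here, and `splz()` is a plain loop rather than the assembly in icu.s:

```c
#include <assert.h>

#define SWI_CLOCK_PENDING	(1U << 30)

static unsigned cpl;		/* current priority level mask */
static unsigned ipending;	/* pending interrupts masked by cpl */
static int clock_runs;		/* stand-in for softclock() work */

static void swi_clock(void)
{
	clock_runs++;
}

/* Run every pending handler whose bit is no longer masked by cpl. */
static void splz(void)
{
	while (ipending & ~cpl & SWI_CLOCK_PENDING) {
		ipending &= ~SWI_CLOCK_PENDING;
		swi_clock();
	}
}

static int splsoftclock(void)
{
	int x = cpl;

	cpl |= SWI_CLOCK_PENDING;	/* the real one sets SWI_CLOCK_MASK */
	return (x);
}

static void splx(int ipl)
{
	cpl = ipl;
	if (ipending & ~ipl)
		splz();
}
```

The point of the design shows up in the order of events: while the mask is raised nothing runs, and the deferred handler fires inside `splx()` as the priority is restored.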
diff --git a/sys/i386/isa/icu.h b/sys/i386/isa/icu.h
index 488ad3e..13216b0 100644
--- a/sys/i386/isa/icu.h
+++ b/sys/i386/isa/icu.h
@@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* from: @(#)icu.h 5.6 (Berkeley) 5/9/91
- * $Id$
+ * $Id: icu.h,v 1.2 1993/10/16 13:45:51 rgrimes Exp $
*/
/*
@@ -51,12 +51,6 @@
* Interrupt "level" mechanism variables, masks, and macros
*/
extern unsigned imen; /* interrupt mask enable */
-extern unsigned cpl; /* current priority level mask */
-
-extern unsigned highmask; /* group of interrupts masked with splhigh() */
-extern unsigned ttymask; /* group of interrupts masked with spltty() */
-extern unsigned biomask; /* group of interrupts masked with splbio() */
-extern unsigned netmask; /* group of interrupts masked with splimp() */
#define INTREN(s) (imen &= ~(s), SET_ICUS())
#define INTRDIS(s) (imen |= (s), SET_ICUS())
@@ -74,7 +68,7 @@ extern unsigned netmask; /* group of interrupts masked with splimp() */
#endif
/*
- * Interrupt enable bits -- in order of priority
+ * Interrupt enable bits - in normal order of priority (which we change)
*/
#define IRQ0 0x0001 /* highest priority - timer */
#define IRQ1 0x0002
@@ -88,7 +82,7 @@ extern unsigned netmask; /* group of interrupts masked with splimp() */
#define IRQ13 0x2000
#define IRQ14 0x4000
#define IRQ15 0x8000
-#define IRQ3 0x0008
+#define IRQ3 0x0008 /* this is highest after rotation */
#define IRQ4 0x0010
#define IRQ5 0x0020
#define IRQ6 0x0040
diff --git a/sys/i386/isa/icu.s b/sys/i386/isa/icu.s
index 5af03e7..b8bf1a8 100644
--- a/sys/i386/isa/icu.s
+++ b/sys/i386/isa/icu.s
@@ -36,7 +36,7 @@
*
* @(#)icu.s 7.2 (Berkeley) 5/21/91
*
- * $Id: icu.s,v 1.6 1993/12/19 00:50:35 wollman Exp $
+ * $Id: icu.s,v 1.7 1993/12/20 14:58:21 wollman Exp $
*/
/*
@@ -45,215 +45,131 @@
*/
/*
- * XXX - this file is now misnamed. All spls are now soft and the only thing
- * related to the hardware icu is that the bit numbering is the same in the
- * soft priority masks as in the hard ones.
+ * XXX this file should be named ipl.s. All spls are now soft and the
+ * only thing related to the hardware icu is that the h/w interrupt
+ * numbers are used without translation in the masks.
*/
-#include "sio.h"
-#define HIGHMASK 0xffff
-#define SOFTCLOCKMASK 0x8000
+#include "../net/netisr.h"
.data
-
.globl _cpl
-_cpl: .long 0xffff /* current priority (all off) */
-
+_cpl: .long HWI_MASK | SWI_MASK /* current priority (all off) */
.globl _imen
-_imen: .long 0xffff /* interrupt mask enable (all off) */
-
-/* .globl _highmask */
-_highmask: .long HIGHMASK
-
- .globl _ttymask, _biomask, _netmask
-_ttymask: .long 0
-_biomask: .long 0
-_netmask: .long 0
-
- .globl _ipending, _astpending
+_imen: .long HWI_MASK /* interrupt mask enable (all h/w off) */
+_high_imask: .long HWI_MASK | SWI_MASK
+ .globl _tty_imask
+_tty_imask: .long 0
+ .globl _bio_imask
+_bio_imask: .long 0
+ .globl _net_imask
+_net_imask: .long 0
+ .globl _ipending
_ipending: .long 0
+ .globl _astpending
_astpending: .long 0 /* tells us an AST needs to be taken */
-
.globl _netisr
_netisr: .long 0 /* set with bits for which queue to service */
-
vec:
.long vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7
.long vec8, vec9, vec10, vec11, vec12, vec13, vec14, vec15
-#define GENSPL(name, mask, event) \
- .globl _spl/**/name ; \
- ALIGN_TEXT ; \
-_spl/**/name: ; \
- COUNT_EVENT(_intrcnt_spl, event) ; \
- movl _cpl,%eax ; \
- movl %eax,%edx ; \
- orl mask,%edx ; \
- movl %edx,_cpl ; \
- SHOW_CPL ; \
- ret
-
-#define FASTSPL(mask) \
- movl mask,_cpl ; \
- SHOW_CPL
-
-#define FASTSPL_VARMASK(varmask) \
- movl varmask,%eax ; \
- movl %eax,_cpl ; \
- SHOW_CPL
-
.text
- ALIGN_TEXT
-unpend_v:
- COUNT_EVENT(_intrcnt_spl, 0)
- bsfl %eax,%eax # slow, but not worth optimizing
- btrl %eax,_ipending
- jnc unpend_v_next # some intr cleared the in-memory bit
- SHOW_IPENDING
- movl Vresume(,%eax,4),%eax
- testl %eax,%eax
- je noresume
- jmp %eax
-
- ALIGN_TEXT
-/*
- * XXX - must be some fastintr, need to register those too.
- */
-noresume:
-#if NSIO > 0
- call _softsio1
-#endif
-unpend_v_next:
- movl _cpl,%eax
- movl %eax,%edx
- notl %eax
- andl _ipending,%eax
- je none_to_unpend
- jmp unpend_v
-
/*
- * Handle return from interrupt after device handler finishes
- */
- ALIGN_TEXT
-doreti:
- COUNT_EVENT(_intrcnt_spl, 1)
- addl $4,%esp # discard unit arg
- popl %eax # get previous priority
-/*
- * Now interrupt frame is a trap frame!
- *
- * XXX - setting up the interrupt frame to be almost a stack frame is mostly
- * a waste of time.
+ * Handle return from interrupts, traps and syscalls.
*/
+ SUPERALIGN_TEXT
+_doreti:
+ FAKE_MCOUNT(_bintr) /* init "from" _bintr -> _doreti */
+ addl $4,%esp /* discard unit number */
+ popl %eax /* cpl to restore */
+doreti_next:
+ /*
+ * Check for pending HWIs and SWIs atomically with restoring cpl
+ * and exiting. The check has to be atomic with exiting to stop
+ * (ipending & ~cpl) changing from zero to nonzero while we're
+ * looking at it (this wouldn't be fatal but it would increase
+ * interrupt latency). Restoring cpl has to be atomic with exiting
+ * so that the stack cannot pile up (the nesting level of interrupt
+ * handlers is limited by the number of bits in cpl).
+ */
+ movl %eax,%ecx
+ notl %ecx
+ cli
+ andl _ipending,%ecx
+ jne doreti_unpend
+doreti_exit:
movl %eax,_cpl
- SHOW_CPL
- movl %eax,%edx
- notl %eax
- andl _ipending,%eax
- jne unpend_v
-none_to_unpend:
- testl %edx,%edx # returning to zero priority?
- jne 1f # nope, going to non-zero priority
- movl _netisr,%eax
- testl %eax,%eax # check for softint s/traps
- jne 2f # there are some
- jmp test_resched # XXX - schedule jumps better
- COUNT_EVENT(_intrcnt_spl, 2) # XXX
-
- ALIGN_TEXT # XXX
-1: # XXX
- COUNT_EVENT(_intrcnt_spl, 3)
+ MEXITCOUNT
popl %es
popl %ds
popal
addl $8,%esp
iret
-#include "../net/netisr.h"
-
-#define DONET(s, c, event) ; \
- .globl c ; \
- btrl $s,_netisr ; \
- jnc 1f ; \
- COUNT_EVENT(_intrcnt_spl, event) ; \
- call c ; \
-1:
-
ALIGN_TEXT
-2:
- COUNT_EVENT(_intrcnt_spl, 4)
-/*
- * XXX - might need extra locking while testing reg copy of netisr, but
- * interrupt routines setting it would not cause any new problems (since we
- * don't loop, fresh bits will not be processed until the next doreti or spl0).
- */
- testl $~((1 << NETISR_SCLK) | (1 << NETISR_AST)),%eax
- je test_ASTs # no net stuff, just temporary AST's
- FASTSPL_VARMASK(_netmask)
-#if 0
- DONET(NETISR_RAW, _rawintr, 5)
-#endif
-
-#ifdef INET
- DONET(NETISR_IP, _ipintr, 6)
-#endif /* INET */
-
-#ifdef IMP
- DONET(NETISR_IMP, _impintr, 7)
-#endif /* IMP */
-
-#ifdef NS
- DONET(NETISR_NS, _nsintr, 8)
-#endif /* NS */
-
-#ifdef ISO
- DONET(NETISR_ISO, _clnlintr, 9)
-#endif /* ISO */
+doreti_unpend:
+ /*
+ * Enabling interrupts is safe because we haven't restored cpl yet.
+ * The locking from the "btrl" test is probably no longer necessary.
+ * We won't miss any new pending interrupts because we will check
+ * for them again.
+ */
+ sti
+ bsfl %ecx,%ecx /* slow, but not worth optimizing */
+ btrl %ecx,_ipending
+ jnc doreti_next /* some intr cleared memory copy */
+ movl ihandlers(,%ecx,4),%edx
+ testl %edx,%edx
+ je doreti_next /* "can't happen" */
+ cmpl $NHWI,%ecx
+ jae doreti_swi
+ cli
+ movl %eax,_cpl
+ MEXITCOUNT
+ jmp %edx
-#ifdef CCITT
- DONET(NETISR_X25, _pkintr, 29)
- DONET(NETISR_HDLC, _hdintr, 30)
-#endif /* CCITT */
+ ALIGN_TEXT
+doreti_swi:
+ pushl %eax
+ /*
+ * The SWI_AST handler has to run at cpl = SWI_AST_MASK and the
+ * SWI_CLOCK handler at cpl = SWI_CLOCK_MASK, so we have to restore
+ * all the h/w bits in cpl now and have to worry about stack growth.
+ * The worst case is currently (30 Jan 1994) 2 SWI handlers nested
+ * in dying interrupt frames and about 12 HWIs nested in active
+ * interrupt frames. There are only 4 different SWIs and the HWI
+ * and SWI masks limit the nesting further.
+ */
+ orl imasks(,%ecx,4),%eax
+ movl %eax,_cpl
+ call %edx
+ popl %eax
+ jmp doreti_next
- FASTSPL($0)
-test_ASTs:
- btrl $NETISR_SCLK,_netisr
- jnc test_resched
- COUNT_EVENT(_intrcnt_spl, 10)
- FASTSPL($SOFTCLOCKMASK)
-/*
- * Back to an interrupt frame for a moment.
- */
- pushl $0 # previous cpl (probably not used)
- pushl $0x7f # dummy unit number
- call _softclock
- addl $8,%esp # discard dummies
- FASTSPL($0)
-test_resched:
-#ifdef notused1
- btrl $NETISR_AST,_netisr
- jnc 2f
-#endif
-#ifdef notused2
- cmpl $0,_want_resched
- je 2f
-#endif
- cmpl $0,_astpending # XXX - put it back in netisr to
- je 2f # reduce the number of tests
+ ALIGN_TEXT
+swi_ast:
+ addl $8,%esp /* discard raddr & cpl to get trap frame */
testb $SEL_RPL_MASK,TRAPF_CS_OFF(%esp)
- # to non-kernel (i.e., user)?
- je 2f # nope, leave
- COUNT_EVENT(_intrcnt_spl, 11)
- movl $0,_astpending
+ je swi_ast_phantom
+ movl $T_ASTFLT,(2+8+0)*4(%esp)
call _trap
-2:
- COUNT_EVENT(_intrcnt_spl, 12)
- popl %es
- popl %ds
- popal
- addl $8,%esp
- iret
+ subl %eax,%eax /* recover cpl */
+ jmp doreti_next
+
+ ALIGN_TEXT
+swi_ast_phantom:
+ /*
+ * These happen when there is an interrupt in a trap handler before
+ * ASTs can be masked or in an lcall handler before they can be
+ * masked or after they are unmasked. They could be avoided for
+ * trap entries by using interrupt gates, and for lcall exits by
+ * using cli, but they are unavoidable for lcall entries.
+ */
+ cli
+ orl $SWI_AST_PENDING,_ipending
+ jmp doreti_exit /* SWI_AST is highest so we must be done */
/*
* Interrupt priority mechanism
@@ -262,121 +178,84 @@ test_resched:
* -- ipending = active interrupts currently masked by cpl
*/
- GENSPL(bio, _biomask, 13)
- GENSPL(clock, $HIGHMASK, 14) /* splclock == splhigh ex for count */
- GENSPL(high, $HIGHMASK, 15)
- GENSPL(imp, _netmask, 16) /* splimp == splnet except for count */
- GENSPL(net, _netmask, 17)
- GENSPL(softclock, $SOFTCLOCKMASK, 18)
- GENSPL(tty, _ttymask, 19)
-
- .globl _splnone
- .globl _spl0
- ALIGN_TEXT
-_splnone:
-_spl0:
- COUNT_EVENT(_intrcnt_spl, 20)
-in_spl0:
+ENTRY(splz)
+ /*
+ * The caller has restored cpl and checked that (ipending & ~cpl)
+ * is nonzero. We have to repeat the check since if there is an
+ * interrupt while we're looking, _doreti processing for the
+ * interrupt will handle all the unmasked pending interrupts
+ * because we restored early. We're repeating the calculation
+ * of (ipending & ~cpl) anyway so that the caller doesn't have
+ * to pass it, so this only costs one "jne". "bsfl %ecx,%ecx"
+ * is undefined when %ecx is 0 so we can't rely on the secondary
+ * btrl tests.
+ */
movl _cpl,%eax
- pushl %eax # save old priority
- testl $(1 << NETISR_RAW) | (1 << NETISR_IP),_netisr
- je over_net_stuff_for_spl0
- movl _netmask,%eax # mask off those network devices
- movl %eax,_cpl # set new priority
- SHOW_CPL
-/*
- * XXX - what about other net intrs?
- */
-#if 0
- DONET(NETISR_RAW, _rawintr, 21)
-#endif
-
-#ifdef INET
- DONET(NETISR_IP, _ipintr, 22)
-#endif /* INET */
-
-#ifdef IMP
- DONET(NETISR_IMP, _impintr, 23)
-#endif /* IMP */
-
-#ifdef NS
- DONET(NETISR_NS, _nsintr, 24)
-#endif /* NS */
-
-#ifdef ISO
- DONET(NETISR_ISO, _clnlintr, 25)
-#endif /* ISO */
-
-over_net_stuff_for_spl0:
- movl $0,_cpl # set new priority
- SHOW_CPL
- movl _ipending,%eax
- testl %eax,%eax
- jne unpend_V
- popl %eax # return old priority
+splz_next:
+ /*
+ * We don't need any locking here. (ipending & ~cpl) cannot grow
+ * while we're looking at it - any interrupt will shrink it to 0.
+ */
+ movl %eax,%ecx
+ notl %ecx
+ andl _ipending,%ecx
+ jne splz_unpend
ret
- .globl _splx
ALIGN_TEXT
-_splx:
- COUNT_EVENT(_intrcnt_spl, 26)
- movl 4(%esp),%eax # new priority
- testl %eax,%eax
- je in_spl0 # going to "zero level" is special
- COUNT_EVENT(_intrcnt_spl, 27)
- movl _cpl,%edx # save old priority
- movl %eax,_cpl # set new priority
- SHOW_CPL
- notl %eax
- andl _ipending,%eax
- jne unpend_V_result_edx
- movl %edx,%eax # return old priority
- ret
-
- ALIGN_TEXT
-unpend_V_result_edx:
- pushl %edx
-unpend_V:
- COUNT_EVENT(_intrcnt_spl, 28)
- bsfl %eax,%eax
- btrl %eax,_ipending
- jnc unpend_V_next
- SHOW_IPENDING
- movl Vresume(,%eax,4),%edx
+splz_unpend:
+ bsfl %ecx,%ecx
+ btrl %ecx,_ipending
+ jnc splz_next
+ movl ihandlers(,%ecx,4),%edx
testl %edx,%edx
- je noresumeV
-/*
- * We would prefer to call the intr handler directly here but that doesn't
- * work for badly behaved handlers that want the interrupt frame. Also,
- * there's a problem determining the unit number. We should change the
- * interface so that the unit number is not determined at config time.
- */
- jmp *vec(,%eax,4)
+ je splz_next /* "can't happen" */
+ cmpl $NHWI,%ecx
+ jae splz_swi
+ /*
+ * We would prefer to call the intr handler directly here but that
+ * doesn't work for badly behaved handlers that want the interrupt
+ * frame. Also, there's a problem determining the unit number.
+ * We should change the interface so that the unit number is not
+ * determined at config time.
+ */
+ jmp *vec(,%ecx,4)
ALIGN_TEXT
+splz_swi:
+ cmpl $SWI_AST,%ecx
+ je splz_next /* "can't happen" */
+ pushl %eax
+ orl imasks(,%ecx,4),%eax
+ movl %eax,_cpl
+ call %edx
+ popl %eax
+ movl %eax,_cpl
+ jmp splz_next
+
/*
- * XXX - must be some fastintr, need to register those too.
+ * Fake clock IRQ so that it appears to come from our caller and not from
+ * vec0, so that kernel profiling works.
+ * XXX do this more generally (for all vectors; look up the C entry point).
+ * XXX frame bogusness stops us from just jumping to the C entry point.
*/
-noresumeV:
-#if NSIO > 0
- call _softsio1
-#endif
-unpend_V_next:
- movl _cpl,%eax
- notl %eax
- andl _ipending,%eax
- jne unpend_V
- popl %eax
- ret
+ ALIGN_TEXT
+vec0:
+ popl %eax /* return address */
+ pushfl
+#define KCSEL 8
+ pushl $KCSEL
+ pushl %eax
+ cli
+ MEXITCOUNT
+ jmp _Vclk
#define BUILD_VEC(irq_num) \
ALIGN_TEXT ; \
vec/**/irq_num: ; \
int $ICU_OFFSET + (irq_num) ; \
- popl %eax ; \
ret
- BUILD_VEC(0)
BUILD_VEC(1)
BUILD_VEC(2)
BUILD_VEC(3)
@@ -392,3 +271,58 @@ vec/**/irq_num: ; \
BUILD_VEC(13)
BUILD_VEC(14)
BUILD_VEC(15)
+
+ ALIGN_TEXT
+swi_clock:
+ MCOUNT
+ subl %eax,%eax
+ cmpl $_splz,(%esp) /* XXX call from splz()? */
+ jae 1f /* yes, usermode = 0 */
+ movl 4+4+TRAPF_CS_OFF(%esp),%eax /* no, check trap frame */
+ andl $SEL_RPL_MASK,%eax
+1:
+ pushl %eax
+ call _softclock
+ addl $4,%esp
+ ret
+
+#define DONET(s, c, event) ; \
+ .globl c ; \
+ btrl $s,_netisr ; \
+ jnc 9f ; \
+ call c ; \
+9:
+
+ ALIGN_TEXT
+swi_net:
+ MCOUNT
+#if 0
+ DONET(NETISR_RAW, _rawintr,netisr_raw)
+#endif
+#ifdef INET
+ DONET(NETISR_IP, _ipintr,netisr_ip)
+#endif
+#ifdef IMP
+ DONET(NETISR_IMP, _impintr,netisr_imp)
+#endif
+#ifdef NS
+ DONET(NETISR_NS, _nsintr,netisr_ns)
+#endif
+#ifdef ISO
+ DONET(NETISR_ISO, _clnlintr,netisr_iso)
+#endif
+#ifdef CCITT
+ DONET(NETISR_X25, _pkintr, 29)
+ DONET(NETISR_HDLC, _hdintr, 30)
+#endif
+ ret
+
+ ALIGN_TEXT
+swi_tty:
+ MCOUNT
+#include "sio.h"
+#if NSIO > 0
+ jmp _siopoll
+#else
+ ret
+#endif
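The dispatch step shared by `_doreti` and `splz` above — `bsfl` on `(ipending & ~cpl)`, `btrl` to claim the bit, then branch on whether the index is below NHWI — can be sketched in C, with GCC's `__builtin_ctz` standing in for `bsfl` and recording variables standing in for the actual jumps:

```c
#include <assert.h>

#define NHWI	16		/* h/w interrupts occupy the low bits */

static unsigned cpl, ipending;
static int last_hwi = -1, last_swi = -1;

static void dispatch_pending(void)
{
	unsigned work;

	while ((work = ipending & ~cpl) != 0) {
		int bit = __builtin_ctz(work);	/* bsfl: lowest set bit */

		ipending &= ~(1U << bit);	/* btrl $bit,_ipending */
		if (bit < NHWI)
			last_hwi = bit;	/* would jmp to the h/w resume point */
		else
			last_swi = bit;	/* would call the swi handler at raised cpl */
	}
}
```

As in the assembly, `__builtin_ctz`/`bsfl` is only valid on a nonzero operand, which is why the loop condition repeats the `(ipending & ~cpl)` calculation before every pick.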
diff --git a/sys/i386/isa/if_is.c b/sys/i386/isa/if_is.c
index 341885f..9f0e6ad 100644
--- a/sys/i386/isa/if_is.c
+++ b/sys/i386/isa/if_is.c
@@ -483,7 +483,7 @@ is_init(unit)
/* Address not known */
if (ifp->if_addrlist == (struct ifaddr *)0) return;
- s = splnet();
+ s = splimp();
/*
* Lance must be stopped
@@ -984,7 +984,7 @@ is_ioctl(ifp, cmd, data)
struct ifreq *ifr = (struct ifreq *)data;
int s, error = 0;
- s = splnet();
+ s = splimp();
switch (cmd) {
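The splnet → splimp change in if_is.c matters because, under the new scheme, `splnet()` only raises the software-interrupt mask while `splimp()` raises `net_imask`, which contains the hardware bits of the network cards. A sketch with an invented IRQ assignment (the lance on IRQ 5) and model function names:

```c
#include <assert.h>

#define NHWI		16
#define SWI_NET		(NHWI + 1)
#define SWI_AST_MASK	(1U << 31)
#define SWI_CLOCK_MASK	((1U << 30) | SWI_AST_MASK)
#define SWI_NET_MASK	((1U << SWI_NET) | SWI_CLOCK_MASK)

#define CARD_IRQ_BIT	(1U << 5)	/* invented: the lance on IRQ 5 */

static unsigned cpl;
static unsigned net_imask = CARD_IRQ_BIT | SWI_NET_MASK;

static int splnet_model(void)
{
	int x = cpl;

	cpl |= SWI_NET_MASK;	/* blocks only the net swi (and softclock) */
	return (x);
}

static int splimp_model(void)
{
	int x = cpl;

	cpl |= net_imask;	/* also blocks the card's hardware interrupt */
	return (x);
}
```

A driver touching its card's registers needs the card's own interrupt blocked, which only the `splimp()` variant guarantees.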
diff --git a/sys/i386/isa/isa.c b/sys/i386/isa/isa.c
index 972c2f7..835aa98 100644
--- a/sys/i386/isa/isa.c
+++ b/sys/i386/isa/isa.c
@@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* from: @(#)isa.c 7.2 (Berkeley) 5/13/91
- * $Id: isa.c,v 1.13 1994/01/17 05:49:20 rgrimes Exp $
+ * $Id: isa.c,v 1.14 1994/01/22 21:52:04 rgrimes Exp $
*/
/*
@@ -213,38 +213,45 @@ isa_configure() {
printf("Probing for devices on the ISA bus:\n");
for (dvp = isa_devtab_tty; dvp->id_driver; dvp++) {
if (!haveseen_isadev(dvp))
- config_isadev(dvp,&ttymask);
+ config_isadev(dvp,&tty_imask);
}
for (dvp = isa_devtab_bio; dvp->id_driver; dvp++) {
if (!haveseen_isadev(dvp))
- config_isadev(dvp,&biomask);
+ config_isadev(dvp,&bio_imask);
}
for (dvp = isa_devtab_net; dvp->id_driver; dvp++) {
if (!haveseen_isadev(dvp))
- config_isadev(dvp,&netmask);
+ config_isadev(dvp,&net_imask);
}
for (dvp = isa_devtab_null; dvp->id_driver; dvp++) {
if (!haveseen_isadev(dvp))
config_isadev(dvp,(u_int *) NULL);
}
+ bio_imask |= SWI_CLOCK_MASK;
+ net_imask |= SWI_NET_MASK;
+ tty_imask |= SWI_TTY_MASK;
+
/*
- * XXX We should really add the tty device to netmask when the line is
+ * XXX we should really add the tty device to net_imask when the line is
* switched to SLIPDISC, and then remove it when it is switched away from
- * SLIPDISC. No need to block out ALL ttys during a splnet when only one
+ * SLIPDISC. No need to block out ALL ttys during a splimp when only one
* of them is running slip.
+ *
+ * XXX actually, blocking all ttys during a splimp doesn't matter so much
+ * with sio because the serial interrupt layer doesn't use tty_imask. Only
+ * non-serial ttys suffer. It's more stupid that ALL 'net's are blocked
+ * during spltty.
*/
#include "sl.h"
#if NSL > 0
- netmask |= ttymask;
- ttymask |= netmask;
+ net_imask |= tty_imask;
+ tty_imask = net_imask;
+#endif
+ /* bio_imask |= tty_imask ; can some tty devices use buffers? */
+#ifdef DIAGNOSTIC
+ printf("bio_imask %x tty_imask %x net_imask %x\n",
+ bio_imask, tty_imask, net_imask);
#endif
- /* if netmask == 0, then the loopback code can do some really
- * bad things.
- */
- if (netmask == 0)
- netmask = 0x10000;
- /* biomask |= ttymask ; can some tty devices use buffers? */
- printf("biomask %x ttymask %x netmask %x\n", biomask, ttymask, netmask);
splnone();
}
@@ -337,15 +344,12 @@ extern inthand_t
IDTVEC(intr8), IDTVEC(intr9), IDTVEC(intr10), IDTVEC(intr11),
IDTVEC(intr12), IDTVEC(intr13), IDTVEC(intr14), IDTVEC(intr15);
-static inthand_func_t defvec[16] = {
+static inthand_func_t defvec[ICU_LEN] = {
&IDTVEC(intr0), &IDTVEC(intr1), &IDTVEC(intr2), &IDTVEC(intr3),
&IDTVEC(intr4), &IDTVEC(intr5), &IDTVEC(intr6), &IDTVEC(intr7),
&IDTVEC(intr8), &IDTVEC(intr9), &IDTVEC(intr10), &IDTVEC(intr11),
&IDTVEC(intr12), &IDTVEC(intr13), &IDTVEC(intr14), &IDTVEC(intr15) };
-/* out of range default interrupt vector gate entry */
-extern inthand_t IDTVEC(intrdefault);
-
/*
 * Fill in default interrupt table (in case of spurious interrupt
* during configuration of kernel, setup interrupt control unit
@@ -356,12 +360,8 @@ isa_defaultirq()
int i;
/* icu vectors */
- for (i = NRSVIDT ; i < NRSVIDT+ICU_LEN ; i++)
- setidt(i, defvec[i], SDT_SYS386IGT, SEL_KPL);
-
- /* out of range vectors */
- for (i = NRSVIDT; i < NIDT; i++)
- setidt(i, &IDTVEC(intrdefault), SDT_SYS386IGT, SEL_KPL);
+ for (i = 0; i < ICU_LEN; i++)
+ setidt(ICU_OFFSET + i, defvec[i], SDT_SYS386IGT, SEL_KPL);
/* initialize 8259's */
outb(IO_ICU1, 0x11); /* reset; program device, four bytes */
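How isa_configure() now composes the group masks can be sketched in C: each probed device ORs its IRQ bit into its class mask, the matching software-interrupt masks are folded in, and with SLIP configured the tty and net groups are merged. The IRQ assignments below are invented for illustration:

```c
#include <assert.h>

#define NHWI		16
#define SWI_AST_MASK	(1U << 31)
#define SWI_CLOCK_MASK	((1U << 30) | SWI_AST_MASK)
#define SWI_TTY_MASK	((1U << (NHWI + 0)) | SWI_CLOCK_MASK)
#define SWI_NET_MASK	((1U << (NHWI + 1)) | SWI_CLOCK_MASK)

static unsigned bio_imask, tty_imask, net_imask;

static void configure_masks(int nsl)
{
	tty_imask |= 1U << 4;		/* invented: sio0 on IRQ 4 */
	bio_imask |= 1U << 14;		/* invented: wd0 on IRQ 14 */
	net_imask |= 1U << 10;		/* invented: ed0 on IRQ 10 */

	bio_imask |= SWI_CLOCK_MASK;	/* timeout routines may start i/o */
	net_imask |= SWI_NET_MASK;
	tty_imask |= SWI_TTY_MASK;

	if (nsl > 0) {			/* NSL > 0: a serial line can be SLIP */
		net_imask |= tty_imask;
		tty_imask = net_imask;
	}
}
```

After the SLIP merge, splimp() blocks the serial IRQ too, which is exactly the over-blocking the XX comments in the hunk above complain about.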
diff --git a/sys/i386/isa/npx.c b/sys/i386/isa/npx.c
index 796dfbb..00424bf 100644
--- a/sys/i386/isa/npx.c
+++ b/sys/i386/isa/npx.c
@@ -32,7 +32,7 @@
* SUCH DAMAGE.
*
* from: @(#)npx.c 7.2 (Berkeley) 5/12/91
- * $Id: npx.c,v 1.5 1993/11/03 23:32:35 paul Exp $
+ * $Id: npx.c,v 1.6 1994/01/03 07:55:43 davidg Exp $
*/
#include "npx.h"
@@ -114,7 +114,7 @@ struct isa_driver npxdriver = {
npxprobe, npxattach, "npx",
};
-u_int npx0mask;
+u_int npx0_imask;
struct proc *npxproc;
static bool_t npx_ex16;
@@ -292,7 +292,7 @@ npxprobe1(dvp)
* Bad, we are stuck with IRQ13.
*/
npx_irq13 = 1;
- npx0mask = dvp->id_irq; /* npxattach too late */
+ npx0_imask = dvp->id_irq; /* npxattach too late */
return (IO_NPXSIZE);
}
/*
@@ -528,8 +528,8 @@ npxsave(addr)
old_icu1_mask = inb(IO_ICU1 + 1);
old_icu2_mask = inb(IO_ICU2 + 1);
save_idt_npxintr = idt[npx_intrno];
- outb(IO_ICU1 + 1, old_icu1_mask & ~(IRQ_SLAVE | npx0mask));
- outb(IO_ICU2 + 1, old_icu2_mask & ~(npx0mask >> 8));
+ outb(IO_ICU1 + 1, old_icu1_mask & ~(IRQ_SLAVE | npx0_imask));
+ outb(IO_ICU2 + 1, old_icu2_mask & ~(npx0_imask >> 8));
idt[npx_intrno] = npx_idt_probeintr;
enable_intr();
stop_emulating();
@@ -541,10 +541,10 @@ npxsave(addr)
icu1_mask = inb(IO_ICU1 + 1); /* masks may have changed */
icu2_mask = inb(IO_ICU2 + 1);
outb(IO_ICU1 + 1,
- (icu1_mask & ~npx0mask) | (old_icu1_mask & npx0mask));
+ (icu1_mask & ~npx0_imask) | (old_icu1_mask & npx0_imask));
outb(IO_ICU2 + 1,
- (icu2_mask & ~(npx0mask >> 8))
- | (old_icu2_mask & (npx0mask >> 8)));
+ (icu2_mask & ~(npx0_imask >> 8))
+ | (old_icu2_mask & (npx0_imask >> 8)));
idt[npx_intrno] = save_idt_npxintr;
enable_intr(); /* back to usual state */
}
diff --git a/sys/i386/isa/sio.c b/sys/i386/isa/sio.c
index 305256d..603c963 100644
--- a/sys/i386/isa/sio.c
+++ b/sys/i386/isa/sio.c
@@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* from: @(#)com.c 7.5 (Berkeley) 5/16/91
- * $Id: sio.c,v 1.40 1994/03/26 13:40:18 ache Exp $
+ * $Id: sio.c,v 1.41 1994/04/01 16:47:01 ache Exp $
*/
#include "sio.h"
@@ -98,7 +98,6 @@
#endif
#define com_scr 7 /* scratch register for 16450-16550 (R/W) */
-#define setsofttty() (ipending |= 1 << 4) /* XXX */
/*
* Input buffer watermarks.
diff --git a/sys/i386/isa/sound/soundcard.c b/sys/i386/isa/sound/soundcard.c
index dae5a54..c2b39d9 100644
--- a/sys/i386/isa/sound/soundcard.c
+++ b/sys/i386/isa/sound/soundcard.c
@@ -34,15 +34,15 @@
#include "dev_table.h"
-u_int snd1mask;
-u_int snd2mask;
-u_int snd3mask;
-u_int snd4mask;
-u_int snd5mask;
-u_int snd6mask;
-u_int snd7mask;
-u_int snd8mask;
-u_int snd9mask;
+u_int snd1_imask;
+u_int snd2_imask;
+u_int snd3_imask;
+u_int snd4_imask;
+u_int snd5_imask;
+u_int snd6_imask;
+u_int snd7_imask;
+u_int snd8_imask;
+u_int snd9_imask;
#define FIX_RETURN(ret) {if ((ret)<0) return -(ret); else return 0;}
diff --git a/sys/i386/isa/vector.s b/sys/i386/isa/vector.s
index d434c76..7135ae7 100644
--- a/sys/i386/isa/vector.s
+++ b/sys/i386/isa/vector.s
@@ -1,6 +1,6 @@
/*
* from: vector.s, 386BSD 0.1 unknown origin
- * $Id: vector.s,v 1.5 1993/12/20 15:08:33 wollman Exp $
+ * $Id: vector.s,v 1.6 1994/01/10 23:15:09 ache Exp $
*/
#include "i386/isa/icu.h"
@@ -12,24 +12,44 @@
#define IRQ_BIT(irq_num) (1 << ((irq_num) % 8))
#define IRQ_BYTE(irq_num) ((irq_num) / 8)
+#ifdef AUTO_EOI_1
+#define ENABLE_ICU1 /* use auto-EOI to reduce i/o */
+#else
#define ENABLE_ICU1 \
movb $ICU_EOI,%al ; /* as soon as possible send EOI ... */ \
FASTER_NOP ; /* ... ASAP ... */ \
outb %al,$IO_ICU1 /* ... to clear in service bit */
-#ifdef AUTO_EOI_1
-#undef ENABLE_ICU1 /* we now use auto-EOI to reduce i/o */
-#define ENABLE_ICU1
#endif
+#ifdef AUTO_EOI_2
+/*
+ * The data sheet says no auto-EOI on slave, but it sometimes works.
+ */
+#define ENABLE_ICU1_AND_2 ENABLE_ICU1
+#else
#define ENABLE_ICU1_AND_2 \
movb $ICU_EOI,%al ; /* as above */ \
FASTER_NOP ; \
outb %al,$IO_ICU2 ; /* but do second icu first */ \
FASTER_NOP ; \
outb %al,$IO_ICU1 /* then first icu */
-#ifdef AUTO_EOI_2
-#undef ENABLE_ICU1_AND_2 /* data sheet says no auto-EOI on slave ... */
-#define ENABLE_ICU1_AND_2 /* ... but it works */
+#endif
+
+#ifdef FAST_INTR_HANDLER_USES_ES
+#define ACTUALLY_PUSHED 1
+#define MAYBE_MOVW_AX_ES movl %ax,%es
+#define MAYBE_POPL_ES popl %es
+#define MAYBE_PUSHL_ES pushl %es
+#else
+/*
+ * We can usually skip loading %es for fastintr handlers. %es should
+ * only be used for string instructions, and fastintr handlers shouldn't
+ * do anything slow enough to justify using a string instruction.
+ */
+#define ACTUALLY_PUSHED 0
+#define MAYBE_MOVW_AX_ES
+#define MAYBE_POPL_ES
+#define MAYBE_PUSHL_ES
#endif
/*
@@ -82,39 +102,63 @@
pushl %ecx ; \
pushl %edx ; \
pushl %ds ; \
- /* pushl %es ; know compiler doesn't do string insns */ \
+ MAYBE_PUSHL_ES ; \
movl $KDSEL,%eax ; \
movl %ax,%ds ; \
- /* movl %ax,%es ; */ \
- SHOW_CLI ; /* although it interferes with "ASAP" */ \
+ MAYBE_MOVW_AX_ES ; \
+ FAKE_MCOUNT((4+ACTUALLY_PUSHED)*4(%esp)) ; \
pushl $unit ; \
call handler ; /* do the work ASAP */ \
enable_icus ; /* (re)enable ASAP (helps edge trigger?) */ \
addl $4,%esp ; \
incl _cnt+V_INTR ; /* book-keeping can wait */ \
- COUNT_EVENT(_intrcnt_actv, id_num) ; \
- SHOW_STI ; \
- /* popl %es ; */ \
+ incl _intrcnt_actv + (id_num) * 4 ; \
+ movl _cpl,%eax ; /* are we unmasking pending HWIs or SWIs? */ \
+ notl %eax ; \
+ andl _ipending,%eax ; \
+ jne 1f ; /* yes, handle them */ \
+ MEXITCOUNT ; \
+ MAYBE_POPL_ES ; \
popl %ds ; \
- popl %edx; \
- popl %ecx; \
- popl %eax; \
- iret
+ popl %edx ; \
+ popl %ecx ; \
+ popl %eax ; \
+ iret ; \
+; \
+ ALIGN_TEXT ; \
+1: ; \
+ movl _cpl,%eax ; \
+ movl $HWI_MASK|SWI_MASK,_cpl ; /* limit nesting ... */ \
+ sti ; /* ... to do this as early as possible */ \
+ MAYBE_POPL_ES ; /* discard most of thin frame ... */ \
+ popl %ecx ; /* ... original %ds ... */ \
+ popl %edx ; \
+ xchgl %eax,(1+ACTUALLY_PUSHED)*4(%esp) ; /* orig %eax; save cpl */ \
+ pushal ; /* build fat frame (grrr) ... */ \
+ pushl %ecx ; /* ... actually %ds ... */ \
+ pushl %es ; \
+ movl $KDSEL,%eax ; \
+ movl %ax,%es ; \
+ movl (2+8+0)*4(%esp),%ecx ; /* ... %ecx from thin frame ... */ \
+ movl %ecx,(2+6)*4(%esp) ; /* ... to fat frame ... */ \
+ movl (2+8+1)*4(%esp),%eax ; /* ... cpl from thin frame */ \
+ pushl %eax ; \
+ subl $4,%esp ; /* junk for unit number */ \
+ MEXITCOUNT ; \
+ jmp _doreti
#define INTR(unit, irq_num, id_num, mask, handler, icu, enable_icus, reg, stray) \
- pushl $0 ; /* dummy error code */ \
- pushl $T_ASTFLT ; \
+	pushl	$0 ;	/* dummy error code */ \
+	pushl	$0 ;	/* dummy trap type */ \
pushal ; \
- pushl %ds ; /* save our data and extra segments ... */ \
+ pushl %ds ; /* save our data and extra segments ... */ \
pushl %es ; \
movl $KDSEL,%eax ; /* ... and reload with kernel's own ... */ \
- movl %ax,%ds ; /* ... early in case SHOW_A_LOT is on */ \
+ movl %ax,%ds ; /* ... early for obsolete reasons */ \
movl %ax,%es ; \
- SHOW_CLI ; /* interrupt did an implicit cli */ \
movb _imen + IRQ_BYTE(irq_num),%al ; \
orb $IRQ_BIT(irq_num),%al ; \
movb %al,_imen + IRQ_BYTE(irq_num) ; \
- SHOW_IMEN ; \
FASTER_NOP ; \
outb %al,$icu+1 ; \
enable_icus ; \
@@ -123,32 +167,32 @@
testb $IRQ_BIT(irq_num),%reg ; \
jne 2f ; \
1: ; \
- COUNT_EVENT(_intrcnt_actv, id_num) ; \
+ FAKE_MCOUNT(12*4(%esp)) ; /* XXX late to avoid double count */ \
+ incl _intrcnt_actv + (id_num) * 4 ; \
movl _cpl,%eax ; \
pushl %eax ; \
pushl $unit ; \
orl mask,%eax ; \
movl %eax,_cpl ; \
- SHOW_CPL ; \
- SHOW_STI ; \
sti ; \
call handler ; \
movb _imen + IRQ_BYTE(irq_num),%al ; \
andb $~IRQ_BIT(irq_num),%al ; \
movb %al,_imen + IRQ_BYTE(irq_num) ; \
- SHOW_IMEN ; \
FASTER_NOP ; \
outb %al,$icu+1 ; \
- jmp doreti ; \
+ MEXITCOUNT ; \
+ /* We could usually avoid the following jmp by inlining some of */ \
+ /* _doreti, but it's probably better to use less cache. */ \
+ jmp _doreti ; \
; \
ALIGN_TEXT ; \
2: ; \
- COUNT_EVENT(_intrcnt_pend, id_num) ; \
+ /* XXX skip mcounting here to avoid double count */ \
movl $1b,%eax ; /* register resume address */ \
/* XXX - someday do it at attach time */ \
- movl %eax,Vresume + (irq_num) * 4 ; \
+ movl %eax,ihandlers + (irq_num) * 4 ; \
orb $IRQ_BIT(irq_num),_ipending + IRQ_BYTE(irq_num) ; \
- SHOW_IPENDING ; \
popl %es ; \
popl %ds ; \
popal ; \
@@ -191,7 +235,7 @@
.globl _V/**/name ; \
SUPERALIGN_TEXT ; \
_V/**/name: ; \
- FAST_INTR(unit, irq_num, id_num, handler, ENABLE_ICU/**/icu_enables)
+ FAST_INTR(unit, irq_num,id_num, handler, ENABLE_ICU/**/icu_enables)
#undef BUILD_VECTOR
#define BUILD_VECTOR(name, unit, irq_num, id_num, mask, handler, \
@@ -201,9 +245,10 @@ _V/**/name: ; \
.globl _V/**/name ; \
SUPERALIGN_TEXT ; \
_V/**/name: ; \
- INTR(unit,irq_num,id_num, mask, handler, IO_ICU/**/icu_num, \
+ INTR(unit,irq_num, id_num, mask, handler, IO_ICU/**/icu_num, \
ENABLE_ICU/**/icu_enables, reg,)
+MCOUNT_LABEL(bintr)
BUILD_VECTORS
/* hardware interrupt catcher (IDT 32 - 47) */
@@ -211,7 +256,7 @@ _V/**/name: ; \
#define STRAYINTR(irq_num, icu_num, icu_enables, reg) \
IDTVEC(intr/**/irq_num) ; \
- INTR(irq_num,irq_num,irq_num, _highmask, _isa_strayintr, \
+ INTR(irq_num,irq_num,irq_num, _high_imask, _isa_strayintr, \
IO_ICU/**/icu_num, ENABLE_ICU/**/icu_enables, reg,stray)
/*
@@ -241,6 +286,7 @@ IDTVEC(intr/**/irq_num) ; \
STRAYINTR(4,1,1, al)
STRAYINTR(5,1,1, al)
STRAYINTR(6,1,1, al)
+ STRAYINTR(7,1,1, al)
STRAYINTR(8,2,1_AND_2, ah)
STRAYINTR(9,2,1_AND_2, ah)
STRAYINTR(10,2,1_AND_2, ah)
@@ -249,11 +295,11 @@ IDTVEC(intr/**/irq_num) ; \
STRAYINTR(13,2,1_AND_2, ah)
STRAYINTR(14,2,1_AND_2, ah)
STRAYINTR(15,2,1_AND_2, ah)
-IDTVEC(intrdefault)
- STRAYINTR(7,1,1, al) /* XXX */
#if 0
INTRSTRAY(255, _highmask, 255) ; call _isa_strayintr ; INTREXIT2
#endif
+MCOUNT_LABEL(eintr)
+
/*
* These are the interrupt counters, I moved them here from icu.s so that
* they are with the name table. rgrimes
@@ -263,7 +309,15 @@ IDTVEC(intrdefault)
* work with vmstat.
*/
.data
-Vresume: .space 32 * 4 /* where to resume intr handler after unpend */
+ihandlers: /* addresses of interrupt handlers */
+ .space NHWI*4 /* actually resumption addresses for HWI's */
+ .long swi_tty, swi_net, 0, 0, 0, 0, 0, 0
+ .long 0, 0, 0, 0, 0, 0, swi_clock, swi_ast
+imasks: /* masks for interrupt handlers */
+ .space NHWI*4 /* padding; HWI masks are elsewhere */
+ .long SWI_TTY_MASK, SWI_NET_MASK, 0, 0, 0, 0, 0, 0
+ .long 0, 0, 0, 0, 0, 0, SWI_CLOCK_MASK, SWI_AST_MASK
+
.globl _intrcnt
_intrcnt: /* used by vmstat to calc size of table */
.globl _intrcnt_bad7
@@ -274,14 +328,8 @@ _intrcnt_bad15: .space 4 /* glitches on irq 15 */
_intrcnt_stray: .space 4 /* total count of stray interrupts */
.globl _intrcnt_actv
_intrcnt_actv: .space NR_REAL_INT_HANDLERS * 4 /* active interrupts */
- .globl _intrcnt_pend
-_intrcnt_pend: .space NR_REAL_INT_HANDLERS * 4 /* pending interrupts */
.globl _eintrcnt
_eintrcnt: /* used by vmstat to calc size of table */
- .globl _intrcnt_spl
-_intrcnt_spl: .space 32 * 4 /* XXX 32 should not be hard coded ? */
- .globl _intrcnt_show
-_intrcnt_show: .space 8 * 4 /* XXX 16 should not be hard coded ? */
/*
* Build the interrupt name table for vmstat
@@ -296,8 +344,9 @@ _intrcnt_show: .space 8 * 4 /* XXX 16 should not be hard coded ? */
.ascii "name irq" ; \
.asciz "irq_num"
/*
- * XXX - use the STRING and CONCAT macros from <sys/cdefs.h> to stringize
- * and concatenate names above and elsewhere.
+ * XXX - use the __STRING and __CONCAT macros from <sys/cdefs.h> to stringize
+ * and concatenate names above and elsewhere. Note that __CONCAT doesn't
+ * work when nested.
*/
.text
@@ -308,61 +357,4 @@ _intrnames:
BUILD_VECTOR(stray,,,,,,,,)
BUILD_VECTORS
-#undef BUILD_FAST_VECTOR
-#define BUILD_FAST_VECTOR BUILD_VECTOR
-
-#undef BUILD_VECTOR
-#define BUILD_VECTOR(name, unit, irq_num, id_num, mask, handler, \
- icu_num, icu_enables, reg) \
- .asciz "name pend"
-
- BUILD_VECTORS
_eintrnames:
-
-/*
- * now the spl names
- */
- .asciz "unpend_v"
- .asciz "doreti"
- .asciz "p0!ni"
- .asciz "!p0!ni"
- .asciz "p0ni"
- .asciz "netisr_raw"
- .asciz "netisr_ip"
- .asciz "netisr_imp"
- .asciz "netisr_ns"
- .asciz "netisr_iso"
- .asciz "softclock" /* 10 */
- .asciz "trap"
- .asciz "doreti_exit2"
- .asciz "splbio"
- .asciz "splclock"
- .asciz "splhigh"
- .asciz "splimp"
- .asciz "splnet"
- .asciz "splsoftclock"
- .asciz "spltty"
- .asciz "spl0" /* 20 */
- .asciz "netisr_raw2"
- .asciz "netisr_ip2"
- .asciz "netisr_imp2"
- .asciz "netisr_ns2"
- .asciz "netisr_iso2"
- .asciz "splx"
- .asciz "splx!0"
- .asciz "unpend_V"
- .asciz "netisr_x25"
- .asciz "netisr_hdlc"
- .asciz "spl31"
-/*
- * now the mask names
- */
- .asciz "cli"
- .asciz "cpl"
- .asciz "imen"
- .asciz "ipending"
- .asciz "sti"
- .asciz "mask5" /* mask5-mask7 are spares */
- .asciz "mask6"
- .asciz "mask7"
-