path: root/sys/i386/isa/npx.c
Commit message  [Author, Date, Files, Lines -removed/+added]
* Clear any pending exceptions before using frstor (in the non-FXSR case)  [bde, 2004-06-19, 1 file, -0/+4]
  in npxsetregs() too. npxsetregs() must overwrite the previous state, and
  it is never paired with an npxgetregs() that would defuse the previous
  state (since npxgetregs() would have fninit'ed the state, leaving nothing
  to do).

  PR:          68058 (this should complete the fix)
  Tested by:   Simon Barner <barner@in.tum.de>
* Fixed a panic caused by over-optimizing npxdrop() in the non-FXSR case.  [bde, 2004-06-18, 1 file, -0/+9]
  frstor can trap despite it being a control instruction, since it bogusly
  checks for pending exceptions in the state that it is overwriting. This
  used to be a non-problem because frstor was always paired with a previous
  fnsave, and fnsave does an implicit fninit so any pending exceptions only
  remain live in the saved state. Now frstor is sometimes paired with
  npxdrop() and we must do a little more than just forget that the npx was
  used in npxdrop() to avoid a trap later.

  This is a non-problem in the FXSR case because fxrstor doesn't do the
  bogus check. FXSR is part of SSE, and npxdrop() is only in FreeBSD-5.x,
  so this bug only affected old machines running FreeBSD-5.x.

  PR:          68058
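  A minimal sketch (not the committed diff) of the idea behind these two
  fixes: in the non-FXSR path, discard any exceptions still pending in the
  live x87 state before frstor overwrites it, so frstor itself cannot trap.
  The function name is a placeholder; fnclex() and frstor() are assumed to
  be the usual inline wrappers around the corresponding instructions.

      static void
      npx_restore_state(union savefpu *addr)
      {
              /*
               * frstor traps if unmasked exceptions are pending in the
               * state it is about to overwrite, so discard them first.
               */
              fnclex();
              frstor(addr);
      }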
* Fixed misclassification of npx interrupts caused by npx_probe().  [bde, 2004-06-06, 1 file, -12/+1]
  Dividing by 0 in order to check for irq13/exception16 delivery apparently
  always causes an irq13 even if we have configured for exception16 (by
  setting CR0_NE). This was expected, but the timing of the irq13 was
  unexpected. Without CR0_NE, the irq13 is delivered synchronously at least
  on my test machine, but with CR0_NE it is delivered a little later (about
  250 nsec) in PIC mode and much later (5000-10000 nsec) in APIC mode. So
  especially in APIC mode, the irq13 may arrive after it is supposed to be
  shut down. It should then be masked, but the shutdown is incomplete, so
  the irq goes to a null handler that just reports it as stray. The fix is
  to wait a bit after dividing by 0 to give a good chance of the irq13
  being handled by its proper handler.

  Removed the hack that was supposed to recover from the incomplete
  shutdown of irq13. The shutdown is now even more incomplete, or perhaps
  just incomplete in a different way, but the hack now has no effect
  because irq13 is edge triggered and handling of edge triggered interrupts
  is now optimized by skipping their masking. The hack only worked due to
  it accidentally not losing races.

  The incomplete shutdown of irq13 still allows unprivileged users to
  generate a stray irq13 (except on systems where irq13 is actually used)
  by unmasking an npx exception and causing one. The exception gets handled
  properly by the exception 16 handler. A spurious irq13 is delivered
  asynchronously but is harmless (as in the probe) because it is almost
  perfectly not handled by the null interrupt handler. Perfectly not
  handling it involves mainly not resetting the npx busy latch. This
  prevents further irq13's despite them not being masked in the [A]PIC.
* Trim unused includes.  [jhb, 2004-05-11, 1 file, -1/+0]
* Remove advertising clause from University of California Regent's  [imp, 2004-04-07, 1 file, -4/+0]
  license, per letter dated July 22, 1999 and email from Peter Wemm,
  Alan Cox and Robert Watson.

  Approved by: core, peter, alc, rwatson
* These are changes to allow use of the Intel C/C++ compiler (lang/icc)  [trhodes, 2004-03-12, 1 file, -3/+3]
  to build the kernel. It doesn't affect the operation of gcc. Most of the
  changes are just adding __INTEL_COMPILER to #ifdef's; as icc v8 may
  define __GNUC__, some parts may look strange but are necessary.

  Additional changes:

  - in_cksum.[ch]:
    * use a generic C version instead of the assembly version in the !gcc
      case (the ASM code breaks with the optimizations icc does)
      -> no bad checksums with an icc-compiled kernel
      Help from:   andre, grehan, das
      Stolen from: alpha version via ppc version
      The entire checksum code should IMHO be replaced with the DragonFly
      version (because it isn't guaranteed future revisions of gcc will
      include similar optimizations), as in:
      ---snip---
      Revision  Changes     Path
      1.12      +1 -0       src/sys/conf/files.i386
      1.4       +142 -558   src/sys/i386/i386/in_cksum.c
      1.5       +33 -69     src/sys/i386/include/in_cksum.h
      1.5       +2 -0       src/sys/netinet/igmp.c
      1.6       +0 -1       src/sys/netinet/in.h
      1.6       +2 -0       src/sys/netinet/ip_icmp.c
      1.4       +3 -4       src/contrib/ipfilter/ip_compat.h
      1.3       +1 -2       src/sbin/natd/icmp.c
      1.4       +0 -1       src/sbin/natd/natd.c
      1.48      +1 -0       src/sys/conf/files
      1.2       +0 -1       src/sys/conf/files.amd64
      1.13      +0 -1       src/sys/conf/files.i386
      1.5       +0 -1       src/sys/conf/files.pc98
      1.7       +1 -1       src/sys/contrib/ipfilter/netinet/fil.c
      1.10      +2 -3       src/sys/contrib/ipfilter/netinet/ip_compat.h
      1.10      +1 -1       src/sys/contrib/ipfilter/netinet/ip_fil.c
      1.7       +1 -1       src/sys/dev/netif/txp/if_txp.c
      1.7       +1 -1       src/sys/net/ip_mroute/ip_mroute.c
      1.7       +1 -2       src/sys/net/ipfw/ip_fw2.c
      1.6       +1 -2       src/sys/netinet/igmp.c
      1.4       +158 -116   src/sys/netinet/in_cksum.c
      1.6       +1 -1       src/sys/netinet/ip_gre.c
      1.7       +1 -2       src/sys/netinet/ip_icmp.c
      1.10      +1 -1       src/sys/netinet/ip_input.c
      1.10      +1 -2       src/sys/netinet/ip_output.c
      1.13      +1 -2       src/sys/netinet/tcp_input.c
      1.9       +1 -2       src/sys/netinet/tcp_output.c
      1.10      +1 -1       src/sys/netinet/tcp_subr.c
      1.10      +1 -1       src/sys/netinet/tcp_syncache.c
      1.9       +1 -2       src/sys/netinet/udp_usrreq.c
      1.5       +1 -2       src/sys/netinet6/ipsec.c
      1.5       +1 -2       src/sys/netproto/ipsec/ipsec.c
      1.5       +1 -1       src/sys/netproto/ipsec/ipsec_input.c
      1.4       +1 -2       src/sys/netproto/ipsec/ipsec_output.c
      and finally remove
        sys/i386/i386     in_cksum.c
        sys/i386/include  in_cksum.h
      ---snip---
  - endian.h:
    * DTRT in C++ mode
  - quad.h:
    * we don't use gcc v1 anymore, remove support for it
      Suggested by: bde (long ago)
  - assym.h:
    * avoid zero-length arrays (remove dependency on a gcc-specific
      feature). This change changes the contents of the object file, but as
      it's only used to generate some values for a header, and the
      generator knows how to handle this, there's no impact in the gcc
      case.
      Explained by:  bde
      Submitted by:  Marius Strobl <marius@alchemy.franken.de>
  - aicasm.c:
    * minor change to teach it about the way icc spells "-nostdinc"
      Not approved by: gibbs (no reply to my mail)
  - bump __FreeBSD_version (lang/icc needs to know about the changes)

  Incarnations of this patch have survived gcc compiles for a loooong time;
  I use it on my desktop. An icc-compiled kernel works since Nov. 2003
  (exceptions: snd_* if used as modules); it survives a build of the entire
  ports collection with icc.

  Parts of this commit contain suggestions or submissions from
  Marius Strobl <marius@alchemy.franken.de>.

  Reviewed by:  -arch
  Submitted by: netchild
* Fixed a misplaced ifdef that prevented npx.c building without "device isa"  [bde, 2004-02-13, 1 file, -1/+1]
  ISA. npx has few isa dependencies, but it does unconditional outb()'s to
  the isa bus in the !SMP case, and it attaches to isa if "device isa" is
  configured in order to support PNP-ISA. The ifdef for the latter was
  misplaced.

  PR:          62595
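  A hedged sketch of the arrangement this entry describes: only the ISA
  (PNP) attachment should depend on "device isa" being configured, so only
  that part sits under the option ifdef. The identifiers below (DEV_ISA,
  npxisa_driver, npxisa_devclass) are illustrative assumptions, not taken
  from the commit.

      #include "opt_isa.h"

      #ifdef DEV_ISA
      /* The ISA attachment exists only to support PNP-ISA probing. */
      DRIVER_MODULE(npxisa, isa, npxisa_driver, npxisa_devclass, 0, 0);
      #endif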
* New APIC support code:  [jhb, 2003-11-03, 1 file, -37/+13]
  - The apic interrupt entry points have been rewritten so that each entry
    point can serve 32 different vectors. When the entry is executed, it
    uses one of the 32-bit ISR registers to determine which vector in its
    assigned range was triggered. Thus, the apic code can support 159
    different interrupt vectors with only 5 entry points.
  - We now always disable the local APIC to work around an erratum in
    certain PPros and then re-enable it again if we decide to use the APICs
    to route interrupts.
  - We no longer map IO APICs or local APICs using special page table
    entries. Instead, we just use pmap_mapdev(). We also no longer export
    the virtual address of the local APIC as a global symbol to the rest of
    the system, but only in local_apic.c. To aid this, the APIC ID of each
    CPU is exported as a per-CPU variable.
  - Interrupt sources are provided for each intpin on each IO APIC.
    Currently, each source is given a unique interrupt vector, meaning that
    PCI interrupts are not shared on most machines with an I/O APIC. The
    mapping of interrupt sources to interrupt vectors is up to the APIC
    enumerator driver, however.
  - We no longer probe to see if we need to use mixed mode to route IRQ 0;
    instead we always use mixed mode to route IRQ 0 for now. This can be
    disabled via the 'NO_MIXED_MODE' kernel option.
  - The npx(4) driver now always probes to see if a built-in FPU is present
    since this test can now be performed with the new APIC code. However,
    an SMP kernel will panic if there is more than one CPU and a built-in
    FPU is not found.
  - PCI interrupts are now properly routed when using APICs to route
    interrupts, so remove the hack to pseudo-route interrupts when the
    intpin register was read.
  - The apic.h header was moved to apicreg.h, and a new apicvar.h header
    that declares the APIs used by the new APIC code was added.
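  A hedged sketch of the entry-point scheme described in the first item:
  each entry point covers a block of 32 vectors and reads the matching
  32-bit in-service register (ISR) of the local APIC to find the vector
  that actually fired. lapic_read_isr() is a placeholder name for whatever
  accessor the real code uses.

      static u_int
      apic_block_to_vector(u_int block)
      {
              uint32_t isr;

              /* 32-bit ISR word covering vectors [block*32, block*32+31]. */
              isr = lapic_read_isr(block);
              KASSERT(isr != 0, ("no in-service vector in this block"));

              /* The highest set bit is the in-service vector in this block. */
              return (block * 32 + fls(isr) - 1);
      }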
* Add constants for entries in the IDT and use those instead of magic  [jhb, 2003-09-10, 1 file, -4/+5]
  numbers.
* Initiate de-orbit burn for fpu-less operation. 386+387 is still  [peter, 2003-07-22, 1 file, -25/+3]
  theoretically supportable, but you'd really be happier with FreeBSD 2.1.8
  on it.
* Use __FBSDID().  [obrien, 2003-06-02, 1 file, -1/+3]
* Define ovbcopy() as a macro which expands to the equivalent bcopy() call,  [des, 2003-04-04, 1 file, -4/+2]
  to take care of the KAME IPv6 code which needs ovbcopy() because NetBSD's
  bcopy() doesn't handle overlap like ours. Remove all implementations of
  ovbcopy().

  Previously, bzero was a function pointer on i386, to save a jmp to
  bzero_vector. Get rid of this microoptimization as it only confuses
  things, adds machine-dependent code to an MD header, and doesn't really
  save all that much. This commit does not add my pagezero() / pagecopy()
  code.
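  The macro form this entry describes, roughly as it would appear in a
  header (the exact file it lives in is assumed here): the overlap-safe
  copy simply forwards to bcopy(), which already handles overlapping
  regions on FreeBSD.

      #define ovbcopy(f, t, l)    bcopy((f), (t), (l))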
* - In npxgetregs() use the td argument to save the fpu state from and not  [jeff, 2003-04-01, 1 file, -2/+1]
    curthread. Nothing currently depends on this behavior.
  - Clean up an extra newline.

  Obtained from: bde
* - In npxsetregs don't set the floating point if td == fpcurthread not if  [jeff, 2003-03-31, 1 file, -1/+1]
    curthread == fpcurthread. This is important when we're saving the fp
    state for a thread other than curthread, as from set_mcontext.
* Move a bunch of flags from the KSE to the thread.  [julian, 2003-02-17, 1 file, -1/+1]
  I was in two minds as to where to put them in the first case.. I should
  have listened to the other mind.

  Submitted by: parts by davidxu@
  Reviewed by:  jeff@ mini@
* Add getcontext, setcontext, and swapcontext as system calls.  [deischen, 2002-11-16, 1 file, -2/+2]
  Previously these were libc functions but were requested to be made into
  system calls for atomicity and to coalesce what might be two entrances
  into the kernel (signal mask setting and floating point trap) into one.

  A few style nits and comments from bde are also included.

  Tested on alpha by: gallatin
* Fix typo. ioport_rid should be irq_rid.  [davidxu, 2002-11-05, 1 file, -1/+1]
* Finish fixing the 5.x FPU code for dealing with signal handlers.  [peter, 2002-10-25, 1 file, -0/+1]
  Obtained from: bde
* Hide inline assembly if lint is defined.  [phk, 2002-10-20, 1 file, -1/+1]
* Be consistent about "static" functions: if the function is marked  [phk, 2002-09-28, 1 file, -1/+1]
  static in its prototype, mark it static at the definition too.

  Inspired by: FlexeLint warning #512
* Add kernel support needed for the KSE-aware libpthread:  [mini, 2002-09-16, 1 file, -35/+152]
  - Maintain fpu state across signals.
  - Save and restore FPU state properly in ucontext_t's.

  Reviewed by: bde, deischen, julian
  Approved by: -arch
* Automatically enable CPU_ENABLE_SSE (detect and enable SSE instructions)  [peter, 2002-09-07, 1 file, -0/+7]
  if compiling with I686_CPU as a target. CPU_DISABLE_SSE will prevent this
  from happening and will guarantee the code is not compiled in. I am still
  not happy with this, but gcc is now generating code that uses these
  instructions if you set CPUTYPE to p3/p4 or athlon-4/mp/xp or higher.
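  A sketch of the option logic this entry describes (the exact header it
  lives in is assumed): building for a 686-class CPU turns on the SSE
  support code unless the kernel configuration disables it explicitly.

      #if defined(I686_CPU) && !defined(CPU_DISABLE_SSE)
      #define CPU_ENABLE_SSE
      #endif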
* Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt  [dillon, 2002-03-27, 1 file, -3/+9]
  disablement assumptions in kern_fork.c by adding another API call,
  cpu_critical_fork_exit(). Cleanup the td_savecrit field by moving it from
  MI to MD. Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
  to <arch>/<arch>/critical.c (stage-2 will clean this up).

  Implement interrupt deferral for i386 that allows interrupts to remain
  enabled inside critical sections. This also fixes an IPI interlock bug,
  and requires uses of icu_lock to be enclosed in a true interrupt
  disablement.

  This is the stage-1 commit. Stage-2 will occur after stage-1 has
  stabilized, and will move cpu_critical*() into its own header file(s) +
  other things. This commit may break non-i386 architectures in trivial
  ways. This should be temporary.

  Reviewed by: core
  Approved by: core
* Fixed some style bugs in the removal of __P(()). The main ones were  [bde, 2002-03-23, 1 file, -20/+20]
  not removing tabs before "__P((", and not outdenting continuation lines
  to preserve non-KNF lining up of code with parentheses. Switch to KNF
  formatting and/or rewrap the whole prototype in some cases.
* Fix abuses of cpu_critical_{enter,exit} by converting to  [imp, 2002-03-21, 1 file, -12/+12]
  intr_{disable,restore} as well as providing an implementation of
  intr_{disable,restore}.

  Reviewed by: jake, rwatson, jhb
* Remove __P.  [alfred, 2002-03-20, 1 file, -21/+21]
* revert last commit temporarily due to whining on the lists.  [dillon, 2002-02-26, 1 file, -9/+3]
* STAGE-1 of 3 commit - allow (but do not require) interrupts to remain  [dillon, 2002-02-26, 1 file, -3/+9]
  enabled in critical sections and streamline critical_enter() and
  critical_exit().

  This commit allows an architecture to leave interrupts enabled inside
  critical sections if it so wishes. Architectures that do not wish to do
  this are not affected by this change.

  This commit implements the feature for the I386 architecture and provides
  a sysctl, debug.critical_mode, which defaults to 1 (use the feature). For
  now you can turn the sysctl on and off at any time in order to test the
  architectural changes or track down bugs.

  This commit is just the first stage. Some areas of the code, specifically
  the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
  be cleaned up in the STAGE-2 commit when the critical_*() functions are
  moved entirely into MD files.

  The following changes have been made:

  * critical_enter() and critical_exit() for I386 now simply increment and
    decrement curthread->td_critnest. They no longer disable hard
    interrupts. When critical_exit() decrements the counter to 0 it
    effectively calls a routine to deal with whatever interrupts were
    deferred during the time the code was operating in a critical section.
    Other architectures are unaffected.

  * fork_exit() has been conditionalized to remove MD assumptions for the
    new code. Old code will still use the old MD assumptions in regards to
    hard interrupt disablement. In STAGE-2 this will be turned into a
    subroutine call into MD code rather than hardcoded in MI code. The new
    code places the burden of entering the critical section in the
    trampoline code where it belongs.

  * I386: interrupts are now enabled while we are in a critical section.
    The interrupt vector code has been adjusted to deal with the fact. If
    it detects that we are in a critical section it currently defers the
    interrupt by adding the appropriate bit to an interrupt mask.

  * In order to accomplish the deferral, icu_lock is required. This is
    i386-specific. Thus icu_lock can only be obtained by mainline i386 code
    while interrupts are hard disabled. This change has been made.

  * Because interrupts may or may not be hard disabled during a context
    switch, cpu_switch() can no longer simply assume that PSL_I will be in
    a consistent state. Therefore, it now saves and restores eflags.

  * FAST INTERRUPT PROVISION. Fast interrupts are currently deferred. The
    intention is to eventually allow them to operate either while we are in
    a critical section or, if we are able to restrict the use of
    sched_lock, while we are not holding the sched_lock.

  * ICU and APIC vector assembly for I386 cleaned up. The ICU code has been
    cleaned up to match the APIC code in regards to format and macro
    availability. Additionally, the code has been adjusted to deal with
    deferred interrupts.

  * Deferred interrupts use a per-cpu boolean int_pending, and masks
    ipending, spending, and fpending. Being per-cpu variables it is not
    currently necessary to lock; bus cycles modifying them. Note that the
    same mechanism will enable preemption to be incorporated as a true
    software interrupt without having to further hack up the critical
    nesting code.

  * Note: the old critical_enter() code in kern/kern_switch.c is currently
    #ifdef'd to be compatible with both the old and new methodology. In
    STAGE-2 it will be moved entirely to MD code.

  Performance issues:

  One of the purposes of this commit is to enhance critical section
  performance, specifically to greatly reduce bus overhead to allow the
  critical section code to be used to protect per-cpu caches. These caches,
  such as Jeff's slab allocator work, can potentially operate very quickly,
  making the effective savings of the new critical section code's
  performance very significant.

  The second purpose of this commit is to allow architectures to enable
  certain interrupts while in a critical section. Specifically, the
  intention is to eventually allow certain FAST interrupts to operate
  rather than defer.

  The third purpose of this commit is to begin to clean up the
  critical_enter()/critical_exit()/cpu_critical_enter()/cpu_critical_exit()
  API which currently has serious cross pollution in MI code (in fork_exit()
  and ast() for example).

  The fourth purpose of this commit is to provide a framework that allows
  kernel-preempting software interrupts to be implemented cleanly. This is
  currently used for two forward interrupts in I386. Other architectures
  will have the choice of using this infrastructure or building the
  functionality directly into critical_enter()/critical_exit().

  Finally, this commit is designed to greatly improve the flexibility of
  various architectures to manage critical section handling, software
  interrupts, preemption, and other highly integrated architecture-specific
  details.
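  A minimal sketch (not the committed code) of the i386 behaviour this
  entry describes: critical_enter()/critical_exit() only bump a per-thread
  nesting count, and interrupts deferred while inside are processed when
  the count returns to 0. unpend_interrupts() is a placeholder name for
  whatever routine drains the deferred masks.

      void
      critical_enter(void)
      {
              curthread->td_critnest++;
      }

      void
      critical_exit(void)
      {
              curthread->td_critnest--;
              if (curthread->td_critnest == 0 && PCPU_GET(int_pending))
                      unpend_interrupts();    /* run interrupts deferred while inside */
      }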
* Don't include <isa/isavar.h> or compile code depending on it when isa  [bde, 2002-01-30, 1 file, -1/+5]
  is not configured. Including <isa/isavar.h> when it is not used is
  harmful as well as bogus, since it includes "isa_if.h" which is not
  generated when isa is not configured. This was fixed in 1999 but was
  broken by unconditionalizing PNPBIOS.
* Introduce a standard name for the lock protecting an interrupt controller  [jhb, 2001-12-20, 1 file, -0/+3]
  and its associated state variables: icu_lock with the name "icu". This
  renames the imen_mtx for x86 SMP, but also uses the lock to protect
  access to the 8259 PIC on x86 UP. This also adds an appropriate lock to
  the various Alpha chipsets, which fixes problems with Alpha SMP machines
  dropping interrupts with an SMP kernel.
* Modify the critical section API as follows:  [jhb, 2001-12-18, 1 file, -8/+8]
  - The MD functions critical_enter/exit are renamed to start with a cpu_
    prefix.
  - MI wrapper functions critical_enter/exit maintain a per-thread nesting
    count and a per-thread critical section saved state set when entering a
    critical section while at nesting level 0 and restored when exiting to
    nesting level 0. This moves the saved state out of spin mutexes so that
    interlocking spin mutexes works properly.
  - Most low-level MD code that used critical_enter/exit now use
    cpu_critical_enter/exit. MI code such as device drivers and spin
    mutexes use the MI wrappers. Note that since the MI wrappers store the
    state in the current thread, they do not have any return values or
    arguments.
  - mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
    assigned to curthread->td_savecrit during fork_exit().

  Tested on: i386, alpha
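  A minimal sketch of the MI wrappers this entry describes: the MD state is
  saved in the thread when the nesting count goes from 0 to 1 and restored
  when it drops back to 0. The exact signatures of the cpu_critical_*()
  primitives are assumed here for illustration.

      void
      critical_enter(void)
      {
              struct thread *td = curthread;

              if (td->td_critnest == 0)
                      td->td_savecrit = cpu_critical_enter();  /* save MD interrupt state */
              td->td_critnest++;
      }

      void
      critical_exit(void)
      {
              struct thread *td = curthread;

              if (td->td_critnest == 1)
                      cpu_critical_exit(td->td_savecrit);      /* restore state saved at level 0 */
              td->td_critnest--;
      }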
* Overhaul the per-CPU support a bit:  [jhb, 2001-12-11, 1 file, -17/+17]
  - The MI portions of struct globaldata have been consolidated into a MI
    struct pcpu. The MD per-CPU data are specified via a macro defined in
    machine/pcpu.h. A macro was chosen over a struct mdpcpu so that the
    interface would be cleaner (PCPU_GET(my_md_field) vs.
    PCPU_GET(md.md_my_md_field)).
  - All references to globaldata are changed to pcpu instead. In a UP
    kernel, this data was stored as global variables which is where the
    original name came from. In an SMP world this data is per-CPU and
    ideally private to each CPU outside of the context of debuggers. This
    also included combining machine/globaldata.h and machine/globals.h into
    machine/pcpu.h.
  - The pointer to the thread using the FPU on i386 was renamed from
    npxthread to fpcurthread to be identical with other architectures.
  - Make the show pcpu ddb command MI with a MD callout to display MD
    fields.
  - The globaldata_register() function was renamed to pcpu_init() and now
    init's MI fields of a struct pcpu in addition to registering it with
    the internal array and list.
  - A pcpu_destroy() function was added to remove a struct pcpu from the
    internal array and list.

  Tested on:   alpha, i386
  Reviewed by: peter, jake
* MFi386:  [bde, 2001-10-21, 1 file, -0/+2]
  - sys/pc98/pc98/npx.c 1.87 (2001/09/15; author: imp)
    I don't think pc98 has acpi at all, so ifdef the acpi attachments for
    now.

  This completes merging sys/pc98/pc98/npx.c into sys/i386/isa/npx.c so
  that the former can be removed.
* MFpc98: fundamental differences. The magic numbers for the i/o port  [bde, 2001-10-21, 1 file, -0/+17]
  and the irq are different for pc98, and are not very well handled (we use
  a historical mess of hard-coded values, values from header files and
  values from hints).
* MFpc98: all changes in sys/pc98/pc98/npx.c related to FPU_ERROR_BROKEN.  [bde, 2001-10-21, 1 file, -0/+9]
  - 1.58 (2000/09/01; author: kato)
    Fixed FPU_ERROR_BROKEN code. It had old-isa code.
  - 1.33 (1998/03/09; author: kato)
    Make FPU_ERROR_BROKEN a new-style option.
  - 1.7 (1996/10/09; author: asami)
    Make sure FPU is recognized for non-Intel CPUs.

  The log for rev.1.7 should have said something like: Added
  FPU_ERROR_BROKEN option. This forces a successful probe for exception 16,
  so that hardware with a broken FPU error signal can sort of work.
* Deleted most of npxprobe(), and merged npxprobe1() back into npxprobe().  [bde, 2001-10-16, 1 file, -127/+49]
  Use the normal interrupt handler (npx_intr()) instead of a special
  probe-time interrupt handler, although this causes problems due to the
  bus_teardown_intr() not actually even tearing down the interrupt (these
  problems were avoided by doing interrupt attachment for the special
  interrupt handler directly). Fixed minor bitrot in comments.

  The reason for the npxprobe()/npxprobe1() split mostly went away at about
  the same time it was made (in 1992 or 1993 just before the beginning of
  history). 386BSD ran all probes with interrupts completely masked, and I
  didn't want to disturb this when I added an irq probe to npxprobe(). An
  irq (not necessarily npx) must be acked for at least external npx's to
  take the cpu out of the wait state that it enters when an npx error
  occurs, so the probe must be done with a suitable irq unmasked.
  npxprobe() went to great lengths to unmask precisely the npx irq.

  Running probes with all interrupts masked was never really needed in
  FreeBSD, since FreeBSD always masked interrupts well enough using
  splhigh(), but it wasn't until rev.1.48 (1995/12/12) of autoconf.c that
  all probes were run with CPU interrupts enabled. This permits npxprobe()
  to probe its irq using normal interrupt resources. Note that most drivers
  still can't depend on this. It depends on the interrupt handler being
  fast and the irq not being shared.
* Commit my old fixes for cosmetic bugs in npxprobe() so that they aren't  [bde, 2001-10-16, 1 file, -13/+8]
  lost when the buggy code goes away completely:
  - don't assume that the npx irq number is >= 8. Rev.1.73 only reversed
    part of the hard-coding of it to 13 in rev.1.66.
  - backed out the part of rev.1.84 that added a highly confused comment
    about an enable_intr() being "highly bogus". The whole reason for
    existence of npxprobe() (separate from the main probe, npxprobe1()) is
    to handle the complications to make this enable_intr() safe.
  - backed out the part of rev.1.94 that modified npxprobe(). It mainly
    broke the enable_intr() to restore_intr(). Restoring the interrupt
    state in a nested way is precisely what is not wanted here. It was
    harmless in practice because npxprobe() is called with interrupts
    enabled, so restoring the interrupt state enables interrupts. Most of
    npxprobe() is a no-op for the same reason...
* Explicitly initialize the fpu when SSE is enabled since this no  [tegge, 2001-10-15, 1 file, -0/+5]
  longer happens as a side effect of calling npxsave.

  Reviewed by: peter, bde
* Whitespace fixes.  [jhb, 2001-09-18, 1 file, -3/+3]
* s/thread'/thread's/  [imp, 2001-09-14, 1 file, -1/+1]
* KSE Milestone 2  [julian, 2001-09-12, 1 file, -42/+43]
  Note: ALL MODULES MUST BE RECOMPILED.

  Make the kernel aware that there are smaller units of scheduling than the
  process (but only allow one thread per process at this time). This is
  functionally equivalent to the previous -current except that there is a
  thread associated with each process.

  Sorry john! (your next MFC will be a doosie!)

  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
* Add ACPI attachments.  [msmith, 2001-08-30, 1 file, -1/+2]
* Don't compile in SSE fxsave/fxrstor instructions if CPU_ENABLE_SSE isn't  [peter, 2001-08-23, 1 file, -6/+15]
  active.
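  A minimal sketch of the conditional compilation this entry describes: the
  fxsave (SSE) path exists only when CPU_ENABLE_SSE is compiled in, and is
  taken at runtime only when the CPU actually supports FXSR. The wrapper
  names follow the style of npx.c but are illustrative here.

      static void
      npx_save_state(union savefpu *addr)
      {
      #ifdef CPU_ENABLE_SSE
              if (cpu_fxsr) {
                      fxsave(addr);           /* SSE-capable CPU: save the full FXSR state */
                      return;
              }
      #endif
              fnsave(addr);                   /* plain x87 save otherwise */
      }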
* - Close races with signals and other AST's being triggered while we are in  [jhb, 2001-08-10, 1 file, -1/+1]
    the process of exiting the kernel. The ast() function now loops as long
    as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns with
    preemption disabled so that any further AST's that arrive via an
    interrupt will be delayed until the low-level MD code returns to user
    mode.
  - Use u_int's to store the tick counts for profiling purposes so that we
    do not need sched_lock just to read p_sticks. This also closes a
    problem where the call to addupc_task() could screw up the arithmetic
    due to non-atomic reads of p_sticks.
  - Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
    clear_resched(), and resched_wanted() in favor of direct bit operations
    on p_sflag.
  - Fix up locking with sched_lock some. In addupc_intr(), use sched_lock
    to ensure pr_addr and pr_ticks are updated atomically with setting
    PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing
    PS_OWEUPC. We also do not grab the lock just to test a flag.
  - Simplify the handling of Giant in ast() slightly.

  Reviewed by: bde (mostly)
* MASK_FPU_SW didn't do what it was expected to do.  [peter, 2001-07-26, 1 file, -7/+1]
* The per-cpu temporary buffers are not needed since the pcb_save areas have  [tegge, 2001-07-17, 1 file, -14/+5]
  the proper alignment. Change dummy variable in npxinit from stack to bss
  to ensure proper alignment.

  Reviewed by: bde
* Use PCPU_GET(cpuid) instead of curproc->p_oncpu.  [tegge, 2001-07-16, 1 file, -9/+9]
  Reviewed by: peter
* Fix another missed pcb_savefpu reference (inside NPX_DEBUG)  [peter, 2001-07-12, 1 file, -2/+2]
* Activate SSE/SIMD. This is the extra context switching support that  [peter, 2001-07-12, 1 file, -11/+82]
  we are required to do if we let user processes use the extra 128 bit
  registers etc.

  This is the base part of the diff I got from:
    http://www.issei.org/issei/FreeBSD/sse.html
  I believe this is by: Mr. SUZUKI Issei <issei@issei.org>
  SMP support apparently by: Takekazu KATO <kato@chino.it.okayama-u.ac.jp>
  Test code by: NAKAMURA Kazushi <kaz@kobe1995.net>, see
    http://kobe1995.net/~kaz/FreeBSD/SSE.en.html

  I have fixed a couple of style(9) deviations. I have some followup
  commits to fix a couple of non-style things.
* Fix warnings:  [peter, 2001-06-15, 1 file, -3/+3]
  908: warning: long unsigned int format, unsigned int arg (arg 3)
  887: warning: `timezero' defined but not used