path: root/sys/i386
Commit log, most recent first. Each entry shows the commit message, followed by the author, date, number of files changed, and lines removed/added in brackets.
* Removed all traces of T_ASTFLT (except for gaps where it was). It became
  unused except in dead code when ast() was split off from trap().
  [bde, 2001-02-19; 3 files changed, -3/+1]
* Changed the aston() family to operate on a specified process instead of
  always on curproc. This is needed to implement signal delivery properly
  (see a future log message for kern_sig.c).

  Debogotified the definition of aston(). aston() was defined in terms of
  signotify() (perhaps because only the latter already operated on a
  specified process), but aston() is the primitive.

  Similar changes are needed in the ia64 versions of cpu.h and trap.c. I
  didn't make them because the ia64 is missing the prerequisite changes to
  make astpending and need_resched per-process and those changes are too
  large to make without testing.
  [bde, 2001-02-19; 2 files changed, -3/+3]
* Fixed style bugs in clock.c rev.1.164 and cpu.h rev.1.52-1.53 -- declare
  tsc_present in the right places (together with other variables of the same
  linkage), and don't use messy ifdefs just to avoid exporting it in some
  cases.
  [bde, 2001-02-19; 3 files changed, -16/+3]
* Allow the superuser to prevent all interrupt harvesting on her system.
  [markm, 2001-02-18; 1 file changed, -1/+1]
* Preceed/preceeding are not English words. Use precede or preceding.
  [asmodai, 2001-02-18; 1 file changed, -2/+2]
* Fixed disordering in previous commit. "Fixed" a null comment in previous
  commit by removing it.
  [bde, 2001-02-17; 1 file changed, -1/+1]
* Allow debugging output to be controlled at per-syscall granularity. Also
  clean up debugging output in a slightly more uniform fashion. The default
  behavior remains the same (all debugging output is turned on).
  [jlemon, 2001-02-16; 3 files changed, -45/+67]
* Re-gen auto-generated files.
  [jlemon, 2001-02-16; 3 files changed, -11/+22]
* Remove dummy stub functions.
  [jlemon, 2001-02-16; 1 file changed, -3/+0]
* Add the mount syscall to Linux emulation. Also improve emulation of reboot.
  [jlemon, 2001-02-16; 2 files changed, -4/+15]
* Extend kqueue down to the device layer.
  Backwards compatible approach suggested by: peter
  [jlemon, 2001-02-15; 7 files changed, -14/+21]
* Correct 2nd argument of getnameinfo(3) to socklen_t.
  Reviewed by: itojun
  [ume, 2001-02-15; 1 file changed, -0/+1]
* Implement a unified run queue and adjust priority levels accordingly.
  - All processes go into the same array of queues, with different
    scheduling classes using different portions of the array. This allows
    user processes to have their priorities propagated up into the interrupt
    thread range if need be.
  - I chose 64 run queues as an arbitrary number that is greater than 32. We
    used to have 4 separate arrays of 32 queues each, so this may not be
    optimal. The new run queue code was written with this in mind; changing
    the number of run queues only requires changing constants in runq.h and
    adjusting the priority levels.
  - The new run queue code takes the run queue as a parameter. This is
    intended to be used to create per-cpu run queues. Implement wrappers for
    compatibility with the old interface which pass in the global run queue
    structure.
  - Group the priority level, user priority, native priority (before
    propagation) and the scheduling class into a struct priority.
  - Change any hard-coded priority levels that I found to use symbolic
    constants (TTIPRI and TTOPRI).
  - Remove the curpriority global variable and use that of curproc. This was
    used to detect when a process' priority had lowered and it should yield.
    We now effectively yield on every interrupt.
  - Activate propagate_priority(). It should now have the desired effect
    without needing to also propagate the scheduling class.
  - Temporarily comment out the call to vm_page_zero_idle() in the idle
    loop. It interfered with propagate_priority() because the idle process
    needed to do a non-blocking acquire of Giant and then other processes
    would try to propagate their priority onto it. The idle process should
    not do anything except idle. vm_page_zero_idle() will return in the form
    of an idle-priority kernel thread which is woken up at appropriate times
    by the vm system.
  - Update struct kinfo_proc to the new priority interface. Deliberately
    change its size by adjusting the spare fields. It remained the same
    size, but the layout has changed, so userland processes that use it
    would parse the data incorrectly. The size constraint should really be
    changed to an arbitrary version number. Also add a debug.sizeof sysctl
    node for struct kinfo_proc.
  [jake, 2001-02-12; 3 files changed, -22/+5]
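  For illustration only: the entry above says four priority values are
  grouped into one struct priority. A minimal sketch of such a structure,
  based solely on the fields named in the message -- the field names, types
  and ordering here are assumptions, not the committed layout:

      /*
       * Illustrative sketch; names and widths are assumptions derived
       * from the log entry, not taken from the committed header.
       */
      struct priority {
              unsigned char   pri_class;      /* scheduling class */
              unsigned char   pri_level;      /* current priority level */
              unsigned char   pri_native;     /* priority before propagation */
              unsigned char   pri_user;       /* user (timesharing) priority */
      };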
* RIP <machine/lock.h>.
  Some things needed bits of <i386/include/lock.h> - cy.c now has its own
  (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK() has been
  moved to <i386/include/apic.h> (AKA <machine/apic.h>).
  Reviewed by: jhb
  [markm, 2001-02-11; 10 files changed, -103/+79]
* Clear the reschedule flag after finding it set in userret(). This used to
  be in cpu_switch(), but I don't see any difference between doing it here.
  [jake, 2001-02-10; 1 file changed, -0/+1]
* Re-enable preemption on interrupts. My last commit accidentally reverted
  it as I was playing with some other ways of doing kernel preemption.
  [jhb, 2001-02-10; 1 file changed, -1/+8]
* - Make astpending and need_resched process attributes rather than CPU
    attributes. This is needed for ASTs to be properly posted in a
    preemptive kernel. They are backed by two new flags in p_sflag:
    PS_ASTPENDING and PS_NEEDRESCHED. They are still accessed by their old
    macros: aston(), astoff(), etc. For completeness, an astpending() macro
    has been added to check for a pending AST, and clear_resched() has been
    added to clear need_resched().
  - Rename syscall2() on the x86 back to syscall() to be consistent with
    other architectures.
  [jhb, 2001-02-10; 11 files changed, -83/+47]
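  For illustration only: one plausible shape for the per-process macros
  described above. The flag names PS_ASTPENDING and PS_NEEDRESCHED come from
  the entry; the bit values, the explicit process argument (which the
  2001-02-19 aston() entry earlier in this log later made standard), and the
  exact definitions are assumptions:

      /* Hypothetical flag bits in p->p_sflag (values are placeholders). */
      #define PS_ASTPENDING   0x0001  /* AST pending for this process */
      #define PS_NEEDRESCHED  0x0002  /* process needs a reschedule */

      /* Set, clear and test the pending-AST flag on a process. */
      #define aston(p)        ((p)->p_sflag |= PS_ASTPENDING)
      #define astoff(p)       ((p)->p_sflag &= ~PS_ASTPENDING)
      #define astpending(p)   ((p)->p_sflag & PS_ASTPENDING)

      /* Request and clear a reschedule for a process. */
      #define need_resched(p)  ((p)->p_sflag |= PS_NEEDRESCHED)
      #define clear_resched(p) ((p)->p_sflag &= ~PS_NEEDRESCHED)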
* Add a macro mtx_intr_enable() to alter a spin lock such that interrupts
  will be enabled when it is released.
  [jhb, 2001-02-10; 1 file changed, -0/+1]
* Revert the spin mutex for the cy(4) driver.
  Requested by: bde
  [jhb, 2001-02-09; 1 file changed, -69/+152]
* Catch up to the new swi API.
  [jhb, 2001-02-09; 1 file changed, -7/+6]
* - Use a spin mutex instead of COM_LOCK, since COM_LOCK is going away. The
    same name from the sio(4) driver was used and an appropriate dictionary
    item added at the top to reduce diffs.
  - Catch up to the new swi API.
  [jhb, 2001-02-09; 1 file changed, -165/+83]
* Catch up to changes to inthand_add().
  [jhb, 2001-02-09; 2 files changed, -16/+16]
* Use the MI ithread helper functions in the x86 interrupt code.
  [jhb, 2001-02-09; 6 files changed, -443/+201]
* - Catch up to the new swi API changes:
    - Use swi_* function names.
    - Use void * to hold cookies to handlers instead of struct intrhand *.
  - In sio.c, use 'driver_name' instead of "sio" as the name of the driver
    lock to minimize diffs with cy(4).
  [jhb, 2001-02-09; 1 file changed, -1/+1]
* Move the initialization of the proc lock for proc0 very early into the MD
  startup code.
  [jhb, 2001-02-09; 1 file changed, -0/+1]
* Whoops, remove an obsolete reference to gd_cpu_lockid.
  [jhb, 2001-02-09; 3 files changed, -3/+0]
* Remove unused forward_irq counters.
  [jhb, 2001-02-09; 2 files changed, -18/+0]
* Axe gd_cpu_lockid as it is no longer used.
  [jhb, 2001-02-09; 2 files changed, -2/+0]
* Change and clean the mutex lock interface.

  mtx_enter(lock, type) becomes:
    mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
    mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

  Similarly, for releasing a lock, we now have:
    mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.

  We change the caller interface for the two different types of locks
  because the semantics are entirely different for each case, and this makes
  it explicitly clear and, at the same time, it rids us of the extra `type'
  argument. The enter->lock and exit->unlock change has been made with the
  idea that we're "locking data" and not "entering locked code" in mind.

  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they can
  be passed to the lock/unlock routines by calling the corresponding
  wrappers: mtx_{lock, unlock}_flags(lock, flag(s)) and
  mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
  locks, respectively.

  Re-inline some lock acq/rel code; in the sleep lock case, we only inline
  the _obtain_lock()s in order to ensure that the inlined code fits into a
  cache line. In the spin lock case, we inline recursion and actually only
  perform a function call if we need to spin. This change has been made with
  the idea that we generally tend to avoid spin locks and that also the spin
  locks that we do have and are heavily used (i.e. sched_lock) do recurse,
  and therefore in an effort to reduce function call overhead for some
  architectures (such as alpha), we inline recursion for this case.

  Create a new malloc type for the witness code and retire from using the
  M_DEV type. The new type is called M_WITNESS and is only declared if
  WITNESS is enabled.

  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.

  Finally, caught up to the interface changes in all sys code.

  Contributors: jake, jhb, jasone (in no particular order)
  [bmilekic, 2001-02-09; 20 files changed, -196/+239]
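  As a before/after sketch of the renaming described in the entry above
  (a caller fragment, not complete code; Giant and sched_lock are used only
  as example lock names, the second being the heavily used spin lock the
  message itself mentions):

      /* Before this change: one enter/exit pair, type passed as argument. */
      mtx_enter(&Giant, MTX_DEF);             /* sleep (MTX_DEF) lock */
      mtx_exit(&Giant, MTX_DEF);
      mtx_enter(&sched_lock, MTX_SPIN);       /* spin (MTX_SPIN) lock */
      mtx_exit(&sched_lock, MTX_SPIN);

      /* After this change: the lock type is encoded in the function name. */
      mtx_lock(&Giant);
      mtx_unlock(&Giant);
      mtx_lock_spin(&sched_lock);
      mtx_unlock_spin(&sched_lock);

      /* Remaining flags go through the explicit _flags wrappers, e.g.: */
      mtx_lock_flags(&Giant, MTX_QUIET);
      mtx_unlock_flags(&Giant, MTX_QUIET);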
* Free the memory we get from devclass_get_devices and device_get_children.
  Submitted by: wpaul
  [msmith, 2001-02-08; 2 files changed, -6/+16]
* Don't enable interrupts for a kernel breakpoint or trace trap. Otherwise,
  this negates the explicit disabling of interrupts when entering the
  debugger in Debugger().
  [jhb, 2001-02-08; 1 file changed, -6/+7]
* When SMPng was first committed, we removed 'cpl' from the interrupt frame.
  Teach ddb about this as there is one less word for it to skip over when
  finding a trapframe on the interrupt frame stack.
  [jhb, 2001-02-07; 1 file changed, -1/+1]
* Reflect recently added support for SMC9432FTX cards.
  [semenu, 2001-02-07; 1 file changed, -1/+1]
* Fix typo: compatability -> compatibility. Compatability is not an existing
  English word.
  [asmodai, 2001-02-06; 4 files changed, -6/+6]
* Fix typo: seperate -> separate. Seperate does not exist in the English
  language.
  [asmodai, 2001-02-06; 3 files changed, -4/+4]
* Convert if_multiaddrs from LIST to TAILQ so that it can be traversed
  backwards in the three drivers which want to do that.
  Reviewed by: mikeh
  [phk, 2001-02-06; 2 files changed, -2/+2]
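  The point of the LIST-to-TAILQ switch above is that a BSD <sys/queue.h>
  LIST can only be walked head-to-tail, while a TAILQ also supports reverse
  traversal. A small self-contained userland sketch of the pattern (the
  entry/entryhead/link names are made up for illustration and are not the
  ifnet structures the commit touches):

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/queue.h>

      struct entry {
              int value;
              TAILQ_ENTRY(entry) link;   /* forward and backward pointers */
      };
      TAILQ_HEAD(entryhead, entry);

      int
      main(void)
      {
              struct entryhead head;
              struct entry *e;
              int i;

              TAILQ_INIT(&head);
              for (i = 0; i < 3; i++) {
                      e = malloc(sizeof(*e));
                      e->value = i;
                      TAILQ_INSERT_TAIL(&head, e, link);
              }

              /* Forward traversal, as with a LIST. */
              TAILQ_FOREACH(e, &head, link)
                      printf("forward: %d\n", e->value);

              /* Reverse traversal -- the reason for switching to TAILQ. */
              TAILQ_FOREACH_REVERSE(e, &head, entryhead, link)
                      printf("reverse: %d\n", e->value);

              return (0);
      }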
* Another round of the <sys/queue.h> FOREACH transmogriffer.
  Created with: sed(1)
  Reviewed by: md5(1)
  [phk, 2001-02-04; 3 files changed, -6/+3]
* Clean up some leftovers from the root mount cleanup that was done some
  time ago. FFS_ROOT and CD9660_ROOT are obsolete.
  [peter, 2001-02-04; 4 files changed, -10/+1]
* Mechanical change to use <sys/queue.h> macro API instead of fondling
  implementation details.
  Created with: sed(1)
  Reviewed by: md5(1)
  [phk, 2001-02-04; 5 files changed, -38/+40]
* 'device agp' was missing
  [peter, 2001-02-04; 1 file changed, -0/+4]
* Remove the LABPC driver.
  Doesn't work, no maintainer, more promising code exists elsewhere.
  [phk, 2001-02-04; 3 files changed, -1098/+0]
* Use macro API to <sys/queue.h>
  [phk, 2001-02-04; 1 file changed, -1/+1]
* This commit represents work mainly submitted by Tor and slightly modified
  by myself. It solves a serious vm_map corruption problem that can occur
  with the buffer cache when block sizes > 64K are used. This code has been
  heavily tested in -stable but only tested somewhat on -current. An MFC
  will occur in a few days. My additions include the vm_map_simplify_entry()
  and minor buffer cache boundary case fix.

  Make the buffer cache use a system map for buffer cache KVM rather than a
  normal map.

  Ensure that VM objects are not allocated for system maps. There were cases
  where a buffer map could wind up with a backing VM object -- normally
  harmless, but this could also result in the buffer cache blocking in
  places where it assumes no blocking will occur, possibly resulting in
  corrupted maps.

  Fix a minor boundary case when the buffer cache size limit is reached that
  could result in non-optimal code.

  Add vm_map_simplify_entry() calls to prevent 'creeping proliferation' of
  vm_map_entry's in the buffer cache's vm_map. Previously only a simple
  linear optimization was made. (The buffer vm_map typically has only a
  handful of vm_map_entry's. This stabilizes it at that level permanently.)

  PR: 20609
  Submitted by: tegge (Tor Egge)
  [dillon, 2001-02-04; 1 file changed, -0/+1]
* Unbreak test coverage of cy driver.
  [bde, 2001-02-01; 1 file changed, -7/+7]
* Implement preemptive scheduling of hardware interrupt threads.
  - If possible, context switch to the thread directly in sched_ithd(),
    rather than triggering a delayed ast reschedule.
  - Disable interrupts while restoring fpu state in the trap handler, in
    order to ensure that we are not preempted in the middle, which could
    cause migration to another cpu.
  Reviewed by: peter
  Tested by: peter (alpha)
  [jake, 2001-02-01; 2 files changed, -2/+11]
* Remove count for NSIO. The only places it was used were incorrect.
  (alpha-gdbstub.c got sync'ed up a bit with the i386 version)
  [peter, 2001-01-31; 1 file changed, -8/+0]
* Add hpfs and the config glue for it. It was being skipped from test
  coverage.
  [peter, 2001-01-31; 1 file changed, -0/+1]
* Zap last remaining references to (and a use of) simple_locks.
  [peter, 2001-01-31; 1 file changed, -10/+0]
* As the default MAXDSIZ and DFLDSIZ are 512MB, bump the example values to
  1GB. A box of mine is running with MAXDSIZ and DFLDSIZ increased up to
  1.5GB.
  Wishlist: It would be nice to warn if MAXTSIZ + MAXDSIZ + MAXSSIZ exceeds
  VM_MAXUSER_ADDRESS - VM_MINUSER_ADDRESS.
  [tanimura, 2001-01-31; 1 file changed, -4/+4]
* Added used include of <sys/mutex.h>. The SMP case was broken by
  incompletely converting simplelocks to mutexes (COM_LOCK() is supposed to
  hide the SMP locking internals, but it now depends on mutex interfaces
  being visible).
  [bde, 2001-01-30; 1 file changed, -0/+1]
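  For context, COM_LOCK() is a driver-local wrapper so callers do not
  open-code the SMP primitive. A hedged sketch of what such a wrapper looks
  like once it is backed by a spin mutex; the lock name sio_lock and the
  exact definitions are assumptions based on the sio(4)/cy(4) entries above,
  not text from this commit:

      /*
       * Hypothetical sketch: hide the SMP primitive behind the driver's
       * COM_LOCK()/COM_UNLOCK() macros, now implemented with a spin mutex.
       */
      #include <sys/lock.h>
      #include <sys/mutex.h>

      extern struct mtx sio_lock;

      #define COM_LOCK()      mtx_lock_spin(&sio_lock)
      #define COM_UNLOCK()    mtx_unlock_spin(&sio_lock)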