path: root/sys/kern/kern_intr.c
Commit message | Author | Age | Files | Lines
* Add some locking in for a few proc and thread fields. (jhb, 2003-04-17, 1 file, -1/+2)
* Use local struct proc variables to reduce repeated td->td_proc dereferences (jhb, 2003-04-17, 1 file, -3/+5)
  and improve readability.
* Adjust a KTR trace to log thread state instead of proc state as that is (jhb, 2003-04-17, 1 file, -1/+1)
  more relevant.
* Add a WITNESS_WARN() call to verify that we hold no locks after running (jhb, 2003-03-04, 1 file, -0/+1)
  a handler from an interrupt thread.
* Back out M_* changes, per decision of the TRB. (imp, 2003-02-19, 1 file, -2/+2)
  Approved by: trb
* Move a bunch of flags from the KSE to the thread. (julian, 2003-02-17, 1 file, -1/+1)
  I was in two minds as to where to put them in the first case.. I should
  have listened to the other mind.
  Submitted by: parts by davidxu@
  Reviewed by: jeff@ mini@
* Fix crash dumps on ata and scsi. (alfred, 2003-02-14, 1 file, -1/+2)
  To fix scsi, don't wait for ithreads if we're dumping, it makes the
  debugger sad. To fix ata, use what appears to be a polling method if
  we're dumping; I stole this from tmm but added code to ensure that this
  change is only in effect while dumping.
  Tested by: des
* Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0. (alfred, 2003-01-21, 1 file, -2/+2)
  Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
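
A minimal before/after sketch of the allocation-flag convention this entry describes; the size and the M_TEMP malloc type are illustrative, not taken from kern_intr.c:

      #include <sys/param.h>
      #include <sys/malloc.h>

      static void *
      example_alloc(int can_sleep)
      {
              if (can_sleep)
                      return (malloc(128, M_TEMP, 0));        /* 0: sleeping is allowed */
              return (malloc(128, M_TEMP, M_NOWAIT));         /* single non-sleeping flag */
      }
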
* Don't put a newline in KTR traces. (jake, 2002-12-28, 1 file, -1/+1)
* Instead of (sizeof(source_buffer) - 1) bytes, copy at most (robert, 2002-10-17, 1 file, -1/+1)
  (sizeof(destination_buffer) - 1) bytes into the destination buffer.
  This was not harmful because they currently both provide space for
  (MAXCOMLEN + 1) bytes.
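
A generic sketch of the bound being corrected; the helper and its arguments are hypothetical, not the code in kern_intr.c:

      #include <sys/param.h>
      #include <sys/systm.h>

      static void
      copy_comm(char *dst, size_t dstlen, const char *src)
      {
              /*
               * Wrong (what the change removes): bounding the copy by the
               * size of the *source* buffer.  Right: bound it by the
               * destination, leaving room for the terminating NUL.
               */
              strncpy(dst, src, dstlen - 1);
              dst[dstlen - 1] = '\0';
      }

Here both buffers happen to hold (MAXCOMLEN + 1) bytes, which is why the old bound was harmless in practice.
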
* Use strlcpy() instead of strncpy() to copy NUL terminated strings (robert, 2002-10-17, 1 file, -1/+2)
  for safety and consistency.
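
And the strlcpy() form this entry switches to, continuing the sketch above under the same assumptions:

      static void
      copy_comm_strlcpy(char *dst, size_t dstlen, const char *src)
      {
              /* strlcpy() always NUL-terminates and takes the full buffer size. */
              strlcpy(dst, src, dstlen);
      }
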
* Some kernel threads try to do significant work, and the default KSTACK_PAGES (scottl, 2002-10-02, 1 file, -1/+1)
  doesn't give them enough stack to do much before blowing away the pcb.
  This adds MI and MD code to allow the allocation of an alternate kstack
  whose size can be specified when calling kthread_create. Passing the
  value 0 prevents the alternate kstack from being created. Note that the
  ia64 MD code is missing for now, and PowerPC was only partially written
  due to the pmap.c being incomplete there. Though this patch does not
  modify anything to make use of the alternate kstack, acpi and usb are
  good candidates.
  Reviewed by: jake, peter, jhb
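
A hedged sketch of a caller asking for a larger alternate kstack; the thread function, the page count, and the exact kthread_create() argument order of this era are assumptions:

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/kthread.h>
      #include <sys/proc.h>

      static struct proc *example_proc;

      static void
      example_thread(void *arg)
      {
              /* ... stack-hungry work ... */
      }

      static void
      example_start(void)
      {
              /*
               * 4 = alternate kstack size in pages; passing 0 keeps the
               * default KSTACK_PAGES stack.
               */
              kthread_create(example_thread, NULL, &example_proc, 0, 4,
                  "example");
      }
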
* Be consistent about "static" functions: if the function is marked (phk, 2002-09-28, 1 file, -1/+1)
  static in its prototype, mark it static at the definition too.
  Inspired by: FlexeLint warning #512
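
The rule in miniature; the function name is illustrative:

      static void     example_handler(void *arg);     /* prototype marked static */

      static void                                     /* ... so the definition is too */
      example_handler(void *arg)
      {
              (void)arg;
      }
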
* Removed unneeded include (missed in last revision). (jake, 2002-09-22, 1 file, -2/+0)
* Moved netisr code from kern/kern_intr.c to net/netisr.c as threatened in a (jake, 2002-09-22, 1 file, -79/+1)
  comment.
* Completely redo thread states. (julian, 2002-09-11, 1 file, -7/+9)
  Reviewed by: davidxu@freebsd.org
* Remove extra ';' (davidxu, 2002-09-06, 1 file, -1/+1)
* Slight cleanup of some comments/whitespace. (julian, 2002-08-01, 1 file, -8/+9)
  Make idle process state more consistent.
  Add an assert on thread state.
  Clean up idleproc/mi_switch() interaction.
  Use a local instead of referencing curthread 7 times in a row
  (I've been told curthread can be expensive on some architectures).
  Remove some commented out code.
  Add a little commented out code (completion coming soon).
  Reviewed by: jhb@freebsd.org
* Part 1 of KSE-III (julian, 2002-06-29, 1 file, -13/+14)
  The ability to schedule multiple threads per process (on one cpu) by
  making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in
  tools).
  Reviewed by: Almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code, and contains lots of debugging stuff.
  Expect slight instability in signals..
* diff reduction from KSE to keep WW-III from happening on -current (julian, 2002-05-29, 1 file, -1/+2)
* - Set the base priority of an ithread that has no handlers when we set its (jhb, 2002-04-11, 1 file, -3/+5)
    normal priority.
  - Lock sched_lock while we dink with the priorities.
  - Remove a few extra blank lines.
* Don't lock the ithread lock in ithread_create(). The ithread isn't on any (jhb, 2002-04-09, 1 file, -2/+0)
  lists or in any tables yet so there are no other references to it, thus
  we don't need to lock it.
* Change callers of mtx_init() to pass in an appropriate lock type name. In (jhb, 2002-04-04, 1 file, -1/+1)
  most cases NULL is passed, but in some cases such as network driver
  locks (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name
  is used.
  Tested on: i386, alpha, sparc64
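
A small sketch of the two call styles mentioned, assuming the four-argument mtx_init(lock, name, type, opts) form of this era; the mutexes and their names are illustrative:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      static struct mtx example_mtx;
      static struct mtx example_net_mtx;

      static void
      example_mtx_setup(void)
      {
              /* The common case: no separate lock type, so pass NULL. */
              mtx_init(&example_mtx, "example driver", NULL, MTX_DEF);

              /* Network driver locks share a type via MTX_NETWORK_LOCK. */
              mtx_init(&example_net_mtx, "example ifnet", MTX_NETWORK_LOCK, MTX_DEF);
      }
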
* Remove __P. (alfred, 2002-03-19, 1 file, -1/+1)
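
What dropping the __P() compatibility macro looks like on a single prototype; ithread_loop() is used for illustration and may not be the declaration this one-line change actually touched:

      #include <sys/cdefs.h>

      /* Before: pre-ANSI-compatible prototype wrapper. */
      static void     ithread_loop __P((void *));

      /* After: a plain ANSI C prototype. */
      static void     ithread_loop(void *);
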
* Make the DEVICE_POLLING code compile with -Werror and in LINT (luigi, 2002-03-09, 1 file, -0/+4)
* MFS: synchronize the code with the version in -stable, specifically: (luigi, 2002-02-11, 1 file, -1/+1)
  + SYSCTL_ULONG -> SYSCTL_UINT
  + some procedure renaming and variable rearrangement
  + fix the 'interface going deaf' problem same as in -stable.
* In a threaded world, different priorities become properties of (julian, 2002-02-11, 1 file, -5/+5)
  different entities. Make it so.
  Reviewed by: jhb@freebsd.org (john baldwin)
* Pre-KSE/M3 commit. (julian, 2002-02-07, 1 file, -1/+1)
  This is a low-functionality change that changes the kernel to access the
  main thread of a process via the linked list of threads rather than
  assuming that it is embedded in the process. It IS still embedded there,
  but remove all the code that assumes that in preparation for the next
  commit which will actually move it out.
  Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice,
* Change the preemption code for software interrupt thread schedules and (jhb, 2002-01-05, 1 file, -6/+7)
  mutex releases to not require flags for the cases when preemption is
  not allowed:

  The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
  switching to a higher priority thread on mutex release and swi schedule,
  respectively, when that switch is not safe. Now that the critical
  section API maintains a per-thread nesting count, the kernel can easily
  check whether or not it should switch without relying on flags from the
  programmer. This fixes a few bugs in that all current callers of
  swi_sched() used SWI_NOSWITCH, when in fact, only the ones called from
  fast interrupt handlers and the swi_sched of softclock needed this flag.
  Note that to ensure that swi_sched()'s in clock and fast interrupt
  handlers do not switch, these handlers have to be explicitly wrapped in
  critical_enter/exit pairs. Presently, just wrapping the handlers is
  sufficient, but in the future with the fully preemptive kernel, the
  interrupt must be EOI'd before critical_exit() is called.
  (critical_exit() can switch due to a deferred preemption in a fully
  preemptive kernel.)

  I've tested the changes to the interrupt code on i386 and alpha. I have
  not tested ia64, but the interrupt code is almost identical to the alpha
  code, so I expect it will work fine. PowerPC and ARM do not yet have
  interrupt code in the tree so they shouldn't be broken. Sparc64 is
  broken, but that's been ok'd by jake and tmm who will be fixing the
  interrupt code for sparc64 shortly.
  Reviewed by: peter
  Tested on: i386, alpha
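
A hedged sketch of the wrapping pattern described here: a fast interrupt handler that schedules a software interrupt runs inside an explicit critical section, so swi_sched() no longer needs a SWI_NOSWITCH-style flag. The handler and cookie names are illustrative.

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/interrupt.h>

      static void *example_swi_cookie;        /* obtained from swi_add() at attach time */

      static void
      example_fast_handler(void *arg)
      {
              critical_enter();
              /* ... acknowledge the hardware ... */
              swi_sched(example_swi_cookie, 0);       /* cannot switch while critical */
              critical_exit();        /* a deferred preemption may occur here */
      }
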
* Device Polling code for -current. (luigi, 2001-12-14, 1 file, -0/+14)
  Non-SMP, i386-only, no polling in the idle loop at the moment.

  To use this code you must compile a kernel with
        options DEVICE_POLLING
  and at runtime enable polling with
        sysctl kern.polling.enable=1

  The percentage of CPU reserved to userland can be set with
        sysctl kern.polling.user_frac=NN (default is 50)
  while the remainder is used by polling device drivers and netisr's.
  These are the only two variables that you should need to touch. There
  are a few more parameters in kern.polling but the default values are
  adequate for all purposes. See the code in kern_poll.c for more details
  on them.

  Polling in the idle loop will be implemented shortly by introducing a
  kernel thread which does the job. Until then, the amount of CPU
  dedicated to polling will never exceed (100-user_frac).
  The equivalent (actually, better) code for -stable is at
        http://info.iet.unipi.it/~luigi/polling/
  and also supports polling in the idle loop.

  NOTE to Alpha developers: There is really nothing in this code that is
  i386-specific. If you move the 2 lines supporting the new option from
  sys/conf/{files,options}.i386 to sys/conf/{files,options} I am pretty
  sure that this should work on the Alpha as well, just that I do not
  have a suitable test box to try it. If someone feels like trying it,
  I would appreciate it.

  NOTE to other developers: sure some things could be done better, and as
  always I am open to constructive criticism, which a few of you have
  already given and I greatly appreciated. However, before proposing
  radical architectural changes, please take some time to possibly try
  out this code, or at the very least read the comments in kern_poll.c,
  especially re. the reason why I am using a soft netisr and cannot (I
  believe) replace it with a simple timeout.

  Quick description of files touched by this commit:
        sys/conf/files.i386      new file kern/kern_poll.c
        sys/conf/options.i386    new option
        sys/i386/i386/trap.c     poll in trap (disabled by default)
        sys/kern/kern_clock.c    initialization and hardclock hooks.
        sys/kern/kern_intr.c     minor swi_net changes
        sys/kern/kern_poll.c     the bulk of the code.
        sys/net/if.h             new flag
        sys/net/if_var.h         declaration for functions used in device drivers.
        sys/net/netisr.h         NETISR_POLL
        sys/dev/fxp/if_fxp.c
        sys/dev/fxp/if_fxpvar.h
        sys/pci/if_dc.c
        sys/pci/if_dcreg.h
        sys/pci/if_sis.c
        sys/pci/if_sisreg.h      device driver modifications
* Repeat after me -- "Use of ANSI string concatenation can be bad." (obrien, 2001-12-10, 1 file, -16/+16)
  In this case, C99's __func__ is properly defined as:
        static const char __func__[] = "function-name";
  and GCC 3.1 will not allow it to be used in bogus string concatenation.
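
A minimal illustration: because __func__ is an array object rather than a string literal, it cannot take part in compile-time string concatenation, so the format string carries a %s instead:

      #include <stdio.h>

      static void
      example(void)
      {
              /* Rejected by a strict C99 compiler:
               *      printf(__func__ ": something went wrong\n");
               */
              printf("%s: something went wrong\n", __func__);
      }
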
* KSE Milestone 2 (julian, 2001-09-12, 1 file, -28/+40)
  Note ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than
  the process (but only allow one thread per process at this time). This
  is functionally equivalent to the previous -current except that there
  is a thread associated with each process.
  Sorry john! (your next MFC will be a doosie!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
* Match the declaration in net/netisr.h. (obrien, 2001-09-03, 1 file, -1/+1)
  Submitted by: gcc 3.0.1
* - Close races with signals and other AST's being triggered while we are in (jhb, 2001-08-10, 1 file, -3/+3)
    the process of exiting the kernel. The ast() function now loops as
    long as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns
    with preemption disabled so that any further AST's that arrive via an
    interrupt will be delayed until the low-level MD code returns to user
    mode.
  - Use u_int's to store the tick counts for profiling purposes so that
    we do not need sched_lock just to read p_sticks. This also closes a
    problem where the call to addupc_task() could screw up the arithmetic
    due to non-atomic reads of p_sticks.
  - Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
    clear_resched(), and resched_wanted() in favor of direct bit
    operations on p_sflag.
  - Fix up locking with sched_lock some. In addupc_intr(), use sched_lock
    to ensure pr_addr and pr_ticks are updated atomically with setting
    PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing
    PS_OWEUPC. We also do not grab the lock just to test a flag.
  - Simplify the handling of Giant in ast() slightly.
  Reviewed by: bde (mostly)
* Make the schedlock saved critical section state a per-thread property. (jhb, 2001-06-30, 1 file, -4/+0)
* Count the switch when an ithread goes idle as a voluntary context switch. (jhb, 2001-06-25, 1 file, -0/+1)
  Submitted by: bde
* Preemption by an interrupt thread is an involuntary switch, not a voluntary (jhb, 2001-06-20, 1 file, -1/+1)
  one.
  Pointy-hat to: me
* Add INTR_TYPE_AV so that we can get to the PI_AV priority in the ithread (peter, 2001-06-16, 1 file, -1/+4)
  handlers. This is beneficial since it means that pcm's MPSAFE handler
  can get run before things that will block on Giant in the shared irq
  case.
* Clean up the code exporting interrupt statistics via sysctl a bit: (tmm, 2001-06-01, 1 file, -0/+30)
  - move the sysctl code to kern_intr.c
  - do not use INTRCNT_COUNT, but rather eintrcnt - intrcnt to determine
    the length of the intrcnt array
  - move the declarations of intrnames, eintrnames, intrcnt and eintrcnt
    from machine-dependent include files to sys/interrupt.h
  - remove the hw.nintr sysctl, it is not needed.
  - fix various style bugs
  Requested by: bde
  Reviewed by: bde (some time ago)
* - Remove the global ithread_list_lock spin lock in favor of per-ithread (jhb, 2001-05-17, 1 file, -37/+30)
    sleep locks.
  - Delay returning from ithread_remove_handler() until we are certain
    that the interrupt handler being removed has in fact been removed
    from the ithread.
  - XXX: There is still a problem in that nothing protects the kernel
    from adding a new handler while the ithread is running, though with
    our current architectures this is not a problem.
  Requested by: gibbs (2)
* Remove unneeded includes of sys/ipl.h and machine/ipl.h. (jhb, 2001-05-15, 1 file, -1/+0)
* Overhaul of the SMP code. Several portions of the SMP kernel support have (jhb, 2001-04-27, 1 file, -1/+1)
  been made machine independent and various other adjustments have been
  made to support Alpha SMP.
  - It splits the per-process portions of hardclock() and statclock() off
    into hardclock_process() and statclock_process() respectively.
    hardclock() and statclock() call the *_process() functions for the
    current process so that UP systems will run as before. For SMP
    systems, it is simply necessary to ensure that all other processors
    execute the *_process() functions when the main clock functions are
    triggered on one CPU by an interrupt. For the alpha 4100, clock
    interrupts are delivered in a staggered broadcast fashion, so we
    simply call hardclock/statclock on the boot CPU and call the
    *_process() functions on the secondaries. For x86, we call statclock
    and hardclock as usual and then call forward_hardclock/statclock in
    the MD code to send an IPI to cause the AP's to execute
    forward_hardclock/statclock which then call the *_process()
    functions.
  - forward_signal() and forward_roundrobin() have been reworked to be MI
    and to involve less hackery. Now the cpu doing the forward sets any
    flags, etc. and sends a very simple IPI_AST to the other cpu(s). AST
    IPIs now just basically return so that they can execute ast() and
    don't bother with setting the astpending or needresched flags
    themselves. This also removes the loop in forward_signal() as
    sched_lock closes the race condition that the loop worked around.
  - need_resched(), resched_wanted() and clear_resched() have been
    changed to take a process to act on rather than assuming curproc so
    that they can be used to implement forward_roundrobin() as described
    above.
  - Various other SMP variables have been moved to a MI subr_smp.c and a
    new header sys/smp.h declares MI SMP variables and API's. The IPI
    API's from machine/ipl.h have moved to machine/smp.h which is
    included by sys/smp.h.
  - The globaldata_register() and globaldata_find() functions as well as
    the SLIST of globaldata structures has become MI and moved into
    subr_smp.c. Also, the globaldata list is only available if SMP
    support is compiled in.
  Reviewed by: jake, peter
  Looked over by: eivind
* Catch up to header include changes: (jhb, 2001-03-28, 1 file, -0/+1)
  - <sys/mutex.h> now requires <sys/systm.h>
  - <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
* Catch up to the mtx_saveintr -> mtx_savecrit change. (jhb, 2001-03-28, 1 file, -3/+3)
* Use (..., "%s", foo) instead of (..., foo) to avoid a warning about a (jhb, 2001-03-24, 1 file, -1/+1)
  non-constant format string when calling kthread_create() to create an
  ithread.
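
The general shape of that fix, shown with printf() for brevity; the name variable is illustrative:

      #include <stdio.h>

      static void
      log_name(const char *name)
      {
              /* Warns, and misbehaves if "name" ever contains a '%':
               *      printf(name);
               */
              printf("%s", name);     /* the format string is now a constant */
      }
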
* Ok, the kernel will panic in kmem_malloc() if the kernel map is full, so (jhb, 2001-03-02, 1 file, -4/+0)
  malloc with M_WAITOK can't actually return NULL. I wish I could get two
  people to give me the same answer about this when I ask...
  Submitted by: jake
* - Check to see if malloc() returned NULL even with M_WAITOK. (jhb, 2001-03-02, 1 file, -1/+6)
  - Add a KASSERT() to ensure an ithread has a backing kernel thread when
    we schedule it.
  - Don't attempt to preemptively switch to an ithread if p_stat of
    curproc is not SRUN.
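
A hedged sketch of the second item; ithd and it_proc come from the commit messages in this log, while the exact wording and placement of the assertion are assumptions:

      /* An ithread must have a backing kernel thread before we schedule it. */
      KASSERT(ithd->it_proc != NULL,
          ("ithread_schedule: ithread has no kernel thread"));
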
* Sigh. Try to get priorities sorted out. Don't bother trying to (jake, 2001-02-28, 1 file, -0/+1)
  update native priority, it is difficult to get right and likely to end
  up horribly wrong. Use an honestly wrong fixed value that seems to
  work; PUSER for user threads, and the interrupt priority for ithreads.
  Set it once when the process is created and forget about it.
  Suggested by: bde
  Pointy hat: me
* Work around a race condition where an interrupt handler can be removed from (jhb, 2001-02-22, 1 file, -3/+34)
  an interrupt thread while the interrupt thread is blocked on Giant
  waiting to execute the interrupt handler being removed. The result was
  that the intrhand structure would be free'd, and we would call
  0xdeadc0de. The work around is to check to see if the interrupt thread
  is idle when removing a handler. If not, then we mark the interrupt
  handler as being dead using the new IH_DEAD flag and don't remove it
  from the interrupt threads' list of handlers. When the interrupt
  thread resumes, it will see a dead handler while traversing the list
  of handlers and will remove the handler then.
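
A hedged sketch of the removal-side logic this entry describes. IH_DEAD, struct intrhand, and the ithread concepts come from the commit messages above; ithread_is_idle(), the list helper, and the field names are stand-ins, not the real kern_intr.c code.

      if (!ithread_is_idle(ithd)) {
              /* The ithread may still run this handler: defer the free. */
              handler->ih_flags |= IH_DEAD;
              return (0);
      }
      /* The ithread is idle, so unlink and free the handler right away. */
      remove_handler_from_list(ithd, handler);
      free(handler, M_ITHREAD);
      return (0);
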
* Just use the ithread->it_proc directly in a KTR tracepoint instead of (jhb, 2001-02-22, 1 file, -2/+1)
  assigning a local var to it and using it, as otherwise the local var
  wasn't used, and generated a warning in the !KTR case.
  Noticed by: bde