path: root/sys/kern/kern_idle.c
Commit history (newest first; each entry gives the commit message, then author, date, files changed, and lines -removed/+added):
* Give the 4bsd scheduler the ability to wake up idle processors
  when there is new work to be done.
  MFC after: 5 days
  (julian, 2004-09-01; 1 file, -0/+18)
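  A rough sketch of the idea, for illustration only: it is not the committed
  code, and the idle_cpus_mask bookkeeping and helper name are assumptions.
  The point is that a harmless IPI pulls a halted CPU out of its idle loop so
  it re-checks the run queues:

      /*
       * Hypothetical sketch: after enqueueing new work, nudge one halted
       * CPU with a no-op interrupt so it falls out of HLT and re-checks
       * its run queues. idle_cpus_mask is assumed bookkeeping.
       */
      #include <sys/param.h>
      #include <sys/smp.h>
      #include <machine/smp.h>

      static cpumask_t idle_cpus_mask;  /* bit set while a CPU sits in cpu_idle() */

      static void
      kick_one_idle_cpu(void)
      {
              cpumask_t m = idle_cpus_mask;

              if (m != 0)
                      ipi_selected(m & -m, IPI_AST);  /* unhalt lowest idle CPU */
      }
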
* Expand the generic, but bogusly formed, copyright notice to include
  the license from /usr/src/COPYRIGHT. Since cvs annotate shows that this
  was written by jasone, julian, jhb, peter, bmilekic and obrien, and cvs
  log shows that many others may have contributed to this file, go ahead
  and use the author 'FreeBSD Project' for this file. If this is a
  problem, please notify me.
  # This eliminates the last file in the kernel with an indirect
  # reference to /usr/src/COPYRIGHT. A few more in userland remain.
  (imp, 2004-07-25; 1 file, -1/+21)
* - Change mi_switch() and sched_switch() to accept an optional thread
    to switch to. If a non-NULL thread pointer is passed in, then the
    CPU will switch to that thread directly rather than calling
    choosethread() to pick one.
  - Make sched_switch() aware of idle threads and know to do
    TD_SET_CAN_RUN() instead of sticking them on the run queue, rather
    than requiring all callers of mi_switch() to know to do this if they
    can be called from an idle thread.
  - Move constants for arguments to mi_switch() and thread_single() out
    of the middle of the function prototypes and up above into their own
    section.
  (jhb, 2004-07-02; 1 file, -3/+2)
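  With the new argument, a caller that already knows the next thread hands
  it over directly. A minimal sketch of the calling convention (locking
  simplified; the wrapper name is illustrative):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/proc.h>

      /* Switch straight to newtd; passing NULL lets choosethread() pick. */
      static void
      switch_to(struct thread *newtd)
      {
              mtx_lock_spin(&sched_lock);     /* mi_switch() needs sched_lock */
              mi_switch(SW_VOL, newtd);
              mtx_unlock_spin(&sched_lock);
      }
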
* Adjust the priority of the idle threads to be the lowest possible
  priority. This is just a cosmetic nit as the idle thread priorities
  aren't used by the schedulers.
  Reported by: bde
  (jhb, 2004-06-28; 1 file, -0/+1)
* Always set a process' state to normal when it is fully constructed in
  fork1() rather than only doing it for the RFSTOPPED case and then
  having to fix it up in other places later on.
  (jhb, 2004-02-05; 1 file, -1/+0)
* - Add a flags parameter to mi_switch. The value of flags may be SW_VOL
    or SW_INVOL. Assert that one of these is set in mi_switch() and
    properly adjust the rusage statistics. This is to simplify the large
    number of users of this interface which were previously all required
    to adjust the proper counter prior to calling mi_switch(). This also
    facilitates more switch and locking optimizations.
  - Change all callers of mi_switch() to pass the appropriate parameter
    and remove direct references to the process statistics.
  (jeff, 2004-01-25; 1 file, -2/+1)
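  Inside mi_switch() this centralizes the accounting that callers used to
  do by hand. Sketched as a fragment (field spellings follow the struct
  rusage members; the commit's exact bookkeeping may differ):

      /* Fragment from the top of mi_switch(), sketched: */
      KASSERT((flags & (SW_VOL | SW_INVOL)) != 0,
          ("mi_switch: switch must be voluntary or involuntary"));
      if (flags & SW_VOL)
              p->p_stats->p_ru.ru_nvcsw++;    /* voluntary context switch */
      else
              p->p_stats->p_ru.ru_nivcsw++;   /* involuntary context switch */
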
* Tidy up loose ends in the idle process. Call the MI cpu_idle()
  function for all platforms now.
  XXX alpha/sparc64/powerpc should fill in the function.
  Submitted by: bde
  (peter, 2003-10-19; 1 file, -37/+5)
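  The resulting MI idle loop has roughly this shape. A condensed sketch,
  not the verbatim file, and shown with the later mi_switch() calling
  convention from the 2004 entries above:

      /* Condensed sketch of the MI idle loop in kern_idle.c. */
      static void
      idle_proc(void *dummy)
      {
              for (;;) {
                      mtx_assert(&Giant, MA_NOTOWNED);
                      /* spin or halt (an MD decision) until there is work */
                      while (sched_runnable() == 0)
                              cpu_idle();
                      mtx_lock_spin(&sched_lock);
                      mi_switch(SW_VOL, NULL);
                      mtx_unlock_spin(&sched_lock);
              }
      }
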
* Halt the cpu on amd64 as well. For some strange reason, this makes a
  fair bit of difference to the power consumption and lets my cpu cool
  down enough for the temperature sensitive fan controller to completely
  stop the cpu fan at times.
  (peter, 2003-10-17; 1 file, -1/+1)
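  On x86 the halt path is essentially "enable interrupts, then HLT", with
  a re-check to avoid sleeping past a wakeup. A sketch; cpu_idle_hlt is
  the tunable as remembered from the i386 code, and details are
  approximate:

      #include <machine/cpufunc.h>    /* disable_intr()/enable_intr() */

      static int cpu_idle_hlt = 1;    /* assumed tunable (machdep.cpu_idle_hlt) */

      void
      cpu_idle(void)
      {
              if (cpu_idle_hlt) {
                      disable_intr();
                      /* re-check with interrupts closed to avoid a lost wakeup */
                      if (sched_runnable() == 0)
                              __asm __volatile("sti; hlt");  /* wakes on any interrupt */
                      else
                              enable_intr();
              }
      }
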
* Implement cpu_idle() on ia64. We put the processor in a lightweight
  halt state that minimizes power consumption while still preserving
  cache and TLB coherency. Halting the processor is not conditional at
  this time. Tested with UP and SMP kernels.
  (marcel, 2003-10-17; 1 file, -1/+1)
* Use __FBSDID().
  (obrien, 2003-06-11; 1 file, -1/+3)
* Move the flag that indicates an idle thread from the KSE to the
  thread. It was always referenced via the thread anyhow.
  Reviewed by: jhb (a LOOOOONG time ago)
  (julian, 2003-05-02; 1 file, -1/+1)
* Add some locking in for a few proc and thread fields.
  (jhb, 2003-04-17; 1 file, -1/+5)
* - Create a new scheduler api that is defined in sys/sched.h
  - Begin moving scheduler specific functionality into sched_4bsd.c
  - Replace direct manipulation of scheduler data with hooks provided by
    the new api.
  - Remove KSE specific state modifications and single runq assumptions
    from kern_switch.c
  Reviewed by: -arch
  (jeff, 2002-10-12; 1 file, -2/+3)
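  A few of the entry points the new sys/sched.h centralizes, as seen from
  MI code. An abbreviated sketch of the interface shape only; the
  spellings below follow the later thread-based interface rather than
  the KSE-era originals:

      /* Abbreviated shape of the sys/sched.h hook surface: */
      int             sched_runnable(void);           /* work pending for this CPU? */
      void            sched_add(struct thread *td);   /* enqueue on a run queue */
      void            sched_rem(struct thread *td);   /* dequeue */
      struct thread  *sched_choose(void);             /* pick next thread to run */
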
* Some kernel threads try to do significant work, and the default
  KSTACK_PAGES doesn't give them enough stack to do much before blowing
  away the pcb. This adds MI and MD code to allow the allocation of an
  alternate kstack whose size can be specified when calling
  kthread_create. Passing the value 0 prevents the alternate kstack from
  being created. Note that the ia64 MD code is missing for now, and
  PowerPC was only partially written due to the pmap.c being incomplete
  there. Though this patch does not modify anything to make use of the
  alternate kstack, acpi and usb are good candidates.
  Reviewed by: jake, peter, jhb
  (scottl, 2002-10-02; 1 file, -2/+2)
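  A caller opting into the larger stack passes a page count at creation
  time. A sketch under assumptions: the argument order is from memory,
  and the thread body, proc pointer, and RFHIGHPID flag shown are
  illustrative:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kthread.h>
      #include <sys/unistd.h>

      static void my_worker(void *arg);       /* hypothetical thread body */
      static struct proc *my_proc;

      static void
      start_my_worker(void)
      {
              int error;

              /* pages = 0 keeps the default KSTACK_PAGES stack; nonzero
               * requests an alternate kstack of that many pages. */
              error = kthread_create(my_worker, NULL, &my_proc,
                  RFHIGHPID, 4, "myworker");
              if (error != 0)
                      printf("kthread_create failed: %d\n", error);
      }
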
* Completely redo thread states.
  Reviewed by: davidxu@freebsd.org
  (julian, 2002-09-11; 1 file, -2/+2)
* Slight cleanup of some comments/whitespace.
  Make idle process state more consistent.
  Add an assert on thread state.
  Clean up idleproc/mi_switch() interaction.
  Use a local instead of referencing curthread 7 times in a row
  (I've been told curthread can be expensive on some architectures).
  Remove some commented out code.
  Add a little commented out code (completion coming soon).
  Reviewed by: jhb@freebsd.org
  (julian, 2002-08-01; 1 file, -1/+2)
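  The local-variable point is simply that curthread expands to a per-CPU
  access, so it is evaluated once and reused:

      struct thread *td = curthread;  /* one per-CPU lookup */
      /* ... then use td-> for the remaining references */
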
* Make sure the process state for the idle proc is set correctly from
  the beginning.
  (julian, 2002-07-17; 1 file, -0/+1)
* Thinking about it I came to the conclusion that the KSE states were
  incorrectly formulated. The correct states should be:
    IDLE:   On the idle KSE list for that KSEG
    RUNQ:   Linked onto the system run queue.
    THREAD: Attached to a thread and slaved to whatever state the
            thread is in.
  This means that most places where we were adjusting kse state can go
  away, as it is just moving around because the thread is. The only
  places we need to adjust the KSE state are in transition to and from
  the idle and run queues.
  Reviewed by: jhb@freebsd.org
  (julian, 2002-07-14; 1 file, -5/+1)
* Part 1 of KSE-III
  The ability to schedule multiple threads per process (on one cpu) by
  making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in
  tools).
  Reviewed by: Almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code, and contains lots of debugging stuff.
  Expect slight instability in signals.
  (julian, 2002-06-29; 1 file, -4/+15)
* Pre-KSE/M3 commit.
  This is a low-functionality change that changes the kernel to access
  the main thread of a process via the linked list of threads rather
  than assuming that it is embedded in the process. It IS still embedded
  there, but remove all the code that assumes that, in preparation for
  the next commit which will actually move it out.
  Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice,
  (julian, 2002-02-07; 1 file, -2/+2)
* Modify the critical section API as follows:
  - The MD functions critical_enter/exit are renamed to start with a
    cpu_ prefix.
  - MI wrapper functions critical_enter/exit maintain a per-thread
    nesting count and a per-thread critical section saved state set when
    entering a critical section while at nesting level 0 and restored
    when exiting to nesting level 0. This moves the saved state out of
    spin mutexes so that interlocking spin mutexes works properly.
  - Most low-level MD code that used critical_enter/exit now use
    cpu_critical_enter/exit. MI code such as device drivers and spin
    mutexes use the MI wrappers. Note that since the MI wrappers store
    the state in the current thread, they do not have any return values
    or arguments.
  - mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
    assigned to curthread->td_savecrit during fork_exit().
  Tested on: i386, alpha
  (jhb, 2001-12-18; 1 file, -1/+3)
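  Reconstructed from the description above, the MI wrappers look roughly
  like this (a sketch: td_critnest/td_savecrit follow the commit text,
  and cpu_critical_enter/exit are the renamed MD primitives):

      #include <sys/param.h>
      #include <sys/proc.h>

      void
      critical_enter(void)
      {
              struct thread *td = curthread;

              if (td->td_critnest == 0)
                      td->td_savecrit = cpu_critical_enter(); /* save MD state */
              td->td_critnest++;
      }

      void
      critical_exit(void)
      {
              struct thread *td = curthread;

              if (td->td_critnest == 1)
                      cpu_critical_exit(td->td_savecrit);     /* restore MD state */
              td->td_critnest--;
      }
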
* Overhaul the per-CPU support a bit:
  - The MI portions of struct globaldata have been consolidated into a
    MI struct pcpu. The MD per-CPU data are specified via a macro
    defined in machine/pcpu.h. A macro was chosen over a struct mdpcpu
    so that the interface would be cleaner (PCPU_GET(my_md_field) vs.
    PCPU_GET(md.md_my_md_field)).
  - All references to globaldata are changed to pcpu instead. In a UP
    kernel, this data was stored as global variables which is where the
    original name came from. In an SMP world this data is per-CPU and
    ideally private to each CPU outside of the context of debuggers.
    This also included combining machine/globaldata.h and
    machine/globals.h into machine/pcpu.h.
  - The pointer to the thread using the FPU on i386 was renamed from
    npxthread to fpcurthread to be identical with other architectures.
  - Make the show pcpu ddb command MI with a MD callout to display MD
    fields.
  - The globaldata_register() function was renamed to pcpu_init() and
    now initializes MI fields of a struct pcpu in addition to
    registering it with the internal array and list.
  - A pcpu_destroy() function was added to remove a struct pcpu from the
    internal array and list.
  Tested on: alpha, i386
  Reviewed by: peter, jake
  (jhb, 2001-12-11; 1 file, -6/+6)
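  Typical accesses through the macro interface, as a minimal sketch:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <machine/pcpu.h>       /* per this commit, PCPU_GET() lives here */

      static void
      show_my_cpu(void)
      {
              struct thread *td = PCPU_GET(curthread); /* this CPU's thread */
              u_int cpu = PCPU_GET(cpuid);             /* this CPU's id */

              printf("cpu%u running %p\n", cpu, td);
      }
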
* KSE Milestone 2
  Note: ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than
  the process. (But only allow one thread per process at this time.)
  This is functionally equivalent to the previous -current except that
  there is a thread associated with each process.
  Sorry John! (Your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  (julian, 2001-09-12; 1 file, -4/+4)
* Remove #if 0'd remnants of the old idle page zeroing.
  (jhb, 2001-09-01; 1 file, -9/+0)
* - Split out the support for per-CPU data from the SMP code. UP
    kernels have per-CPU data and gdb on the i386 at least needs access
    to it.
  - Clean up includes in kern_idle.c and subr_smp.c.
  Reviewed by: jake
  (jhb, 2001-05-10; 1 file, -14/+5)
* Undo part of the tangle of having sys/lock.h and sys/mutex.h included
  in other "system" header files.
  Also help the deprecation of lockmgr.h by making it a sub-include of
  sys/lock.h and removing sys/lockmgr.h from kernel .c files.
  Sort sys/*.h includes where possible in affected files.
  OK'ed by: bde (with reservations)
  (markm, 2001-05-01; 1 file, -2/+3)
* Overhaul of the SMP code. Several portions of the SMP kernel support
  have been made machine independent and various other adjustments have
  been made to support Alpha SMP.
  - It splits the per-process portions of hardclock() and statclock()
    off into hardclock_process() and statclock_process() respectively.
    hardclock() and statclock() call the *_process() functions for the
    current process so that UP systems will run as before. For SMP
    systems, it is simply necessary to ensure that all other processors
    execute the *_process() functions when the main clock functions are
    triggered on one CPU by an interrupt. For the alpha 4100, clock
    interrupts are delivered in a staggered broadcast fashion, so we
    simply call hardclock/statclock on the boot CPU and call the
    *_process() functions on the secondaries. For x86, we call statclock
    and hardclock as usual and then call forward_hardclock/statclock in
    the MD code to send an IPI to cause the AP's to execute
    forward_hardclock/statclock, which then call the *_process()
    functions.
  - forward_signal() and forward_roundrobin() have been reworked to be
    MI and to involve less hackery. Now the cpu doing the forward sets
    any flags, etc. and sends a very simple IPI_AST to the other cpu(s).
    AST IPIs now just basically return so that they can execute ast()
    and don't bother with setting the astpending or needresched flags
    themselves. This also removes the loop in forward_signal() as
    sched_lock closes the race condition that the loop worked around.
  - need_resched(), resched_wanted() and clear_resched() have been
    changed to take a process to act on rather than assuming curproc so
    that they can be used to implement forward_roundrobin() as described
    above.
  - Various other SMP variables have been moved to a MI subr_smp.c and a
    new header sys/smp.h declares MI SMP variables and API's. The IPI
    API's from machine/ipl.h have moved to machine/smp.h which is
    included by sys/smp.h.
  - The globaldata_register() and globaldata_find() functions as well as
    the SLIST of globaldata structures have become MI and moved into
    subr_smp.c. Also, the globaldata list is only available if SMP
    support is compiled in.
  Reviewed by: jake, peter
  Looked over by: eivind
  (jhb, 2001-04-27; 1 file, -11/+17)
* Implement a unified run queue and adjust priority levels accordingly.
  - All processes go into the same array of queues, with different
    scheduling classes using different portions of the array. This
    allows user processes to have their priorities propagated up into
    interrupt thread range if need be.
  - I chose 64 run queues as an arbitrary number that is greater than
    32. We used to have 4 separate arrays of 32 queues each, so this may
    not be optimal. The new run queue code was written with this in
    mind; changing the number of run queues only requires changing
    constants in runq.h and adjusting the priority levels.
  - The new run queue code takes the run queue as a parameter. This is
    intended to be used to create per-cpu run queues. Implement wrappers
    for compatibility with the old interface which pass in the global
    run queue structure.
  - Group the priority level, user priority, native priority (before
    propagation) and the scheduling class into a struct priority.
  - Change any hard coded priority levels that I found to use symbolic
    constants (TTIPRI and TTOPRI).
  - Remove the curpriority global variable and use that of curproc. This
    was used to detect when a process' priority had lowered and it
    should yield. We now effectively yield on every interrupt.
  - Activate propagate_priority(). It should now have the desired effect
    without needing to also propagate the scheduling class.
  - Temporarily comment out the call to vm_page_zero_idle() in the idle
    loop. It interfered with propagate_priority() because the idle
    process needed to do a non-blocking acquire of Giant and then other
    processes would try to propagate their priority onto it. The idle
    process should not do anything except idle. vm_page_zero_idle() will
    return in the form of an idle priority kernel thread which is woken
    up at appropriate times by the vm system.
  - Update struct kinfo_proc to the new priority interface. Deliberately
    change its size by adjusting the spare fields. It remained the same
    size, but the layout has changed, so userland processes that use it
    would parse the data incorrectly. The size constraint should really
    be changed to an arbitrary version number. Also add a debug.sizeof
    sysctl node for struct kinfo_proc.
  (jake, 2001-02-12; 1 file, -0/+2)
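  The parameterized run queue interface from the third item, sketched.
  Prototypes are approximate, and the queued entity was struct proc at
  this point in history (later kse, then thread):

      #include <sys/param.h>
      #include <sys/runq.h>

      static struct runq my_runq;     /* e.g. a would-be per-cpu queue */

      static void
      runq_demo(struct proc *p)
      {
              runq_init(&my_runq);
              runq_add(&my_runq, p);          /* insert at its priority level */
              p = runq_choose(&my_runq);      /* highest-priority entry */
              runq_remove(&my_runq, p);
      }
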
* - Point out in the comments that we don't lock anything during the
    idle setup because only the boot processor should be running.
  - Initialize curproc to point to each CPU's respective idleproc if
    their curproc is NULL.
  - Keep track of the number of context switches performed by idleproc.
  (jhb, 2001-02-09; 1 file, -1/+6)
* Change and clean the mutex lock interface.
  mtx_enter(lock, type) becomes:
    mtx_lock(lock)       for sleep locks (MTX_DEF-initialized locks)
    mtx_lock_spin(lock)  for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have:
    mtx_unlock(lock)      for MTX_DEF and
    mtx_unlock_spin(lock) for MTX_SPIN.
  We change the caller interface for the two different types of locks
  because the semantics are entirely different for each case, and this
  makes it explicitly clear and, at the same time, it rids us of the
  extra `type' argument.
  The enter->lock and exit->unlock change has been made with the idea
  that we're "locking data" and not "entering locked code" in mind.
  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they
  can be passed to the lock/unlock routines by calling the corresponding
  wrappers:
    mtx_{lock, unlock}_flags(lock, flag(s)) and
    mtx_{lock, unlock}_spin_flags(lock, flag(s))
  for MTX_DEF and MTX_SPIN locks, respectively.
  Re-inline some lock acq/rel code; in the sleep lock case, we only
  inline the _obtain_lock()s in order to ensure that the inlined code
  fits into a cache line. In the spin lock case, we inline recursion and
  actually only perform a function call if we need to spin. This change
  has been made with the idea that we generally tend to avoid spin locks
  and that also the spin locks that we do have and are heavily used
  (i.e. sched_lock) do recurse, and therefore in an effort to reduce
  function call overhead for some architectures (such as alpha), we
  inline recursion for this case.
  Create a new malloc type for the witness code and retire from using
  the M_DEV type. The new type is called M_WITNESS and is only declared
  if WITNESS is enabled.
  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.
  Finally, caught up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
  (bmilekic, 2001-02-09; 1 file, -2/+2)
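  Usage after the rename, as a minimal sketch (the three-argument
  mtx_init() shown is the signature of this era; it later grew a fourth
  argument, and my_mtx is an illustrative name):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      static struct mtx my_mtx;

      static void
      my_init(void)
      {
              mtx_init(&my_mtx, "my_mtx", MTX_DEF);   /* sleep mutex */
      }

      static void
      my_op(void)
      {
              mtx_lock(&my_mtx);      /* was: mtx_enter(&my_mtx, MTX_DEF) */
              /* ... touch the data this mutex protects ... */
              mtx_unlock(&my_mtx);    /* was: mtx_exit(&my_mtx, MTX_DEF) */
      }
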
* Catch up to moving headers:
  - machine/ipl.h -> sys/ipl.h
  - machine/mutex.h -> sys/mutex.h
  (jhb, 2000-10-20; 1 file, -1/+1)
* Axe the idle_event eventhandler, and add a MD cpu_idle function used
  for things such as halting CPU's, idling CPU's, etc.
  Discussed with: msmith
  (jhb, 2000-10-19; 1 file, -4/+3)
* EVENTHANDLER_INVOKE() takes two arguments.
  (peter, 2000-10-18; 1 file, -1/+1)
* Don't needlessly pass the diagnostic counter to the idle_event event
  handlers.
  (jhb, 2000-10-18; 1 file, -1/+1)
* - Wrap the sanity checks for staying in the idle loop for absurdly
    long amounts of time in #ifdef DIAGNOSTIC.
  - Call vm_page_zero_idle() during the idle loop.
  (jhb, 2000-10-17; 1 file, -6/+12)
* - Heavyweight interrupt threads on the alpha for device I/O
    interrupts.
  - Make softinterrupts (SWI's) almost completely MI, and divorce them
    completely from the x86 hardware interrupt code.
  - The ihandlers array is now gone. Instead, there is a MI shandlers
    array that just contains SWI handlers.
  - Most of the former machine/ipl.h files have moved to a new
    sys/ipl.h.
  - Stub out all the spl*() functions on all architectures.
  Submitted by: dfr
  (jhb, 2000-10-05; 1 file, -2/+1)
* Create an event (idle_event) which is invoked every time around the
  idle loop. Machine-dependent code can elect to e.g. take power-saving
  actions when this event is invoked.
  (msmith, 2000-09-22; 1 file, -0/+7)
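  Hooking such an event uses the standard eventhandler(9) machinery,
  roughly as below. The hook and registration function are hypothetical,
  and note that idle_event itself was later removed in favor of
  cpu_idle(), per the 2000-10-19 entry above:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/eventhandler.h>

      /* Hypothetical MD hook run each trip around the idle loop. */
      static void
      my_idle_hook(void *arg)
      {
              /* e.g. drop the CPU into a low-power state briefly */
      }

      static void
      my_register(void)
      {
              EVENTHANDLER_REGISTER(idle_event, my_idle_hook, NULL,
                  EVENTHANDLER_PRI_ANY);
      }
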
* Remove some commented out cruft.
  (jhb, 2000-09-15; 1 file, -7/+0)
* - Add a new process flag P_NOLOAD that marks a process that should be
    ignored during load average calculations.
  - Set this flag for the idle processes and the softinterrupt process.
  (jhb, 2000-09-15; 1 file, -0/+1)
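  On the consuming side, the load-average computation simply skips
  flagged processes. A condensed sketch of the era's loadav() loop in
  kern_synch.c (locking omitted):

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/queue.h>

      static int
      count_runnable(void)
      {
              struct proc *p;
              int nrun = 0;

              LIST_FOREACH(p, &allproc, p_list) {
                      if (p->p_flag & P_NOLOAD)
                              continue;       /* idle/softinterrupt procs */
                      if (p->p_stat == SRUN || p->p_stat == SIDL)
                              nrun++;
              }
              return (nrun);
      }
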
* Idle processes are always runnable, so let them stay at SRUN.
  (jhb, 2000-09-15; 1 file, -2/+1)
* Major update to the way synchronization is done in the kernel.
  Highlights include:
  - Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The
    alpha port is still in transition and currently uses both.)
  - Per-CPU idle processes.
  - Interrupts are run in their own separate kernel threads and can be
    preempted (i386 only).
  Partially contributed by: BSDi (BSD/OS)
  Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
  (jasone, 2000-09-07; 1 file, -0/+108)