path: root/sys/kern/kern_switch.c
* Catch up to header include changes: (jhb, 2001-03-28, 1 file, -0/+1)
  - <sys/mutex.h> now requires <sys/systm.h>
  - <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
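A minimal sketch of the include order these dependencies imply; only the header names come from the commit message, the surrounding ordering (sys/param.h first, per FreeBSD convention) is illustrative:

    #include <sys/param.h>
    #include <sys/systm.h>   /* must now precede <sys/mutex.h> */
    #include <sys/lock.h>    /* now required by <sys/mutex.h> and <sys/sx.h> */
    #include <sys/mutex.h>
    #include <sys/sx.h>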
* Jake essentially rewrote this. It is not by any stretch of the imagination
  a derivative of what I did before. (peter, 2001-03-15, 1 file, -2/+0)
* Assert that the process we're trying to enqueue isn't already there.
  (des, 2001-03-11, 1 file, -0/+21)
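A minimal sketch of such an assertion, assuming a TAILQ-based queue and 2001-era proc linkage (p_procq); this is illustrative, not the committed diff:

    /* Walk the target queue and assert p is not already linked on it. */
    struct proc *p2;

    TAILQ_FOREACH(p2, rqh, p_procq)
            KASSERT(p2 != p, ("setrunqueue: proc %p already on run queue", p));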
* Add a new informative KASSERT to ensure that a process is in the SRUN state
  before we return it to cpu_switch(). (jhb, 2001-03-09, 1 file, -0/+3)
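What that KASSERT could look like; p_stat, p_pid, and p_comm follow the pre-KSE struct proc, and the exact message text is a guess:

    KASSERT(p->p_stat == SRUN,
        ("chooseproc: process %d(%s) in state %d, not SRUN",
        p->p_pid, p->p_comm, p->p_stat));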
* - Assert that the proc to return is not NULL in runq_choose, the same as
    runq_remove.
  - bzero the whole struct runq in runq_init just in case it's not
    statically allocated.
  (jake, 2001-02-24, 1 file, -0/+2)
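Hedged sketches of both changes, using the runq names from the commit; the bodies are illustrative:

    /* runq_init: clear everything, so a heap-allocated runq starts empty. */
    void
    runq_init(struct runq *rq)
    {
            int i;

            bzero(rq, sizeof(*rq));                 /* status bits and heads */
            for (i = 0; i < RQ_NQS; i++)
                    TAILQ_INIT(&rq->rq_queues[i]);  /* valid empty TAILQs */
    }

    /* runq_choose: after picking a queue flagged non-empty, assert that
     * it really holds a proc, mirroring the check in runq_remove. */
    p = TAILQ_FIRST(rqh);
    KASSERT(p != NULL, ("runq_choose: no proc on busy queue"));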
* Implement a unified run queue and adjust priority levels accordingly.
  (jake, 2001-02-12, 1 file, -181/+163)
  - All processes go into the same array of queues, with different
    scheduling classes using different portions of the array. This allows
    user processes to have their priorities propagated up into interrupt
    thread range if need be.
  - I chose 64 run queues as an arbitrary number that is greater than 32.
    We used to have 4 separate arrays of 32 queues each, so this may not
    be optimal. The new run queue code was written with this in mind;
    changing the number of run queues only requires changing constants in
    runq.h and adjusting the priority levels.
  - The new run queue code takes the run queue as a parameter. This is
    intended to be used to create per-cpu run queues. Implement wrappers
    for compatibility with the old interface which pass in the global run
    queue structure.
  - Group the priority level, user priority, native priority (before
    propagation) and the scheduling class into a struct priority.
  - Change any hard coded priority levels that I found to use symbolic
    constants (TTIPRI and TTOPRI).
  - Remove the curpriority global variable and use that of curproc. This
    was used to detect when a process' priority had lowered and it should
    yield. We now effectively yield on every interrupt.
  - Activate propagate_priority(). It should now have the desired effect
    without needing to also propagate the scheduling class.
  - Temporarily comment out the call to vm_page_zero_idle() in the idle
    loop. It interfered with propagate_priority() because the idle process
    needed to do a non-blocking acquire of Giant and then other processes
    would try to propagate their priority onto it. The idle process should
    not do anything except idle. vm_page_zero_idle() will return in the
    form of an idle priority kernel thread which is woken up at appropriate
    times by the vm system.
  - Update struct kinfo_proc to the new priority interface. Deliberately
    change its size by adjusting the spare fields. It remained the same
    size, but the layout has changed, so userland processes that use it
    would parse the data incorrectly. The size constraint should really be
    changed to an arbitrary version number. Also add a debug.sizeof sysctl
    node for struct kinfo_proc.
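A compact, compilable sketch of the unified design: one 64-entry array shared by all scheduling classes, with a status bitmap consulted via ffs() to find the highest-priority non-empty queue. The names echo the FreeBSD interfaces (struct runq, RQ_NQS) but the layout here is an assumption, not the committed code:

    #include <sys/queue.h>
    #include <strings.h>                    /* ffs() */

    #define RQ_NQS  64                      /* one array for all classes */
    #define RQB_BPW 32                      /* bits per status word */
    #define RQB_LEN (RQ_NQS / RQB_BPW)

    struct proc;                            /* opaque for this sketch */
    TAILQ_HEAD(rqhead, proc);

    struct runq {
            unsigned int  rq_status[RQB_LEN]; /* bit i set => queue i busy */
            struct rqhead rq_queues[RQ_NQS];  /* idle/timeshare/realtime/
                                                 ithread classes occupy
                                                 disjoint index ranges */
    };

    /* Lowest-numbered (highest-priority) non-empty queue, or -1 if idle. */
    static int
    runq_findbit(struct runq *rq)
    {
            int w, bit;

            for (w = 0; w < RQB_LEN; w++) {
                    bit = ffs(rq->rq_status[w]);
                    if (bit != 0)
                            return (w * RQB_BPW + bit - 1);
            }
            return (-1);
    }

Passing struct runq * as a parameter, as the commit describes, is what lets the same code later back per-cpu run queues.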
* Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
  other than curproc. (jake, 2001-01-10, 1 file, -2/+2)
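Illustrative usage of the three accessors; cpuid and switchtime are per-cpu fields of this era but are used here only as examples, not as the lines this commit touched:

    struct timeval tv = { 0, 0 };
    int id;

    id = PCPU_GET(cpuid);               /* read a per-cpu scalar */
    PCPU_SET(switchtime, tv);           /* store into a per-cpu field */
    microuptime(PCPU_PTR(switchtime));  /* pass a pointer to one */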
* Catch up to moving headers: (jhb, 2000-10-20, 1 file, -2/+1)
  - machine/ipl.h -> sys/ipl.h
  - machine/mutex.h -> sys/mutex.h
* Idle processes are always runnable, so let them stay at SRUN.
  (jhb, 2000-09-15, 1 file, -1/+0)
* Fix some printf format string warnings due to sizeof(int) != sizeof(long)
  on the alpha. (jhb, 2000-09-11, 1 file, -8/+8)
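The general shape of such a fix in a small standalone example; the alpha is LP64, so an int conversion paired with a long argument trips the format warning:

    #include <stdio.h>

    int
    main(void)
    {
            long nswitches = 1024;  /* stand-in for a kernel counter */

            /* printf("%d\n", nswitches);  warns on LP64: int vs. long */
            printf("%ld\n", nswitches);     /* match conversion to type */
            printf("%d\n", (int)nswitches); /* or cast when value fits */
            return (0);
    }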
* Major update to the way synchronization is done in the kernel.
  (jasone, 2000-09-07, 1 file, -25/+75)
  Highlights include:
  * Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The
    alpha port is still in transition and currently uses both.)
  * Per-CPU idle processes.
  * Interrupts are run in their own separate kernel threads and can be
    preempted (i386 only).
  Partially contributed by: BSDi (BSD/OS)
  Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
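A sketch of the mutex(9) style that replaces spl*() sections, written with the current mtx_lock/mtx_unlock names and four-argument mtx_init (the initial import used mtx_enter/mtx_exit); the lock and the protected data are illustrative:

    static struct mtx foo_mtx;          /* replaces splfoo()/splx(s) pairs */

    mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);

    mtx_lock(&foo_mtx);
    /* touch the data the spl level used to protect */
    mtx_unlock(&foo_mtx);

Unlike an spl level, the mutex blocks other CPUs too, which is what makes per-CPU idle processes and preemptible interrupt threads workable.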
* Commit major SMP cleanups and move the BGL (big giant lock) in the
  syscall path inward. (dillon, 2000-03-28, 1 file, -0/+2)

  A system call may select whether it needs the MP lock or not (the
  default being that it does need it).

  A great deal of conditional SMP code for various dead-ended experiments
  has been removed. 'cil' and 'cml' have been removed entirely, and the
  locking around the cpl has been removed.

  The conditional separately-locked fast-interrupt code has been removed,
  meaning that interrupts must hold the CPL now (but they pretty much had
  to anyway). Another reason for doing this is that the original
  separate-lock for interrupts just doesn't apply to the interrupt thread
  mechanism being contemplated.

  Modifications to the cpl may now ONLY occur while holding the MP lock.
  For example, if an otherwise MP safe syscall needs to mess with the cpl,
  it must hold the MP lock for the duration and must (as usual)
  save/restore the cpl in a nested fashion.

  This is precursor work for the real meat coming later: avoiding having
  to hold the MP lock for common syscalls and I/O's and interrupt threads.
  It is expected that the spl mechanisms and new interrupt threading
  mechanisms will be able to run in tandem, allowing a slow piecemeal
  transition to occur.

  This patch should result in a moderate performance improvement due to
  the considerable amount of code that has been removed from the critical
  path, especially the simplification of the spl*() calls. The real
  performance gains will come later.

  Approved by: jkh
  Reviewed by: current, bde (exception.s)
  Some work taken from: luoqi's patch
* Fix a typo and a bug. (peter, 1999-08-19, 1 file, -10/+12)
  - One RTP_PRIO_REALTIME was meant to be RTP_PRIO_IDLE.
  - RTP_PRIO_FIFO was not handled.
  - Move the usual case first for setrunqueue() etc.
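An illustrative reconstruction of the fixed classification, assuming the era's three queue sets (qs/rtqs/idqs); the exact variable names and message are guesses, but the shape matches the three fixes listed above:

    switch (p->p_rtprio.type) {
    case RTP_PRIO_NORMAL:               /* usual case tested first */
            qp = &qs[pri];
            break;
    case RTP_PRIO_REALTIME:
    case RTP_PRIO_FIFO:                 /* previously unhandled */
            qp = &rtqs[pri];
            break;
    case RTP_PRIO_IDLE:                 /* one arm wrongly said REALTIME */
            qp = &idqs[pri];
            break;
    default:
            panic("setrunqueue: invalid rtprio type");
    }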
* Extract the next runnable process selection out of cpu_switch() into a
  fairly machine independent C routine. gcc actually does a pretty good
  job of this. (peter, 1999-08-19, 1 file, -0/+204)
  Reviewed by: msmith (in principle)
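A self-contained sketch of what moving this selection into C looks like: ffs() finds the first set bit in a ready mask, and the head of the corresponding queue is dequeued. Structure and field names here are assumptions for illustration, not the committed routine:

    #include <sys/queue.h>
    #include <strings.h>                    /* ffs() */

    struct proc {
            TAILQ_ENTRY(proc) p_procq;      /* run queue linkage */
    };
    TAILQ_HEAD(rqhead, proc);

    /* Dequeue the highest-priority runnable proc, or NULL if all idle. */
    static struct proc *
    chooseproc_sketch(unsigned int *readymask, struct rqhead *queues)
    {
            struct proc *p;
            int pri;

            pri = ffs(*readymask);
            if (pri == 0)
                    return (NULL);          /* nothing runnable */
            pri--;
            p = TAILQ_FIRST(&queues[pri]);
            TAILQ_REMOVE(&queues[pri], p, p_procq);
            if (TAILQ_EMPTY(&queues[pri]))
                    *readymask &= ~(1u << pri);  /* queue drained */
            return (p);
    }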