Commit history for sys/kern/sched_ule.c
Commit message (author, date, files, lines -/+)
* - When choosing a thread on the run queue, check to see if its nice is (jeff, 2004-10-30, 1, -2/+4)
* - In sched_prio() check to see if the kse is assigned to a runq as the (jeff, 2004-10-30, 1, -1/+1)
* Fix whitespace botch that only showed up in the commit message diff :-/ (julian, 2004-10-05, 1, -1/+1)
* When preempting a thread, put it back on the HEAD of its run queue (see the sketch after this list). (julian, 2004-10-05, 1, -6/+14)
* Oops, left out part of the diff. (julian, 2004-10-05, 1, -0/+2)
* Use some macros to track available scheduler slots to allow (julian, 2004-10-05, 1, -16/+30)
* Clean up thread runq accounting a bit. (julian, 2004-09-16, 1, -0/+2)
* Revert the previous round of changes to td_pinned. The scheduler isn't (scottl, 2004-09-11, 1, -24/+2)
* Try committing from the right tree this time (julian, 2004-09-11, 1, -3/+3)
* Make up my mind if cpu pinning is stored in the thread structure or the (julian, 2004-09-10, 1, -1/+22)
* Add some code to allow threads to nominate a sibling to run if they are going... (julian, 2004-09-10, 1, -1/+1)
* Refactor a bunch of scheduler code to give basically the same behaviour (julian, 2004-09-05, 1, -108/+163)
* Turn PREEMPTION into a kernel option. Make sure that it's defined if (scottl, 2004-09-02, 1, -0/+14)
* Give setrunqueue() and sched_add() more of a clue as to (julian, 2004-09-01, 1, -4/+13)
* Commit Jeff's suggested changes for avoiding a bug that is exposed by (peter, 2004-08-28, 1, -4/+2)
* - Introduce a new flag KEF_HOLD that prevents sched_add() from doing a (jeff, 2004-08-12, 1, -7/+19)
* - Use a new flag, KEF_XFERABLE, to record with certainty that this kse had (jeff, 2004-08-10, 1, -34/+76)
* Avoid casts as lvalues. (kan, 2004-07-28, 1, -2/+2)
* Clean up whitespace, increase consistency and correctness. (scottl, 2004-07-23, 1, -5/+3)
* When calling scheduler entrypoints for creating new threads and processes, (julian, 2004-07-18, 1, -15/+18)
* - Move TDF_OWEPREEMPT, TDF_OWEUPC, and TDF_USTATCLOCK over to td_pflags (jhb, 2004-07-16, 1, -1/+2)
* Update for the KDB framework: (marcel, 2004-07-10, 1, -4/+2)
* - Move contents of sched_add() into a sched_add_internal() function that (jhb, 2004-07-08, 1, -5/+11)
* Temporarily disable preemption in SCHED_ULE due to reported panics and (rwatson, 2004-07-06, 1, -0/+2)
* Add NULL arg to mi_switch() call to stop kernel compiles from breaking. (phk, 2004-07-03, 1, -1/+1)
* Fix SCHED_ULE build on SMP. The previous revision (1.110) (bmilekic, 2004-07-03, 1, -1/+1)
* Implement preemption of kernel threads natively in the scheduler rather (jhb, 2004-07-02, 1, -1/+10)
* - Change mi_switch() and sched_switch() to accept an optional thread to (jhb, 2004-07-02, 1, -5/+9)
* Add the sysctl node 'kern.sched.name' that has the name of the scheduler (scottl, 2004-06-21, 1, -0/+5)
* Nice is a property of a process as a whole... (julian, 2004-06-16, 1, -24/+30)
* - Run sched_balance() and sched_balance_groups() from hardclock via (jeff, 2004-06-02, 1, -38/+21)
* There was a thread on "unusually high load averages" when running under (obrien, 2004-04-22, 1, -2/+2)
* Spell "switches" a more conventional way. (cognet, 2004-04-09, 1, -1/+1)
* - Use the proper constant in sched_interact_update(). Previously, (jeff, 2004-04-04, 1, -1/+1)
* Change the type of the various CPU masks to cpumask_t. Note that as (marcel, 2004-03-27, 1, -4/+4)
* Give a more reasonable CPU time to the threads which are using scheduler (obrien, 2004-03-21, 1, -6/+3)
* Switch the sleep/wakeup and condition variable implementations to use the (jhb, 2004-02-27, 1, -2/+2)
* - Allow interactive tasks to use the maximum time-slice. This is not as (jeff, 2004-02-01, 1, -1/+1)
* - Add a new member to struct kseq called ksq_sysload. This is intended to (jeff, 2004-02-01, 1, -3/+27)
* - sched_strict has been dead for a long time now. Get rid of it. (jeff, 2004-01-25, 1, -3/+0)
* - Clean up KASSERTS. (jeff, 2004-01-25, 1, -4/+4)
* - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or (jeff, 2004-01-25, 1, -2/+1)
* - Make our transfer decisions based on load and not transferable load. A (jeff, 2003-12-20, 1, -7/+1)
* - Enable ithread migration on x86. This is done to work around a bug in the (jeff, 2003-12-20, 1, -0/+10)
* - In kseq_transfer() return if smp has not been started. (jeff, 2003-12-20, 1, -9/+14)
* - Running interactive tasks with the minimum time-slice is fine for vi and (jeff, 2003-12-20, 1, -1/+2)
* - Assign the ke_cpu field in kseq_notify() so that all of our callers do not (jeff, 2003-12-14, 1, -4/+2)
* - Now that we have kseq groups, balance them separately. (jeff, 2003-12-12, 1, -47/+130)
* - Don't let the pctcpu rate limiter throttle us if we have recorded over (jeff, 2003-12-11, 1, -1/+2)
* - In sched_switch(), if a thread has been assigned, don't touch the runqueues (jeff, 2003-12-11, 1, -15/+21)
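
The 2004-10-05 entry about putting a preempted thread back on the HEAD of its run queue describes the behaviour in a single line; the sketch below is a small user-space model of that idea, not the kernel code. Everything in it (toy_thread, toy_runq, toy_runq_add, the TRQ_PREEMPTED flag) is hypothetical; only the <sys/queue.h> TAILQ macros are real. The point is simply that a thread which lost the CPU involuntarily is re-queued at the head, so it resumes before threads of the same priority that were already waiting.

/*
 * Illustrative sketch only: a user-space model of re-queueing a
 * preempted thread at the head of its run queue.  All names here are
 * made up for the example; this is not sched_ule.c code.
 */
#include <stdio.h>
#include <sys/queue.h>

#define TRQ_PREEMPTED   0x1     /* hypothetical flag: thread lost the CPU involuntarily */

struct toy_thread {
        const char              *name;
        TAILQ_ENTRY(toy_thread)  link;
};

TAILQ_HEAD(toy_runq, toy_thread);

/*
 * Preempted threads go to the head of the queue; voluntarily queued
 * threads go to the tail as usual.
 */
static void
toy_runq_add(struct toy_runq *rq, struct toy_thread *td, int flags)
{
        if (flags & TRQ_PREEMPTED)
                TAILQ_INSERT_HEAD(rq, td, link);
        else
                TAILQ_INSERT_TAIL(rq, td, link);
}

int
main(void)
{
        struct toy_runq rq = TAILQ_HEAD_INITIALIZER(rq);
        struct toy_thread a = { .name = "a" };
        struct toy_thread b = { .name = "b" };
        struct toy_thread p = { .name = "preempted" };
        struct toy_thread *td;

        toy_runq_add(&rq, &a, 0);
        toy_runq_add(&rq, &b, 0);
        toy_runq_add(&rq, &p, TRQ_PREEMPTED);

        TAILQ_FOREACH(td, &rq, link)
                printf("%s\n", td->name);       /* prints: preempted, a, b */
        return (0);
}

Inserting at the tail instead would force the preempted thread to wait behind every already-queued thread of the same priority, which is the behaviour the commit was changing.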