path: root/sys/kern/sched_ule.c
Commit message | Author | Age | Files | Lines
* Nice, is a property of a process as a whole.. | julian | 2004-06-16 | 1 | -24/+30
* - Run sched_balance() and sched_balance_groups() from hardclock via | jeff | 2004-06-02 | 1 | -38/+21
* There was a thread on "unusually high load averages" when running under | obrien | 2004-04-22 | 1 | -2/+2
* Spell "switches" a more conventional way. | cognet | 2004-04-09 | 1 | -1/+1
* - Use the proper constant in sched_interact_update(). Previously, | jeff | 2004-04-04 | 1 | -1/+1
* Change the type of the various CPU masks to cpumask_t. Note that as | marcel | 2004-03-27 | 1 | -4/+4
* Give a more reasonable CPU time to the threads which are using scheduler | obrien | 2004-03-21 | 1 | -6/+3
* Switch the sleep/wakeup and condition variable implementations to use the | jhb | 2004-02-27 | 1 | -2/+2
* - Allow interactive tasks to use the maximum time-slice. This is not as | jeff | 2004-02-01 | 1 | -1/+1
* - Add a new member to struct kseq called ksq_sysload. This is intended to | jeff | 2004-02-01 | 1 | -3/+27
* - sched_strict has been dead for a long time now. Get rid of it. | jeff | 2004-01-25 | 1 | -3/+0
* - Clean up KASSERTS. | jeff | 2004-01-25 | 1 | -4/+4
* - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or | jeff | 2004-01-25 | 1 | -2/+1
* - Make our transfer decisions based on load and not transferable load. A | jeff | 2003-12-20 | 1 | -7/+1
* - Enable ithread migration on x86. This is done to work around a bug in the | jeff | 2003-12-20 | 1 | -0/+10
* - In kseq_transfer() return if smp has not been started. | jeff | 2003-12-20 | 1 | -9/+14
* - Running interactive tasks with the minimum time-slice is fine for vi and | jeff | 2003-12-20 | 1 | -1/+2
* - Assign the ke_cpu field in kseq_notify() so that all of our callers do not | jeff | 2003-12-14 | 1 | -4/+2
* - Now that we have kseq groups, balance them seperately. | jeff | 2003-12-12 | 1 | -47/+130
* - Don't let the pctcpu rate limiter throttle us if we have recorded over | jeff | 2003-12-11 | 1 | -1/+2
* - In sched_switch(), if a thread has been assigned, don't touch the runqueues | jeff | 2003-12-11 | 1 | -15/+21
* - Add support for CPU groups to ule. All SMT cores on the same physical | jeff | 2003-12-11 | 1 | -116/+263
* rqb_bits[] may be an int64_t (eg: on alpha, and recently on amd64). | peter | 2003-12-07 | 1 | -1/+1
* Fix all users of mp_maxid to use the same semantics, namely: | jhb | 2003-12-03 | 1 | -1/+1
* - Mark ksq_assigned as volatile so that when this code is used without | jeff | 2003-11-17 | 1 | -3/+3
* - Remove long dead code. rslices hasn't been used in some time and neither | jeff | 2003-11-17 | 1 | -52/+4
* - Introduce kseq_runq_{add,rem}() which are used to insert and remove | jeff | 2003-11-15 | 1 | -61/+83
* - Somehow I botched my last commit. Add an extra ( to fix things up. I'm | jeff | 2003-11-06 | 1 | -1/+1
* - Remove the local definition of sched_pin and unpin. They are provided in | jeff | 2003-11-06 | 1 | -17/+3
* - It's ok if sched_runnable() has races in it, we don't need the sched_lock | jeff | 2003-11-05 | 1 | -3/+4
* - Add initial support for pinning and binding. | jeff | 2003-11-04 | 1 | -2/+53
* - Remove kseq_find(), we no longer scan other cpu's run queues when we go | jeff | 2003-11-03 | 1 | -66/+17
* - Remove the ksq_loads[] array. We are only interested in three counts, | jeff | 2003-11-02 | 1 | -33/+50
* - In sched_prio() only force us onto the current queue if our priority is | jeff | 2003-11-02 | 1 | -1/+2
* - Rename SCHED_PRI_NTHRESH to SCHED_SLICE_NTHRESH since it is only used in | jeff | 2003-11-02 | 1 | -10/+11
* - Remove uses of PRIO_TOTAL and replace them with SCHED_PRI_NRESV | jeff | 2003-11-02 | 1 | -5/+5
* - Change sched_interact_update() to only accept slp+runtime values between | jeff | 2003-11-02 | 1 | -27/+56
* - Add static to local functions and data where it was missing. | jeff | 2003-10-31 | 1 | -78/+222
* Removed sched_nest variable in sched_switch(). Context switches always | bde | 2003-10-29 | 1 | -3/+0
* - Only change the run queue in sched_prio() if the kse is non null. threads | jeff | 2003-10-28 | 1 | -10/+2
* - Use a better algorithm in sched_pctcpu_update() | jeff | 2003-10-27 | 1 | -56/+50
* - If a thread is not bound to a kse return 0 from sched_pctcpu(). | jeff | 2003-10-20 | 1 | -0/+2
* - Only kse_reassign() in the !running case. | jeff | 2003-10-16 | 1 | -8/+10
* - Call sched_add() with the correct argument on SMP. | jeff | 2003-10-16 | 1 | -1/+1
* - Fix a minor problem with my last commit, we don't want to return from | jeff | 2003-10-16 | 1 | -3/+1
* - Collapse sched_switchin() and sched_switchout() into sched_switch(). Now | jeff | 2003-10-16 | 1 | -8/+9
* - Update the sched api. sched_{add,rem,clock,pctcpu} now all accept a td | jeff | 2003-10-16 | 1 | -7/+14
* - The non iterative algorithm for interact_update was broken due to | jeff | 2003-10-16 | 1 | -8/+6
* - If our user_pri doesn't match our actual priority our priority has been | jeff | 2003-10-15 | 1 | -3/+10
* - In SCHED_CURR() add holding Giant to the list of criteria that will keep | jeff | 2003-10-12 | 1 | -8/+7