path: root/sys/kern/sched_ule.c
Commit message  (author, date, files changed, lines removed/added)
* MFC r315851: move thread switch tracing from mi_switch to sched_switch  (avg, 2017-04-14, 1 file, -0/+15)
* MFC r315075: trace thread running state when a thread is run for the first time  (avg, 2017-03-23, 1 file, -0/+4)
* MFC r314625:  (markj, 2017-03-10, 1 file, -1/+5)
* MFC r312426: fix a thread preemption regression in schedulers introduced  (avg, 2017-01-23, 1 file, -2/+2)
* Get rid of struct proc p_sched and struct thread td_sched pointers.  (kib, 2016-06-05, 1 file, -46/+48)
* The struct thread td_estcpu member is only used by the 4BSD scheduler.  (kib, 2016-04-17, 1 file, -5/+3)
* Summary: Add the interactivity equations to the header comment for our  (gnn, 2015-08-26, 1 file, -0/+15)
* kgdb uses td_oncpu to determine if a thread is running and should use  (jhb, 2015-08-03, 1 file, -0/+4)
* Change the mb() use in the sched_ult tdq_notify() and sched_idletd()  (kib, 2015-07-10, 1 file, -2/+2)
* Relocate sched_random() within the SMP section.  (pfg, 2015-07-07, 1 file, -20/+18)
* Use sbuf_new_for_sysctl() instead of plain sbuf_new() to ensure sysctl  (ian, 2015-03-14, 1 file, -3/+2)
* Put back Andy's void for gcc happiness.  (imp, 2015-02-27, 1 file, -1/+1)
* Make sched_random() return an unsigned number, and use uint32_t  (imp, 2015-02-27, 1 file, -11/+13)
* Fix sched_ule on sparc64, gcc complains sched_random is not a correct  (andrew, 2015-02-27, 1 file, -1/+1)
* sched_random is only called for SMP, only define it there.  (andrew, 2015-02-27, 1 file, -1/+2)
* Create sched_rand() and move the LCG code into that. Call this when  (imp, 2015-02-27, 1 file, -9/+22)
* Update the ULE scheduler + thread and kinfo structs to use int for cpuid  (adrian, 2014-10-18, 1 file, -1/+1)
* Reprase r271616 comments.  (mav, 2014-09-17, 1 file, -2/+2)
* Add comments describing r271604 change.  (mav, 2014-09-15, 1 file, -0/+12)
* Add couple memory barries to serialize tdq_cpu_idle and tdq_load accesses.  (mav, 2014-09-14, 1 file, -0/+2)
* Restore pre-r239157 handling of sched_yield(), when thread time slice was  (mav, 2014-08-23, 1 file, -1/+2)
* Micro-manage clang to get the expected inlining for cpu_search().  (kib, 2014-07-03, 1 file, -6/+8)
* Remove write-only local variable.  (kib, 2014-06-08, 1 file, -2/+0)
* Fix GENERIC build.  (attilio, 2014-03-19, 1 file, -0/+1)
* - Make runq_steal_from more aggressive. Previously it would examine only  (jeff, 2014-03-08, 1 file, -16/+11)
* ULE works on Book-E since r258002, so remove statements to the contrary.  (nwhitehorn, 2014-02-01, 1 file, -4/+0)
* In sys/kern/sched_ule.c, remove static function sched_both(), which is  (dim, 2013-12-25, 1 file, -24/+0)
* Fix an off-by-one error in r228960. The maximum priority delta provided  (jhb, 2013-12-03, 1 file, -1/+1)
* dtrace sdt: remove the ugly sname parameter of SDT_PROBE_DEFINE  (avg, 2013-11-26, 1 file, -16/+16)
* - For kernel compiled only with KDTRACE_HOOKS and not any lock debugging  (attilio, 2013-11-25, 1 file, -1/+0)
* Micro-optimize cpu_search(), allowing compiler to use more efficient inline  (mav, 2013-09-07, 1 file, -2/+10)
* Point args[0] not at the thread that is ending but at the one that  (gnn, 2013-04-15, 1 file, -1/+1)
* Fix bug in r242852 that prevented CPU from becoming idle if kernel built  (mav, 2012-11-15, 1 file, -1/+3)
* Several optimizations to sched_idletd():  (mav, 2012-11-10, 1 file, -18/+35)
* - Change ULE to use dynamic slice sizes for the timeshare queue in order  (jeff, 2012-11-08, 1 file, -10/+48)
* Rework the known mutexes to benefit about staying on their own  (attilio, 2012-10-31, 1 file, -3/+2)
* tdq_lock_pair() already does spinlock_enter() so migration is not  (attilio, 2012-10-30, 1 file, -2/+0)
* Pad tdq_lock to avoid false sharing with tdq_load and tdq_cpu_idle.  (jimharris, 2012-10-24, 1 file, -1/+6)
* remove duplicate semicolons where possible.  (eadler, 2012-10-22, 1 file, -1/+1)
* sched_ule: fix inverted condition in reporting of priority lending via ktr  (avg, 2012-09-14, 1 file, -1/+1)
* Mark the idle threads as non-sleepable and also assert that an idle  (jhb, 2012-08-22, 1 file, -0/+1)
* Some more minor tunings inspired by bde@.  (mav, 2012-08-11, 1 file, -18/+22)
* Allow idle threads to steal second threads from other cores on systems with  (mav, 2012-08-11, 1 file, -6/+0)
* Some minor tunings/cleanups inspired by bde@ after previous commits:  (mav, 2012-08-10, 1 file, -30/+40)
* Rework r220198 change (by fabient). I believe it solves the problem from  (mav, 2012-08-09, 1 file, -5/+8)
* Let us manage differences of Book-E PowerPC variations i.e. vendor /  (raj, 2012-05-27, 1 file, -1/+1)
* Implement the DTrace sched provider. This implementation aims to be  (rstone, 2012-05-15, 1 file, -1/+35)
* Microoptimize cpu_search().  (mav, 2012-04-09, 1 file, -24/+28)
* Rewrite thread CPU usage percentage math to not depend on periodic calls  (mav, 2012-03-13, 1 file, -46/+22)
* Make kern.sched.idlespinthresh default value adaptive depending of HZ.  (mav, 2012-03-09, 1 file, -1/+3)
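The 2012-10-24 entry above (jimharris) pads tdq_lock so that it no longer shares a cache line with tdq_load and tdq_cpu_idle. A minimal sketch of that general technique follows, assuming a 64-byte cache line; struct cpu_queue and its field names are illustrative stand-ins, not the actual struct tdq layout from sched_ule.c:

```c
/*
 * Illustrative only: keep a heavily written per-CPU lock on its own cache
 * line so that taking/releasing it does not bounce the line that remote
 * CPUs read when they poll the queue's load or idle state.
 */
#define	CACHE_LINE_BYTES	64	/* assumed line size for this sketch */

struct cpu_queue {
	volatile int	lock;		/* hot: written on every queue operation */
	char		pad[CACHE_LINE_BYTES - sizeof(volatile int)];
	volatile int	load;		/* polled by remote CPUs looking to steal work */
	volatile int	cpu_idle;	/* checked before deciding to send a wakeup IPI */
};
```

Without the pad, every lock acquisition by the owning CPU invalidates the line holding load and cpu_idle on all other CPUs, which is exactly the false sharing the commit removes.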
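The 2015-02-27 entries above (imp, andrew) factor the scheduler's balancing randomness into a sched_random()/sched_rand() helper that returns an unsigned 32-bit value from a linear congruential generator (LCG). A minimal LCG sketch under assumed constants (not necessarily the committed values, and with a single static state variable rather than per-CPU state):

```c
/*
 * Illustrative sketch of a cheap LCG of the kind referred to above:
 * no locking, one multiply-add per call, adequate for randomizing
 * load-balancer decisions (not for cryptographic use).
 */
#include <stdint.h>

static uint32_t randomval;		/* generator state; single static here for simplicity */

static uint32_t
sched_random_sketch(void)
{
	randomval = randomval * 69069 + 5;	/* classic 32-bit LCG step (example constants) */
	return (randomval >> 16);		/* high bits of an LCG are better mixed */
}
```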