path: root/sys/kern/subr_smp.c
Commit message | Author | Age | Files | Lines (-/+)
* Remove forward_roundrobin(); it has been unused for quite some time.kib2009-09-211-33/+0
| | | | | Reviewed by: jhb MFC after: 1 week
* Completely remove the option STOP_NMI from the kernel. This optionattilio2009-08-131-4/+21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | has proven to have a good effect when entering KDB by using an NMI, but it completely violates all the good rules about interrupts being disabled while holding a spinlock in other occasions. This can be the cause of deadlocks on events where a normal IPI_STOP is expected. * Add a new IPI called IPI_STOP_HARD on all the supported architectures. This IPI is responsible for sending a stop message among CPUs using a privileged channel when available. In the other cases it just acts as a normal IPI_STOP. Right now the IPI_STOP_HARD functionality uses an NMI on the ia32 and amd64 architectures, while on the others it has a normal IPI_STOP effect. It is the responsibility of maintainers to eventually implement a hard stop when necessary and possible. * Use the new IPI facility in order to implement a new SMP kernel function called stop_cpus_hard(). It mirrors stop_cpus() but uses the privileged channel for the stopping facility. * Let KDB use the newly introduced function stop_cpus_hard() and leave stop_cpus() for all the other cases. * Disable interrupts on CPU0 when starting the process of AP suspension. * Style cleanup and comment additions. This patch should fix the reboot/shutdown deadlocks many users are constantly reporting on mailing lists. Please don't forget to remove the STOP_NMI option from your kernel config file. Reviewed by: jhb Tested by: pho, bz, rink Approved by: re (kib)
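A minimal sketch of how a debugger-style caller might use the new facility, assuming stop_cpus_hard() takes a cpumask_t of target CPUs just as stop_cpus() does; the commit text implies this but does not show the prototype, and the helper name below is hypothetical.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pcpu.h>
    #include <sys/smp.h>

    /* Hypothetical helper: freeze every other CPU over the hard channel. */
    static void
    debugger_freeze_others(void)
    {
        cpumask_t other_cpus;

        other_cpus = all_cpus & ~PCPU_GET(cpumask);   /* everyone but us */
        stop_cpus_hard(other_cpus);   /* IPI_STOP_HARD: NMI-backed on i386/amd64 */
        /* ... inspect the stopped CPUs here ... */
        restart_cpus(other_cpus);
    }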
* - Remove the bogus idle thread state code. This may have a race in itjeff2009-04-291-1/+1
| | | | | | | | | | | and it only optimized out an ipi or mwait in very few cases. - Skip the adaptive idle code when running on SMT or HTT cores. This just wastes cpu time that could be used on a busy thread on the same core. - Rename CG_FLAG_THREAD to CG_FLAG_SMT to be more descriptive. Re-use CG_FLAG_THREAD to mean SMT or HTT. Sponsored by: Nokia
* Initial suspend/resume support for amd64.jkim2009-03-171-0/+48
| | | | | | This code is heavily inspired by Takanori Watanabe's experimental SMP patch for i386, and a large portion was shamelessly cut and pasted from Peter Wemm's AP boot code.
* As suggested by jhb@, panic if ncpus == 0.dchagin2009-03-031-1/+1
| | | | | | | It helps to catch bugs in the callers. Approved by: kib (mentor) MFC after: 5 days
* Fix range-check error introduced in r182292. Also, do not do anythingdchagin2009-03-011-1/+3
| | | | | | | if none of the processors in the map are available; simply return. Approved by: kib (mentor) MFC after: 1 week
* Whitespace tweak.jhb2009-01-261-1/+1
|
* Adjust the license statement to more closely match a standard 3-clause BSDjhb2008-11-031-12/+12
| | | | | | license. MFC after: 3 days
* - Only count the number of CPUs in the rendezvous map once rather thanjhb2008-08-271-14/+8
| | | | | | | | | | doing it on every CPU. - Use CPU_ABSENT() rather than pcpu_find() to determine if a CPU is not present. - Count up to mp_maxid rather than MAXCPU when iterating over CPUs to match the rest of the code in the kernel. MFC after: 1 week
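The iteration pattern this commit standardizes on, shown as a standalone sketch (the function name is illustrative): walk ids up to mp_maxid, let CPU_ABSENT() skip holes, and count present CPUs once instead of on every CPU.

    #include <sys/param.h>
    #include <sys/pcpu.h>
    #include <sys/smp.h>

    /* Illustrative helper: count present CPUs, skipping absent ids. */
    static int
    count_present_cpus(void)
    {
        u_int cpu;
        int ncpus;

        ncpus = 0;
        for (cpu = 0; cpu <= mp_maxid; cpu++) {
            if (CPU_ABSENT(cpu))
                continue;
            ncpus++;
        }
        return (ncpus);
    }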
* Allow a rendezvous with just a specified CPU too.jb2008-05-231-19/+61
| | | | | | Make the API work in the non-SMP case too, so that a kernel module works the same regardless of whether it is loaded on an SMP kernel or not.
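A hedged usage sketch of the single-CPU rendezvous this commit enables; smp_rendezvous_cpus() taking a cpumask_t of target CPUs plus the usual setup/action/teardown callbacks is the era-appropriate form, but treat the exact prototype as an assumption, and the callback name is made up.

    #include <sys/param.h>
    #include <sys/smp.h>

    /* Illustrative callback: runs on every CPU named in the mask. */
    static void
    poke_cpu(void *arg)
    {
        /* per-CPU work goes here */
    }

    /* Run poke_cpu() on a single CPU via the rendezvous mechanism. */
    static void
    run_on_cpu(u_int cpu, void *arg)
    {
        smp_rendezvous_cpus((cpumask_t)1 << cpu, NULL, poke_cpu, NULL, arg);
    }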
* In keeping with style(9)'s recommendations on macros, use a ';'rwatson2008-03-161-3/+3
| | | | | | | | | after each SYSINIT() macro invocation. This makes a number of lightweight C parsers much happier with the FreeBSD kernel source, including cflow's prcc and lxr. MFC after: 1 month Discussed with: imp, rink
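The resulting style is trivial but worth seeing once; the function and SYSINIT names here are made up for illustration.

    #include <sys/param.h>
    #include <sys/kernel.h>

    static void
    example_init(void *dummy __unused)
    {
        /* one-shot initialization */
    }
    /* style(9)-friendly: the SYSINIT() invocation now ends with a ';' */
    SYSINIT(example_init, SI_SUB_CPU, SI_ORDER_ANY, example_init, NULL);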
* - Add the missing '2' case to the switch table for kern.smp.topology andjeff2008-03-101-0/+4
| | | | | assign it to create the flat 'none' topology where all cpus are scheduled as if they are equal and unrelated.
* - Replace the old smp cpu topology specification with a new, more flexiblejeff2008-03-021-14/+188
| | | | | | | | | | | | | | | | | tree structure that encodes the level of cache sharing and other properties. - Provide several convenience functions for creating one- and two-level cpu trees as well as a default flat topology. The system now always has some topology. - On i386 and amd64 create a separate level in the hierarchy for HTT and multi-core cpus. This will allow the scheduler to intelligently load balance non-uniform cores. Presently we don't detect what level of the cache hierarchy is shared at each level in the topology. - Add a mechanism for testing common topologies that have more information than the MD code is able to provide via the kern.smp.topology tunable. This should be considered a debugging tool only and not a stable api. Sponsored by: Nokia
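A hedged sketch of the convenience builders the commit describes (smp_topo_none() for the flat case, smp_topo_1level()/smp_topo_2level() for shared-cache and SMT shapes); the exact prototypes are assumptions drawn from the commit text rather than verified signatures, and the hook name below is illustrative.

    #include <sys/param.h>
    #include <sys/smp.h>

    /*
     * Assumed-shape example: an MD topology hook returning the default
     * flat topology, where every CPU is equal and unrelated.
     */
    struct cpu_group *
    example_cpu_topo(void)
    {
        return (smp_topo_none());
    }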
* A few whitespace fixes.jhb2008-01-021-12/+11
|
* Initial checkin for rmlock (read mostly lock) a multi reader single writerups2007-11-081-16/+37
| | | | | | | | lock optimized for almost-exclusive reader access. (see also rmlock(9)) TODO: Convert to a per-cpu variable linker set as soon as it is available. Optimize the UP (single processor) case.
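A short usage sketch following the rmlock(9) pattern this commit introduces: read-side acquisitions carry a priority tracker and are cheap, write-side acquisitions are the rare, expensive path. The two-argument rm_init() shown matches the later documented interface, so treat the exact initializer arguments as an assumption for the initial version; all names are illustrative.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>

    static struct rmlock example_rm;

    static void
    example_rm_setup(void)
    {
        rm_init(&example_rm, "example rmlock");
    }

    static void
    example_reader(void)
    {
        struct rm_priotracker tracker;

        rm_rlock(&example_rm, &tracker);   /* cheap, read-mostly path */
        /* ... read the shared data ... */
        rm_runlock(&example_rm, &tracker);
    }

    static void
    example_writer(void)
    {
        rm_wlock(&example_rm);             /* rare, expensive path */
        /* ... modify the shared data ... */
        rm_wunlock(&example_rm);
    }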
* This is a follow-up, cleaning-up commit about recent changes involvingattilio2007-09-111-1/+1
| | | | | | | | | | | | | | | | topology foo functions. Working on the patch for topology problems in ia32/amd64 exposed some problems regarding function ordering in the SI_SUB_CPU family of SYSINIT'ed subsystems: a correct ordering is not semantically specified for SI_SUB_CPU functions, so the newly modified functions cannot rely on one (for a larger view of the issue please visit: http://lists.freebsd.org/pipermail/freebsd-current/2007-July/075409.html ) Discussed with: peter Tested by: kris, Rui Paulo <rpaulo@FreeBSD.org> Approved by: jeff Approved by: re
* Tweak the low-level MI SMP code some:jhb2007-07-031-11/+23
| | | | | | | | | | | | | - Use cpu_spinwait() in the spin loops in stop_cpus(), restart_cpus(), and smp_rendezvous_action(). - Remove unneeded acq memory barriers in stop_cpus(), restart_cpus(), and smp_rendezvous_action(). - Add an additional synch point in smp_rendezvous() to ensure that all the CPUs will always see an up-to-date value of smp_rv_setup_func. Reviewed by: attilio Approved by: re (kensmith) Tested on: alpha, amd64, i386, sparc64 SMP (for several years)
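The spin-loop shape being referred to, as a hedged standalone sketch (the flag and function names are illustrative): busy-wait on a condition another CPU will satisfy, hinting the CPU with cpu_spinwait() on each iteration.

    #include <sys/param.h>
    #include <machine/cpu.h>

    static volatile int released;   /* illustrative flag set by another CPU */

    static void
    wait_until_released(void)
    {
        while (released == 0)
            cpu_spinwait();         /* e.g. the PAUSE instruction on x86 */
    }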
* Commit 14/14 of sched_lock decomposition.jeff2007-06-051-3/+1
| | | | | | | | | | - Use thread_lock() rather than sched_lock for per-thread scheduling synchronization. - Use the per-process spinlock rather than the sched_lock for per-process scheduling synchronization. Tested by: kris, current@ Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc. Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Instead of doing comparisons using the pcpu area to see ifjulian2007-03-081-1/+1
| | | | | | | a thread is an idle thread, just see if it has the IDLETD flag set. That flag will probably move to the pflags word as it's permanent and never changes for the life of the system, so it doesn't need locking.
* Rename the KDB_STOP_NMI kernel option to STOP_NMI and make it apply to alljhb2005-10-241-32/+0
| | | | | | | | | | | | | | | | | | | | | | IPI_STOP IPIs. - Change the i386 and amd64 MD IPI code to send an NMI if STOP_NMI is enabled if an attempt is made to send an IPI_STOP IPI. If the kernel option is enabled, there is also a sysctl to change the behavior at runtime (debug.stop_cpus_with_nmi which defaults to enabled). This includes removing stop_cpus_nmi() and making ipi_nmi_selected() a private function for i386 and amd64. - Fix ipi_all(), ipi_all_but_self(), and ipi_self() on i386 and amd64 to properly handle bitmapped IPIs as well as IPI_STOP IPIs when STOP_NMI is enabled. - Fix ipi_nmi_handler() to execute the restart function on the first CPU that is restarted making use of atomic_readandclear() rather than assuming that the BSP is always included in the set of restarted CPUs. Also, the NMI handler didn't clear the function pointer meaning that subsequent stop and restarts could execute the function again. - Define a new macro HAVE_STOPPEDPCBS on i386 and amd64 to control the use of stoppedpcbs[] and always enable it for i386 and amd64 instead of being dependent on KDB_STOP_NMI. It works fine in both the NMI and non-NMI cases.
* Second part of commit for moving KDB_STOP_NMI from opt_global.h topeter2005-06-301-0/+2
| | | | | | | opt_kdb.h. Found by: kris Approved by: re
* Implement an alternate method to stop CPUs when entering DDB. Normally we usedwhite2005-04-301-0/+29
| | | | | | | | | | | | | | a regular IPI vector, but this vector is blocked when interrupts are disabled. With "options KDB_STOP_NMI" and debug.kdb.stop_cpus_with_nmi set, KDB will send an NMI to each CPU instead. The code also has a context-stuffing feature which helps ddb extract the state of processes running on the stopped CPUs. KDB_STOP_NMI is only useful with SMP and complains if SMP is not defined. This feature only applies to i386 and amd64 at the moment, but could be used on other architectures with the appropriate MD bits. Submitted by: ups
* /* -> /*- for copyright notices, minor format tweaks as necessaryimp2005-01-061-1/+1
|
* Move 4bsd-specific experimental IPI code into the 4bsd file.julian2004-09-031-130/+1
| | | | Move the sysctls into kern.sched
* *Blush* forgot to test non-SMP builds.. oddly enough some UP code (particularlyjulian2004-09-011-1/+2
| | | | | in the acpi code) seems to want this in a UP build. (I guess so you can have a single kernel module that works for both.)
* Give the 4bsd scheduler the ability to wake up idle processorsjulian2004-09-011-1/+135
| | | | | | when there is new work to be done. MFC after: 5 days
* s/smp_rv_mtx/smp_ipi_mtx/gobrien2004-08-281-4/+4
| | | | Requested by: jhb
* Commit Doug White and Alan Cox's fix for the cross-ipi smp deadlock.peter2004-08-231-1/+8
| | | | | | | | | | | | | | We were obtaining different spin mutexes (which disable interrupts after acquisition) and spin waiting for delivery. For example, KSE processes do LDT operations which use smp_rendezvous, while other parts of the system are doing things like tlb shootdowns with a different mutex. This patch uses the common smp_rendezvous mutex for all MD home-grown IPIs that spinwait for delivery. Having the single mutex means that the spinloop to acquire it will enable interrupts periodically, thus avoiding the cross-ipi deadlock. Obtained from: dwhite, alc Reviewed by: jhb
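A hedged sketch of the rule this commit establishes, using the later name smp_ipi_mtx (see the rename entry above): any MD code that spins waiting for IPI delivery serializes on the one shared spin mutex, so its acquisition loop re-enables interrupts periodically instead of cross-deadlocking with another sender. The shootdown-style helper here is hypothetical.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/smp.h>

    /* Hypothetical MD helper that sends a spin-waited IPI. */
    static void
    example_ipi_broadcast(void)
    {
        mtx_lock_spin(&smp_ipi_mtx);
        /* ... send the IPI and spin until every target acknowledges ... */
        mtx_unlock_spin(&smp_ipi_mtx);
    }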
* Don't keep evaluating our own cpu mask..julian2004-08-131-2/+3
| | | | it's not likely to have changed....
* Move the CPU newbus attachment to i386 legacy. The acpi_cpu device willnjl2004-05-061-70/+0
| | | | | | become just "cpu" and provide attachments in the !legacy case. Tested by: des
* Change the type of the various CPU masks to cpumask_t. Note that asmarcel2004-03-271-7/+7
| | | | | | | long as there are still explicit uses of int, whether in types or in function names (such as atomic_set_int() in sched_ule.c), we can not change cpumask_t to be anything other than u_int. See also the commit log for sys/sys/types.h, revision 1.84.
* Add powerpc to temporary fix. The new cpu device claims allgrehan2004-03-161-2/+2
| | | | | 'generic' OpenFirmware nexus nodes, since it uses bus_generic_probe. Maybe the cpu device probe should be MD.
* This is a temporary fix to solve a regression issue on sparc64 thatkensmith2004-03-121-0/+4
| | | | | | | is caused by the way sparc64 registers its CPUs. Nate will work on a real fix shortly. Approved by: njl
* Hook CPUs up to newbus. CPUs will ultimately be a bus driver so thatnjl2004-03-091-0/+67
| | | | | | | multiple CPU-specific drivers can attach. This is a work in progress so children aren't supported yet. Help from: jhb
* - Move smp_topology to subr_smp.c so that it is defined on all architectures.jeff2004-01-241-0/+1
|
* Introduce mp_maxcpus which can be used by libkvm utils to find outalfred2003-12-231-0/+5
| | | | | how many CPUs the system was compiled for. Export the variable via a sysctl node 'kern.smp.maxcpus' as well.
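Roughly what such an export looks like; the node kern.smp already exists in subr_smp.c, and the variable and description below are stand-ins rather than the exact declaration.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>
    #include <sys/smp.h>

    SYSCTL_DECL(_kern_smp);     /* node defined in subr_smp.c */

    /* Approximate shape of the export; the real one uses mp_maxcpus. */
    static int example_maxcpus = MAXCPU;
    SYSCTL_INT(_kern_smp, OID_AUTO, maxcpus, CTLFLAG_RD, &example_maxcpus, 0,
        "Max CPU count the kernel was compiled for");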
* Export a few SMP related symbols in UP kernels as well. This is needed tojhb2003-12-031-0/+36
| | | | | | | | | | aid other kernel code, especially code which can be in a module such as the acpi_cpu(4) driver, to work properly with both SMP and UP kernels. The exported symbols include mp_ncpus, all_cpus, mp_maxid, smp_started, and the smp_rendezvous() function. This also means that CPU_ABSENT() is now always implemented the same on all kernels. Approved by: re (scottl)
* - Split cpu_mp_probe() into two parts. cpu_mp_setmaxid() is still calledjhb2003-11-211-6/+6
| | | | | | | | | | | | | | | | | | | | very early (SI_SUB_TUNABLES - 1) and is responsible for setting mp_maxid. cpu_mp_probe() is now called at SI_SUB_CPU and determines if SMP is actually present and sets mp_ncpus and all_cpus. Splitting these up allows an architecture to probe CPUs later than SI_SUB_TUNABLES by just setting mp_maxid to MAXCPU in cpu_mp_setmaxid(). This could allow the CPU probing code to live in a module, for example, since SYSINITs in modules cannot be invoked prior to SI_SUB_KLD. This is needed to re-enable the ACPI module on i386. - For the alpha SMP probing code, use LOCATE_PCS() instead of duplicating its contents in a few places. Also, add an smp_cpu_enabled() function to avoid duplicating some code. There is room for further code reduction later since much of this code is also present in cpu_mp_start(). - All archs besides i386 still set mp_maxid to the same values they set it to before this change. i386 now sets mp_maxid to MAXCPU. Tested on: alpha, amd64, i386, ia64, sparc64 Approved by: re (scottl)
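A hedged sketch of the two-phase MD interface described above: set mp_maxid very early, then decide at SI_SUB_CPU whether SMP is really present. The bodies are illustrative only, and the hardware-probe helper is hypothetical.

    #include <sys/param.h>
    #include <sys/pcpu.h>
    #include <sys/smp.h>

    /* Hypothetical MD probe of the actual CPU count. */
    static int
    example_count_hw_cpus(void)
    {
        return (1);
    }

    /* Phase 1, SI_SUB_TUNABLES - 1: only mp_maxid may be set here. */
    void
    cpu_mp_setmaxid(void)
    {
        mp_maxid = MAXCPU;      /* as the commit says i386 now does */
    }

    /* Phase 2, SI_SUB_CPU: decide whether SMP is actually present. */
    int
    cpu_mp_probe(void)
    {
        mp_ncpus = example_count_hw_cpus();
        all_cpus = (1 << mp_ncpus) - 1;     /* sketch; assumes < 32 CPUs */
        return (mp_ncpus > 1);
    }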
* Ensure that mp_ncpus is set to 1 if mp_cpu_probe() fails.jhb2003-10-301-1/+3
|
* Change all SYSCTLS which are readonly and have a related TUNABLEsilby2003-10-211-1/+1
| | | | | from CTLFLAG_RD to CTLFLAG_RDTUN so that sysctl(8) can provide more useful error messages.
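An illustrative pairing of a loader tunable with its read-only sysctl, where CTLFLAG_RDTUN advertises that the value can only be set from the loader; the knob name and variable are made up.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /* Illustrative knob: read-only at runtime, settable as a loader tunable. */
    static int example_knob = 0;
    TUNABLE_INT("kern.example_knob", &example_knob);
    SYSCTL_INT(_kern, OID_AUTO, example_knob, CTLFLAG_RDTUN, &example_knob, 0,
        "Settable only as a loader tunable; CTLFLAG_RDTUN lets sysctl(8) say so");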
* Document some sysctl variables.des2003-06-121-5/+10
| | | | Submitted by: hmp
* Use __FBSDID().obrien2003-06-111-2/+3
|
* Move the _oncpu entry from the KSE to the thread.julian2003-04-101-1/+1
| | | | | The entry in the KSE still exists but its purpose will change a bit when we add the ability to lock a KSE to a cpu.
* - Move p->p_sigmask to td->td_sigmask. Signal masks will be per thread withjeff2003-03-311-2/+2
| | | | | | | a follow on commit to kern_sig.c - signotify() now operates on a thread since unmasked pending signals are stored in the thread. - PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
* Move a bunch of flags from the KSE to the thread.julian2003-02-171-2/+2
| | | | | | | | I was in two minds as to where to put them in the first case.. I should have listened to the other mind. Submitted by: parts by davidxu@ Reviewed by: jeff@ mini@
* Add a tunable kern.smp.disabled for explicitly disabling SMP on an SMPjake2002-12-281-1/+5
| | | | kernel.
* Completely redo thread states.julian2002-09-111-1/+1
| | | | Reviewed by: davidxu@freebsd.org
* Part 1 of KSE-IIIjulian2002-06-291-2/+2
| | | | | | | | | | | The ability to schedule multiple threads per process (on one cpu) by making ALL system calls optionally asynchronous. to come: ia64 and power-pc patches, patches for gdb, test program (in tools) Reviewed by: Almost everyone who counts (at various times, peter, jhb, matt, alfred, mini, bernd, and a cast of thousands) NOTE: this is still Beta code, and contains lots of debugging stuff. expect slight instability in signals..
* Updated a doubly stale comment about signotify(). Fixed a nearby long line.bde2002-04-051-4/+5
|
* Change callers of mtx_init() to pass in an appropriate lock type name. Injhb2002-04-041-1/+1
| | | | | | | most cases NULL is passed, but in some cases such as network driver locks (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used. Tested on: i386, alpha, sparc64
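The two resulting forms, with made-up lock names: NULL for the type in the common case, or a shared type name (such as the MTX_NETWORK_LOCK string mentioned above) when many locks of the same kind should be grouped.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx example_mtx;
    static struct mtx example_ifq_mtx;

    static void
    example_mtx_setup(void)
    {
        /* Common case: NULL type, the name stands on its own. */
        mtx_init(&example_mtx, "example lock", NULL, MTX_DEF);
        /* Grouped case: many locks of one kind share a type name. */
        mtx_init(&example_ifq_mtx, "example ifq", MTX_NETWORK_LOCK, MTX_DEF);
    }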