path: root/sys/kern
Commit log: each entry shows [author, date, files changed, -lines/+lines] followed by the commit message.
* [jhb, 2008-04-05, 1 file, -21/+105]
  Add a MI intr_event_handle() routine for the non-INTR_FILTER case. This
  allows all the INTR_FILTER #ifdef's to be removed from the MD interrupt
  code.
  - Rename the intr_event 'eoi', 'disable', and 'enable' hooks to
    'post_filter', 'pre_ithread', and 'post_ithread' to be less x86-centric.
    Also, add a comment describing what the MI code expects them to do.
  - On amd64, i386, and powerpc this is effectively a NOP.
  - On arm, don't bother masking the interrupt unless the ithread is
    scheduled in the non-INTR_FILTER case to match what INTR_FILTER did.
    Also, don't bother unmasking the interrupt in the post_filter case if we
    never masked it. The INTR_FILTER case had been doing this by having
    arm_unmask_irq for the post_filter (formerly 'eoi') hook.
  - On ia64, stray interrupts are now masked for the non-INTR_FILTER case.
    They were already masked in the INTR_FILTER case.
  - On sparc64, use a NULL pre_ithread hook and use intr_enable_eoi() for
    both the 'post_filter' and 'post_ithread' hooks to match what the
    non-INTR_FILTER code did.
  - On sun4v, retire the ithread wrapper hack by using an appropriate
    'post_ithread' hook instead (it's what 'post_ithread'/'enable' was
    designed to do even in 5.x).
  Glanced at by: piso
  Reviewed by: marius
  Requested by: marius [1], [5]
  Tested on: amd64, i386, arm, sparc64
* [alc, 2008-04-04, 1 file, -1/+2]
  Reintroduce UMA_SLAB_KMAP; however, change its spelling to UMA_SLAB_KERNEL
  for consistency with its sibling UMA_SLAB_KMEM. (UMA_SLAB_KMAP met its
  original demise in revision 1.30 of vm/uma_core.c.) UMA_SLAB_KERNEL is now
  required by the jumbo frame allocators. Without it, UMA cannot correctly
  return pages from the jumbo frame zones to the VM system because it resets
  the pages' object field to NULL instead of the kernel object. In more
  detail, the jumbo frame zones are created with the option UMA_ZONE_REFCNT.
  This causes UMA to overwrite the pages' object field with the address of
  the slab. However, when UMA wants to release these pages, it doesn't know
  how to restore the object field, so it sets it to NULL. This change teaches
  UMA how to reset the object field to the kernel object.
  Crashes reported by: kris
  Fix tested by: kris
  Fix discussed with: jeff
  MFC after: 6 weeks
* [jeff, 2008-04-04, 1 file, -3/+26]
  - Add sysctls at debug.rwlock to control the behavior of the speculative
    spinning when readers hold a lock. This spinning is speculative because,
    unlike the write case, we can not test whether the owners are running.
  - Add speculative read spinning for readers who are blocked by pending
    writers while a read lock is still held. This allows the thread to spin
    until the write lock succeeds after which it may spin until the writer
    has released the lock. This prevents excessive context switches when
    readers and writers both hold the lock for brief periods.
  Sponsored by: Nokia
* [jeff, 2008-04-04, 1 file, -0/+3]
  - Add a Nokia copyright to cpuset to reflect their generous contribution
    to this work.
* [jeff, 2008-04-04, 1 file, -2/+6]
  - Allow static_boost to specify no boost with '0', traditional kernel fixed
    pri boost with '1', or any priority less than the current thread's
    priority with a value greater than two. Default the boost to
    PRI_MIN_TIMESHARE to prevent regular user-space threads from starving
    threads in the kernel. This prevents these user-threads from also being
    scheduled as if they are high fixed-priority kernel threads.
  - Restore the setting of lowpri in tdq_choose(). It has to be either here
    or in sched_switch(). I accidentally removed it from both places.
  Tested by: kris
* [jeff, 2008-04-04, 1 file, -12/+6]
  - Don't check for the ITHD pri class in tdq_load_add and rem. 4BSD doesn't
    do this either. Simply check P_NOLOAD. It'd be nice if this was in a
    thread flag so we didn't have an extra cache miss every time we add and
    remove a thread from the run-queue.
* [jeff, 2008-04-04, 1 file, -2/+0]
  - Fix a mis-merge that crept in during the softclock changes.
  Spotted by: jhb
* [davidxu, 2008-04-03, 1 file, -10/+14]
  Let umtxq_busy() only spin on MP machines. Make the function name
  do_rwlock_unlock to be consistent with the others.
* [jeff, 2008-04-02, 2 files, -4/+4]
  - Convert two timeout users to the new callout_reset_curcpu() api.
  Sponsored by: Nokia
* [jeff, 2008-04-02, 3 files, -158/+262]
  Implement per-cpu callout threads, wheels, and locks.
  - Move callout thread creation from kern_intr.c to kern_timeout.c
  - Call callout_tick() on every processor via hardclock_cpu() rather than
    inspecting callout internal details in kern_clock.c.
  - Remove callout implementation details from callout.h
  - Package up all of the global variables into a per-cpu callout structure.
  - Start one thread per-cpu. Threads are not strictly bound. They prefer to
    execute on the native cpu but may migrate temporarily if interrupts are
    starving callout processing.
  - Run all callouts by default in the thread for cpu0 to maintain current
    ordering and concurrency guarantees. Many consumers may not properly
    handle concurrent execution.
  - The new callout_reset_on() api allows specifying a particular cpu to
    execute the callout on. This may migrate a callout to a new cpu.
    callout_reset() schedules on the last assigned cpu while
    callout_reset_curcpu() schedules on the current cpu.
  Reviewed by: phk
  Sponsored by: Nokia
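
A minimal kernel-side sketch of the KPI described in the entry above; the callout name, handler, and the choice of CPU 1 are illustrative assumptions, not taken from the commit.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout foo_co;

    /* Runs from the softclock thread of whichever CPU owns the callout. */
    static void
    foo_tick(void *arg)
    {
        /* ... periodic work ... */

        /* Re-arm on the CPU we are currently executing on. */
        callout_reset_curcpu(&foo_co, hz, foo_tick, arg);
    }

    static void
    foo_start(void)
    {
        callout_init(&foo_co, 1);       /* 1 == MP-safe callout */

        /*
         * callout_reset() would keep the callout on its last assigned CPU
         * (cpu0 by default); callout_reset_on() names the CPU explicitly.
         */
        callout_reset_on(&foo_co, hz, foo_tick, NULL, 1);
    }
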
* [kib, 2008-04-02, 1 file, -4/+2]
  Add two missed chunks from the rev. 1.210, for the giant_read() and
  giant_ioctl().
  PR: kern/122287
  MFC after: 3 days
* [jeff, 2008-04-02, 1 file, -0/+1]
  - Destroy the bo mtx when the vnode is destroyed.
* [davidxu, 2008-04-02, 1 file, -2/+2]
  Fix compiling problem for amd64.
* [davidxu, 2008-04-02, 1 file, -2/+4]
  Er, don't restart a timeout version.
* [davidxu, 2008-04-02, 1 file, -45/+487]
  Introduce kernel based userland rwlock. Each umtx chain now has two lists,
  one for readers and one for writers; other types of synchronization objects
  just use the first list.
  Asked by: jeff
* [attilio, 2008-04-01, 1 file, -0/+49]
  Add rw_try_rlock() and rw_try_wlock() to rwlocks. These functions try the
  specified operation (rlocking and wlocking) and true is returned if the
  operation completes, false otherwise.
  The KPI is enriched by this commit, so __FreeBSD_version bumping and
  manpage updating will happen soon.
  Requested by: jeff, kris
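
A short sketch of how the new try operations might be used; the lock and function names are hypothetical, and the non-zero-on-success return convention is the one described in the entry above.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    static struct rwlock foo_lock;

    static void
    foo_init(void)
    {
        rw_init(&foo_lock, "foo lock");
    }

    static int
    foo_try_update(void)
    {
        /* rw_try_wlock() returns non-zero only if the write lock was obtained. */
        if (!rw_try_wlock(&foo_lock))
            return (0);         /* contended; caller retries later */
        /* ... modify the shared state ... */
        rw_wunlock(&foo_lock);
        return (1);
    }
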
* [dfr, 2008-04-01, 1 file, -5/+10]
  Don't try to use an SX lock while holding the vnode interlock.
  Sponsored by: Isilon Systems
* [kib, 2008-03-31, 3 files, -2/+451]
  Regen
* [kib, 2008-03-31, 1 file, -1/+28]
  Add the openat(), fexecve() and other *at() syscalls to the table.
  Based on the submission by rdivacky, sponsored by Google Summer of Code 2007
  Reviewed by: rwatson, rdivacky
  Tested by: pho
* [kib, 2008-03-31, 1 file, -28/+77]
  Implement the fexecve(2) syscall.
  Based on the submission by rdivacky, sponsored by Google Summer of Code 2007
  Reviewed by: rwatson, rdivacky
  Tested by: pho
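
A userland sketch of the new syscall, assuming the POSIX prototype int fexecve(int fd, char *const argv[], char *const envp[]); the O_EXEC open mode from a related entry further down the log is used to get an execute-only descriptor, and /bin/ls is only an example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    extern char **environ;

    int
    main(void)
    {
        char *argv[] = { "ls", "-l", NULL };
        int fd;

        /* O_EXEC (POSIX Extended API Set Part 2) opens for execution only. */
        fd = open("/bin/ls", O_EXEC);
        if (fd == -1) {
            perror("open");
            return (EXIT_FAILURE);
        }

        /* Execute the already-opened file; no further path lookup happens. */
        fexecve(fd, argv, environ);
        perror("fexecve");      /* reached only on error */
        return (EXIT_FAILURE);
    }
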
* [kib, 2008-03-31, 1 file, -125/+452]
  Implement the openat(2), faccessat(2), fchmodat(2), fchownat(2), fstatat(2),
  futimesat(2), linkat(2), mkdirat(2), mkfifoat(2), mknodat(2), readlinkat(2),
  renameat(2), symlinkat(2) syscalls.
  Based on the submission by rdivacky, sponsored by Google Summer of Code 2007
  Reviewed by: rwatson, rdivacky
  Tested by: pho
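
A hedged userland sketch of fd-relative lookups with some of the calls listed above; the directory and file names are made up for illustration.

    #include <sys/stat.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct stat sb;
        int dfd, fd;

        /* A directory descriptor anchors the relative lookups below. */
        dfd = open("/var/tmp", O_RDONLY);
        if (dfd == -1)
            err(1, "open /var/tmp");

        if (mkdirat(dfd, "work", 0755) == -1)
            warn("mkdirat");

        /* Like open("/var/tmp/work/log", ...), but unaffected by later
           renames of /var/tmp itself. */
        fd = openat(dfd, "work/log", O_CREAT | O_WRONLY, 0644);
        if (fd == -1)
            err(1, "openat");

        if (fstatat(dfd, "work/log", &sb, 0) == 0)
            printf("size: %jd\n", (intmax_t)sb.st_size);

        /* AT_FDCWD asks for a lookup relative to the current directory. */
        if (faccessat(AT_FDCWD, "Makefile", R_OK, 0) == 0)
            printf("Makefile is readable\n");

        close(fd);
        close(dfd);
        return (0);
    }
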
* [kib, 2008-03-31, 5 files, -3/+27]
  Add the support for the AT_FDCWD and fd-relative name lookups to the
  namei(9).
  Based on the submission by rdivacky, sponsored by Google Summer of Code 2007
  Reviewed by: rwatson, rdivacky
  Tested by: pho
* [kib, 2008-03-31, 2 files, -2/+14]
  Add the support for the O_EXEC open(2) mode, as specified by the POSIX
  Extended API Set Part 2 extension specification.
  Reviewed by: rwatson, rdivacky
  Tested by: pho
* [kib, 2008-03-31, 1 file, -0/+19]
  Add the utility function vn_commname() to retrieve the command name from
  the vfs namecache, when available.
  Reviewed by: rwatson, rdivacky
  Tested by: pho
* [jeff, 2008-03-30, 1 file, -20/+27]
  - Consistently return EDEADLK when presented with a new set that is
    incompatible with existing bindings.
  - Try to copyout the setid in cpuset() before migrating the proc to the
    setid in case the user has supplied a bad buffer.
  - Rename cpuset_root() and cpuset_base() to cpuset_ref{root,base} to be
    more descriptive and free cpuset_root to be used as a different type of
    symbol.
  - Make cpuset_root the cpuset_t set of all cpus in the system. This should
    contain the same bitmask as all_cpus presently.
  - Add a CPU_CMP() macro to compare two sets.
* [jeff, 2008-03-29, 1 file, -14/+4]
  - Don't allow calls to vn_lock() with no lock type requested. Callers which
    simply want a reference should use vref(). Callers which want to check
    validity need to hold a lock while performing any action based on that
    validity. vn_lock() would always release the interlock before returning,
    making any action synchronous with the validity check impossible.
* [jeff, 2008-03-29, 1 file, -6/+3]
  - Use vget() to lock the vnode rather than refing without a lock and
    locking in separate steps.
* [attilio, 2008-03-28, 2 files, -13/+7]
  b_waiters cannot be adequately protected by the interlock because it is
  dropped after the call to lockmgr(), so just revert this approach using
  something similar to the precedent one: BUF_LOCKWAITERS() just checks if
  there are waiters (not the actual number of them) and it is based on the
  newly introduced lockmgr_waiters() which returns whether the lockmgr has
  waiters or not. The name has been chosen differently from the old
  lockwaiters() in order to not confuse them.
  The KPI is enriched by this commit so __FreeBSD_version bumping and manpage
  update will be happening soon. 'struct buf' also changes, so kernel ABI is
  disturbed.
  Bug found by: jeff
  Approved by: jeff, kib
* [jb, 2008-03-27, 1 file, -0/+4767]
  Regen after makesyscalls.sh change.
* [jb, 2008-03-27, 1 file, -2/+14]
  Generate another function for the DTrace syscall provider to specify the
  syscall argument types.
  This code is only compiled into the systrace kernel module and has no
  effect otherwise.
* [phk, 2008-03-26, 1 file, -0/+51]
  The "free-lance" timer in the i8254 is only used for the speaker these
  days, so de-generalize the acquire_timer/release_timer api to just deal
  with speakers.
  The new (optional) MD functions are:
    timer_spkr_acquire()
    timer_spkr_release()
  and
    timer_spkr_setfreq()
  the last of which configures the timer to generate a tone of a given
  frequency, in Hz instead of 1/1193182th of seconds.
  Drop entirely timer2 on pc98, it is not used anywhere at all.
  Move sysbeep() to kern/tty_cons.c and use the timer_spkr*() if they exist,
  and do nothing otherwise. Remove prototypes and empty
  acquire-/release-timer() and sysbeep() functions from the non-beeping
  archs.
  This eliminates the need for the speaker driver to know about the i8254
  frequency at all. In theory this makes the speaker driver MI, contingent
  on the timer_spkr_*() functions existing, but the driver does not know
  this yet and still attaches to the ISA bus.
  Syscons is more tricky: in one function, sc_tone(), it knows the hz and
  things are just fine. In the other function, sc_bell(), it seems to get
  the period from the KDMKTONE ioctl in terms of 1/1193182th second, so we
  hardcode the 1193182 and leave it at that. It's probably not important.
  Change a few other sysbeep() uses which obviously knew that the argument
  was in terms of i8254 frequency, and leave alone those that look like
  people thought sysbeep() took frequency in hertz.
  This eliminates the knowledge of i8254_freq from all but the actual
  clock.c code and the prof_machdep.c on amd64 and i386, where I think it
  would be smart to ask for help from the timecounters anyway [TBD].
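
A rough sketch (not from the commit) of how a beep routine might sit on top of the new hooks; the prototypes below are assumed from the description above, with the frequency given in Hz, and the callout is assumed to have been initialized elsewhere.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/callout.h>

    /* Assumed prototypes for the optional MD hooks described above. */
    int  timer_spkr_acquire(void);
    int  timer_spkr_release(void);
    void timer_spkr_setfreq(int freq);

    static struct callout beep_callout;   /* assumed callout_init()'ed at attach */

    static void
    beep_stop(void *arg)
    {
        timer_spkr_release();             /* silence and give the timer back */
    }

    static void
    beep(int freq_hz, int duration_ticks)
    {
        if (timer_spkr_acquire() != 0)
            return;                       /* someone else is already beeping */
        timer_spkr_setfreq(freq_hz);      /* tone frequency given in Hz */
        callout_reset(&beep_callout, duration_ticks, beep_stop, NULL);
    }
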
* [dfr, 2008-03-26, 3 files, -4/+14]
  Regen.
* [dfr, 2008-03-26, 4 files, -473/+1969]
  Add the new kernel-mode NFS Lock Manager. To use it instead of the
  user-mode lock manager, build a kernel with the NFSLOCKD option and add
  '-k' to 'rpc_lockd_flags' in rc.conf.
  Highlights include:
  * Thread-safe kernel RPC client - many threads can use the same RPC client
    handle safely with replies being de-multiplexed at the socket upcall
    (typically driven directly by the NIC interrupt) and handed off to
    whichever thread matches the reply. For UDP sockets, many RPC clients
    can share the same socket. This allows the use of a single privileged
    UDP port number to talk to an arbitrary number of remote hosts.
  * Single-threaded kernel RPC server. Adding support for multi-threaded
    server would be relatively straightforward and would follow
    approximately the Solaris KPI. A single thread should be sufficient for
    the NLM since it should rarely block in normal operation.
  * Kernel mode NLM server supporting cancel requests and granted callbacks.
    I've tested the NLM server reasonably extensively - it passes both my
    own tests and the NFS Connectathon locking tests running on Solaris,
    Mac OS X and Ubuntu Linux.
  * Userland NLM client supported. While the NLM server doesn't have support
    for the local NFS client's locking needs, it does have to field async
    replies and granted callbacks from remote NLMs that the local client has
    contacted. We relay these replies to the userland rpc.lockd over a local
    domain RPC socket.
  * Robust deadlock detection for the local lock manager. In particular it
    will detect deadlocks caused by a lock request that covers more than one
    blocking request. As required by the NLM protocol, all deadlock
    detection happens synchronously - a user is guaranteed that if a lock
    request isn't rejected immediately, the lock will eventually be granted.
    The old system allowed for a 'deferred deadlock' condition where a
    blocked lock request could wake up and find that some other
    deadlock-causing lock owner had beaten them to the lock.
  * Since both local and remote locks are managed by the same kernel locking
    code, local and remote processes can safely use file locks for mutual
    exclusion. Local processes have no fairness advantage compared to remote
    processes when contending to lock a region that has just been unlocked -
    the local lock manager enforces a strict first-come first-served model
    for both local and remote lockers.
  Sponsored by: Isilon Systems
  PR: 95247 107555 115524 116679
  MFC after: 2 weeks
* [scottl, 2008-03-25, 1 file, -1/+28]
  Implement taskqueue_block() and taskqueue_unblock(). These functions allow
  the owner of a queue to block and unblock execution of the tasks in the
  queue while allowing tasks to continue to be added to the queue. Combining
  this with taskqueue_drain() allows a queue to be safely disabled. The
  unblock function may run (or schedule to run) the queue when it is called,
  just as calling taskqueue_enqueue() would.
  Reviewed by: jhb, sam
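
A small sketch, with hypothetical queue and task names, of the block/drain/unblock pattern the entry above describes.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <sys/priority.h>
    #include <sys/taskqueue.h>

    static struct taskqueue *foo_tq;
    static struct task foo_task;

    static void
    foo_task_fn(void *arg, int pending)
    {
        /* ... deferred work ... */
    }

    static void
    foo_setup(void)
    {
        TASK_INIT(&foo_task, 0, foo_task_fn, NULL);
        foo_tq = taskqueue_create("foo_tq", M_WAITOK,
            taskqueue_thread_enqueue, &foo_tq);
        taskqueue_start_threads(&foo_tq, 1, PWAIT, "foo taskq");
    }

    static void
    foo_quiesce(void)
    {
        taskqueue_block(foo_tq);            /* new tasks queue up but do not run */
        taskqueue_drain(foo_tq, &foo_task); /* wait for any running instance */
    }

    static void
    foo_resume(void)
    {
        taskqueue_unblock(foo_tq);          /* may run whatever queued meanwhile */
    }
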
* [ru, 2008-03-25, 4 files, -52/+28]
  Replaced the misleading uses of a historical artefact M_TRYWAIT with
  M_WAIT. Removed dead code that assumed that M_TRYWAIT can return NULL;
  it's not true since the advent of MBUMA.
  Reviewed by: arch
  There are ongoing disputes as to whether we want to switch to directly
  using UMA flags M_WAITOK/M_NOWAIT for mbuf(9) allocation.
* [ru, 2008-03-25, 3 files, -7/+6]
  Regen after changing prototypes of cpuset_{get,set}affinity().
* [ru, 2008-03-25, 3 files, -16/+15]
  Fixed type of the fourth argument of cpuset_{get,set}affinity(2) to be
  size_t.
  Prodded by: davidxu
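
A userland sketch of the calls as they look with the size_t fourth argument, passed here as sizeof(mask); restricting the current process to CPU 0 is only an example.

    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        cpuset_t mask;
        int i;

        /* Fetch the affinity mask of the current process (id == -1). */
        if (cpuset_getaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
            sizeof(mask), &mask) == -1)
            err(1, "cpuset_getaffinity");

        for (i = 0; i < CPU_SETSIZE; i++)
            if (CPU_ISSET(i, &mask))
                printf("may run on CPU %d\n", i);

        /* Restrict the current process to CPU 0 only. */
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
            sizeof(mask), &mask) == -1)
            err(1, "cpuset_setaffinity");

        return (0);
    }
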
* [jeff, 2008-03-24, 1 file, -32/+18]
  - Greatly simplify vget() by removing the guarantee that any new references
    to a vnode with VI_OWEINACT set will force the vinactive() call. The
    kernel makes no guarantees about which reference was the last to close a
    file or when the actual inactive processing will happen. The previous
    code was designed to preserve existing semantics in the face of shared
    locks, however, this was unnecessary.
  Discussed with: mckusick
* [jeff, 2008-03-24, 1 file, -19/+13]
  - Don't acquire the vnode interlock in _vn_lock() unless no lock type is
    requested. Handle this case specially before the while loop.
  - Use the held vnode lock to check for VI_DOOMED. The vnode lock and
    interlock must both be held to set VI_DOOMED so either one held, even
    shared, is sufficient to check it.
  No objection by: kib
* [kib, 2008-03-23, 1 file, -0/+6]
  Yield the cpu in the kernel while iterating the list of the vnodes
  belonging to the mountpoint. Also, yield when in the
  softdep_process_worklist() even when we are not going to sleep due to
  buffer drain.
  It is believed that the ULE fixed the problem [1], but the yielding seems
  to be needed at least for the 4BSD case.
  Discussed: on stable@, with bde
  Reviewed by: tegge, jeff [1]
  MFC after: 2 weeks
* [davidxu, 2008-03-23, 1 file, -2/+1]
  Remove commented out code, thread suspension is done in thread library.
* [jeff, 2008-03-23, 1 file, -1/+1]
  - Only return 1 from sync_vnode() in cases where the vnode is still at the
    head of the sync list. This prevents sched_sync() from re-queueing a
    vnode which may have been freed already.
  Discussed with: kib
* [jeff, 2008-03-23, 1 file, -1/+1]
  - Pass BO_MTX(bo) to lockmgr in vtruncbuf, we don't own the vnode interlock
    here anymore.
  Reported by: kris
* [phk, 2008-03-22, 1 file, -6/+8]
  In abort2(2): Accept a NULL arg pointer if nargs == 0
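
A tiny userland illustration of the case this change permits, given the documented prototype void abort2(const char *why, int nargs, void **args): with nargs == 0 the args pointer may now be NULL.

    #include <stdlib.h>

    int
    main(void)
    {
        /* No arguments to report, so NULL is now accepted for args. */
        abort2("internal invariant violated", 0, NULL);
        /* not reached */
    }
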
* [jeff, 2008-03-22, 4 files, -57/+64]
  - Complete part of the unfinished bufobj work by consistently using
    BO_LOCK/UNLOCK/MTX when manipulating the bufobj.
  - Create a new lock in the bufobj to lock bufobj fields independently.
    This leaves the vnode interlock as an 'identity' lock while the bufobj
    is an io lock. The bufobj lock is ordered before the vnode interlock and
    also before the mnt ilock.
  - Exploit this new lock order to simplify softdep_check_suspend().
  - A few sync related functions are marked with a new XXX to note that we
    may not properly interlock against a non-zero bv_cnt when attempting to
    sync all vnodes on a mountlist. I do not believe this race is important.
    If I'm wrong this will make these locations easier to find.
  Reviewed by: kib (earlier diff)
  Tested by: kris, pho (earlier diff)
* [alfred, 2008-03-22, 1 file, -4/+19]
  Fix a race where timeout/untimeout could cause crashes for Giant locked
  code.
  The bug:
  There exists a race condition for timeout/untimeout(9) due to the way that
  the softclock thread dequeues timeouts.
  The softclock thread sets the c_func and c_arg of the callout to NULL
  while holding the callout lock but not Giant. It then drops the callout
  lock and acquires Giant.
  It is at this point where untimeout(9) on another cpu/thread could be
  called. Since c_arg and c_func are cleared, untimeout(9) does not touch
  the callout and returns as if the callout is canceled. The softclock then
  tries to acquire Giant and likely blocks due to the other cpu/thread
  holding it. The other cpu/thread then likely deallocates the backing store
  that c_arg points to and finishes working and hence drops Giant. Softclock
  resumes and acquires giant and calls the function with the now free'd
  c_arg and we have corruption/crash.
  The fix:
  We need to track curr_callout even for timeout(9) (LOCAL_ALLOC) callouts.
  We need to free the callout after the softclock processes it to deal with
  the race here.
  Obtained from: Juniper Networks, iedowse
  Reviewed by: jhb, iedowse
  MFC After: 2 weeks.
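
For reference, a sketch of the timeout(9)/untimeout(9) KPI that this race affects, with hypothetical names; the handle returned by timeout() is the one passed back to untimeout().

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout_handle foo_handle;
    static void *foo_softc;

    static void
    foo_expire(void *arg)
    {
        /* Runs from softclock; Giant-locked code relies on softclock
           taking Giant before calling here. */
    }

    static void
    foo_arm(void)
    {
        /* Schedule foo_expire() to run in roughly one second. */
        foo_handle = timeout(foo_expire, foo_softc, hz);
    }

    static void
    foo_disarm(void)
    {
        /* Cancellation; this is the path that raced with softclock. */
        untimeout(foo_expire, foo_softc, foo_handle);
    }
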
* [kib, 2008-03-21, 1 file, -12/+10]
  Reduce contention on the vnode interlock by not acquiring the BO_LOCK
  around the check for the BV_BKGRDINPROG in the brelse() and bqrelse().
  See the comment for the explanation why it is safe.
  Tested by: pho
  Submitted by: jeff
* [jeff, 2008-03-21, 1 file, -30/+34]
  - Reduce contention on the global bdonelock and bpinlock by using a pool
    mutex to protect these sleep/wakeup/counter races. This still is
    preferable to bloating each bio with a mtx.
* [jeff, 2008-03-21, 5 files, -19/+18]
  - Add a new td flag TDF_NEEDSUSPCHK that is set whenever a thread needs to
    enter thread_suspend_check().
  - Set TDF_ASTPENDING along with TDF_NEEDSUSPCHK so we can move the
    thread_suspend_check() to ast() rather than userret().
  - Check TDF_NEEDSUSPCHK in the sleepq_catch_signals() optimization so that
    we don't miss a suspend request. If this is set use the expensive signal
    path.
  - Set NEEDSUSPCHK when creating a new thread in thr in case the creating
    thread is due to be suspended as well but has not yet.
  Reviewed by: davidxu (Authored original patch)
* [jhb, 2008-03-20, 2 files, -0/+48]
  Implement a BUS_BIND_INTR() method in the bus interface to bind an IRQ
  resource to a CPU. The default method is to pass the request up to the
  parent similar to BUS_CONFIG_INTR() so that all busses don't have to
  explicitly implement bus_bind_intr. A bus_bind_intr(9) wrapper routine
  similar to bus_setup/teardown_intr() is added for device drivers to use.
  Unbinding an interrupt is done by binding it to NOCPU. The IRQ resource
  must be allocated, but it can happen in any order with respect to
  bus_setup_intr(). Currently it is only supported on amd64 and i386 via
  nexus(4) methods that simply call the intr_bind() routine.
  Tested by: gallatin
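
A hedged driver-side sketch of the new wrapper: allocate and set up the IRQ as usual, then bind it to a CPU. The softc layout, interrupt type, and the choice of CPU 1 are illustrative assumptions.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/bus.h>
    #include <sys/rman.h>
    #include <machine/bus.h>
    #include <machine/resource.h>

    struct foo_softc {
        struct resource *irq_res;
        void            *irq_cookie;
        int              irq_rid;
    };

    static void
    foo_intr(void *arg)
    {
        /* ... interrupt work ... */
    }

    static int
    foo_setup_intr(device_t dev, struct foo_softc *sc)
    {
        int error;

        sc->irq_rid = 0;
        sc->irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &sc->irq_rid,
            RF_ACTIVE | RF_SHAREABLE);
        if (sc->irq_res == NULL)
            return (ENXIO);

        error = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_MISC | INTR_MPSAFE,
            NULL, foo_intr, sc, &sc->irq_cookie);
        if (error != 0)
            return (error);

        /* Bind the IRQ to CPU 1; binding to NOCPU would undo it. */
        error = bus_bind_intr(dev, sc->irq_res, 1);
        if (error != 0)
            device_printf(dev, "could not bind interrupt: %d\n", error);
        return (0);
    }
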