path: root/sys/kern/kern_sig.c
Commit history, most recent first. Each entry ends with [author, date, files changed, lines -removed/+added].
* - Proc locking. Most of signal handling is now MP safe and doesn't
    require Giant. The only exception is the CANSIGNAL() macro. Unlocking
    the proc lock around sendsig() in trapsignal() is also questionable.
    Note that the functions sigexit(), psignal(), and issignal() must be
    called with the proc lock of the process in question held. postsig()
    and trapsignal() should not be called with the proc lock held, but
    they no longer require Giant either.
  - Remove spl's that are no longer needed, as they have been fully
    replaced.
  [jhb, 2001-03-07, 1 file, -70/+162]
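
  For illustration, a minimal sketch of the locking protocol this entry
  describes, using the standard PROC_LOCK()/PROC_UNLOCK() macros (the
  helper function kill_one() is hypothetical):

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/signalvar.h>

      /* Sketch: psignal() must be entered with the target's proc lock held. */
      static void
      kill_one(struct proc *p, int sig)
      {
              PROC_LOCK(p);
              psignal(p, sig);        /* proc lock held, per the rule above */
              PROC_UNLOCK(p);
      }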
* Fixed a longstanding latency bug in signal delivery. When a signal is
  sent to a process, psignal() needs to schedule an AST for the process
  if the process is runnable, not just if it is current, so that pending
  signals get checked for on the next return of the process to user mode.
  This wasn't practical until recently because the AST flag was per-cpu,
  so setting it for a non-current process would usually just cause a
  bogus AST for the current process. For non-current processes looping
  in user mode, it took accidental (?) magic to deliver signals at all.
  Signals were usually delivered late as a side effect of rescheduling
  (need_resched() sets astpending, etc.). In pre-SMPng, delivery was
  delayed by at most 1 quantum (the need_resched() call in roundrobin()
  is certain to occur within 1 quantum for looping processes). In
  -current, things are complicated by normal interrupt handlers being
  threads. Missing handling of the complications makes roundrobin() a
  bogus no-op, but preemptive scheduling sort of works anyway due to
  even larger bogons elsewhere.
  [bde, 2001-02-19, 1 file, -6/+2]
* Implement a unified run queue and adjust priority levels accordingly.
  - All processes go into the same array of queues, with different
    scheduling classes using different portions of the array. This
    allows user processes to have their priorities propagated up into
    interrupt thread range if need be.
  - I chose 64 run queues as an arbitrary number that is greater than
    32. We used to have 4 separate arrays of 32 queues each, so this may
    not be optimal. The new run queue code was written with this in
    mind; changing the number of run queues only requires changing
    constants in runq.h and adjusting the priority levels.
  - The new run queue code takes the run queue as a parameter. This is
    intended to be used to create per-cpu run queues. Implement wrappers
    for compatibility with the old interface which pass in the global
    run queue structure.
  - Group the priority level, user priority, native priority (before
    propagation) and the scheduling class into a struct priority.
  - Change any hard-coded priority levels that I found to use symbolic
    constants (TTIPRI and TTOPRI).
  - Remove the curpriority global variable and use that of curproc. This
    was used to detect when a process' priority had lowered and it
    should yield. We now effectively yield on every interrupt.
  - Activate propagate_priority(). It should now have the desired effect
    without needing to also propagate the scheduling class.
  - Temporarily comment out the call to vm_page_zero_idle() in the idle
    loop. It interfered with propagate_priority() because the idle
    process needed to do a non-blocking acquire of Giant and then other
    processes would try to propagate their priority onto it. The idle
    process should not do anything except idle. vm_page_zero_idle() will
    return in the form of an idle-priority kernel thread which is woken
    up at appropriate times by the vm system.
  - Update struct kinfo_proc to the new priority interface. Deliberately
    change its size by adjusting the spare fields: it would otherwise
    have remained the same size with a changed layout, and userland
    processes that use it would have parsed the data incorrectly. The
    size constraint should really be changed to an arbitrary version
    number. Also add a debug.sizeof sysctl node for struct kinfo_proc.
  [jake, 2001-02-12, 1 file, -2/+2]
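
  As a point of reference, the struct priority grouping described in the
  fourth bullet looks roughly like this (a sketch; the exact field names
  are assumptions):

      /* Sketch of the grouping named above; field names assumed. */
      struct priority {
              u_char  pri_class;      /* scheduling class */
              u_char  pri_level;      /* current priority level */
              u_char  pri_native;     /* priority before propagation */
              u_char  pri_user;       /* user priority */
      };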
* - Make astpending and need_resched process attributes rather than CPU
    attributes. This is needed for ASTs to be properly posted in a
    preemptive kernel. They are backed by two new flags in p_sflag:
    PS_ASTPENDING and PS_NEEDRESCHED. They are still accessed by their
    old macros: aston(), astoff(), etc. For completeness, an
    astpending() macro has been added to check for a pending AST, and
    clear_resched() has been added to clear need_resched().
  - Rename syscall2() on the x86 back to syscall() to be consistent with
    other architectures.
  [jhb, 2001-02-10, 1 file, -1/+1]
* Change and clean the mutex lock interface.
  mtx_enter(lock, type) becomes:
    mtx_lock(lock)      for sleep locks (MTX_DEF-initialized locks)
    mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have mtx_unlock(lock) for
  MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN. We change the caller
  interface for the two different types of locks because the semantics
  are entirely different for each case, and this makes it explicitly
  clear and, at the same time, rids us of the extra `type' argument. The
  enter->lock and exit->unlock change has been made with the idea that
  we're "locking data" and not "entering locked code" in mind.
  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they
  can be passed to the lock/unlock routines by calling the corresponding
  wrappers: mtx_{lock, unlock}_flags(lock, flag(s)) and
  mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
  locks, respectively.
  Re-inline some lock acq/rel code; in the sleep lock case, we only
  inline the _obtain_lock()s in order to ensure that the inlined code
  fits into a cache line. In the spin lock case, we inline recursion and
  actually only perform a function call if we need to spin. This change
  has been made with the idea that we generally tend to avoid spin locks
  and that the spin locks that we do have and are heavily used (i.e.
  sched_lock) do recurse; therefore, in an effort to reduce function
  call overhead for some architectures (such as alpha), we inline
  recursion for this case.
  Create a new malloc type for the witness code and retire from using
  the M_DEV type. The new type is called M_WITNESS and is only declared
  if WITNESS is enabled.
  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.
  Finally, caught up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
  [bmilekic, 2001-02-09, 1 file, -24/+24]
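
  In code, the rename amounts to the following (built directly from the
  mapping above):

      /* Old interface: */
      mtx_enter(&sched_lock, MTX_SPIN);
      mtx_exit(&sched_lock, MTX_SPIN);

      /* New interface: */
      mtx_lock_spin(&sched_lock);
      mtx_unlock_spin(&sched_lock);

      /* The surviving flags go through the explicit wrappers: */
      mtx_lock_flags(&Giant, MTX_QUIET);
      mtx_unlock_flags(&Giant, MTX_QUIET);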
* - Proc locking.
  - Catch up to proc flag changes.
  [jhb, 2001-01-24, 1 file, -13/+20]
* Revert revision 1.102. I don't think p_nice needs to be protected with
  sched_lock, and I'm fairly certain P_TRACED will be protected with the
  proc lock instead.
  Pointed out indirectly by: bde
  [jhb, 2001-01-19, 1 file, -2/+0]
* Implement condition variables.
  [jasone, 2001-01-16, 1 file, -2/+7]
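
  A minimal sketch of typical use of the new facility, pairing a
  condition variable with a mutex (all names are hypothetical, and the
  mtx_init()/cv_init() calls are assumed to have run elsewhere):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/condvar.h>

      static struct mtx qlock;        /* assume mtx_init() ran at attach */
      static struct cv qcv;           /* assume cv_init() ran at attach */
      static int qready;

      static void
      consumer(void)
      {
              mtx_lock(&qlock);
              while (qready == 0)
                      cv_wait(&qcv, &qlock);  /* drops qlock while asleep */
              qready = 0;
              mtx_unlock(&qlock);
      }

      static void
      producer(void)
      {
              mtx_lock(&qlock);
              qready = 1;
              cv_signal(&qcv);
              mtx_unlock(&qlock);
      }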
* Protect p_nice and P_TRACED in psignal() above the switch statement
  with sched_lock.
  [jhb, 2001-01-06, 1 file, -0/+2]
* The previous commit wasn't entirely correct. At least one goto to the
  out: label in psignal() did not grab sched_lock before trying to
  release it. Also, the previous version had several cases where it
  grabbed sched_lock before jumping to out: unnecessarily, so rework
  this a bit. The runfast: and out: labels must be entered with
  sched_lock released, and the run: label must be entered with it held.
  Appropriate mtx_assert()'s have been added that should catch any bugs
  that may still be in this code.
  Noticed by: bde
  [jhb, 2001-01-02, 1 file, -18/+22]
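
  The label contracts described above, sketched with mtx_assert() (this
  is the shape of the code inside psignal(), not the actual diff):

      runfast:
              mtx_assert(&sched_lock, MA_NOTOWNED);   /* arrive unlocked */
              mtx_lock_spin(&sched_lock);
              /* FALLTHROUGH: make the process runnable */
      run:
              mtx_assert(&sched_lock, MA_OWNED);      /* arrive locked */
              /* ... */
      out:
              mtx_assert(&sched_lock, MA_NOTOWNED);   /* arrive unlocked */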
* Push down sched_lock in psignal(). sched_lock was being held across
  recursive calls into psignal() as well as calls to signotify(),
  forward_signal(), etc.
  [jhb, 2001-01-01, 1 file, -4/+21]
* Add in a missing release of the proctree lock.
  Submitted by: Sja <sakari.jalovaara@eqonline.fi>
  [jhb, 2001-01-01, 1 file, -0/+1]
* Protect proc.p_pptr and proc.p_children/p_sibling with the
  proctree_lock. linprocfs not locked pending response from informal
  maintainer.
  Reviewed by: jhb, -smp@
  [jake, 2000-12-23, 1 file, -1/+10]
* Fix a typo that allowed signals caused by traps to be delivered to the
  process when said signal is masked.
  PR: 23457
  Submitted by: Yasuhiko Watanabe <yasu@mrit.mei.co.jp>
  [marcel, 2000-12-16, 1 file, -1/+1]
* - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
    of explicit calls to lockmgr. Also provide macros for the flags
    passed to specify shared, exclusive or release, which map to the
    lockmgr flags. This is so that the use of lockmgr can be easily
    replaced with optimized reader-writer locks. (A sketch follows.)
  - Add some locking that I missed the first time.
  [jake, 2000-12-13, 1 file, -2/+2]
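
  A sketch of how such a macro hides lockmgr; the flag names are
  assumptions, since the entry only says shared, exclusive, or release:

      struct proc *p;

      ALLPROC_LOCK(AP_SHARED);    /* ~ lockmgr(&allproc_lock, LK_SHARED, ...) */
      LIST_FOREACH(p, &allproc, p_list) {
              /* ... walk the process list ... */
      }
      ALLPROC_LOCK(AP_RELEASE);   /* ~ lockmgr(&allproc_lock, LK_RELEASE, ...) */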
* Protect p_stat with sched_lock.
  [jhb, 2000-12-01, 1 file, -1/+7]
* Don't use p->p_sigstk.ss_flags to keep state of whether the process is
  on the alternate stack or not. For compatibility with sigstack(2),
  state is still updated when needed. We now determine whether the
  process is on the alternate stack by looking at its stack pointer.
  This allows a process to siglongjmp from a signal handler on the
  alternate stack to the place of the sigsetjmp on the normal stack. Had
  we kept maintaining state, that would have invalidated the state
  information, causing a subsequent signal to be delivered on the normal
  stack instead of the alternate stack.
  PR: 22286
  [marcel, 2000-11-30, 1 file, -27/+37]
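
  A hedged sketch of the stack-pointer test that replaces the ss_flags
  bookkeeping (the helper name and exact arithmetic are assumptions):

      /* Hypothetical helper: "on the alternate stack" derived from SP. */
      static int
      sig_onstack(struct proc *p, u_long sp)
      {
              return (sp >= (u_long)p->p_sigstk.ss_sp &&
                  sp < (u_long)p->p_sigstk.ss_sp + p->p_sigstk.ss_size);
      }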
* Protect the following with a lockmgr lock:
    allproc
    zombproc
    pidhashtbl
    proc.p_list
    proc.p_hash
    nextpid
  Reviewed by: jhb
  Obtained from: BSD/OS and netbsd
  [jake, 2000-11-22, 1 file, -2/+4]
* - Split the run queue and sleep queue linkage, so that a process may
    block on a mutex while on the sleep queue without corrupting it.
  - Move dropping of Giant to after the acquire of sched_lock.
  Tested by: John Hay <jhay@icomtek.csir.co.za>, jhb
  [jake, 2000-11-17, 1 file, -2/+2]
* Don't release and acquire Giant in mi_switch(). Instead, release and
  acquire Giant as needed in functions that call mi_switch(). The
  releases need to be done outside of the sched_lock to avoid potential
  deadlocks from trying to acquire Giant while interrupts are disabled.
  Submitted by: witness
  [jhb, 2000-11-16, 1 file, -0/+8]
* Make MINSIGSTKSZ machine dependent, and have the sigaltstack syscall
  compare against a variable sv_minsigstksz in struct sysentvec so as to
  properly take the size of the machine- and ABI-dependent struct
  sigframe into account. The SVR4 and iBCS2 modules continue to have a
  minsigstksz of 8192 to preserve behavior. The real values (if
  different) are not known at this time. Other ABI modules use the real
  values. The native MINSIGSTKSZ is now defined as follows:

    Arch    MINSIGSTKSZ
    -----   -----------
    alpha          4096
    i386           2048
    ia64          12288

  Reviewed by: mjacob
  Suggested by: bde
  [marcel, 2000-11-09, 1 file, -1/+1]
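
  The check this implies inside the sigaltstack syscall, as a sketch
  (exact placement and error handling assumed):

      /* Reject alternate stacks smaller than the ABI's minimum. */
      if (ss.ss_size < p->p_sysent->sv_minsigstksz)
              return (ENOMEM);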
* Catch up to moving headers:
  - machine/ipl.h -> sys/ipl.h
  - machine/mutex.h -> sys/mutex.h
  [jhb, 2000-10-20, 1 file, -3/+2]
* Unpessimized CURSIG(). The fast path through CURSIG() was broken in
  the 128-bit sigset_t changes by moving conditionally (rarely) executed
  code to the beginning where it is always executed, and since this code
  now involves 3 128-bit operations, the pessimization was relatively
  large. This change speeds up lmbench's pipe latency benchmark by 3.5%.
  Fixed style bugs in CURSIG().
  [bde, 2000-09-17, 1 file, -6/+6]
* Uninlined CURSIG() and unpolluted <sys/signalvar.h>. CURSIG() had
  become very bloated, first with 128-bit sigset_t's, then with locking
  in the SMP case, then with locking in all cases. The space bloat was
  probably also time bloat, partly because the fast path through
  CURSIG() was pessimized by the sigset_t changes. This change speeds up
  lmbench's pipe-based latency benchmark by 4% on a Celeron.
  <sys/signalvar.h> had become very polluted to support the bloat.
  [bde, 2000-09-17, 1 file, -0/+26]
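
  The fast path in question is essentially the "nothing pending" test; a
  hedged sketch using the sigset_t macros (the real function does more):

      /* Sketch of the CURSIG() fast path: no pending signals, no work. */
      if (SIGISEMPTY(p->p_siglist))
              return (0);
      /* Slow path: lock, apply the mask, find a deliverable signal... */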
* Move the include of <sys/systm.h> so that KTR gets a declaration for
  snprintf().
  [dfr, 2000-09-10, 1 file, -1/+1]
* Major update to the way synchronization is done in the kernel.
  Highlights include:
  * Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The
    alpha port is still in transition and currently uses both.)
  * Per-CPU idle processes.
  * Interrupts are run in their own separate kernel threads and can be
    preempted (i386 only).
  Partially contributed by: BSDi (BSD/OS)
  Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
  [jasone, 2000-09-07, 1 file, -0/+3]
* o Centralize inter-process access control, introducing:
      int p_can(p1, p2, operation, privused)
    which allows specification of subject process, object process,
    inter-process operation, and an optional call-by-reference privused
    flag, allowing the caller to determine if privilege was required for
    the call to succeed. This allows jail, kern.ps_showallprocs and
    regular credential-based interaction checks to occur in one block of
    code. Possible operations are P_CAN_SEE, P_CAN_SCHED, P_CAN_KILL,
    and P_CAN_DEBUG. p_can currently breaks out as a wrapper to a series
    of static function checks in kern_prot, which should not be invoked
    directly.
  o Commented-out capabilities entries are included for some checks.
  o Update most inter-process authorization to make use of p_can()
    instead of manual checks, PRISON_CHECK(), P_TRESPASS(), and
    kern.ps_showallprocs.
  o Modify suser{,_xxx} to use const arguments, as it no longer modifies
    process flags due to the disabling of ASU.
  o Modify some checks/errors in procfs so that ENOENT is returned
    instead of ESRCH, further improving concealment of processes that
    should not be visible to other processes. Also introduce new access
    checks to improve hiding of processes for procfs_lookup(),
    procfs_getattr(), procfs_readdir(). Correct a bug reported by bp
    concerning not handling the CREATE case in procfs_lookup(). Remove a
    volatile flag in procfs that caused apparently spurious qualifier
    warnings (approved by bde).
  o Add a comment noting that ktrace() has not been updated, as its
    access control checks are different from ptrace(), whereas they
    should probably be the same. Further discussion should happen on
    this topic.
  Reviewed by: bde, green, phk, freebsd-security, others
  Approved by: bde
  Obtained from: TrustedBSD Project
  [rwatson, 2000-08-30, 1 file, -1/+1]
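
  A sketch of converting an open-coded check to the interface above:

      int error, privused;

      /* May p1 signal p2?  Replaces open-coded uid/jail comparisons. */
      error = p_can(p1, p2, P_CAN_KILL, NULL);
      if (error)
              return (error);

      /* The same check, but also learn whether privilege was required. */
      error = p_can(p1, p2, P_CAN_DEBUG, &privused);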
* Make this file compile again when COMPAT_43 has not been defined. This
  boils down to conditionally compiling the old signal syscalls. We
  might want to extend the types in syscalls.master to make these
  syscalls conditional on something more appropriate than COMPAT_43.
  [marcel, 2000-08-26, 1 file, -1/+8]
* Add snapshots to the fast filesystem. Most of the changes support the
  gating of system calls that cause modifications to the underlying
  filesystem. The gating can be enabled by any filesystem that needs to
  consistently suspend operations by adding vop_stdgetwritemount to its
  set of vnops. Once gating is enabled, the function vfs_write_suspend
  stops all new write operations to a filesystem, allows any
  filesystem-modifying system calls already in progress to complete,
  then syncs the filesystem to disk and returns. The function
  vfs_write_resume allows the suspended write operations to begin again.
  Gating is not added by default for all filesystems, as for SMP systems
  it adds two extra locks to such critical kernel paths as the write
  system call. Thus, gating should only be added as needed. Details on
  the use and current status of snapshots in FFS can be found in
  /sys/ufs/ffs/README.snapshot, so for brevity and timeliness they are
  not included here. Unless and until you create a snapshot file, these
  changes should have no effect on your system (famous last words).
  [mckusick, 2000-07-11, 1 file, -0/+11]
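
  The suspend/resume pairing described above, as a minimal sketch (the
  snapshot step in the middle is hypothetical):

      /* Gate writers around a consistency-critical operation. */
      vfs_write_suspend(mp);          /* block new writes, drain, sync */
      error = ffs_take_snapshot(mp);  /* hypothetical snapshot step */
      vfs_write_resume(mp);           /* let suspended writes proceed */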
* Move the truncation code out of vn_open and into the open system call,
  after the acquisition of any advisory locks. This fix corrects a case
  in which a process tries to open a file with a non-blocking exclusive
  lock: previously, even if it failed to get the lock, it would still
  truncate the file even though its open failed. With this change, the
  truncation is done only after the lock is successfully acquired.
  Obtained from: BSD/OS
  [mckusick, 2000-07-04, 1 file, -2/+3]
* Back out the previous change to the queue(3) interface. It was not
  discussed and should probably not happen.
  Requested by: msmith and others
  [jake, 2000-05-26, 1 file, -1/+1]
* Change the way that the queue(3) structures are declared; don't assume
  that the type argument to *_HEAD and *_ENTRY is a struct. (See the
  sketch below; this change was backed out in the entry above.)
  Suggested by: phk
  Reviewed by: phk
  Approved by: mdodd
  [jake, 2000-05-23, 1 file, -1/+1]
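
  What the change meant in practice (a sketch; proclist is just an
  example tag):

      /* Old convention: the macros paste "struct" in front of the tag. */
      LIST_HEAD(proclist, proc);              /* a list of struct proc */

      /* This change passed the type argument through verbatim instead: */
      LIST_HEAD(proclist, struct proc);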
* Remove unneeded #include <vm/vm_zone.h>
  Generated by: src/tools/tools/kerninclude
  [phk, 2000-04-30, 1 file, -1/+0]
* Introduce kqueue() and kevent(), a kernel event notification facility.
  [jlemon, 2000-04-16, 1 file, -0/+51]
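
  A minimal userland sketch of the new interface: register interest in a
  descriptor and block until it fires, in one combined call (the helper
  name is hypothetical):

      #include <sys/types.h>
      #include <sys/event.h>
      #include <sys/time.h>
      #include <err.h>

      /* Block until fd is readable. */
      static void
      wait_readable(int fd)
      {
              struct kevent change, result;
              int kq;

              if ((kq = kqueue()) < 0)
                      err(1, "kqueue");
              EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
              if (kevent(kq, &change, 1, &result, 1, NULL) < 0)
                      err(1, "kevent");
      }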
* Make the sigprocmask() and geteuid() system calls MP SAFE. Expand
  commentary for copyin/copyout to indicate that they are MP SAFE as
  well.
  Reviewed by: msmith
  [dillon, 2000-04-02, 1 file, -6/+13]
* The SMP cleanup commit broke UP compiles. Make UP compiles work again.
  [dillon, 2000-03-28, 1 file, -2/+0]
* Commit major SMP cleanups and move the BGL (big giant lock) in the
  syscall path inward. A system call may select whether it needs the MP
  lock or not (the default being that it does need it).
  A great deal of conditional SMP code for various dead-ended
  experiments has been removed. 'cil' and 'cml' have been removed
  entirely, and the locking around the cpl has been removed. The
  conditional separately-locked fast-interrupt code has been removed,
  meaning that interrupts must hold the CPL now (but they pretty much
  had to anyway). Another reason for doing this is that the original
  separate lock for interrupts just doesn't apply to the interrupt
  thread mechanism being contemplated.
  Modifications to the cpl may now ONLY occur while holding the MP lock.
  For example, if an otherwise MP-safe syscall needs to mess with the
  cpl, it must hold the MP lock for the duration and must (as usual)
  save/restore the cpl in a nested fashion.
  This is precursor work for the real meat coming later: avoiding having
  to hold the MP lock for common syscalls and I/O's and interrupt
  threads. It is expected that the spl mechanisms and new interrupt
  threading mechanisms will be able to run in tandem, allowing a slow
  piecemeal transition to occur.
  This patch should result in a moderate performance improvement due to
  the considerable amount of code that has been removed from the
  critical path, especially the simplification of the spl*() calls. The
  real performance gains will come later.
  Approved by: jkh
  Reviewed by: current, bde (exception.s)
  Some work taken from: luoqi's patch
  [dillon, 2000-03-28, 1 file, -0/+1]
* Add sysctl kern.coredump to enable/disable core dumps system wide.
  [ps, 2000-03-21, 1 file, -1/+5]
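
  A sketch of how such a knob is typically wired up (the variable name
  and description string are assumptions):

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/sysctl.h>

      static int do_coredump = 1;     /* assumed variable name */
      SYSCTL_INT(_kern, OID_AUTO, coredump, CTLFLAG_RW,
          &do_coredump, 0, "Enable/disable system-wide core dumps");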
* Introduce NDFREE (and remove VOP_ABORTOP)
  [eivind, 1999-12-15, 1 file, -0/+3]
* Introduce the new function
      p_trespass(struct proc *p1, struct proc *p2)
  which returns zero or an errno depending on the legality of p1
  trespassing on p2. Replace kern_sig.c:CANSIGNAL() with a call to
  p_trespass() and one extra signal-related check. Replace
  procfs.h:CHECKIO() macros with calls to p_trespass(). Only show
  command lines to processes which can trespass on the target process.
  [phk, 1999-11-21, 1 file, -12/+6]
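
  A hedged sketch of the reshaped CANSIGNAL(); the extra signal-related
  check shown here (SIGCONT within the same session) is an assumption,
  as the entry does not spell it out:

      #define CANSIGNAL(p1, p2, sig) \
              (p_trespass((p1), (p2)) == 0 || \
               ((sig) == SIGCONT && (p1)->p_session == (p2)->p_session))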
* s/p_cred->pc_ucred/p_ucred/g
  [phk, 1999-11-21, 1 file, -1/+1]
* This is a partial commit of the patch from PR 14914:
  A lot of the code in sys/kern directly accesses the *Q_HEAD and
  *Q_ENTRY structures for list operations. This patch makes all list
  operations in sys/kern use the queue(3) macros, rather than directly
  accessing the *Q_{HEAD,ENTRY} structures. This batch of changes
  compiles to the same object files.
  Reviewed by: phk
  Submitted by: Jake Burkholder <jake@checker.org>
  PR: 14914
  [phk, 1999-11-16, 1 file, -7/+4]
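
  The flavor of the conversion (a sketch; the allproc list is just an
  example and check() is a placeholder):

      struct proc *p;

      /* Before: direct access to the queue internals. */
      for (p = allproc.lh_first; p != NULL; p = p->p_list.le_next)
              check(p);

      /* After: the queue(3) macros. */
      LIST_FOREACH(p, &allproc, p_list)
              check(p);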
* Bail out of the process early if the coredump file limit is 0.
  PR: kern/14540
  Reviewed by: Nate Williams
  [sef, 1999-10-30, 1 file, -6/+10]
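
  A sketch of the early bail-out (its placement inside the core dump
  path is assumed):

      /* Nothing to write if the core file size limit is zero. */
      if (p->p_rlimit[RLIMIT_CORE].rlim_cur == 0)
              return (EFBIG);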
* Don't let osigaction and osigvec accept the new signal numbers. Fix
  style bugs caused by the sigset_t in general while I'm here.
  Submitted by: bde
  [marcel, 1999-10-12, 1 file, -47/+46]
* Add a per-signal flag to mark handlers registered with osigaction, so
  we can provide the correct context to each signal handler.
  Fix broken sigsuspend(): don't use p_oldsigmask as a flag, use
  SAS_OLDMASK as we did before the linuxthreads support merge (submitted
  by bde).
  Move ps_sigstk from p_sigacts to the main proc structure since the
  signal stack should not be shared among threads.
  Move SAS_OLDMASK and SAS_ALTSTACK flags from sigacts::ps_flags to
  proc::p_flag.
  Move PS_NOCLDSTOP and PS_NOCLDWAIT flags from proc::p_flag to
  procsig::ps_flag.
  Reviewed by: marcel, jdp, bde
  [luoqi, 1999-10-11, 1 file, -63/+67]
* sigset_t change (part 2 of 5)
  -----------------------------
  The core of the signalling code has been rewritten to operate on the
  new sigset_t. No methodological changes have been made. Most
  references to a sigset_t object are through macros (see signalvar.h)
  to create a level of abstraction and to provide a basis for further
  improvements.
  The NSIG constant has not been changed to reflect the maximum number
  of signals possible. The reason is that it breaks programs (especially
  shells) which assume that all signals have a non-null name in
  sys_signame. See src/bin/sh/trap.c for an example. Instead,
  _SIG_MAXSIG has been introduced to hold the maximum signal possible
  with the new sigset_t.
  struct sigprop has been moved from signalvar.h to kern_sig.c because
  a) it is only used there, and b) access must be done through the
  function sigprop(). The latter because the table doesn't hold
  properties for all signals, but only for the first NSIG signals.
  signal.h has been reorganized to make reading easier and to add the
  new and/or modified structures. The "old" structures are moved to
  signalvar.h to prevent namespace pollution.
  Especially the coda filesystem suffers from the change, because it
  contained lines like (p->p_sigmask == SIGIO), which is easy to do for
  integral types, but not for compound types.
  NOTE: kdump (and port linux_kdump) must be recompiled.
  Thanks to Garrett Wollman and Daniel Eischen for pressing the
  importance of changing sigreturn as well.
  [marcel, 1999-09-29, 1 file, -340/+553]
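
  The macro-based idiom this introduces, sketched against the coda
  example above (the membership test is illustrative, not a drop-in
  equivalent of the old equality comparison; handle_sigio() is a
  placeholder):

      /* Old integral-type idioms no longer compile with 128-bit sets:
         if (p->p_sigmask == SIGIO) ... */

      /* They become macro operations from <sys/signalvar.h>: */
      if (SIGISMEMBER(p->p_sigmask, SIGIO))
              handle_sigio();
      SIGADDSET(p->p_sigmask, SIGIO);
      SIGDELSET(p->p_sigmask, SIGIO);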
* Make prototype match function.
  [sef, 1999-09-01, 1 file, -1/+1]
* General cleanup of core-dumping code.
  Submitted by: Sean Fagan
  [julian, 1999-09-01, 1 file, -3/+69]
* $Id$ -> $FreeBSD$
  [peter, 1999-08-28, 1 file, -1/+1]
* Fix a mistake in my last SA_SIGINFO commit. Processes could block
  SIGKILL and SIGSTOP.
  PR: kern/13293
  Submitted by: dwmalone@maths.tcd.ie
  Obtained from: PR had correct fix
  [cracauer, 1999-08-23, 1 file, -2/+2]