path: root/sys/kern/kern_sig.c
Commit message  (Author, Date, Files, Lines changed)
* o Wording fix in comment.  (rwatson, 2001-12-14, 1 file, -1/+1)
  Submitted by: tanimura via p4
* _SIG_MAXSIG (128) is the highest legal signal. The arrays are offset  (peter, 2001-11-03, 1 file, -2/+2)
  by one - see _SIG_IDX(). Revert part of my mis-correction in kern_sig.c
  (but signal 0 still has to be allowed) and fix _SIG_VALID() (it was
  rejecting signal 128).
* Partial reversion of rev 1.138. kill and killpg allow a signal  (peter, 2001-11-03, 1 file, -2/+2)
  argument of 0. You cannot return EINVAL for signal 0. This broke (in 5
  minutes of testing) at least ssh-agent and screen. However, there was a
  bug in the original code. Signal 128 is not valid.
  Pointy-hat to: des, jhb
* We have a _SIG_VALID() macro, so use it instead of duplicating the test all  (des, 2001-11-02, 1 file, -7/+5)
  over the place. Also replace a printf() + panic() with a KASSERT().
  Reviewed by: jhb
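  The range rules spelled out in the three entries above can be summarized
  in a short, illustrative sketch (the real macros live in sys/signal.h and
  sys/signalvar.h and may differ in detail; kill()'s errno check is shown
  separately because signal 0 must stay legal there):

      /* Illustrative sketch only, not the actual headers. */
      #define _SIG_MAXSIG     128                     /* highest legal signal */
      #define _SIG_IDX(sig)   ((sig) - 1)             /* arrays are offset by one */
      #define _SIG_VALID(sig) ((sig) <= _SIG_MAXSIG && (sig) > 0)

      static int
      example_kill_check(int signum)
      {
              /*
               * kill()/killpg() accept 0 as an existence/permission probe,
               * so their EINVAL test cannot simply use _SIG_VALID().
               */
              if ((u_int)signum > _SIG_MAXSIG)
                      return (EINVAL);
              return (0);
      }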
* Fix a typo in do_sigaction() where sa_sigaction and sa_handler were  (iedowse, 2001-10-07, 1 file, -3/+3)
  confused. Since sa_sigaction and sa_handler alias each other in a union,
  the bug was completely harmless. This had been fixed as part of the
  SIGCHLD changes in revision 1.125, but it was reverted when they were
  backed out in revision 1.126.
* Lock the vnode while truncating the corefile. This fixes a panic  (ps, 2001-09-26, 1 file, -0/+2)
  with softupdates dangling deps.
  Submitted by: peter
  MFC: ASAP :)
* Replace line accidentally deleted during KSE additions.  (julian, 2001-09-17, 1 file, -1/+1)
  Symptom: a stopped program could not be restarted if it was stopped
  while already sleeping.
* o Correct authorization check in CANSIGIO(), which suffered from incorrect  (rwatson, 2001-09-15, 1 file, -4/+5)
  transcription during the (pcred,ucred) merge; this was not used for the
  kill() system call, so does not affect direct explicit process
  signalling.
  Pointed out by: fenner
* KSE Milestone 2  (julian, 2001-09-12, 1 file, -78/+166)
  Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there
  are smaller units of scheduling than the process (but only allow one
  thread per process at this time). This is functionally equivalent to the
  previous -current except that there is a thread associated with each
  process. Sorry john! (your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
* This brings in a Yahoo coredump patch from Paul, with additional mods by  (dillon, 2001-09-08, 1 file, -4/+19)
  me (addition of vn_rdwr_inchunks). The problem Yahoo is solving is that
  if you have large process images core dumping, or you have a large
  number of forked processes all core dumping at the same time, the
  original coredump code would leave the vnode locked throughout. This can
  cause the directory vnode to get locked up, which can cause the parent
  directory vnode to get locked up, and so on all the way to the root
  node, locking the entire machine up for extremely long periods of time.
  This patch solves the problem in two ways. First it uses an advisory
  non-blocking lock to abort multiple processes trying to core to the same
  file. Second (my contribution) it chunks up the writes and uses
  bwillwrite() to avoid holding the vnode locked while blocking in the
  buffer cache.
  Submitted by: ps
  Reviewed by: dillon
  MFC after: 2 weeks
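  The chunking idea reads roughly as follows: instead of one huge write
  issued with the vnode locked, loop over fixed-size pieces and call
  bwillwrite() before each piece so any blocking in the buffer cache
  happens without the vnode held. A simplified sketch (the real
  vn_rdwr_inchunks() takes the full vn_rdwr() argument list; the chunk
  size and the helper below are made up for illustration):

      #define CORE_CHUNK_SIZE (8 * 1024 * 1024)       /* assumed chunk size */

      static int
      example_core_write(struct vnode *vp, char *base, size_t len, off_t off)
      {
              size_t chunk;
              int error = 0;

              while (len > 0 && error == 0) {
                      chunk = len > CORE_CHUNK_SIZE ? CORE_CHUNK_SIZE : len;
                      bwillwrite();   /* throttle before, not while, holding vp */
                      /* hypothetical stand-in for one vn_rdwr() call */
                      error = example_write_chunk(vp, base, chunk, off);
                      base += chunk;
                      off += chunk;
                      len -= chunk;
              }
              return (error);
      }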
* Call sendsig() with the proc lock held and return with it held.  (jhb, 2001-09-06, 1 file, -4/+0)
* Giant Pushdown  (dillon, 2001-09-01, 1 file, -37/+133)
  clock_gettime() clock_settime() nanosleep() settimeofday() adjtime()
  getitimer() setitimer() __sysctl() ogetkerninfo() sigaction()
  osigaction() sigpending() osigpending() osigvec() osigblock()
  osigsetmask() sigsuspend() osigsuspend() osigstack() sigaltstack()
  kill() okillpg() trapsignal() nosys()
* Remove the MPSAFE keyword from the parser for syscalls.master.  (dillon, 2001-08-30, 1 file, -0/+4)
  Instead introduce the [M] prefix to existing keywords, e.g. MSTD is the
  MP SAFE version of STD. This is preparatory for a massive Giant lock
  pushdown. The old MPSAFE keyword made syscalls.master too messy.
  Begin commenting MP-safe procedures with the comment:
      /*
       * MPSAFE
       */
  This comment means that the procedure may be called without Giant held
  (the procedure itself may still need to obtain Giant temporarily to do
  its thing).
  sv_prepsyscall() is now MP SAFE and assumed to be MP SAFE.
  sv_transtrap() is now MP SAFE and assumed to be MP SAFE.
  ktrsyscall() and ktrsysret() are now MP SAFE (Giant Pushdown).
  trapsignal() is now MP SAFE (Giant Pushdown).
  Places which used to do the "if (mtx_owned(&Giant)) mtx_unlock(&Giant)"
  test in syscall[2]() in */*/trap.c now do not. Instead they explicitly
  unlock Giant if they previously obtained it, and then assert that it is
  no longer held to catch broken system calls.
  Rebuild syscall tables.
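  What the convention looks like in practice, sketched with a made-up
  syscall (the comment block is the one quoted above; the helper and the
  argument struct are hypothetical, the calling style matches syscalls of
  that era):

      /*
       * MPSAFE
       */
      int
      example_syscall(struct proc *p, struct example_syscall_args *uap)
      {
              int error;

              mtx_lock(&Giant);               /* may still take Giant internally... */
              error = do_example_work(p, uap);        /* hypothetical helper */
              mtx_unlock(&Giant);
              return (error);                 /* ...but callers need not hold it */
      }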
* Prevent passing a null pointer as a filename to vn_open(),  (roam, 2001-08-24, 1 file, -0/+2)
  if for some reason expand_name() failed to build a core file name.
  PR: 29931
  Submitted by: Foldi Tamas <crow@kapu.hu>
  Reviewed by: dd, -arch
  MFC after: 1 month
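  A sketch of the guard this adds in coredump() (the errno value and the
  expand_name() parameters shown here are illustrative, not taken from the
  commit):

      static int
      example_coredump(struct proc *p)
      {
              char *name;

              name = expand_name(p->p_comm, p->p_ucred->cr_uid, p->p_pid);
              if (name == NULL)
                      return (EFAULT);        /* never hand vn_open() a NULL path */
              /* ... NDINIT()/vn_open() on name, then write out the image ... */
              return (0);
      }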
* Make COMPAT_43 optional again. XXX we need COMPAT_FBSD3 etc for this  (peter, 2001-08-21, 1 file, -0/+2)
  stuff.
* Temporarily back out kern_sig.c rev 1.125 and kern_exit.c rev 1.131.  (peter, 2001-08-01, 1 file, -4/+5)
  This panicked one of my machines one time too many :-( and there is no
  sign of a solution in the pipeline. The deltas are still easily
  available in cvs. The problem is that if the parent has been swapped
  out, the child process cannot grope around in the parent's UPAGES to see
  the sigact[] array or it will fault. This probably is a showstopper for
  this implementation anyway.
* As per further discussions on hackers redo the SIGCHLD patch to not generate  (dillon, 2001-07-22, 1 file, -5/+4)
  an unexpected user-visible side effect with the sigaction flags. Also
  clean up a minor union issue.
  Submitted by: Rudolf Cejka <cejkar@dcse.fee.vutbr.cz>
  MFC addendum: MFC will be combined w/ original commit
  MFC after: 3 days
* Grab Giant around postsig() since sendsig() can call into the vm to  (jhb, 2001-07-03, 1 file, -6/+0)
  grow the stack and we already needed Giant for KTRACE.
* - Change CURSIG() and postsig() to require that the proc lock is held  (jhb, 2001-06-22, 1 file, -9/+10)
    rather than grabbing it and releasing it themselves. This allows
    callers of these functions to get the lock to close race conditions.
  - Grab Giant around ktrace in postsig.
  - Count the switches performed on SIGSTOP's as involuntary context
    switches in the resource usage stats.
  Reported by: tegge (signal race), bde (missing csw stats)
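  The calling convention this sets up is roughly the following fragment
  (a sketch of the userret()-style delivery loop, not the actual code):

      int sig;

      PROC_LOCK(p);
      while ((sig = CURSIG(p)) != 0)
              postsig(sig);   /* proc lock stays held across check and delivery */
      PROC_UNLOCK(p);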
* Lock Giant in postsig() for the KTRACE case as ktrpsig() needs Giant when  (jhb, 2001-06-18, 1 file, -0/+4)
  it writes out to the trace file.
  Reported by: peter, gallatin, and others
* Try to make the setting of the SIGCHLD handler the same as setting of  (dwmalone, 2001-06-11, 1 file, -1/+4)
  the NOCLDWAIT flag. SUSv2 seems to require this.
  Submitted by: Cejka Rudolf <cejkar@dcse.fee.vutbr.cz>
  Reviewed by: dillon
* o Merge contents of struct pcred into struct ucred. Specifically, add the  (rwatson, 2001-05-25, 1 file, -12/+11)
    real uid, saved uid, real gid, and saved gid to ucred, as well as the
    pcred->pc_uidinfo, which was associated with the real uid, only rename
    it to cr_ruidinfo so as not to conflict with cr_uidinfo, which
    corresponds to the effective uid.
  o Remove p_cred from struct proc; add p_ucred to struct proc, replacing
    the original macro that pointed p->p_ucred to p->p_cred->pc_ucred.
  o Universally update code so that it makes use of ucred instead of
    pcred, p->p_ucred instead of p->p_pcred, cr_ruidinfo instead of
    p_uidinfo, cr_{r,sv}{u,g}id instead of p_*, etc.
  o Remove pcred0 and its initialization from init_main.c; initialize
    cr_ruidinfo there.
  o Restructure many credential modification chunks to always crdup while
    we figure out locking and optimizations; generally speaking, this
    means moving to a structure like this:
        newcred = crdup(oldcred);
        ...
        p->p_ucred = newcred;
        crfree(oldcred);
    It's not race-free, but better than nothing. There are also races in
    sys_process.c, all inter-process authorization, fork, exec, and exit.
  o Remove sigio->sio_ruid since sigio->sio_ucred now contains the ruid;
    remove comments indicating that the old arrangement was a problem.
  o Restructure exec1() a little to use newcred/oldcred arrangement, and
    use improved uid management primitives.
  o Clean up exit1() so as to do less work in credential cleanup due to
    pcred removal.
  o Clean up fork1() so as to do less work in credential cleanup and
    allocation.
  o Clean up ktrcanset() to take into account changes, and move to using
    suser_xxx() instead of performing a direct uid==0 comparison.
  o Improve commenting in various kern_prot.c credential modification
    calls to better document current behavior. In a couple of places,
    current behavior is a little questionable and we need to check POSIX.1
    to make sure it's "right". More commenting work still remains to be
    done.
  o Update credential management calls, such as crfree(), to take into
    account the new ruidinfo reference.
  o Modify or add the following uid and gid helper routines:
        change_euid()  change_egid()  change_ruid()
        change_rgid()  change_svuid() change_svgid()
    In each case, the call now acts on a credential not a process, and as
    such no longer requires more complicated process locking/etc. They now
    assume the caller will do any necessary allocation of an exclusive
    credential reference. Each is commented to document its reference
    requirements.
  o CANSIGIO() is simplified to require only credentials, not processes
    and pcreds.
  o Remove lots of (p_pcred==NULL) checks.
  o Add an XXX to authorization code in nfs_lock.c, since it's
    questionable, and needs to be considered carefully.
  o Simplify posix4 authorization code to require only credentials, not
    processes and pcreds. Note that this authorization, as well as
    CANSIGIO(), needs to be updated to use the p_cansignal() and
    p_cansched() centralized authorization routines, as they currently do
    not take into account some desirable restrictions that are handled by
    the centralized routines, as well as being inconsistent with other
    similar authorization instances.
  o Update libkvm to take these changes into account.
  Obtained from: TrustedBSD Project
  Reviewed by: green, bde, jhb, freebsd-arch, freebsd-audit
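  The crdup() pattern referred to above, fleshed out slightly as a sketch
  (the change_euid() call is only an example of a modification, and as the
  commit itself notes, the sequence is not yet race-free):

      struct ucred *newcred, *oldcred;

      oldcred = p->p_ucred;
      newcred = crdup(oldcred);       /* private copy to modify */
      change_euid(newcred, uid);      /* helpers now act on a credential, not a process */
      p->p_ucred = newcred;           /* publish the new credential */
      crfree(oldcred);                /* drop the reference to the old one */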
* - Remove unneeded include of sys/ipl.h.  (jhb, 2001-05-15, 1 file, -3/+2)
  - Require the proc lock be held for killproc() to allow for the vmdaemon
    to kill a process when memory is exhausted while holding the lock of
    the process to kill.
* Properly copy the P_ALTSTACK flag in struct proc::p_flag to the child  (knu, 2001-05-07, 1 file, -0/+1)
  process on fork(2). It is the supposed behavior stated in the manpage of
  sigaction(2), and Solaris, NetBSD and FreeBSD 3-STABLE correctly do so.
  The previous fix against libc_r/uthread/uthread_fork.c fixed the problem
  only for the programs linked with libc_r, so back it out and fix fork(2)
  itself to help those not linked with libc_r as well.
  PR: kern/26705
  Submitted by: KUROSAWA Takahiro <fwkg7679@mb.infoweb.ne.jp>
  Tested by: knu, GOTOU Yuuzou <gotoyuzo@notwork.org>, and some other people
  Not objected by: hackers
  MFC in: 3 days
* Overhaul of the SMP code. Several portions of the SMP kernel support have  (jhb, 2001-04-27, 1 file, -4/+3)
  been made machine independent and various other adjustments have been
  made to support Alpha SMP.
  - It splits the per-process portions of hardclock() and statclock() off
    into hardclock_process() and statclock_process() respectively.
    hardclock() and statclock() call the *_process() functions for the
    current process so that UP systems will run as before. For SMP
    systems, it is simply necessary to ensure that all other processors
    execute the *_process() functions when the main clock functions are
    triggered on one CPU by an interrupt. For the alpha 4100, clock
    interrupts are delivered in a staggered broadcast fashion, so we
    simply call hardclock/statclock on the boot CPU and call the
    *_process() functions on the secondaries. For x86, we call statclock
    and hardclock as usual and then call forward_hardclock/statclock in
    the MD code to send an IPI to cause the AP's to execute
    forward_hardclock/statclock, which then call the *_process()
    functions.
  - forward_signal() and forward_roundrobin() have been reworked to be MI
    and to involve less hackery. Now the cpu doing the forward sets any
    flags, etc. and sends a very simple IPI_AST to the other cpu(s). AST
    IPIs now just basically return so that they can execute ast() and
    don't bother with setting the astpending or needresched flags
    themselves. This also removes the loop in forward_signal() as
    sched_lock closes the race condition that the loop worked around.
  - need_resched(), resched_wanted() and clear_resched() have been changed
    to take a process to act on rather than assuming curproc so that they
    can be used to implement forward_roundrobin() as described above.
  - Various other SMP variables have been moved to a MI subr_smp.c and a
    new header sys/smp.h declares MI SMP variables and API's. The IPI
    API's from machine/ipl.h have moved to machine/smp.h which is included
    by sys/smp.h.
  - The globaldata_register() and globaldata_find() functions as well as
    the SLIST of globaldata structures has become MI and moved into
    subr_smp.c. Also, the globaldata list is only available if SMP support
    is compiled in.
  Reviewed by: jake, peter
  Looked over by: eivind
* Change the pfind() and zpfind() functions to lock the process that they  (jhb, 2001-04-24, 1 file, -27/+16)
  find before releasing the allproc lock and returning.
  Reviewed by: -smp, dfr, jake
* o Replace p_cankill() with p_cansignal(), remove wrappage of p_can()  (rwatson, 2001-04-12, 1 file, -10/+3)
    from signal authorization checking.
  o p_cansignal() takes three arguments: subject process, object process,
    and signal number, unlike p_cankill(), which only took into account
    the processes and not the signal number, improving the abstraction
    such that CANSIGNAL() from kern_sig.c can now also be eliminated;
    previously CANSIGNAL() special-cased the handling of SIGCONT based on
    process session. privused is now deprecated.
  o The new p_cansignal() further limits the set of signals that may be
    delivered to processes with P_SUGID set, and restructures the access
    control check to allow it to be extended more easily.
  o These changes take into account work done by the OpenBSD Project, as
    well as by Robert Watson and Thomas Moestl on the TrustedBSD Project.
  Obtained from: TrustedBSD Project
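  A sketch of how a caller uses the new check, with the argument order
  described above (the surrounding function is hypothetical):

      static int
      example_do_kill(struct proc *curp, struct proc *targetp, int signum)
      {
              int error;

              /* subject process, object process, signal number */
              error = p_cansignal(curp, targetp, signum);
              if (error)
                      return (error);
              if (signum)
                      psignal(targetp, signum); /* signal 0 is a pure permission probe */
              return (0);
      }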
* Change stop() to require the sched_lock as well as p's process lock to  (jhb, 2001-04-03, 1 file, -6/+8)
  avoid silly lock contention on sched_lock since in 2 out of the 3 places
  that we call stop(), we get sched_lock right after calling it and we
  were locking sched_lock inside of stop() anyways.
* - Move the second stop() of process 'p' in issignal() to be after we send  (jhb, 2001-04-02, 1 file, -3/+2)
    SIGCHLD to our parent process. Otherwise, we could block while
    obtaining the process lock for our parent process and switch out while
    we were in SSTOP. Even worse, when we try to resume from the mutex
    being blocked on, our p_stat will be SRUN, not SSTOP.
  - Fix a comment above stop() to indicate that it requires that the proc
    lock be held, not a proctree lock.
  Reported by: markm
  Sleuthing by: jake
* Convert the allproc and proctree locks from lockmgr locks to sx locks.  (jhb, 2001-03-28, 1 file, -3/+4)
* - Resort some includes to deal with the new witness code coming in shortly.  (jhb, 2001-03-28, 1 file, -4/+8)
  - Make sure we have Giant locked before calling coredump() in sigexit().
  Spotted by: peter (2)
* - Proc locking. Most of signal handling is now MP safe and doesn't require  (jhb, 2001-03-07, 1 file, -70/+162)
    Giant. The only exception is the CANSIGNAL() macro. Unlocking the proc
    lock around sendsig() in trapsignal() is also questionable. Note that
    the functions sigexit(), psignal(), and issignal() must be called with
    the proc lock of the process in question held. postsig() and
    trapsignal() should not be called with the proc lock held, but they
    also do not require Giant anymore either.
  - Remove spl's that are now no longer needed as they are fully replaced.
* Fixed a longstanding latency bug in signal delivery. When a signal  (bde, 2001-02-19, 1 file, -6/+2)
  is sent to a process, psignal() needs to schedule an AST for the process
  if the process is runnable, not just if it is current, so that pending
  signals get checked for on the next return of the process to user mode.
  This wasn't practical until recently because the AST flag was per-cpu so
  setting it for a non-current process would usually just cause a bogus
  AST for the current process.
  For non-current processes looping in user mode, it took accidental (?)
  magic to deliver signals at all. Signals were usually delivered late as
  a side effect of rescheduling (need_resched() sets astpending, etc.). In
  pre-SMPng, delivery was delayed by at most 1 quantum (the need_resched()
  call in roundrobin() is certain to occur within 1 quantum for looping
  processes). In -current, things are complicated by normal interrupt
  handlers being threads. Missing handling of the complications makes
  roundrobin() a bogus no-op, but preemptive scheduling sort of works
  anyway due to even larger bogons elsewhere.
* Implement a unified run queue and adjust priority levels accordingly.  (jake, 2001-02-12, 1 file, -2/+2)
  - All processes go into the same array of queues, with different
    scheduling classes using different portions of the array. This allows
    user processes to have their priorities propagated up into interrupt
    thread range if need be.
  - I chose 64 run queues as an arbitrary number that is greater than 32.
    We used to have 4 separate arrays of 32 queues each, so this may not
    be optimal. The new run queue code was written with this in mind;
    changing the number of run queues only requires changing constants in
    runq.h and adjusting the priority levels.
  - The new run queue code takes the run queue as a parameter. This is
    intended to be used to create per-cpu run queues. Implement wrappers
    for compatibility with the old interface which pass in the global run
    queue structure.
  - Group the priority level, user priority, native priority (before
    propagation) and the scheduling class into a struct priority.
  - Change any hard coded priority levels that I found to use symbolic
    constants (TTIPRI and TTOPRI).
  - Remove the curpriority global variable and use that of curproc. This
    was used to detect when a process' priority had lowered and it should
    yield. We now effectively yield on every interrupt.
  - Activate propagate_priority(). It should now have the desired effect
    without needing to also propagate the scheduling class.
  - Temporarily comment out the call to vm_page_zero_idle() in the idle
    loop. It interfered with propagate_priority() because the idle process
    needed to do a non-blocking acquire of Giant and then other processes
    would try to propagate their priority onto it. The idle process should
    not do anything except idle. vm_page_zero_idle() will return in the
    form of an idle priority kernel thread which is woken up at
    appropriate times by the vm system.
  - Update struct kinfo_proc to the new priority interface. Deliberately
    change its size by adjusting the spare fields. It remained the same
    size, but the layout has changed, so userland processes that use it
    would parse the data incorrectly. The size constraint should really be
    changed to an arbitrary version number. Also add a debug.sizeof sysctl
    node for struct kinfo_proc.
* - Make astpending and need_resched process attributes rather than CPU  (jhb, 2001-02-10, 1 file, -1/+1)
    attributes. This is needed for AST's to be properly posted in a
    preemptive kernel. They are backed by two new flags in p_sflag:
    PS_ASTPENDING and PS_NEEDRESCHED. They are still accessed by their old
    macros: aston(), astoff(), etc. For completeness, an astpending()
    macro has been added to check for a pending AST, and clear_resched()
    has been added to clear need_resched().
  - Rename syscall2() on the x86 back to syscall() to be consistent with
    other architectures.
* Change and clean the mutex lock interface.  (bmilekic, 2001-02-09, 1 file, -24/+24)
  mtx_enter(lock, type) becomes:
      mtx_lock(lock)        for sleep locks (MTX_DEF-initialized locks)
      mtx_lock_spin(lock)   for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have:
      mtx_unlock(lock)      for MTX_DEF and
      mtx_unlock_spin(lock) for MTX_SPIN.
  We change the caller interface for the two different types of locks
  because the semantics are entirely different for each case, and this
  makes it explicitly clear and, at the same time, it rids us of the extra
  `type' argument.
  The enter->lock and exit->unlock change has been made with the idea that
  we're "locking data" and not "entering locked code" in mind.
  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they can
  be passed to the lock/unlock routines by calling the corresponding
  wrappers:
      mtx_{lock, unlock}_flags(lock, flag(s)) and
      mtx_{lock, unlock}_spin_flags(lock, flag(s))
  for MTX_DEF and MTX_SPIN locks, respectively.
  Re-inline some lock acq/rel code; in the sleep lock case, we only inline
  the _obtain_lock()s in order to ensure that the inlined code fits into a
  cache line. In the spin lock case, we inline recursion and actually only
  perform a function call if we need to spin. This change has been made
  with the idea that we generally tend to avoid spin locks and that also
  the spin locks that we do have and are heavily used (i.e. sched_lock) do
  recurse, and therefore in an effort to reduce function call overhead for
  some architectures (such as alpha), we inline recursion for this case.
  Create a new malloc type for the witness code and retire from using the
  M_DEV type. The new type is called M_WITNESS and is only declared if
  WITNESS is enabled.
  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.
  Finally, caught up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
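  A small before/after sketch of the renamed interface (the mutex variable
  is a placeholder; the names themselves are the ones listed in the entry
  above):

      static void
      example_locking(struct mtx *m)
      {
              /* Old: mtx_enter(m, MTX_DEF); ... mtx_exit(m, MTX_DEF); */
              mtx_lock(m);                    /* MTX_DEF (sleep) mutex */
              /* ... touch the data this mutex protects ... */
              mtx_unlock(m);

              mtx_lock_spin(&sched_lock);     /* MTX_SPIN mutex */
              mtx_unlock_spin(&sched_lock);

              /* The two surviving flags go through the _flags wrappers. */
              mtx_lock_flags(m, MTX_QUIET);
              mtx_unlock_flags(m, MTX_QUIET);
      }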
* - Proc locking.  (jhb, 2001-01-24, 1 file, -13/+20)
  - Catch up to proc flag changes.
* Revert revision 1.102. I don't think p_nice needs to be protected with  (jhb, 2001-01-19, 1 file, -2/+0)
  sched_lock, and I'm fairly certain P_TRACED will be protected with the
  proc lock instead.
  Pointed out indirectly by: bde
* Implement condition variables.  (jasone, 2001-01-16, 1 file, -2/+7)
* Protect p_nice and P_TRACED in psignal() above the switch statement with  (jhb, 2001-01-06, 1 file, -0/+2)
  sched_lock.
* The previous commit wasn't entirely correct. At least one goto to the  (jhb, 2001-01-02, 1 file, -18/+22)
  out: label in psignal() did not grab sched_lock before trying to release
  it. Also, the previous version had several cases where it grabbed
  sched_lock before jumping to out: unnecessarily, so rework this a bit.
  The runfast: and out: labels must be called with sched_lock released,
  and the run: label must be called with it held. Appropriate
  mtx_assert()'s have been added that should catch any bugs that may still
  be in this code.
  Noticed by: bde
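  A fragment sketching the assertion discipline described above inside
  psignal() (not the actual code; mtx_assert() with MA_OWNED/MA_NOTOWNED
  is the mechanism the commit refers to):

      runfast:
              /* runfast: and out: must be reached with sched_lock released. */
              mtx_assert(&sched_lock, MA_NOTOWNED);
              mtx_lock_spin(&sched_lock);
              /* raise priority if needed, then fall through */
      run:
              /* run: must be reached with sched_lock held. */
              mtx_assert(&sched_lock, MA_OWNED);
              setrunnable(p);
              mtx_unlock_spin(&sched_lock);
      out:
              mtx_assert(&sched_lock, MA_NOTOWNED);
              return;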
* Push down sched_lock in psignal(). sched_lock was being held across  (jhb, 2001-01-01, 1 file, -4/+21)
  recursive calls into psignal() as well as calls to signotify(),
  forward_signal(), etc.
* Add in a missing release of the proctree lock.  (jhb, 2001-01-01, 1 file, -0/+1)
  Submitted by: Sja <sakari.jalovaara@eqonline.fi>
* Protect proc.p_pptr and proc.p_children/p_sibling with the  (jake, 2000-12-23, 1 file, -1/+10)
  proctree_lock. linprocfs not locked pending response from informal
  maintainer.
  Reviewed by: jhb, -smp@
* Fix a typo that allowed signals caused by traps to be delivered  (marcel, 2000-12-16, 1 file, -1/+1)
  to the process when said signal is masked.
  PR: 23457
  Submitted by: Yasuhiko Watanabe <yasu@mrit.mei.co.jp>
* - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead  (jake, 2000-12-13, 1 file, -2/+2)
    of explicit calls to lockmgr. Also provide macros for the flags passed
    to specify shared, exclusive or release which map to the lockmgr
    flags. This is so that the use of lockmgr can be easily replaced with
    optimized reader-writer locks.
  - Add some locking that I missed the first time.
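  An illustrative sketch of such a wrapper (the real macro and flag names
  live in sys/proc.h and may differ):

      /* Map the macro straight onto lockmgr for now... */
      #define ALLPROC_LOCK(how)       lockmgr(&allproc_lock, (how), NULL, CURPROC)
      /* ...with flag aliases that can later map onto a reader-writer lock. */
      #define ALLPROC_LOCK_READ       LK_SHARED
      #define ALLPROC_LOCK_WRITE      LK_EXCLUSIVE
      #define ALLPROC_LOCK_RELEASE    LK_RELEASE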
* Protect p_stat with sched_lock.  (jhb, 2000-12-01, 1 file, -1/+7)
* Don't use p->p_sigstk.ss_flags to keep state of whether the  (marcel, 2000-11-30, 1 file, -27/+37)
  process is on the alternate stack or not. For compatibility with
  sigstack(2), state is being updated if such is needed.
  We now determine whether the process is on the alternate stack by
  looking at its stack pointer. This allows a process to siglongjmp from a
  signal handler on the alternate stack to the place of the sigsetjmp on
  the normal stack. When maintaining state, this would have invalidated
  the state information and caused a subsequent signal to be delivered on
  the normal stack instead of the alternate stack.
  PR: 22286
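  The stack-pointer test reads roughly as follows (a sketch; ss_sp,
  ss_size and ss_flags are the standard stack_t members, the function and
  casts are illustrative):

      static int
      example_onsigstack(const stack_t *ss, unsigned long sp)
      {
              if (ss->ss_flags & SS_DISABLE)
                      return (0);
              /* "On the alternate stack" means sp falls inside [ss_sp, ss_sp + ss_size). */
              return (sp >= (unsigned long)ss->ss_sp &&
                  sp < (unsigned long)ss->ss_sp + ss->ss_size);
      }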
* Protect the following with a lockmgr lock:  (jake, 2000-11-22, 1 file, -2/+4)
      allproc
      zombproc
      pidhashtbl
      proc.p_list
      proc.p_hash
      nextpid
  Reviewed by: jhb
  Obtained from: BSD/OS and netbsd
* - Split the run queue and sleep queue linkage, so that a process  (jake, 2000-11-17, 1 file, -2/+2)
    may block on a mutex while on the sleep queue without corrupting it.
  - Move dropping of Giant to after the acquire of sched_lock.
  Tested by: John Hay <jhay@icomtek.csir.co.za>, jhb