path: root/sys/kern/sys_process.c
Commit message  [author, date, files changed, lines -/+]
* Extend ptrace(PT_LWPINFO) to report siginfo for the signal that caused  [kib, 2010-07-04, 1 file, -3/+62]
  the debuggee stop. The change should keep the ABI. Take care of compat32.
  Discussed with: davidxu, jhb
  MFC after: 2 weeks
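  As a rough illustration of the debugger-visible side of this change, here is a sketch
  (not from the commit) in which a parent reads the siginfo of the signal that stopped
  its traced child via PT_LWPINFO. The PL_FLAG_SI check assumes a kernel new enough to
  include this change; the child program and the lack of error handling are illustrative only.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/wait.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            /* Child: ask to be traced, then stop itself with a signal. */
            ptrace(PT_TRACE_ME, 0, NULL, 0);
            raise(SIGUSR1);
            _exit(0);
        }

        /* Parent: wait for the signal stop, then ask for LWP info. */
        int status;
        struct ptrace_lwpinfo pl;

        waitpid(pid, &status, 0);
        if (ptrace(PT_LWPINFO, pid, (caddr_t)&pl, sizeof(pl)) == 0 &&
            (pl.pl_flags & PL_FLAG_SI) != 0)
            printf("lwp %d stopped by signal %d (si_code %d)\n",
                (int)pl.pl_lwpid, pl.pl_siginfo.si_signo,
                pl.pl_siginfo.si_code);

        /* Detach with signal 0 so the child just continues and exits. */
        ptrace(PT_DETACH, pid, (caddr_t)1, 0);
        waitpid(pid, &status, 0);
        return (0);
    }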
* Use ISO C99 integer types in sys/kern where possible.  [ed, 2010-06-21, 1 file, -3/+3]
  There are only about 100 occurrences of the BSD-specific u_int*_t datatypes
  in sys/kern. The ISO C99 integer types are used here more often.
* Ignore the 'addr' argument passed to PT_STEP (it is required to be '1'  [jhb, 2010-05-25, 1 file, -14/+20]
  for PT_STEP which means "ignore") and PT_DETACH.
  PR: kern/146167
  MFC after: 1 week
* Reorganize syscall entry and leave handling.  [kib, 2010-05-23, 1 file, -2/+6]
  Extend struct sysvec with three new elements:
    sv_fetch_syscall_args - the method to fetch syscall arguments from
      usermode into struct syscall_args. The structure is machine-dependent
      (this might be reconsidered after all architectures are converted).
    sv_set_syscall_retval - the method to set a return value for usermode
      from the syscall. It is a generalization of cpu_set_syscall_retval(9)
      to allow ABIs to override the way to set a return value.
    sv_syscallnames - the table of syscall names.
  Use sv_set_syscall_retval in kern_sigsuspend() instead of hardcoding the
  call to cpu_set_syscall_retval().
  The new functions syscallenter(9) and syscallret(9) are provided. They use
  the sv_*syscall* pointers and contain the common code repeated in the
  architecture-specific syscall trap handlers. syscallenter() fetches the
  arguments, calls the syscall implementation from the ABI sysent table, and
  sets up the return frame. The end-of-syscall bookkeeping is done by
  syscallret().
  Take advantage of the single place for MI syscall handling code and
  implement the ptrace_lwpinfo pl_flags PL_FLAG_SCE, PL_FLAG_SCX and
  PL_FLAG_EXEC. The SCE and SCX flags notify the debugger that the thread is
  stopped at the syscall entry or return point respectively. The EXEC flag
  augments SCX and notifies the debugger that the process address space was
  changed by one of the exec(2)-family syscalls.
  The i386, amd64, sparc64, sun4v, powerpc and ia64 syscall()s are changed to
  use syscallenter()/syscallret(). MIPS and arm are not converted and use the
  mostly unchanged syscall() implementation.
  Reviewed by: jhb, marcel, marius, nwhitehorn, stas
  Tested by: marcel (ia64), marius (sparc64), nwhitehorn (powerpc), stas (mips)
  MFC after: 1 month
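  A hedged sketch of how a tracer might use the new pl_flags bits: run the child to the
  next syscall boundary with PT_TO_SCE/PT_TO_SCX and check PL_FLAG_SCE, PL_FLAG_SCX and
  PL_FLAG_EXEC via PT_LWPINFO. It assumes pid is already attached and stopped; the helper
  name trace_syscalls() and the loop bound are made up for illustration.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/wait.h>
    #include <stdio.h>

    /* Step one traced, stopped child (pid) across a few syscalls. */
    static void
    trace_syscalls(pid_t pid, int nstops)
    {
        int status;
        struct ptrace_lwpinfo pl;

        for (int i = 0; i < nstops; i++) {
            /* Run until the next syscall entry. */
            if (ptrace(PT_TO_SCE, pid, (caddr_t)1, 0) == -1)
                break;
            if (waitpid(pid, &status, 0) == -1 || !WIFSTOPPED(status))
                break;
            if (ptrace(PT_LWPINFO, pid, (caddr_t)&pl, sizeof(pl)) == 0 &&
                (pl.pl_flags & PL_FLAG_SCE) != 0)
                printf("lwp %d at syscall entry\n", (int)pl.pl_lwpid);

            /* Run until the matching syscall exit. */
            if (ptrace(PT_TO_SCX, pid, (caddr_t)1, 0) == -1)
                break;
            if (waitpid(pid, &status, 0) == -1 || !WIFSTOPPED(status))
                break;
            if (ptrace(PT_LWPINFO, pid, (caddr_t)&pl, sizeof(pl)) == 0 &&
                (pl.pl_flags & PL_FLAG_SCX) != 0)
                printf("lwp %d at syscall exit%s\n", (int)pl.pl_lwpid,
                    (pl.pl_flags & PL_FLAG_EXEC) != 0 ?
                    " (address space replaced by exec)" : "");
        }
    }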
* On Alan's advice, rather than do a wholesale conversion on a single  [kmacy, 2010-04-30, 1 file, -4/+4]
  architecture from page queue lock to a hashed array of page locks (based on
  a patch by Jeff Roberson), I've implemented page lock support in the MI
  code and have only moved vm_page's hold_count out from under page queue
  mutex to page lock. This changes pmap_extract_and_hold on all pmaps.
  Supported by: Bitgravity Inc.
  Discussed with: alc, jeffr, and kib
* Provide groundwork for 32-bit binary compatibility on non-x86 platforms,  [nwhitehorn, 2010-03-11, 1 file, -16/+14]
  for upcoming 64-bit PowerPC and MIPS support. This renames the COMPAT_IA32
  option to COMPAT_FREEBSD32, removes some IA32-specific code from MI parts
  of the kernel and enhances the freebsd32 compatibility code to support
  big-endian platforms.
  Reviewed by: kib, jhb
* Initialize pve_fsid and pve_fileid to VNOVAL.  [marcel, 2010-02-11, 1 file, -0/+3]
* o Add support for COMPAT_IA32.  [marcel, 2010-02-11, 1 file, -69/+123]
  o Incorporate review comments:
    - Properly reference and lock the map
    - Take into account that the VM map can change in between requests
    - Add the fileid and fsid attributes
  Credits: kib@
  Reviewed by: kib@
* Unbreak building kernels with COMPAT_32 enabled. The actual support  [marcel, 2010-02-09, 1 file, -0/+19]
  for the PT_VM_ENTRY request from 32-bit processes will follow.
  Pointy hat: marcel
* Add PT_VM_TIMESTAMP and PT_VM_ENTRY so that the tracing process can  [marcel, 2010-02-09, 1 file, -0/+103]
  obtain the memory map of the traced process. PT_VM_TIMESTAMP can be used to
  check if the memory map changed since the last time, to avoid iterating
  over all the VM entries unnecessarily.
  MFC after: 1 month
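  A minimal sketch of walking the traced process's map with PT_VM_ENTRY, assuming the
  target is already stopped under ptrace and that the ptrace_vm_entry fields match
  sys/ptrace.h (pve_entry, pve_start, pve_end, pve_prot, pve_path, pve_pathlen); the
  buffer size and output format are illustrative. PT_VM_TIMESTAMP (not shown) can be
  used to skip a re-walk when the map generation has not changed.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Dump the memory map of a traced, stopped process. */
    static void
    dump_vm_map(pid_t pid)
    {
        struct ptrace_vm_entry pve;
        char path[1024];

        memset(&pve, 0, sizeof(pve));
        pve.pve_entry = 0;              /* start at the first map entry */
        for (;;) {
            pve.pve_path = path;
            pve.pve_pathlen = sizeof(path);
            if (ptrace(PT_VM_ENTRY, pid, (caddr_t)&pve, 0) == -1) {
                if (errno != ENOENT)
                    perror("PT_VM_ENTRY");
                break;                  /* ENOENT: no more entries */
            }
            printf("%#lx-%#lx prot %#x %s\n",
                (unsigned long)pve.pve_start, (unsigned long)pve.pve_end,
                pve.pve_prot, pve.pve_pathlen != 0 ? path : "");
        }
    }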
* For the PT_TO_SCE stop that stops the ptraced process upon syscall entry,  [kib, 2010-01-23, 1 file, -0/+5]
  syscall arguments are collected before ptracestop() is called. As a
  consequence, the debugger cannot modify the syscall or its arguments.
  For the i386, amd64 and ia32-on-amd64 MD syscall(), reread the syscall
  number and arguments after ptracestop() if the debugger modified anything
  in the process environment. Since the procfs stopevent requires the number
  of syscall arguments in p_xstat, this cannot be solved by moving the
  stop/trace point before argument fetching.
  Move the code to read arguments into the separate function
  fetch_syscall_args() to avoid code duplication.
  Note that the ktrace point for a modified syscall is intentionally recorded
  twice, once with the original arguments, and a second time with the
  arguments set by the debugger. The PT_TO_SCX stop is executed after
  cpu_set_syscall_retval() already.
  Reported by: Ali Polatel <alip exherbo org>
  Briefly discussed with: jhb
  MFC after: 3 weeks
* Replace VM_PROT_OVERRIDE_WRITE by VM_PROT_COPY. VM_PROT_OVERRIDE_WRITE has  [alc, 2009-11-26, 1 file, -9/+12]
  represented a write access that is allowed to override write protection.
  Until now, VM_PROT_OVERRIDE_WRITE has been used to write breakpoints into
  text pages. Text pages are not just write protected but they are also
  copy-on-write. VM_PROT_OVERRIDE_WRITE overrides the write protection on the
  text page and triggers the replication of the page so that the breakpoint
  will be written to a private copy. However, here is where things become
  confused. It is the debugger, not the process being debugged that requires
  write access to the copied page. Nonetheless, the copied page is being
  mapped into the process with write access enabled. In other words, once the
  debugger sets a breakpoint within a text page, the program can write to its
  private copy of that text page. Whereas prior to setting the breakpoint, a
  SIGSEGV would have occurred upon a write access.
  VM_PROT_COPY addresses this problem. The combination of VM_PROT_READ and
  VM_PROT_COPY forces the replication of a copy-on-write page even though the
  access is only for read. Moreover, the replicated page is only mapped into
  the process with read access, and not write access.
  Reviewed by: kib
  MFC after: 4 weeks
* Update a comment to reflect the previous change.  [alc, 2009-10-25, 1 file, -1/+1]
* o Introduce vm_sync_icache() for making the I-cache coherent with  [marcel, 2009-10-21, 1 file, -0/+4]
    the memory or D-cache, depending on the semantics of the platform.
    vm_sync_icache() is basically a wrapper around pmap_sync_icache(), that
    translates the vm_map_t argument to pmap_t.
  o Introduce pmap_sync_icache() to all PMAP implementations. For powerpc it
    replaces the pmap_page_executable() function, added to solve the I-cache
    problem in uiomove_fromphys().
  o In proc_rwmem() call vm_sync_icache() when writing to a page that has
    execute permissions. This assures that when breakpoints are written, the
    I-cache will be coherent and the process will actually hit the
    breakpoint.
  o This also fixes the Book-E PMAP implementation that was missing necessary
    locking while trying to deal with the I-cache coherency in pmap_enter()
    (read: mmu_booke_enter_locked).
  The key property of this change is that the I-cache is made coherent
  *after* writes have been done. Doing it in the PMAP layer when adding or
  changing a mapping means that the I-cache is made coherent *before* any
  writes happen. The difference is key when the I-cache prefetches.
* Clean up a number of aspects of token generation from audit arguments to  [rwatson, 2009-07-02, 1 file, -1/+0]
  system calls:
  - Centralize generation of argument tokens for VM addresses in a macro,
    ADDR_TOKEN(), and properly encode 64-bit addresses in 64-bit arguments.
  - Fix up argument numbers across a large number of syscalls so that they
    match the numeric argument into the system call.
  - Don't audit the address argument to ioctl(2) or ptrace(2), but do keep
    generating tokens for mmap(2), minherit(2), since they relate to passing
    object access across execve(2).
  Approved by: re (audit argument blanket)
  Obtained from: TrustedBSD Project
  MFC after: 1 week
* Replace AUDIT_ARG() with variable argument macros with a set of more  [rwatson, 2009-06-27, 1 file, -5/+5]
  specific macros for each audit argument type. This makes it easier to
  follow call-graphs, especially for automated analysis tools (such as fxr).
  In MFC, we should leave the existing AUDIT_ARG() macros as they may be used
  by third-party kernel modules.
  Suggested by: brooks
  Approved by: re (kib)
  Obtained from: TrustedBSD Project
  MFC after: 1 week
* Implement global and per-uid accounting of the anonymous memory. Add  [kib, 2009-06-23, 1 file, -1/+5]
  rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved for
  the uid.
  The accounting information (charge) is associated with either the map
  entry, or the vm object backing the entry, assuming the object is the first
  one in the shadow chain and the entry does not require COW. Charge is moved
  from entry to object on allocation of the object, e.g. during the mmap,
  assuming the object is allocated, or on the first page fault on the entry.
  It moves back to the entry on forks due to COW setup.
  The per-entry granularity of accounting makes the charge process fair for
  processes that change uid during lifetime, and decrements charge for the
  proper uid when a region is unmapped.
  The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
  that is used to charge the appropriate uid when allocation is performed by
  the kernel, e.g. md(4).
  Several syscalls, among them fork(2), may now return ENOMEM when global or
  per-uid limits are enforced.
  In collaboration with: pho
  Reviewed by: alc
  Approved by: re (kensmith)
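  A small userland sketch of the new limit, assuming RLIMIT_SWAP is available in
  <sys/resource.h> on a kernel with this change; the 512 MB value is arbitrary, and
  whether the limit is actually enforced also depends on the system's overcommit
  configuration.

    #include <sys/types.h>
    #include <sys/resource.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct rlimit rl;

        /* Read the current swap-reservation limit for this process. */
        if (getrlimit(RLIMIT_SWAP, &rl) == -1) {
            perror("getrlimit");
            return (1);
        }
        printf("swap reservation: cur=%jd max=%jd\n",
            (intmax_t)rl.rlim_cur, (intmax_t)rl.rlim_max);

        /* Lower the soft limit to 512 MB (illustrative value). */
        rl.rlim_cur = 512UL * 1024 * 1024;
        if (setrlimit(RLIMIT_SWAP, &rl) == -1)
            perror("setrlimit");
        return (0);
    }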
* Use the p_sysent->sv_flags flag SV_ILP32 to detect a 32-bit process  [kib, 2009-03-02, 1 file, -5/+4]
  executing on a 64-bit kernel. This eliminates the direct comparisons of
  p_sysent with &ia32_freebsd_sysvec that were left intact after r185169.
* Revert rev 184216 and 184199; due to the way the thread_lock works,  [davidxu, 2008-11-05, 1 file, -0/+2]
  it may cause a lockup.
  Noticed by: peter, jhb
* Actually, for signal and thread suspension, the extra process spin lock is  [davidxu, 2008-10-23, 1 file, -2/+0]
  unnecessary; the normal process lock and thread lock are enough. The spin
  lock is still needed for process and thread exiting to mimic the single
  sched_lock.
* Move per-thread userland debugging flags into a separate field;  [davidxu, 2008-10-15, 1 file, -11/+6]
  this eliminates some locking problems, e.g., a thread lock is needed but
  cannot be used at that time. Only the process lock is needed now for the
  new field.
* - Relax requirements for p_numthreads, p_threads, p_swtick, and p_nice from  [jeff, 2008-03-19, 1 file, -5/+1]
    requiring the per-process spinlock to only requiring the process lock.
  - Reflect these changes in the proc.h documentation and consumers
    throughout the kernel. This is a substantial reduction in locking cost
    for these fields and was made possible by recent changes to threading
    support.
* Remove kernel support for M:N threading.  [jeff, 2008-03-12, 1 file, -15/+0]
  While the KSE project was quite successful in bringing threading to
  FreeBSD, the M:N approach taken by the kse library was never developed to
  its full potential. Backwards compatibility will be provided via
  libmap.conf for dynamically linked binaries and static binaries will be
  broken.
* Use VM_FAULT_DIRTY to fault in pages for write access in  [ups, 2007-11-08, 1 file, -2/+3]
  proc_rwmem(). Otherwise copy-on-write may create an anonymous page that is
  not marked as dirty. Since writing data to these pages in this function
  also does not dirty these pages, they may be later discarded by the
  pagedaemon.
* - Fix from pr kern/115469; Don't redeliver a signal once it has been  [jeff, 2007-10-09, 1 file, -9/+9]
    handled by the target process.
  Contributed by: Tijl Coosemans <tijl@ulyssis.org>
  Approved by: re
* - Move all of the PS_ flags into either p_flag or td_flags.  [jeff, 2007-09-17, 1 file, -1/+1]
  - p_sflag was mostly protected by PROC_LOCK rather than the PROC_SLOCK or
    previously the sched_lock. These bugs have existed for some time.
  - Allow swapout to try each thread in a process individually and then
    swapin the whole process if any of these fail. This allows us to move
    most scheduler related swap flags into td_flags.
  - Keep ki_sflag for backwards compat but change all in source tools to use
    the new and more correct location of P_INMEM.
  Reported by: pho
  Reviewed by: attilio, kib
  Approved by: re (kensmith)
* Commit 14/14 of sched_lock decomposition.  [jeff, 2007-06-05, 1 file, -15/+18]
  - Use thread_lock() rather than sched_lock for per-thread scheduling
    synchronization.
  - Use the per-process spinlock rather than the sched_lock for per-process
    scheduling synchronization.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Remove 'MPSAFE' annotations from the comments above most system calls: all  [rwatson, 2007-03-04, 1 file, -3/+0]
  system calls now enter without Giant held, and then in some cases, acquire
  Giant explicitly.
  Remove a number of other MPSAFE annotations in the credential code and
  tweak one or two other adjacent comments.
* Make KSE a kernel option, turned on by default in all GENERIC  [jb, 2006-10-26, 1 file, -0/+6]
  kernel configs except sun4v (which doesn't process signals properly with
  KSE).
  Reviewed by: davidxu@
* Move the sigqueue_take() call into proc_reparent(); this fixes bugs where  [davidxu, 2006-10-25, 1 file, -5/+1]
  proc_reparent() is called but sigqueue_take() is forgotten.
* Close a race condition where num can be larger than tmp, giving the user  [trhodes, 2006-10-14, 1 file, -1/+1]
  too large of a boundary.
  Reported by: Ilja Van Sprundel
* Fix a signedness bug.  [cperciva, 2006-08-20, 1 file, -1/+1]
  MFC after: 3 days
  Security: Local DoS
* Close some races between procfs/ptrace and exit(2):  [jhb, 2006-02-22, 1 file, -104/+60]
  - Reorder the events in exit(2) slightly so that we trigger the S_EXIT stop
    event earlier. After we have signalled that, we set P_WEXIT and then wait
    for any processes with a hold on the vmspace via PHOLD to release it.
    PHOLD now KASSERT()'s that P_WEXIT is clear when it is invoked, and PRELE
    now does a wakeup if P_WEXIT is set and p_lock drops to zero.
  - Change proc_rwmem() to require that the process being read from has its
    vmspace held via PHOLD by the caller and get rid of all the junk to screw
    around with the vmspace reference count as we no longer need it.
  - In ptrace() and pseudofs(), treat a process with P_WEXIT set as if it
    doesn't exist.
  - Only do one PHOLD in kern_ptrace() now, and do it earlier so it covers
    FIX_SSTEP() (since on alpha at least this can end up calling proc_rwmem()
    to clear an earlier single-step simulated via a breakpoint). We only do
    one to avoid races. Also, by making the EINVAL error for unknown requests
    be part of the default: case in the switch, the various switch cases can
    now just break out to return, which removes a _lot_ of duplicated PRELE
    and proc unlocks, etc. Also, it fixes at least one bug where a LWP ptrace
    command could return EINVAL with the proc lock still held.
  - Changed the locking for ptrace_single_step(), ptrace_set_pc(), and
    ptrace_clear_single_step() to always be called with the proc lock held
    (it was a mixed bag previously). Alpha and arm have to drop the lock
    while they mess around with breakpoints, but other archs avoid extra lock
    release/acquires in ptrace(). I did have to fix a couple of other
    consumers in kern_kse and a few other places to hold the proc lock and
    PHOLD.
  Tested by: ps (1 mostly, but some bits of 2-4 as well)
  MFC after: 1 week
* Audit the arguments to the ptrace(2) system call.  [wsalamon, 2006-02-14, 1 file, -0/+7]
  Obtained from: TrustedBSD Project
  Approved by: rwatson (mentor)
* Add members pl_sigmask and pl_siglist to ptrace_lwpinfo to get an LWP's  [davidxu, 2006-02-06, 1 file, -0/+2]
  signal mask and pending signals.
* Avoid a kernel panic when attaching to a process which may not be stopped  [davidxu, 2005-12-24, 1 file, -26/+30]
  by the debugger, e.g. a process that is dumping core. Only access p_xthread
  if P_STOPPED_TRACE is set; this means the thread is ready to exchange
  signals with the debugger. Print a warning if P_STOPPED_TRACE is not set
  due to some bugs in other code, if there are any.
  The patch has been tested by Anish Mistry, mistry.7 at osu dot edu, and is
  slightly adjusted.
* Make sure pending SIGCHLD is removed from previous parent when process  [davidxu, 2005-11-08, 1 file, -1/+10]
  is attached or detached.
* Fix a LOR between sched_lock and sleep queue lock.  [davidxu, 2005-08-19, 1 file, -2/+4]
* Jumbo-commit to enhance 32 bit application support on 64 bit kernels.  [peter, 2005-06-30, 1 file, -22/+178]
  This is good enough to be able to run a RELENG_4 gdb binary against a
  RELENG_4 application, along with various other tools (eg: 4.x gcore). We
  use this at work.
  ia32_reg.[ch]: handle the 32 bit register file format, used by ptrace,
    procfs and core dumps.
  procfs_*regs.c: vary the format of proc/XXX/*regs depending on the client
    and target application.
  procfs_map.c: Don't print a 64 bit value to 32 bit consumers, or their
    sscanf fails. They expect an unsigned long.
  imgact_elf.c: produce a valid 32 bit coredump for 32 bit apps.
  sys_process.c: handle 32 bit consumers debugging 32 bit targets. Note that
    64 bit consumers can still debug 32 bit targets.
  IA64 has got stubs for ia32_reg.c.
  Known limitations: a 5.x/6.x gdb uses get/setcontext(), which isn't
  implemented in the 32/64 wrapper yet. We also make a tiny patch to gdb to
  pacify it over conflicting formats of ld-elf.so.1.
  Approved by: re
* Add missing cases for PT_SYSCALL.  [das, 2005-03-18, 1 file, -0/+2]
  Found by: Coverity Prevent analysis tool
* /* -> /*- for copyright notices, minor format tweaks as necessary  [imp, 2005-01-06, 1 file, -1/+1]
* Don't include sys/user.h merely for its side-effect of recursively  [das, 2004-11-27, 1 file, -1/+1]
  including other headers.
* Add pl_flags to ptrace_lwpinfo, two flags PL_FLAG_SA and PL_FLAG_BOUND  [davidxu, 2004-08-08, 1 file, -0/+7]
  indicate that a thread is in UTS critical region.
  Reviewed by: deischen
  Approved by: marcel
* - Use atomic ops for updating the vmspace's refcnt and exitingcnt.  [alc, 2004-07-27, 1 file, -11/+7]
  - Push down Giant into shmexit(). (Giant is acquired only if the vmspace
    contains shm segments.)
  - Eliminate the acquisition of Giant from proc_rwmem().
  - Reduce the scope of Giant in exit1(), uncovering the destruction of the
    address space.
* Fix typo.  [davidxu, 2004-07-17, 1 file, -1/+1]
* Implement the following commands: PT_CLEARSTEP, PT_SETSTEP, PT_SUSPEND,  [davidxu, 2004-07-13, 1 file, -10/+109]
  PT_RESUME, PT_GETNUMLWPS, PT_GETLWPLIST.
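  A hedged sketch of enumerating a traced process's LWPs with two of the new requests,
  PT_GETNUMLWPS and PT_GETLWPLIST; it assumes the target is already attached and stopped,
  and the helper name list_lwps() is made up for illustration. PT_SUSPEND/PT_RESUME (not
  shown) take an LWP ID in place of the PID.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* List the LWP (kernel thread) IDs of a traced, stopped process. */
    static void
    list_lwps(pid_t pid)
    {
        int nlwps;
        lwpid_t *lwps;

        /* PT_GETNUMLWPS returns the number of LWPs in the process. */
        nlwps = ptrace(PT_GETNUMLWPS, pid, NULL, 0);
        if (nlwps <= 0)
            return;
        lwps = calloc(nlwps, sizeof(*lwps));
        if (lwps == NULL)
            return;
        /* PT_GETLWPLIST fills the array; 'data' is its capacity in entries. */
        nlwps = ptrace(PT_GETLWPLIST, pid, (caddr_t)lwps, nlwps);
        for (int i = 0; i < nlwps; i++)
            printf("lwp %d\n", (int)lwps[i]);
        free(lwps);
    }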
* Implement the PT_LWPINFO request. This request can be used by the  [marcel, 2004-07-12, 1 file, -0/+19]
  tracing process to obtain information about the LWP that caused the traced
  process to stop. Debuggers can use this information to select the thread
  currently running on the LWP as the current thread.
  The request has been made compatible with NetBSD as much as possible. This
  implementation differs from NetBSD in the following ways:
  1. The data argument is allowed to be smaller than the size of the
     ptrace_lwpinfo structure known to the kernel, but not 0. This is
     opposite to what NetBSD allows. The reason for this is that we can
     extend the structure without affecting older binaries.
  2. On NetBSD the tracing process is to set the pl_lwpid field to the Id of
     the LWP it wants information of. We don't do that. Our ptrace interface
     allows passing the LWP Id instead of the PID. The tracing process is to
     set the PID to the LWP Id it wants information of.
  3. When the PID is actually the PID of the traced process, this request
     returns the information about the LWP that caused the process to stop.
     This was the whole purpose of the request in the first place.
  When the traced process has exited, this request will return the LWP Id 0,
  indicating that the process state is not the result of an event specific to
  a LWP.
* Allow ptrace to deal with lwpid.  [davidxu, 2004-07-02, 1 file, -6/+36]
  Reviewed by: marcel
* Finish fixing up Alpha to work with an MP safe ptrace():  [jhb, 2004-04-01, 1 file, -8/+8]
  - ptrace_single_step() is no longer called with the proc lock held, so
    don't try to unlock it and then relock it.
  - Push Giant down into proc_rwmem() instead of forcing all the consumers
    (including Alpha breakpoint support) to explicitly wrap calls to
    proc_rwmem() with Giant.
  Tested by: kensmith
* Use uiomove_fromphys() instead of pmap_qenter() and pmap_qremove() in  [alc, 2004-03-24, 1 file, -9/+1]
  proc_rwmem().