path: root/sys/vm/vm_glue.c
Commit message (author, date, files changed, lines -/+)
* Be consistent about "static" functions: if the function is marked (phk, 2002-09-28, 1 file, -2/+2)
| | | | | | static in its prototype, mark it static at the definition too. Inspired by: FlexeLint warning #512
* Use the fields in the sysentvec and in the vm map header in place of the (jake, 2002-09-21, 1 file, -14/+6)
| | | | | | | | constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS. This is mainly so that they can be variable even for the native ABI, based on different machine types. Get the stack protections from the sysentvec too. This makes it trivial to map the stack non-executable for certain ABIs, on machines that support it.
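A minimal sketch of the substitution this enables; sv_usrstack matches the modern struct sysentvec, and the surrounding lines are illustrative rather than the committed diff:

    /* before: compile-time, per-platform constant */
    stacktop = USRSTACK;

    /* after: per-ABI value taken through the process' sysentvec */
    stacktop = p->p_sysent->sv_usrstack;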
* Completely redo thread states. (julian, 2002-09-11, 1 file, -10/+15)
| | | | Reviewed by: davidxu@freebsd.org
* - Do not swap out a process if it is in creation. The process may have no (tanimura, 2002-09-09, 1 file, -0/+24)
| | | | | | | | address space yet. - Check whether a process is a system process prior to dereferencing its p_vmspace. Aio assumes that only the curthread switches the address space of a system process.
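A sketch of the guard this describes, assuming the usual loop over candidate processes in swapout_procs(); P_SYSTEM is the real flag, the loop context is illustrative:

    /* skip system processes: their p_vmspace may be absent, or only
       safe to switch from curthread (as aio assumes) */
    if ((p->p_flag & P_SYSTEM) != 0)
            continue;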
* Use UMA as a complex object allocator. (julian, 2002-09-06, 1 file, -5/+0)
| | | | | | | | | | | | | | | The process allocator now caches and hands out complete process structures *including substructures*, i.e. it gets the process structure with the first thread (and soon KSE) already allocated and attached, all in one hit. For the average non-threaded program (non-KSE, that is) the allocated thread and its stack remain attached to the process, even when the process is unused and in the process cache. This saves having to allocate and attach it later, effectively bringing us (hopefully) close to the efficiency of pre-KSE systems where these were a single structure. Reviewed by: davidxu@freebsd.org, peter@freebsd.org
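Schematically, the zone setup looks like the sketch below; the zone and callback names follow the eventual kern_proc.c, and the exact UMA callback signatures varied between revisions, so treat this as an outline:

    proc_zone = uma_zcreate("PROC", sizeof(struct proc),
        proc_ctor, proc_dtor,       /* run on every allocation/free */
        proc_init, proc_fini,       /* run only when items enter/leave the zone */
        UMA_ALIGN_PTR, 0);

    p = uma_zalloc(proc_zone, M_WAITOK);   /* arrives with its first thread attached */
    /* ... */
    uma_zfree(proc_zone, p);               /* thread and stack stay attached in the cache */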
* s/SGNL/SIG/ (davidxu, 2002-09-05, 1 file, -1/+2)
| | | | | | | | | | s/SNGL/SINGLE/ s/SNGLE/SINGLE/ Fix the abbreviations for the P_STOPPED_* etc. flags; in the original code they were inconsistent and difficult to distinguish. Approved by: julian (mentor)
* o Setting PG_MAPPED and PG_WRITEABLE on pages that are mapped and unmapped (alc, 2002-07-31, 1 file, -2/+0)
| | | | | | | | | | | | by pmap_qenter() and pmap_qremove() is pointless. In fact, it probably leads to unnecessary pmap_page_protect() calls if one of these pages is paged out after unwiring. Note: setting PG_MAPPED asserts that the page's pv list may be non-empty. Since checking the status of the page's pv list isn't any harder than checking this flag, the flag should probably be eliminated. Alternatively, PG_MAPPED could be set by pmap_enter() exclusively rather than various places throughout the kernel.
* - Optimize wakeup() and its friends; if a thread woken up is being (tanimura, 2002-07-30, 1 file, -64/+65)
| | | | | | | | | | | | | | swapped in, we do not have to ask the scheduler thread to do that. - Assert that a process is not swapped out in the runq functions and swapout(). - Introduce thread_safetoswapout() for readability. - In swapout_procs(), perform a test that may block (check of a thread working on its vm map) first. This lets us call swapout() with the sched_lock held, providing better atomicity.
* Remove an XXXKSE comment; the code is no longer a problem. (julian, 2002-07-29, 1 file, -1/+1)
|
* Create a new thread state to describe threads that would be ready to run (julian, 2002-07-29, 1 file, -16/+66)
| | | | | | | | | | | except for the fact that they are presently swapped out. Also add a process flag to indicate that the process has started the struggle to swap back in. This will be needed for the case where multiple threads start the swapin action, leading to a collision. Also add code to stop a process from being swapped out if one of the threads in this process is actually off running on another CPU; that might hurt... Submitted by: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
* o Pass VM_ALLOC_WIRED to vm_page_grab() rather than calling vm_page_wire() (alc, 2002-07-29, 1 file, -7/+2)
| | | | | in pmap_new_thread(), pmap_pinit(), and vm_proc_new(). o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
* Do not pass a thread with the state TDS_RUNQ to setrunqueue(), otherwise (tanimura, 2002-07-21, 1 file, -1/+4)
| | | | the assertion in setrunqueue() fails.
* o Lock page queue accesses by vm_page_wire(). (alc, 2002-07-14, 1 file, -0/+2)
|
* o Lock some page queue accesses, in particular, those by vm_page_unwire(). (alc, 2002-07-13, 1 file, -0/+4)
|
* Avoid a vm_page_lookup() - that uses a spinlock-protected hash. We can (peter, 2002-07-12, 1 file, -2/+5)
| | | | just use the object's memq for our nefarious purposes.
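The replacement pattern looks roughly like this; memq and listq are the real vm_object/vm_page list fields of that era, and the loop body is illustrative:

    vm_page_t m;

    TAILQ_FOREACH(m, &object->memq, listq) {
            /* visits every resident page of the object without a
               hashed vm_page_lookup(), hence no spinlock per page */
    }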
* Avoid vm_page_lookup() [grabs a spinlock] and just process the upage (peter, 2002-07-08, 1 file, -14/+9)
| | | | | | object memq instead. Suggested by: alc
* Collect all the (now equivalent) pmap_new_proc/pmap_dispose_proc/ (peter, 2002-07-07, 1 file, -7/+152)
| | | | | | | | | | | | | pmap_swapin_proc/pmap_swapout_proc functions from the MD pmap code and use a single equivalent MI version. There are other cleanups still needed. While here, use the UMA zone hooks to keep a cache of preinitialized proc structures handy, just like the thread system does. This eliminates one dependency on 'struct proc' being persistent even after being freed. There are some comments about things that can be factored out into ctor/dtor functions if it is worth it. For now they are mostly just doing statistics to get a feel for how it is working.
* A small cleanup. (julian, 2002-07-04, 1 file, -1/+0)
|
* Don't call the thread setup routines from here. (julian, 2002-07-04, 1 file, -1/+0)
| | | | They are already called when UMA calls thread_init().
* Part 1 of KSE-III (julian, 2002-06-29, 1 file, -17/+31)
| | | | | | | | | | | | | The ability to schedule multiple threads per process (on one CPU) by making ALL system calls optionally asynchronous. To come: ia64 and PowerPC patches, patches for gdb, test program (in tools). Reviewed by: Almost everyone who counts (at various times: peter, jhb, matt, alfred, mini, bernd, and a cast of thousands) NOTE: this is still Beta code, and contains lots of debugging stuff. Expect slight instability in signals.
* o Remove GIANT_REQUIRED from vslock(). (alc, 2002-06-22, 1 file, -1/+10)
| | | | | | o Annotate kernacc(), useracc(), and vslock() as MPSAFE. Motivated by: alfred
* o Remove GIANT_REQUIRED from useracc() and vsunlock(). Neither (alc, 2002-06-15, 1 file, -3/+4)
| | | | | vm_map_check_protection() nor vm_map_unwire() expect Giant to be held.
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and (alc, 2002-06-14, 1 file, -4/+3)
| | | | | | | | | vm_map_user_pageable(). o Remove vm_map_pageable() and vm_map_user_pageable(). o Remove vm_map_clear_recursive() and vm_map_set_recursive(). (They were only used by vm_map_pageable() and vm_map_user_pageable().) Reviewed by: tegge
* o Introduce and use vm_map_trylock() to replace several direct uses (alc, 2002-04-28, 1 file, -3/+1)
| | | | | | of lockmgr(). o Add missing synchronization to vmspace_swap_count(): Obtain a read lock on the vm_map before traversing it.
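A sketch of what such a wrapper can look like, assuming the map lock was still a lockmgr lock at this revision; the flags and signature here are assumptions, not the committed code:

    static __inline int
    vm_map_trylock(vm_map_t map)
    {
            /* LK_NOWAIT makes lockmgr fail instead of sleeping when
               the lock is contested; return non-zero on success */
            return (lockmgr(&map->lock, LK_EXCLUSIVE | LK_NOWAIT,
                NULL, curthread) == 0);
    }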
* Remove __P. (alfred, 2002-03-19, 1 file, -3/+3)
|
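__P() was the compatibility wrapper that hid ANSI prototype argument lists from pre-ANSI compilers; its removal is mechanical (hypothetical prototype shown):

    int     foo __P((struct proc *, int));      /* before */
    int     foo(struct proc *, int);            /* after */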
* Fix a gcc-3.1+ warning. (peter, 2002-03-19, 1 file, -0/+1)
| | | | | | | | | | | warning: deprecated use of label at end of compound statement
| i.e., you cannot do this anymore:
|     switch (foo) {
|     ....
|     default:
|     }
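One plausible one-line fix, consistent with the +1 diffstat but not verified against the commit, is to give the trailing label a statement to attach to:

    switch (foo) {
    ....
    default:
            break;      /* gcc 3.1+ requires a statement after the label */
    }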
* Back out the modification of vm_map locks from lockmgr to sx locks. The (green, 2002-03-18, 1 file, -1/+3)
| | | | | | | | | | best path forward now is likely to change the lockmgr locks to simple sleep mutexes, then see if any extra contention this generates is greater than the removed overhead of managing local locking state information, the cost of extra calls into lockmgr, etc. Additionally, making the vm_map lock a mutex and respecting it properly will put us much closer to not needing Giant magic in vm.
* Undo part of revision 1.57: Now that (o)sendsig() doesn't call useracc(), (alc, 2002-03-17, 1 file, -13/+3)
| | | | | | | | the motivation for saving and restoring the map->hint in useracc() is gone. (The same tests that motivated this change in revision 1.57 now show that there is no performance loss from removing it.) This was really a hack and some day we would have had to add new synchronization here on map->hint to maintain it.
* Acquire a read lock on the map inside of vm_map_check_protection() rather (alc, 2002-03-17, 1 file, -4/+1)
| | | | | than expecting the caller to do so. This (1) eliminates duplicated code in kernacc() and useracc() and (2) fixes missing synchronization in munmap().
* Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate. (green, 2002-03-13, 1 file, -3/+1)
| | | | | | | | | | | | | | | | While doing this, move it earlier in the sysinit boot process so that the VM system can use it. After that, the system is now able to use sx locks instead of lockmgr locks in the VM system. To accomplish this, some of the more questionable uses of the locks (such as testing whether they are owned or not, as well as allowing shared+exclusive recursion) are removed, and simpler logic throughout is used so locks should also be easier to understand. This has been tested on my laptop for months, and has not shown any problems on SMP systems, either, so appears quite safe. One more user of lockmgr down, many more to go :)
* - Remove a number of extra newlines that do not belong here according to (eivind, 2002-03-10, 1 file, -2/+0)
| | | | | | | | | style(9) - Minor space adjustment in cases where we have "( ", " )", if(), return(), while(), for(), etc. - Add /* SYMBOL */ after a few #endifs. Reviewed by: alc
* Remove unused variable (td). (peter, 2002-02-26, 1 file, -1/+0)
|
* In a threaded world, different priorities become properties of (julian, 2002-02-11, 1 file, -2/+3)
| | | | | | different entities. Make it so. Reviewed by: jhb@freebsd.org (john baldwin)
* Pre-KSE/M3 commit. (julian, 2002-02-07, 1 file, -4/+5)
| | | | | | | | | | this is a low-functionality change that changes the kernel to access the main thread of a process via the linked list of threads rather than assuming that it is embedded in the process. It IS still embedded there but remove all the code that assumes that, in preparation for the next commit which will actually move it out. Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice,
* Fix a race with free'ing vmspaces at process exit when vmspaces are (alfred, 2002-02-05, 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | shared. Also introduce vm_endcopy instead of using pointer tricks when initializing new vmspaces. The race occurred because of how the reference was utilized: test vmspace reference, possibly block, decrement reference. When sharing a vmspace between multiple processes it was possible for two processes exiting at the same time to test the reference count, possibly block, and neither one free because they wouldn't see the other's update. Submitted by: green
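In outline, with a lock standing in for whatever serialization the fix actually used, and names illustrative:

    /* broken: the test and the decrement are separable, so two
       exiting processes can both read a count of 2, block, and
       neither one ever sees the count reach zero */
    if (vm->vm_refcnt == 1)
            cleanup_that_may_block(vm);         /* hypothetical */
    vm->vm_refcnt--;

    /* fixed shape: decrement-and-test as one indivisible step */
    mtx_lock(&vm_lock);                         /* illustrative lock */
    last = (--vm->vm_refcnt == 0);
    mtx_unlock(&vm_lock);
    if (last)
            vmspace_release(vm);                /* hypothetical helper */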
* Don't declare vm_swapout() in the NO_SWAPPING case when it is not defined. (bde, 2002-01-17, 1 file, -6/+4)
| | | | Fixed some style bugs.
* Change the preemption code for software interrupt thread schedules and (jhb, 2002-01-05, 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | mutex releases to not require flags for the cases when preemption is not allowed: The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent switching to a higher-priority thread on mutex release and swi schedule, respectively, when that switch is not safe. Now that the critical section API maintains a per-thread nesting count, the kernel can easily check whether or not it should switch without relying on flags from the programmer. This fixes a few bugs in that all current callers of swi_sched() used SWI_NOSWITCH, when in fact only the ones called from fast interrupt handlers and the swi_sched of softclock needed this flag. Note that to ensure that swi_sched() calls in clock and fast interrupt handlers do not switch, these handlers have to be explicitly wrapped in critical_enter/exit pairs. Presently, just wrapping the handlers is sufficient, but in the future with the fully preemptive kernel, the interrupt must be EOI'd before critical_exit() is called. (critical_exit() can switch due to a deferred preemption in a fully preemptive kernel.) I've tested the changes to the interrupt code on i386 and alpha. I have not tested ia64, but the interrupt code is almost identical to the alpha code, so I expect it will work fine. PowerPC and ARM do not yet have interrupt code in the tree so they shouldn't be broken. Sparc64 is broken, but that's been ok'd by jake and tmm who will be fixing the interrupt code for sparc64 shortly. Reviewed by: peter Tested on: i386, alpha
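The wrapping it describes is, schematically (handler body and cookie name illustrative):

    critical_enter();           /* per-thread nesting count++: no
                                   preemption while it is non-zero */
    /* ... fast interrupt handler work ... */
    swi_sched(cookie, 0);       /* no SWI_NOSWITCH flag needed now */
    critical_exit();            /* a deferred switch may occur here,
                                   where it is finally safe */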
* Make MAXTSIZ, DFLDSIZ, MAXDSIZ, DFLSSIZ, MAXSSIZ, SGROWSIZ loader (ps, 2001-10-10, 1 file, -5/+4)
| | | | | | | tunable. Reviewed by: peter MFC after: 2 weeks
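Once these are tunables they can be set from /boot/loader.conf; the kern.* names mirror the constants, and the values below are examples only, not defaults:

    kern.maxdsiz="1073741824"   # max data segment size, in bytes
    kern.maxssiz="134217728"    # max stack segment size
    kern.sgrowsiz="131072"      # stack growth increment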
* KSE Milestone 2 (julian, 2001-09-12, 1 file, -61/+94)
| | | | | | | | | | | | | | Note ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current except that there is a thread associated with each process. Sorry john! (your next MFC will be a doozy!) Reviewed by: peter@freebsd.org, dillon@freebsd.org X-MFC after: ha ha ha ha
* Rip some well-duplicated code out of cpu_wait() and cpu_exit() and move (peter, 2001-09-10, 1 file, -1/+17)
| | | | | | | | | | | | it to the MI area. KSE touched cpu_wait() which had the same change replicated five ways for each platform. Now it can just do it once. The only MD parts seemed to be dealing with fpu state cleanup and things like vm86 cleanup on x86. The rest was identical. XXX: ia64 and powerpc did not have cpu_throw(), so I've put a functional stub in place. Reviewed by: jake, tmm, dillon
* whitespace / register cleanup (dillon, 2001-07-04, 1 file, -7/+7)
|
* With Alfred's permission, remove vm_mtx in favor of a fine-grained approach (dillon, 2001-07-04, 1 file, -35/+10)
| | | | | | | | | (this commit is just the first stage). Also add various GIANT_ macros to formalize the removal of Giant, making it easy to test in a more piecemeal fashion. These macros will allow us to test fine-grained locks to a degree before removing Giant, and also after, and to remove Giant in a piecemeal fashion via sysctl's on those subsystems which the authors believe can operate without Giant.
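GIANT_REQUIRED makes the remaining Giant dependence explicit and assertable; schematically (function name illustrative):

    void
    vm_foo(void)
    {
            GIANT_REQUIRED;     /* roughly mtx_assert(&Giant, MA_OWNED):
                                   fails fast under INVARIANTS if a
                                   caller dropped Giant too early */
            /* ... code that still relies on Giant ... */
    }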
* Put the scheduler, vmdaemon, and pagedaemon kthreads back under Giant for (jhb, 2001-06-20, 1 file, -3/+0)
| | | | | now. The proc locking isn't actually safe yet and won't be until it is finished.
* - Lock the VM around the pmap_swapin_proc() call in faultin(). (jhb, 2001-05-23, 1 file, -15/+16)
| | | | | | | | | | | | | | - Don't lock Giant in the scheduler() function except when calling faultin(). - In swapout_procs(), lock the VM before the process to avoid a lock order violation. - In swapout_procs(), release the allproc lock before calling swapout(). We restart the process scan after swapping out a process. - In swapout_procs(), un-#if 0 the code to bump the vmspace reference count and lock the process' vm structures. This bug was introduced by me and could result in the vmspace being free'd out from under a running process. - Fix an old bug where the vmspace reference was not released if we failed the swap_idle_threshold2 test.
* Introduce a global lock for the vm subsystem (vm_mtx). (alfred, 2001-05-19, 1 file, -3/+35)
| | | | | | | | | | | | | | | | | | | vm_mtx does not recurse and is required for most low-level vm operations. Faults cannot be taken without holding Giant. Memory subsystems can now call the base page allocators safely. Almost all atomic ops were removed as they are covered under the vm mutex. Alpha and ia64 now need to catch up to i386's trap handlers. FFS and NFS have been tested; other filesystems will need minor changes (grabbing the vm lock when twiddling page properties). Reviewed (partially) by: jake, jhb
* - Use a timeout for the tsleep in scheduler() instead of having vmmeter() (jhb, 2001-05-18, 1 file, -2/+21)
| | | | | | | | | | | | | | wakeup proc0 by hand to enforce the timeout. - When swapping out a process, keep the process locked via the proc lock from the first checks up until we clear PS_INMEM and set PS_SWAPPING in swapout(). The swapout() function now must be called with the proc lock held and releases it before returning. - Comment out the code to attempt to lock a process' VM structures before swapping out. It is broken in that it releases the lock after obtaining it. If it does grab the lock, it needs to hand it off to swapout() instead of releasing it. This can be revisited when the VM is locked as this is a valid test to perform. It also causes a lock order reversal for the time being, which is the immediate cause for temporarily disabling it.
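The timeout form is just tsleep()'s last argument; the wait channel, wmesg, and interval here are illustrative:

    /* wake up on our own after ~1 second even if nobody ever
       calls wakeup() on this channel */
    tsleep(&proc0, PVM, "sched", hz);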
* - Use PROC_LOCK_ASSERT instead of a direct mtx_assert. (jhb, 2001-05-15, 1 file, -8/+6)
| | | | | | | | | | - Don't hold Giant in the swapper daemon while we walk the list of processes looking for a process to swap back in. - Don't bother grabbing the sched_lock while checking a process' sleep time in swapout_procs() to ensure that a process has been idle for at least swap_idle_threshold2 before swapping it out. If we lose the race we just let a process stay in memory until the next call of swapout_procs(). - Remove some unneeded spl's; sched_lock does all the locking needed in this case.
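The wrapper keeps callers independent of how the proc lock is implemented; the two assertions below are equivalent, which is the point of preferring the first:

    PROC_LOCK_ASSERT(p, MA_OWNED);      /* preferred wrapper */
    mtx_assert(&p->p_mtx, MA_OWNED);    /* what it expands to */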
* Undo part of the tangle of having sys/lock.h and sys/mutex.h included in (markm, 2001-05-01, 1 file, -2/+2)
| | | | | | | | | other "system" header files. Also help the deprecation of lockmgr.h by making it a sub-include of sys/lock.h and removing sys/lockmgr.h from kernel .c files. Sort sys/*.h includes where possible in affected files. OK'ed by: bde (with reservations)
* Convert the allproc and proctree locks from lockmgr locks to sx locks. (jhb, 2001-03-28, 1 file, -4/+5)
|
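sx locks give readers a shared mode, so a typical allproc walk becomes (loop body illustrative):

    struct proc *p;

    sx_slock(&allproc_lock);            /* shared: many readers at once */
    LIST_FOREACH(p, &allproc, p_list) {
            /* read-only inspection of each process */
    }
    sx_sunlock(&allproc_lock);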
* Implement a unified run queue and adjust priority levels accordingly. (jake, 2001-02-12, 1 file, -2/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | - All processes go into the same array of queues, with different scheduling classes using different portions of the array. This allows user processes to have their priorities propagated up into the interrupt thread range if need be. - I chose 64 run queues as an arbitrary number that is greater than 32. We used to have 4 separate arrays of 32 queues each, so this may not be optimal. The new run queue code was written with this in mind; changing the number of run queues only requires changing constants in runq.h and adjusting the priority levels. - The new run queue code takes the run queue as a parameter. This is intended to be used to create per-cpu run queues. Implement wrappers for compatibility with the old interface which pass in the global run queue structure. - Group the priority level, user priority, native priority (before propagation) and the scheduling class into a struct priority. - Change any hard-coded priority levels that I found to use symbolic constants (TTIPRI and TTOPRI). - Remove the curpriority global variable and use that of curproc. This was used to detect when a process' priority had lowered and it should yield. We now effectively yield on every interrupt. - Activate propagate_priority(). It should now have the desired effect without needing to also propagate the scheduling class. - Temporarily comment out the call to vm_page_zero_idle() in the idle loop. It interfered with propagate_priority() because the idle process needed to do a non-blocking acquire of Giant and then other processes would try to propagate their priority onto it. The idle process should not do anything except idle. vm_page_zero_idle() will return in the form of an idle-priority kernel thread which is woken up at appropriate times by the vm system. - Update struct kinfo_proc to the new priority interface. Deliberately change its size by adjusting the spare fields. It remained the same size, but the layout has changed, so userland processes that use it would parse the data incorrectly. The size constraint should really be changed to an arbitrary version number. Also add a debug.sizeof sysctl node for struct kinfo_proc.
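The indexing that falls out of the unified array is simple; RQ_NQS and RQ_PPQ match the eventual sys/runq.h constants, while the helper itself is illustrative:

    #define RQ_NQS  64          /* one array of 64 queues for all classes */
    #define RQ_PPQ  4           /* each queue spans 4 priority levels */

    static __inline int
    runq_queue_of(int pri)
    {
            /* ithread, kernel, realtime, timeshare and idle priority
               ranges map into disjoint slices of the same array */
            return (pri / RQ_PPQ);
    }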