path: root/sys/vm
Commit log (most recent first; each entry ends with: author, date, files changed, lines -/+)
* o Setting PG_MAPPED and PG_WRITEABLE on pages that are mapped and unmapped
  by pmap_qenter() and pmap_qremove() is pointless. In fact, it probably
  leads to unnecessary pmap_page_protect() calls if one of these pages is
  paged out after unwiring. Note: setting PG_MAPPED asserts only that the
  page's pv list may be non-empty. Since checking the status of the page's
  pv list isn't any harder than checking this flag, the flag should probably
  be eliminated. Alternatively, PG_MAPPED could be set by pmap_enter()
  exclusively rather than in various places throughout the kernel.
  (alc, 2002-07-31, 1 file changed, -2/+0)
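  A minimal sketch of the redundancy described above, assuming the
  machine-dependent pv list hangs off the page as m->md.pv_list (the field
  path varies by platform and is an assumption here); both tests answer the
  same question:

      /* Flag-based test, as used today: */
      if (m->flags & PG_MAPPED)
              /* ... page may have active mappings ... */;

      /* Equivalent pv-list test, no flag required: */
      if (!TAILQ_EMPTY(&m->md.pv_list))
              /* ... page has at least one pv entry ... */;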
* o Lock page accesses by vm_page_io_start() with the page queues lock.
  o Assert that the page queues lock is held in vm_page_io_start().
  (alc, 2002-07-31, 1 file changed, -1/+2)
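  The many "lock and assert" commits in this log follow one pattern. A
  minimal sketch, assuming the page queues lock is the global
  vm_page_queue_mtx of this era and using vm_page_io_start()'s one-line
  body as the example:

      void
      vm_page_io_start(vm_page_t m)
      {
              mtx_assert(&vm_page_queue_mtx, MA_OWNED); /* caller holds lock */
              m->busy++;                       /* count an I/O in progress */
      }

      /* Caller side: */
      vm_page_lock_queues();
      vm_page_io_start(m);
      vm_page_unlock_queues();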
* o In vm_object_madvise() and vm_object_page_remove() replace
  vm_page_sleep_busy() with vm_page_sleep_if_busy(). At the same time,
  increase the scope of the page queues lock. (This should significantly
  reduce the locking overhead in vm_object_page_remove().)
  o Apply some style fixes.
  (alc, 2002-07-30, 1 file changed, -15/+10)
* - Optimize wakeup() and its friends; if a thread woken up is being swapped
  in, we do not have to ask the scheduler thread to do that.
  - Assert that a process is not swapped out in the runq functions and in
  swapout().
  - Introduce thread_safetoswapout() for readability.
  - In swapout_procs(), perform a test that may block (the check of a thread
  working on its vm map) first. This lets us call swapout() with sched_lock
  held, providing better atomicity.
  (tanimura, 2002-07-30, 1 file changed, -64/+65)
* o Introduce vm_page_sleep_if_busy() as an eventual replacement for
  vm_page_sleep_busy(). vm_page_sleep_if_busy() uses the page queues lock.
  (alc, 2002-07-29, 2 files changed, -0/+23)
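  A hedged usage sketch of the new helper; the signature and the "vmopar"
  wait message follow the vm_object_page_remove() call site of this era as
  best recalled:

      vm_page_lock_queues();
      /* ... locate page m in the object ... */
      if (vm_page_sleep_if_busy(m, TRUE, "vmopar")) {
              /*
               * The page was busy and we slept; msleep() used the page
               * queues lock as its interlock, so the page may have changed
               * identity and the scan must restart from the top.
               */
              goto again;
      }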
* Remove an XXXKSE comment. The code is no longer a problem.
  (julian, 2002-07-29, 1 file changed, -1/+1)
* Create a new thread state to describe threads that would be ready to run
  except for the fact that they are presently swapped out. Also add a
  process flag to indicate that the process has started the struggle to swap
  back in. This will be needed for the case where multiple threads start the
  swapin action, to stop a collision. Also add code to stop a process from
  being swapped out if one of the threads in this process is actually off
  running on another CPU... that might hurt.
  Submitted by: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
  (julian, 2002-07-29, 1 file changed, -16/+66)
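  A heavily hedged sketch of the swap-in collision guard described above;
  PS_SWAPPINGIN is an assumed name for the new process flag, and faultin()
  is the routine that performs the actual swap-in:

      if ((p->p_sflag & PS_SWAPPINGIN) == 0) {
              p->p_sflag |= PS_SWAPPINGIN;  /* this thread does the swap-in */
              faultin(p);
              p->p_sflag &= ~PS_SWAPPINGIN;
              wakeup(p);                    /* release any waiting threads */
      } else
              tsleep(p, PVM, "swpin", 0);   /* someone else already started */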
* o Pass VM_ALLOC_WIRED to vm_page_grab() rather than calling vm_page_wire()
  in pmap_new_thread(), pmap_pinit(), and vm_proc_new().
  o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
  (alc, 2002-07-29, 1 file changed, -7/+2)
* o Modify vm_page_grab() to accept VM_ALLOC_WIRED.
  (alc, 2002-07-28, 2 files changed, -1/+5)
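  A before/after sketch of the vm_page_grab() change; the flags other than
  VM_ALLOC_WIRED are illustrative:

      /* Before: grab, then wire under the page queues lock. */
      m = vm_page_grab(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_RETRY);
      vm_page_lock_queues();
      vm_page_wire(m);
      vm_page_unlock_queues();

      /* After: one call returns the page already wired. */
      m = vm_page_grab(object, pindex,
          VM_ALLOC_NORMAL | VM_ALLOC_RETRY | VM_ALLOC_WIRED);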
* o Lock page queue accesses by vm_page_free().
  o Apply some style fixes.
  (alc, 2002-07-28, 1 file changed, -14/+19)
* o Lock page queue accesses by vm_page_free().
  (alc, 2002-07-28, 1 file changed, -2/+8)
* o Lock page queue accesses by vm_page_free().
  o Increment cnt.v_dfree inside vm_pageout_page_free() rather than at each
  call site.
  (alc, 2002-07-28, 1 file changed, -2/+3)
* o Lock page queue accesses by vm_page_free().
  (alc, 2002-07-28, 1 file changed, -0/+2)
* o Require that the page queues lock is held on entry to vm_pageout_clean()
  and vm_pageout_flush().
  o Acquire the page queues lock before calling vm_pageout_clean() or
  vm_pageout_flush().
  (alc, 2002-07-27, 3 files changed, -5/+9)
* o Lock page queue accesses by vm_page_activate().
  (alc, 2002-07-27, 1 file changed, -0/+6)
* o Lock page queue accesses by vm_page_activate() and vm_page_deactivate()
  in vm_pageout_object_deactivate_pages().
  o Apply some style fixes to vm_pageout_object_deactivate_pages().
  (alc, 2002-07-27, 1 file changed, -7/+6)
* o Lock page queue accesses by vm_page_activate() and vm_page_deactivate().
  (alc, 2002-07-27, 1 file changed, -0/+2)
* o Remove a vm_page_deactivate() that is immediately followed by a
  vm_page_rename() from vm_object_backing_scan(). vm_page_rename() also
  performs vm_page_deactivate() on pages in the cache queues, making the
  removed vm_page_deactivate() redundant.
  (alc, 2002-07-25, 1 file changed, -3/+0)
* o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new
  parameter, user_wire.
  (alc, 2002-07-24, 3 files changed, -56/+11)
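  A sketch of the merged interface; the exact parameter types are
  assumptions:

      int vm_fault_wire(vm_map_t map, vm_offset_t start, vm_offset_t end,
          boolean_t user_wire);

      /* Callers select behavior with the flag instead of two functions: */
      rv = vm_fault_wire(map, start, end, TRUE);  /* was vm_fault_user_wire() */
      rv = vm_fault_wire(map, start, end, FALSE); /* was vm_fault_wire() */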
* o Lock page queue accesses by vm_page_dontneed().
  o Assert that the page queues lock is held in vm_page_dontneed().
  (alc, 2002-07-23, 2 files changed, -4/+5)
* o Extend the scope of the page queues lock in vm_pageout_scan() to cover
  the traversal of the cache queue.
  (alc, 2002-07-23, 1 file changed, -2/+1)
* Change struct vmspace->vm_shm from void * to struct shmmap_state *; this
  removes the need for casts in several cases.
  (alfred, 2002-07-22, 1 file changed, -1/+1)
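  A sketch of the type change; struct shmmap_state holds per-process SysV
  shared-memory mapping state, and its two-field layout here follows
  sysv_shm.c of this era (an assumption):

      struct shmmap_state {
              vm_offset_t va;         /* mapped address */
              int shmid;              /* segment id */
      };

      /* Before: every consumer needed a cast. */
      shmmap_s = (struct shmmap_state *)p->p_vmspace->vm_shm;
      /* After: vm_shm already has the right type. */
      shmmap_s = p->p_vmspace->vm_shm;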
* Remove caddr_t.
  (alfred, 2002-07-22, 1 file changed, -1/+1)
* o Lock page queue accesses by vm_page_free() and vm_page_deactivate().
  (alc, 2002-07-21, 1 file changed, -0/+12)
* o Lock page queue accesses by vm_page_free().
  (alc, 2002-07-21, 1 file changed, -0/+2)
* Do not pass a thread with the state TDS_RUNQ to setrunqueue(); otherwise
  the assertion in setrunqueue() fails.
  (tanimura, 2002-07-21, 1 file changed, -1/+4)
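  The fix reduces to guarding the call with a state test; a minimal sketch:

      /* Only enqueue threads that are not already on a run queue. */
      if (td->td_state != TDS_RUNQ)
              setrunqueue(td);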
* o Lock page queue accesses by vm_page_try_to_cache(). (The accesses in
  kern/vfs_bio.c are already locked.)
  o Assert that the page queues lock is held in vm_page_try_to_cache().
  (alc, 2002-07-20, 3 files changed, -1/+5)
* o Assert that the page queues lock is held in vm_page_try_to_free().
  (alc, 2002-07-20, 1 file changed, -0/+2)
* o Lock page queue accesses by vm_page_cache() in vm_fault() and
  vm_pageout_scan(). (The others are already locked.)
  o Assert that the page queues lock is held in vm_page_cache().
  (alc, 2002-07-20, 3 files changed, -2/+5)
* o Lock accesses to the active page queue in vm_pageout_scan() and
  vm_pageout_page_stats().
  (alc, 2002-07-20, 1 file changed, -2/+4)
* o Lock page queue accesses by vm_page_cache() in vm_contig_launder().
  o Micro-optimize the control flow in vm_contig_launder().
  (alc, 2002-07-20, 1 file changed, -2/+4)
* o Remove dead and/or unused code.
  (alc, 2002-07-20, 2 files changed, -17/+1)
* Infrastructure tweaks to allow having both an Elf32 and an Elf64
  executable handler in the kernel at the same time. Also, allow the
  exec_new_vmspace() code to build a different sized vmspace depending on
  the executable environment. This is a big help for execing i386 binaries
  on ia64. The ELF exec code grows the ability to map partial pages when
  there is a page size difference, e.g., emulating 4K pages on 8K or 16K
  hardware pages.
  Flesh out the i386 emulation support for ia64. At this point, the only
  binary that I know of that fails is cvsup, because the cvsup runtime
  tries to execute code in pages not marked executable.
  Obtained from: dfr (mostly, with many tweaks from me)
  (peter, 2002-07-20, 2 files changed, -4/+3)
* Set P_NOLOAD on the pagezero kthread so that it doesn't artificially skew
  the loadav. This is not real load. If you have a niced process running in
  the background, pagezero may sit in the run queue for ages, add one to the
  loadav, and thereby affect other scheduling decisions.
  (peter, 2002-07-19, 1 file changed, -1/+7)
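  A sketch of the flag setting at kthread creation; whether the flag lives
  in p_flag and needs the proc lock at this point in history is an
  assumption:

      PROC_LOCK(p);
      p->p_flag |= P_NOLOAD;    /* exclude pagezero from the loadav */
      PROC_UNLOCK(p);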
* o Duplicate an odd side-effect of vm_page_wire() in vm_page_allocate()
  when VM_ALLOC_WIRED is specified: set the PG_MAPPED bit in flags.
  o In both vm_page_wire() and vm_page_allocate() add a comment saying that
  setting PG_MAPPED does not belong there.
  (alc, 2002-07-19, 1 file changed, -1/+2)
* o Remove the acquisition and release of Giant from the idle priority
  thread that pre-zeroes free pages.
  o Remove GIANT_REQUIRED from some low-level page queue functions. (Instead,
  assertions on the page queue lock are being added to the higher-level
  functions, like vm_page_wire(), etc.)
  In collaboration with: peter
  (alc, 2002-07-18, 2 files changed, -8/+1)
* Void functions cannot return values.
  (markm, 2002-07-18, 1 file changed, -1/+1)
* (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer.
  Use lmin(long, long), not min(u_int, u_int). This is a problem here on
  ia64, which has *way* more than 2^32 pages of KVA: 281474976710655 pages,
  to be precise.
  (peter, 2002-07-18, 1 file changed, -1/+1)
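  The issue in miniature: on ia64 that page count exceeds what a u_int can
  hold, so min() silently truncates one operand. The variable name v is
  illustrative:

      /* min() takes u_int and truncates; lmin() keeps the full long range. */
      v = lmin(v, (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE);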
* o Introduce an argument, VM_ALLOC_WIRED, that requests vm_page_alloc() to
  return a wired page.
  o Use VM_ALLOC_WIRED within Alpha's pmap_growkernel(). Also, because
  Alpha's pmap_growkernel() calls vm_page_alloc() from within a critical
  section, specify VM_ALLOC_INTERRUPT instead of VM_ALLOC_SYSTEM. (Only
  VM_ALLOC_INTERRUPT is implemented entirely with a spin mutex.)
  o Assert that the page queues mutex is held in vm_page_wire() on Alpha,
  just like the other platforms.
  (alc, 2002-07-18, 2 files changed, -10/+15)
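  A sketch of the pmap_growkernel() allocation described above; kptobj as
  the kernel page-table object and the pindex computation are assumptions:

      /*
       * Inside a critical section we may not sleep, so only the
       * spin-mutex-backed VM_ALLOC_INTERRUPT pool is safe to use.
       */
      nkpg = vm_page_alloc(kptobj, pindex,
          VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED);
      if (nkpg == NULL)
              panic("pmap_growkernel: no memory to grow kernel");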
* o Use vm_pageq_remove_nowakeup() and vm_pageq_enqueue() in
  vm_page_zero_idle() instead of partially duplicated implementations. In
  particular, this change guarantees that the number of free pages in the
  free queue(s) matches the global free page count when Giant is released.
  Submitted by: peter (via his p4 "pmap" branch)
  (alc, 2002-07-16, 1 file changed, -7/+2)
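  A sketch of the simplified vm_page_zero_idle() inner loop; that
  pmap_zero_page() takes a physical address in this era is an assumption:

      vm_pageq_remove_nowakeup(m);          /* dequeue, no wakeup of waiters */
      pmap_zero_page(VM_PAGE_TO_PHYS(m));   /* zero while off the queue */
      m->flags |= PG_ZERO;
      vm_pageq_enqueue(PQ_FREE + m->pc, m); /* requeue on the matching
                                               (page-colored) free queue */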
* o Create vm_contig_launder() to replace code that appears twice in
  contigmalloc1().
  (alc, 2002-07-15, 1 file changed, -56/+38)
* o Lock page queue accesses by vm_page_wire() that aren't within a critical
  section.
  o Assert that the page queues lock is held in vm_page_wire() unless on
  Alpha.
  (alc, 2002-07-14, 1 file changed, -0/+3)
* o Lock page queue accesses by vm_page_wire().
  (alc, 2002-07-14, 2 files changed, -0/+4)
* o Lock page queue accesses by vm_page_unmanage().
  o Assert that the page queues lock is held in vm_page_unmanage().
  (alc, 2002-07-13, 2 files changed, -0/+3)
* o Complete the locking of page queue accesses by vm_page_unwire().
  o Assert that the page queues lock is held in vm_page_unwire().
  o Make vm_page_lock_queues() and vm_page_unlock_queues() visible to
  kernel loadable modules.
  (alc, 2002-07-13, 2 files changed, -5/+4)
* o Lock some page queue accesses, in particular, those by vm_page_unwire().
  (alc, 2002-07-13, 2 files changed, -1/+8)
* o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),
  _vm_map_lock_read(), and _vm_map_trylock().
  Submitted by: tegge
  o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
  (This clears the way for exec_map accesses to move outside of Giant. The
  exec_map is not a system map.)
  o Remove some premature MPSAFE comments.
  Reviewed by: tegge
  (alc, 2002-07-12, 2 files changed, -9/+6)
* Re-enable the idle page-zeroing code. Remove all IPIs from the idle
  page-zeroing code as well as from the general page-zeroing code, and use a
  lazy TLB page-invalidation scheme based on a callback made at the end of
  mi_switch. A number of people came up with this idea at the same time, so
  credit belongs to Peter, John, and Jake as well.
  Two-way SMP buildworld -j 5 tests (second run, after stabilization):
      2282.76 real  2515.17 user  704.22 sys   before peter's IPI commit
      2266.69 real  2467.50 user  633.77 sys   after peter's commit
      2232.80 real  2468.99 user  615.89 sys   after this commit
  Reviewed by: peter, jhb
  Approved by: peter
  (dillon, 2002-07-12, 1 file changed, -4/+0)
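  A heavily hedged sketch of the lazy scheme; the per-CPU flag name is
  hypothetical and invltlb() stands in for the machine's TLB flush:

      /*
       * At the end of mi_switch(), settle any flush owed by this CPU
       * instead of having the page-zeroing code IPI every CPU immediately.
       */
      if (PCPU_GET(lazy_tlb_pending)) {     /* hypothetical per-CPU flag */
              invltlb();                    /* flush this CPU's TLB */
              PCPU_SET(lazy_tlb_pending, 0);
      }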
* Avoid a vm_page_lookup(), which uses a spinlock-protected hash; we can
  just use the object's memq for our nefarious purposes.
  (peter, 2002-07-12, 1 file changed, -2/+5)
* o Lock some (unfortunately, not yet all) accesses to the page queues.
  (alc, 2002-07-12, 1 file changed, -2/+2)