| Commit message | Author | Age | Files | Lines |
* a prerequisite.
* s/SNGL/SINGLE/
  s/SNGLE/SINGLE/
  Fix the abbreviations for the P_STOPPED_* etc. flags; in the original
  code they were inconsistent and difficult to distinguish from one
  another.
  Approved by: julian (mentor)
* Reduce the swap meta calculation by a factor of 2; it is still massive
  overkill.
  X-MFC after: immediately
* in the original hardwired sysctl implementation.
  The buf size calculator still overflows an integer on machines with large
  KVA (e.g. ia64), where the number of pages does not fit into an int. Use
  'long' there.
  Change Maxmem, physmem, and related variables to 'long', mostly for
  completeness. Machines are not likely to overflow 'int' pages in the
  near term, but then again, 640K ought to be enough for anybody. This
  comes for free on 32-bit machines, so why not?
* of our platforms implements.
* pmap_zero_page() and pmap_zero_page_area() were modified to accept
  a struct vm_page * instead of a physical address, vm_page_zero_fill()
  and vm_page_zero_fill_area() have served no purpose.
* Reviewed by: md5
* in vm_map_insert().
* (For now, they simply acquire and release Giant.)
* ia64 pmap.
  o Remove the PG_MAPPED flag's declaration.
* obsolete.)
* flag. (This is the only place in the entire kernel where the PG_MAPPED
  flag is tested. It will be removed soon.)
* in vm_page_grab(). Also, replace the nearby tsleep() with an msleep()
  on the page queues lock.
* - v_vflag is protected by the vnode lock and is used when synchronization
    with VOP calls is needed.
  - v_iflag is protected by the interlock and is used for dealing with vnode
    management issues. These flags include X/O LOCK, FREE, DOOMED, etc.
  - All accesses to v_iflag and v_vflag have either been locked or marked
    with mp_fixme's.
  - Many ASSERT_VOP_LOCKED calls have been added where the locking was not
    clear.
  - Many functions in vfs_subr.c were restructured to provide for stronger
    locking.
  Idea stolen from: BSD/OS
* o Replace vm_page_sleep_busy() with vm_page_sleep_if_busy()
    in vm_contig_launder().
* vm_page_alloc(VM_ALLOC_WIRED).
* with appropriate page queue locking.
* o Assert that the page queues lock is held in vm_page_deactivate().
* o Assert that the page queues lock is held in vm_page_io_finish().
* by pmap_qenter() and pmap_qremove() is pointless. In fact, it probably
  leads to unnecessary pmap_page_protect() calls if one of these pages is
  paged out after unwiring.
  Note: setting PG_MAPPED asserts that the page's pv list may be
  non-empty. Since checking the status of the page's pv list isn't any
  harder than checking this flag, the flag should probably be eliminated.
  Alternatively, PG_MAPPED could be set by pmap_enter() exclusively
  rather than in various places throughout the kernel.
* o Assert that the page queues lock is held in vm_page_io_start().
* vm_page_sleep_busy() with vm_page_sleep_if_busy(). At the same time,
  increase the scope of the page queues lock. (This should significantly
  reduce the locking overhead in vm_object_page_remove().)
  o Apply some style fixes.
* swapped in, we do not have to ask the scheduler thread to do
  that.
  - Assert that a process is not swapped out in the runq functions and
    in swapout().
  - Introduce thread_safetoswapout() for readability.
  - In swapout_procs(), perform a test that may block (the check of a
    thread working on its vm map) first. This lets us call swapout()
    with the sched_lock held, providing better atomicity.
* vm_page_sleep_busy(). vm_page_sleep_if_busy() uses the page
  queues lock.
* except for the fact that they are presently swapped out. Also add a
  process flag to indicate that the process has started the struggle to
  swap back in. This will be needed for the case where multiple threads
  start the swapin action, to stop a collision. Also add code to stop
  a process from being swapped out if one of the threads in this
  process is actually off running on another CPU... that might hurt...
  Submitted by: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
* in pmap_new_thread(), pmap_pinit(), and vm_proc_new().
  o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
* o Apply some style fixes.
* o Increment cnt.v_dfree inside vm_pageout_page_free() rather than
    at each call.
* and vm_pageout_flush().
  o Acquire the page queues lock before calling vm_pageout_clean()
    or vm_pageout_flush().
* in vm_pageout_object_deactivate_pages().
  o Apply some style fixes to vm_pageout_object_deactivate_pages().
* vm_page_rename() from vm_object_backing_scan(). vm_page_rename()
  also performs vm_page_deactivate() on pages in the cache queues,
  making the removed vm_page_deactivate() redundant.
* user_wire.
* o Assert that the page queue lock is held in vm_page_dontneed().
* to cover the traversal of the cache queue.
* removes the need for casts in several cases.