path: root/sys/vm/vm_page.c
Commit history (most recent first); each entry lists the author, date, diffstat, and commit message.
...
* alc, 2003-04-25 (1 file changed, -3/+2)
  - Relax the Giant requirement in vm_page_remove().
  - Remove the Giant requirement from vm_page_free_toq(). (Any locking errors
    will be caught by vm_page_remove().)
  This remedies a panic that occurred when a kmem_malloc(NOWAIT) call made
  without Giant failed to allocate the necessary pages.
  Reported by: phk

* alc, 2003-04-22 (1 file changed, -1/+2)
  Revision 1.246 should have also included:
  - Weaken the assertion in vm_page_insert() to require Giant only if the
    vm_object isn't locked.
  Reported by: "Ilmar S. Habibulin" <ilmar@watson.org>

* alc, 2003-04-22 (1 file changed, -3/+2)
  Revision 1.52 of vm/uma_core.c has led to UMA's obj_alloc() being called
  without Giant, and obj_alloc() in turn calls vm_page_alloc() without Giant.
  This causes an assertion failure in vm_page_alloc(). Fortunately,
  obj_alloc() is now MPSAFE, so we need only clean up some assertions:
  - Weaken the assertion in vm_page_lookup() to require Giant only if the
    vm_object isn't locked.
  - Remove an assertion from vm_page_alloc() that duplicates a check
    performed in vm_page_lookup().
  In collaboration with: gallatin, jake, jeff

* jhb, 2003-04-10 (1 file changed, -4/+0)
  - Kill the pv_flags member of the alpha mdpage, since it stopped being used
    in rev 1.61 of pmap.c.
  - Now that pmap_page_is_free() is empty, and since it is just a hack for
    the Alpha pmap, remove it.

* jake, 2003-03-25 (1 file changed, -8/+10)
  - Add vm_paddr_t, a physical address type. This is required for systems
    where physical addresses are larger than virtual addresses, such as i386s
    with PAE.
  - Use this to represent physical addresses in the MI vm system and in the
    i386 pmap code. This also changes the paddr parameter to d_mmap_t.
  - Fix printf formats to handle physical addresses >4G in the i386 memory
    detection code, and due to kvtop returning vm_paddr_t instead of u_long.
  Note that this is a name change only; vm_paddr_t is still the same as
  vm_offset_t on all currently supported platforms.
  Sponsored by: DARPA, Network Associates Laboratories
  Discussed with: re, phk (cdevsw change)
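To illustrate the vm_paddr_t change above: a minimal userland sketch (not from the commit) of why a separate physical-address type matters. Under i386 PAE the virtual address space stays 32-bit while physical addresses can exceed 4GB, so storing a physical address in vm_offset_t would truncate it. The typedef widths and the printf idiom are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t vm_offset_t;   /* virtual addresses: still 32 bits on i386 */
    typedef uint64_t vm_paddr_t;    /* physical addresses: may exceed 4GB under PAE */

    int
    main(void)
    {
        vm_paddr_t pa = 0x12345678ULL << 8;     /* a physical address above 4GB */

        /* Casting to uintmax_t and printing with %jx handles >4G addresses. */
        printf("physical page at %#jx\n", (uintmax_t)pa);
        return (0);
    }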
* mux, 2003-03-19 (1 file changed, -4/+0)
  Remove an empty comment.

* jake, 2003-03-17 (1 file changed, -4/+2)
  Subtract the memory that backs the vm_page structures from phys_avail after
  mapping it. This makes it possible to determine if a physical page has a
  backing vm_page or not.
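A hypothetical kernel-context sketch of the property this enables (not the committed code): phys_avail[] is assumed to be the usual array of {start, end} pairs terminated by a zero entry, so once the pages holding the vm_page array are subtracted from it, membership in a phys_avail[] range implies a backing vm_page.

    /* Hypothetical helper, not part of the commit. */
    static int
    phys_addr_has_vm_page(vm_paddr_t pa)
    {
        int i;

        /* phys_avail[] holds {start, end} pairs, terminated by a zero entry. */
        for (i = 0; phys_avail[i + 1] != 0; i += 2)
            if (pa >= phys_avail[i] && pa < phys_avail[i + 1])
                return (1);
        return (0);
    }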
* alc, 2003-03-01 (1 file changed, -0/+9)
  Teach vm_page_sleep_if_busy() to release the vm_object lock before sleeping.

* alc, 2003-02-24 (1 file changed, -0/+2)
  In vm_page_dirty(), assert that the page is not in the free queue(s).

* alc, 2003-02-01 (1 file changed, -3/+10)
  - Convert the tsleep()s in vm_wait() and vm_waitpfault() to msleep()s with
    the page queue lock.
  - Assert that the page queue lock is held in vm_page_free_wakeup().
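The point of the tsleep()-to-msleep() conversion above is that msleep() atomically releases the supplied mutex while sleeping and reacquires it before returning, so the low-memory test and the sleep are covered by one lock and no wakeup can be lost between them. A rough sketch of the pattern follows; the lock and wait-channel names (vm_page_queue_mtx, cnt.v_free_count, "vmwait") are assumptions, not quotes from the commit.

    /* Sketch of the msleep() pattern; identifiers are assumed, not verbatim. */
    mtx_lock(&vm_page_queue_mtx);
    while (cnt.v_free_count < cnt.v_free_min)
        msleep(&cnt.v_free_count, &vm_page_queue_mtx, PVM, "vmwait", 0);
    mtx_unlock(&vm_page_queue_mtx);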
* alc, 2003-01-20 (1 file changed, -1/+2)
  - Hold the page queues lock around vm_page_hold().
  - Assert that the page queues lock, rather than Giant, is held in
    vm_page_hold().

* alc, 2003-01-14 (1 file changed, -2/+2)
  - Update vm_pageout_deficit using atomic operations. It's a simple counter
    outside the scope of existing locks.
  - Eliminate a redundant clearing of vm_pageout_deficit.

* alc, 2003-01-12 (1 file changed, -4/+5)
  Make vm_page_alloc() return PG_ZERO only if VM_ALLOC_ZERO is specified. The
  objective is to eliminate some cases of page queues locking. (See, for
  example, vm/vm_fault.c revision 1.160.)
  Reviewed by: tegge
  (Also, pointed out by tegge that I changed vm_fault.c before changing
  vm_page.c. Oops.)
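After this change only callers that pass VM_ALLOC_ZERO may consult PG_ZERO. A sketch of the usual caller pattern under that assumption; the helper below is hypothetical, not code from the commit.

    /* Hypothetical caller, illustrating the VM_ALLOC_ZERO / PG_ZERO contract. */
    static void
    alloc_zeroed_page(vm_object_t object, vm_pindex_t pindex)
    {
        vm_page_t m;

        m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_ZERO);
        if (m == NULL)
            return;
        if ((m->flags & PG_ZERO) == 0)
            pmap_zero_page(m);  /* page was not pre-zeroed; zero it now */
    }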
* alc, 2003-01-11 (1 file changed, -8/+3)
  In vm_page_alloc(), fuse two if statements that are conditioned on the same
  expression.

* alc, 2003-01-08 (1 file changed, -12/+5)
  In vm_page_alloc(), honor VM_ALLOC_ZERO for system and interrupt class
  requests when the number of free pages is below the reserved threshold.
  Previously, VM_ALLOC_ZERO was only honored when the number of free pages
  was above the reserved threshold. Honoring it in all cases generally makes
  sense, does no harm, and simplifies the code.

* alc, 2003-01-05 (1 file changed, -3/+3)
  Use atomic add and subtract to update the global wired page count,
  cnt.v_wire_count.
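A sketch of that pattern, assuming the standard FreeBSD atomic(9) primitives; the wrapper function is hypothetical and only shows the counter updates.

    /* Hypothetical helper: adjust the global wired-page count without a lock. */
    static void
    wire_count_adjust(int wiring)
    {
        if (wiring)
            atomic_add_int(&cnt.v_wire_count, 1);
        else
            atomic_subtract_int(&cnt.v_wire_count, 1);
    }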
* alc, 2003-01-04 (1 file changed, -2/+2)
  Refine the assertions in vm_page_alloc().

* alc, 2003-01-01 (1 file changed, -4/+2)
  Update the assertions in vm_page_insert() and vm_page_lookup() to reflect
  locking of the kmem_object.

* alc, 2002-12-29 (1 file changed, -2/+0)
  Reduce the number of times that we acquire and release the page queues lock
  by making vm_page_rename()'s caller, rather than vm_page_rename(),
  responsible for acquiring it.

* alc, 2002-12-28 (1 file changed, -1/+2)
  Assert that the page queues lock rather than Giant is held in
  vm_page_flag_clear().

* alc, 2002-12-19 (1 file changed, -34/+2)
  - Remove vm_page_sleep_busy(). The transition to vm_page_sleep_if_busy(),
    which incorporates page queue and field locking, is complete.
  - Assert that the page queue lock rather than Giant is held in
    vm_page_flag_set().

* alc, 2002-12-15 (1 file changed, -2/+4)
  Assert that the page queues lock is held in vm_page_unhold(),
  vm_page_remove(), and vm_page_free_toq().

* alc, 2002-11-23 (1 file changed, -1/+1)
  Hold the page queues/flags lock when calling vm_page_set_validclean().
  Approved by: re

* alc, 2002-11-18 (1 file changed, -22/+1)
  Remove vm_page_protect(). Instead, use pmap_page_protect() directly.

* alc, 2002-11-16 (1 file changed, -4/+4)
  Now that pmap_remove_all() is exported by our pmap implementations, use it
  directly.

* alc, 2002-11-10 (1 file changed, -3/+3)
  When prot is VM_PROT_NONE, call pmap_page_protect() directly rather than
  indirectly through vm_page_protect(). The one remaining page flag that is
  updated by vm_page_protect() is already being updated by our various pmap
  implementations.
  Note: A later commit will similarly change the VM_PROT_READ case and
  eliminate vm_page_protect().
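In other words, call sites of the form below lose one level of indirection; this is a schematic before/after, not the literal diff from the commit.

    /* Before (schematic): the wrapper chose the pmap operation from prot. */
    vm_page_protect(m, VM_PROT_NONE);

    /* After (schematic): go to the pmap layer directly to remove all mappings. */
    pmap_page_protect(m, VM_PROT_NONE);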
* alc, 2002-11-09 (1 file changed, -10/+13)
  In vm_page_remove(), avoid calling vm_page_splay() if the object's memq is
  empty.

* alc, 2002-11-04 (1 file changed, -1/+1)
  Export the function vm_page_splay().

* alc, 2002-11-03 (1 file changed, -45/+2)
  - Remove the memory allocation for the object/offset hash table because
    it's no longer used. (See revision 1.215.)
  - Fix a harmless bug: the number of vm_page structures allocated wasn't
    properly adjusted when uma_bootstrap() was introduced. Consequently, we
    were allocating 30 unused vm_page structures.
  - Wrap a long line.

* alc, 2002-11-02 (1 file changed, -2/+0)
  Remove the vm page buckets mutex. As of revision 1.215 of vm/vm_page.c, it
  is unused.

* jeff, 2002-11-01 (1 file changed, -19/+24)
  - Add a new flag to vm_page_alloc, VM_ALLOC_NOOBJ. This tells vm_page_alloc
    not to insert this page into an object. The pindex is still used for
    colorization.
  - Rework vm_page_select_* to accept a color instead of an object and pindex
    to work with VM_ALLOC_NOOBJ.
  - Document other VM_ALLOC_ flags.
  Reviewed by: peter, jake
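A sketch of how such an allocation might look under the description above: no owning object is supplied, and the pindex argument only steers page coloring. The helper and the wiring flag are assumptions for illustration.

    /* Hypothetical use of VM_ALLOC_NOOBJ: no object, pindex used only as a color. */
    static vm_page_t
    alloc_unmanaged_page(vm_pindex_t color)
    {
        return (vm_page_alloc(NULL, color, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED));
    }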
* alc, 2002-10-20 (1 file changed, -11/+0)
  o Reinline vm_page_undirty(), reducing the kernel size. (This reverts a
    part of vm_page.h revision 1.87 and vm_page.c revision 1.167.)

* alc, 2002-10-19 (1 file changed, -1/+7)
  Complete the page queues locking needed for the page-based copy-on-write
  (COW) mechanism. (This mechanism is used by the zero-copy TCP/IP
  implementation.)
  - Extend the scope of the page queues lock in vm_fault() to cover
    vm_page_cowfault().
  - Modify vm_page_cowfault() to release the page queues lock if it sleeps.

* dillon, 2002-10-18 (1 file changed, -58/+92)
  Replace the vm_page hash table with a per-vmobject splay tree. There should
  be no major change in performance from this change at this time, but it
  will allow other work to progress: Giant lock removal around the VM system
  in favor of per-object mutexes, ranged fsyncs, more optimal COMMIT rpc's
  for NFS, partial filesystem syncs by the syncer, more optimal object
  flushing, etc. Note that the buffer cache is already using a similar splay
  tree mechanism.
  Note that a good chunk of the old hash table code is still in the tree.
  Alan or I will remove it prior to the release if the new code does not
  introduce unsolvable bugs; otherwise we can revert more easily.
  Submitted by: alc (this is Alan's code)
  Approved by: re
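The idea, as a heavily simplified sketch: each vm_object now carries the root of a binary search tree of its pages keyed on pindex, replacing the global object/offset hash. The real code splays the tree (rotating the looked-up page to object->root) so repeated lookups of nearby indexes stay cheap; the sketch below omits the rotations and only walks the left/right links, and the field names root, left, and right are assumed.

    /* Simplified lookup sketch; the committed vm_page_lookup() splays instead. */
    static vm_page_t
    page_tree_lookup(vm_object_t object, vm_pindex_t pindex)
    {
        vm_page_t m;

        for (m = object->root; m != NULL && m->pindex != pindex; )
            m = (pindex < m->pindex) ? m->left : m->right;
        return (m);
    }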
* alc, 2002-09-02 (1 file changed, -6/+5)
  o Synchronize updates to struct vm_page::cow with the page queues lock.

* alc, 2002-08-25 (1 file changed, -25/+0)
  o Retire vm_page_zero_fill() and vm_page_zero_fill_area(). Ever since
    pmap_zero_page() and pmap_zero_page_area() were modified to accept a
    struct vm_page * instead of a physical address, vm_page_zero_fill() and
    vm_page_zero_fill_area() have served no purpose.

* alc, 2002-08-11 (1 file changed, -1/+1)
  o Assert that the page queues lock is held in vm_page_activate().

* alc, 2002-08-10 (1 file changed, -1/+1)
  o Remove the setting and clearing of the PG_MAPPED flag. (This flag is
    obsolete.)

* alc, 2002-08-08 (1 file changed, -1/+1)
  o Use pmap_page_is_mapped() in vm_page_protect() rather than the PG_MAPPED
    flag. (This is the only place in the entire kernel where the PG_MAPPED
    flag is tested. It will be removed soon.)

* alc, 2002-08-04 (1 file changed, -2/+4)
  o Acquire the page queues lock before checking the page's busy status in
    vm_page_grab(). Also, replace the nearby tsleep() with an msleep() on the
    page queues lock.

* jeff, 2002-08-04 (1 file changed, -2/+6)
  - Replace v_flag with v_iflag and v_vflag.
  - v_vflag is protected by the vnode lock and is used when synchronization
    with VOP calls is needed.
  - v_iflag is protected by interlock and is used for dealing with vnode
    management issues. These flags include X/O LOCK, FREE, DOOMED, etc.
  - All accesses to v_iflag and v_vflag have either been locked or marked
    with mp_fixme's.
  - Many ASSERT_VOP_LOCKED calls have been added where the locking was not
    clear.
  - Many functions in vfs_subr.c were restructured to provide for stronger
    locking.
  Idea stolen from: BSD/OS
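A minimal sketch of that flag split, with the locking rules paraphrased as comments; the real struct vnode has many more fields, and the flag names in the comment are assumed spellings of the ones the message abbreviates.

    /* Sketch only; not the full struct vnode. */
    struct vnode_sketch {
        int v_iflag;    /* vnode-management flags (e.g. VI_XLOCK, VI_FREE,
                           VI_DOOMED); protected by the vnode interlock */
        int v_vflag;    /* flags needing synchronization with VOP calls;
                           protected by the vnode lock */
    };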
* alc, 2002-08-03 (1 file changed, -2/+0)
  o Remove the setting of PG_MAPPED from vm_page_wire() and
    vm_page_alloc(VM_ALLOC_WIRED).

* alc, 2002-08-02 (1 file changed, -1/+1)
  o Lock page queue accesses in nwfs and smbfs.
  o Assert that the page queues lock is held in vm_page_deactivate().

* alc, 2002-08-01 (1 file changed, -1/+2)
  o Acquire the page queues lock before calling vm_page_io_finish().
  o Assert that the page queues lock is held in vm_page_io_finish().

* alc, 2002-07-31 (1 file changed, -1/+2)
  o Lock page accesses by vm_page_io_start() with the page queues lock.
  o Assert that the page queues lock is held in vm_page_io_start().

* alc, 2002-07-29 (1 file changed, -0/+22)
  o Introduce vm_page_sleep_if_busy() as an eventual replacement for
    vm_page_sleep_busy(). vm_page_sleep_if_busy() uses the page queues lock.
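A sketch of the usual caller pattern for the new function, assuming a signature of the form vm_page_sleep_if_busy(m, also_m_busy, wmesg) and a hypothetical lookup helper: if the call slept, the page may have been freed or changed, so the caller starts over.

    /* Hypothetical caller showing the retry loop around a busy page. */
    static vm_page_t
    lookup_unbusied_page(vm_object_t object, vm_pindex_t pindex)
    {
        vm_page_t m;

    retry:
        m = vm_page_lookup(object, pindex);
        if (m != NULL && vm_page_sleep_if_busy(m, TRUE, "pgwait"))
            goto retry;     /* we slept: re-lookup, the page may be gone */
        return (m);
    }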
* alc, 2002-07-28 (1 file changed, -0/+4)
  o Modify vm_page_grab() to accept VM_ALLOC_WIRED.

* alc, 2002-07-23 (1 file changed, -1/+1)
  o Lock page queue accesses by vm_page_dontneed().
  o Assert that the page queue lock is held in vm_page_dontneed().

* alc, 2002-07-20 (1 file changed, -1/+1)
  o Lock page queue accesses by vm_page_try_to_cache(). (The accesses in
    kern/vfs_bio.c are already locked.)
  o Assert that the page queues lock is held in vm_page_try_to_cache().

* alc, 2002-07-20 (1 file changed, -0/+2)
  o Assert that the page queues lock is held in vm_page_try_to_free().