path: root/sys/vm
Commit message | Author | Age | Files | Lines
* Add support for the M_ZERO flag to contigmalloc().  (mux, 2003-07-25, 1 file, -1/+5)
    Reviewed by: jeff
* Remove all but one of the inlines here; this reduces the code size by
  2032 bytes and has no measurable impact on performance.  (phk, 2003-07-22, 1 file, -10/+9)
* Don't inline very large functions.  (phk, 2003-07-22, 1 file, -1/+1)
    Gcc has silently not been doing this for a long time.
* swp_pager_hash() was called before it was instantiated inline.  This made
  gcc (quite rightly) unhappy.  Move it earlier.  (peter, 2003-07-22, 1 file, -29/+29)
* Fix a printf format warning I introduced.  (phk, 2003-07-18, 1 file, -21/+19)
    Use the macro max number of swap devices rather than cache the constant
    in a variable.  Avoid a (now) pointless variable.
* When INVARIANTS is defined, make sure that uma_zalloc_arg (and hence
  uma_zalloc) is called with exactly one of M_WAITOK or M_NOWAIT, and with
  neither M_TRYWAIT nor M_DONTWAIT.  Print a warning if anything is wrong.
  Default to M_WAITOK if no flag is given.  This is the same test as in
  malloc(9).  (harti, 2003-07-18, 1 file, -0/+20)
* If a proposed swap device exceeds the 8G artificial limit which our
  radix-tree code imposes, truncate the device instead of rejecting it.
  (phk, 2003-07-18, 1 file, -6/+6)
* Move the implementation of vmspace_swap_count() (used only in the
  "toss the largest process" emergency handling) from vm_map.c to
  swap_pager.c.  The quantity calculated depends strongly on the internals
  of the swap_pager, and by moving it we no longer need to expose the
  internal metrics of the swap_pager to the world.  (phk, 2003-07-18, 3 files, -64/+65)
* Add a new function swap_pager_status() which reports the total size of
  the paging space and how much of it is in use (in pages).  Use this
  interface from the Linuxolator instead of groping around in the internals
  of the swap_pager.  (phk, 2003-07-18, 2 files, -2/+18)
* Merge swap_pager.c and vm_swap.c into swap_pager.c; the separation is not
  natural and needlessly exposes a lot of dirty laundry.  Move private
  interfaces between the two from swap_pager.h to swap_pager.c and
  staticize as much as possible.  No functional change.
  (phk, 2003-07-18, 3 files, -581/+504)
* Make sure that SWP_NPAGES always has the same value in all source files,
  so that SWAP_META_PAGES does not vary either.  swap_pager.c ended up with
  a value of 16, everybody else 8.  Go with the 16 for now.  This should
  only have any effect in the "kill processes because we are out of swap"
  scenario, where it will make some sort of estimate more precise.
  (phk, 2003-07-17, 2 files, -7/+5)
* Avoid an unnecessary calculation: there is no need to subtract
  `firstaddr' from `v' if we know that the former equals zero.
  (robert, 2003-07-13, 1 file, -1/+1)
* - Complete the vm object locking in vm_pageout_object_deactivate_pages().
  - Change vm_pageout_object_deactivate_pages()'s first parameter from a
    vm_map_t to a pmap_t.
  - Change vm_pageout_object_deactivate_pages()'s and
    vm_pageout_map_deactivate_pages()'s last parameter from a vm_pindex_t
    to a long.  Since the number of pages in an address space doesn't
    require 64 bits on an i386, vm_pindex_t is overkill.
  (alc, 2003-07-07, 1 file, -21/+27)
* Lock a vm object when freeing a page from it.  (alc, 2003-07-05, 1 file, -0/+7)
* Remove unnecessary cast.  (phk, 2003-07-04, 1 file, -1/+1)
* Background: pmap_object_init_pt() premaps the pages of an object in order
  to avoid the overhead of later page faults.  In general, it implements
  two cases: one for vnode-backed objects and one for device-backed
  objects.  Only the device-backed case is really machine-dependent,
  belonging in the pmap.  This commit moves the vnode-backed case into the
  (relatively) new function vm_map_pmap_enter().  On amd64 and i386, this
  commit only amounts to code rearrangement.  On alpha and ia64, the new
  machine-independent (MI) implementation of the vnode case is smaller and
  more efficient than their pmap-based implementations.  (The MI
  implementation takes advantage of the fact that objects in -CURRENT are
  ordered collections of pages.)  On sparc64, pmap_object_init_pt() hadn't
  (yet) been implemented.  (alc, 2003-07-03, 2 files, -3/+75)
* Fix a few style(9) nits.  (mux, 2003-07-02, 1 file, -13/+9)
* Modify vm_page_alloc() and vm_page_select_cache() to allow the page that
  is returned by vm_page_select_cache() to belong to the object that is
  already locked by the caller of vm_page_alloc().  (alc, 2003-07-01, 1 file, -2/+4)
* Check the address provided to vm_map_stack() against the vm map's
  maximum, returning an error if the address is too high.
  (alc, 2003-07-01, 1 file, -1/+2)
* Introduce vm_map_pmap_enter().  Presently, this is a stub calling the MD
  pmap_object_init_pt().  (alc, 2003-06-29, 2 files, -7/+21)
* - Export pmap_enter_quick() to the MI VM.  This will permit the
    implementation of a largely MI pmap_object_init_pt() for vnode-backed
    objects.  pmap_enter_quick() is implemented via pmap_enter() on sparc64
    and powerpc.
  - Correct a mismatch between pmap_object_init_pt()'s prototype and its
    various implementations.  (I plan to keep pmap_object_init_pt() as the
    MD hook for device-backed objects on i386 and amd64.)
  - Correct an error in ia64's pmap_enter_quick() and adjust its interface
    to match the other versions.
  Discussed with: marcel
  (alc, 2003-06-29, 1 file, -1/+3)
* Add vm object locking to vm_pageout_map_deactivate_pages().
  (alc, 2003-06-29, 1 file, -9/+18)
* Remove GIANT_REQUIRED from kmem_malloc().  (alc, 2003-06-28, 1 file, -3/+0)
* - Add vm object locking to vm_pageout_clean().  (alc, 2003-06-28, 1 file, -5/+7)
* - Use an int rather than a vm_pindex_t to represent the desired page
    color in vm_page_alloc().  (This also has small performance benefits.)
  - Eliminate vm_page_select_free(); vm_page_alloc() might as well call
    vm_pageq_find() directly.
  (alc, 2003-06-28, 1 file, -24/+6)
* Simple read-modify-write operations on a vm object's flags, ref_count,
  and shadow_count can now rely on its mutex for synchronization.  Remove
  one use of Giant from vm_map_insert().  (alc, 2003-06-27, 1 file, -4/+0)
* vm_page_select_cache() enforces a number of conditions on the returned
  page.  Add the ability to lock the containing object to those conditions.
  (alc, 2003-06-26, 1 file, -1/+6)
* Modify vm_pageq_requeue() to handle a PQ_NONE page without dereferencing
  a NULL pointer; remove some now-unused code.  (alc, 2003-06-26, 1 file, -14/+5)
* Move the pcpu lock out of the uma_cache and instead have a single set of
  pcpu locks.  This makes uma_zone somewhat smaller (by (LOCKNAME_LEN *
  sizeof(char) + sizeof(struct mtx) * maxcpu) bytes, to be exact).
  No objections from jeff.  (bmilekic, 2003-06-25, 2 files, -50/+25)
* Make sure that the zone destructor doesn't get called twice in certain
  free paths.  (bmilekic, 2003-06-25, 1 file, -2/+6)
* Remove a GIANT_REQUIRED on the kernel object that we no longer need.
  (alc, 2003-06-25, 1 file, -2/+0)
* Maintain the lock on a vm object when calling vm_page_grab().
  (alc, 2003-06-25, 1 file, -3/+0)
* Assert that the vm object is locked on entry to dev_pager_getpages().
  (alc, 2003-06-24, 1 file, -0/+1)
* Assert that the vm object is locked on entry to vm_pager_get_pages().
  (alc, 2003-06-23, 1 file, -5/+1)
* Maintain a lock on the vm object of interest throughout vm_fault(),
  releasing the lock only if we are about to sleep (e.g.,
  vm_pager_get_pages() or vm_pager_has_pages()).  If we sleep, we have
  marked the vm object with the paging-in-progress flag.
  (alc, 2003-06-22, 4 files, -12/+15)
* Add an f_vnode field to struct file.  (phk, 2003-06-22, 1 file, -1/+1)
    Several of the subtypes have an associated vnode which is used for
    stuff like the f*() functions.  By giving the vnode a separate field, a
    number of checks for the specific subtype can be replaced simply with a
    check for f_vnode != NULL, and we can later free f_data up to
    subtype-specific use.  At this point in time, f_data still points to
    the vnode, so any code I might have overlooked will still work.
* As vm_fault() descends the chain of backing objects, set
  paging-in-progress on the next object before clearing it on the current
  object.  (alc, 2003-06-22, 1 file, -8/+8)
* Complete the vm object locking in vm_object_backing_scan(); specifically,
  deal with the case where we need to sleep on a busy page with two vm
  object locks held.  (alc, 2003-06-22, 1 file, -5/+12)
* Make some style and white-space changes to the copy-on-write path through
  vm_fault(); remove a pointless assignment statement from that path.
  (alc, 2003-06-22, 1 file, -10/+5)
* Use a do {...} while (0); and a couple of breaks to reduce the level of
  indentation a bit.  (phk, 2003-06-21, 1 file, -78/+80)
* Lock one of the vm objects involved in an optimized copy-on-write fault.
  (alc, 2003-06-21, 1 file, -2/+5)
* - Increase the scope of the vm object lock in vm_object_collapse().
  - Assert that the vm object and its backing vm object are both locked in
    vm_object_qcollapse().
  (alc, 2003-06-21, 1 file, -3/+4)
* Make swap_pager_haspages() static; remove unused function prototypes.
  (alc, 2003-06-20, 2 files, -5/+3)
* Initialize b_saveaddr when we hand out pbufs.  (phk, 2003-06-20, 1 file, -2/+3)
* The so-called "optimized copy-on-write fault" case should not require
  the vm map lock.  What's really needed is vm object locking, which is
  (for the moment) provided by Giant.
  Reviewed by: tegge
  (alc, 2003-06-20, 1 file, -9/+2)
* Assert that the vm object is locked in vm_page_try_to_free().
  (alc, 2003-06-19, 1 file, -0/+2)
* Fix a vm object reference leak in the page-based copy-on-write mechanism
  used by the zero-copy sockets implementation.
  Reviewed by: gallatin
  (alc, 2003-06-19, 1 file, -1/+1)
* Lock the vm object when freeing a vm page.  (alc, 2003-06-18, 1 file, -0/+14)
* This file was ignored by CVS in my last commit for some reason:
  Remove pointless initialization of the b_spc field, which now no longer
  exists.  (phk, 2003-06-16, 1 file, -1/+0)
* Add the same KASSERT to all VOP_STRATEGY and VOP_SPECSTRATEGY
  implementations to check that the buffer points to the correct vnode.
  (phk, 2003-06-15, 1 file, -0/+2)