path: root/sys/vm
Commit message | Author | Age | Files | Lines
* Move the declaration of faultin() from the vm headers to proc.h, since  [peter, 1999-04-13, 1 file, -2/+1]
    it is now referenced from a macro there (PHOLD()).
* Staticize  [eivind, 1999-04-11, 1 file, -2/+2]
* Convert usage of vm_page_bits() to the new convention ("Inputs are required  [dt, 1999-04-10, 1 file, -2/+2]
    to range within a page").
* Lock vnode correctly for VOP_OPEN.  [eivind, 1999-04-10, 1 file, -1/+5]
    Discussed with: alc, dillon
* Don't forcibly kill processes that are locked in-core via PHOLD - it was  [peter, 1999-04-06, 1 file, -2/+3]
    just checking P_NOSWAP before.
* Only use p->p_lock (managed by PHOLD()/PRELE()) - P_NOSWAP/P_PHYSIO is no  [peter, 1999-04-06, 1 file, -2/+2]
    longer set.
* Catch a case spotted by Tor where files mmapped could leave garbage in the  [julian, 1999-04-05, 4 files, -38/+157]
    unallocated parts of the last page when the file ended on a frag
    but not a page boundary.
    Delimited by tags PRE_MATT_MMAP_EOF and POST_MATT_MMAP_EOF, in files
    alpha/alpha/pmap.c i386/i386/pmap.c nfs/nfs_bio.c vm/pmap.h
    vm/vm_page.c vm/vm_page.h vm/vnode_pager.c miscfs/specfs/spec_vnops.c
    ufs/ufs/ufs_readwrite.c kern/vfs_bio.c
    Submitted by: Matt Dillon <dillon@freebsd.org>
    Reviewed by: Alan Cox <alc@freebsd.org>
* Two changes to vm_map_delete:  [alc, 1999-04-04, 1 file, -13/+10]
    1. Don't bother checking object->ref_count == 1 in order to set
       OBJ_ONEMAPPING. It's a waste of time. If object->ref_count == 1,
       vm_map_entry_delete will "run-down" the object and its pages.
    2. If object->ref_count == 1, ignore OBJ_ONEMAPPING. Wait for
       vm_map_entry_delete to "run-down" the object and its pages.
       Otherwise, we're calling two different procedures to delete
       the object's pages.
    Note: "vmstat -s" will once again show a non-zero value for
    "pages freed by exiting processes".
* Mainly, eliminate the comments about share maps. (We don't have share maps  [alc, 1999-03-27, 1 file, -33/+7]
    any more.) Also, eliminate an incorrect comment that says that we
    don't coalesce vm_map_entry's. (We do.)
* Correct a comment.  [eivind, 1999-03-27, 1 file, -2/+2]
* Two changes:  [alc, 1999-03-21, 1 file, -19/+24]
    Remove more (redundant) map timestamp increments from properly
    synchronized routines. (Changed: vm_map_entry_link,
    vm_map_entry_unlink, and vm_map_pageable.)
    Micro-optimize vm_map_entry_link and vm_map_entry_unlink,
    eliminating unnecessary dereferences. At the same time, converted
    them from macros to inline functions.
* Construct the free queue(s) in descending order (by physical  [alc, 1999-03-19, 1 file, -2/+8]
    address) so that the first 16MB of physical memory is allocated
    last rather than first. On large-memory machines, this avoids
    the exhaustion of low physical memory before isa_dmainit has run.
* Correct a problem in kmem_malloc: A kmem_malloc allowing "wait" may  [alc, 1999-03-16, 1 file, -3/+5]
    block (VM_WAIT) holding the map lock. This is bad. For example,
    a subsequent kmem_malloc by an interrupt handler on the same map
    may find the lock held and panic in the lockmgr.
* Two changes:  [alc, 1999-03-15, 1 file, -10/+5]
    In general, vm_map_simplify_entry should be performed INSIDE
    the loop that traverses the map, not outside. (Changed:
    vm_map_inherit, vm_map_pageable.)
    vm_fault_unwire doesn't acquire the map lock (or block holding
    it). Thus, vm_map_set/clear_recursive shouldn't be called.
    (Changed: vm_map_user_pageable, vm_map_pageable.)
* Fix breakage in last commit  [julian, 1999-03-15, 1 file, -3/+3]
    Submitted by: Brian Feldman <green@unixhelp.org>
* A bit of a hack, but allows the vn device to be a module again.  [julian, 1999-03-14, 1 file, -1/+15]
    Submitted by: Matt Dillon <dillon@freebsd.org>
* Submitted by: Matt Dillon <dillon@freebsd.org>  [julian, 1999-03-14, 3 files, -20/+406]
    The old VN device broke in -4.x when the definition of B_PAGING
    changed. This patch fixes this plus implements additional
    capabilities. The new VN device can be backed by a file (as per
    normal), or it can be directly backed by swap.
    Due to dependencies in VM include files (on opt_xxx options) the
    new vn device cannot be a module yet. This will be fixed in a later
    commit.
    This commit delimited by tags {PRE,POST}_MATT_VNDEV
* Correct two optimization errors in vm_object_page_remove:  [alc, 1999-03-14, 1 file, -3/+4]
    1. The size of vm_object::memq is vm_object::resident_page_count,
       not vm_object::size.
    2. The "size > 4" test sometimes results in the traversal of a
       ~1000 page memq in order to locate ~10 pages.
* Remove vm_page_frees from kmem_malloc that are performed  [alc, 1999-03-12, 1 file, -7/+1]
    by vm_map_delete/vm_object_page_remove anyway.
* Stop the mfs from trying to swap out crucial bits of the mfs  [julian, 1999-03-12, 1 file, -2/+2]
    as this can lead to deadlock.
    Submitted by: Matt Dillon <dillon@freebsd.org>
* Remove (redundant) map timestamp increments from some properly  [alc, 1999-03-09, 1 file, -6/+1]
    synchronized routines.
* Remove an unused variable from vmspace_fork.  [alc, 1999-03-08, 1 file, -3/+1]
* Change vm_map_growstack to acquire and hold a read lock (instead of a write  [alc, 1999-03-07, 1 file, -11/+17]
    lock) until it actually needs to modify the vm_map.
    Note: it is legal to modify vm_map::hint without holding a write
    lock.
    Submitted by: "Richard Seaman, Jr." <dick@tar.com> with minor
    changes by myself.
* Upgrading a map's lock to exclusive status should increment  [alc, 1999-03-06, 1 file, -2/+6]
    the map's timestamp. In general, whenever an exclusive lock is
    acquired the timestamp should be incremented.
* To avoid a conflict for the vm_map's lock with vm_fault, release  [alc, 1999-03-02, 1 file, -4/+33]
    the read lock around the subyte operations in mincore. After the
    lock is reacquired, use the map's timestamp to determine if we
    need to restart the scan.
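The lock-drop-and-restart idiom this commit describes can be sketched in userland C. This is an illustrative toy, not the actual FreeBSD mincore() code; all names here (toy_map, scan_entries, subyte_stub) are hypothetical:

```c
/*
 * Sketch of the restart pattern: remember the map's timestamp while the
 * read lock is held, drop the lock around an operation that may fault
 * (subyte), reacquire, and compare timestamps; if the map changed while
 * unlocked, restart the scan from the beginning.
 */
struct toy_map {
	unsigned timestamp;	/* bumped on every map modification */
	int nentries;
};

/* Stand-in for subyte(): simulates one concurrent map change. */
static int
subyte_stub(struct toy_map *map, int i, const int *restarts)
{
	if (i == 1 && *restarts == 0)
		map->timestamp++;	/* map modified while unlocked */
	return (0);
}

static int
scan_entries(struct toy_map *map, int *restarts)
{
	unsigned ts;
	int i, visited;

restart:
	visited = 0;
	ts = map->timestamp;		/* read under the (read) lock */
	for (i = 0; i < map->nentries; i++) {
		/* ...release the read lock here... */
		subyte_stub(map, i, restarts);
		/* ...reacquire the read lock... */
		if (map->timestamp != ts) {	/* changed while unlocked */
			(*restarts)++;
			goto restart;
		}
		visited++;
	}
	return (visited);
}
```

The scan still visits every entry exactly once per completed pass; the timestamp check only costs a restart when the map actually changed under it.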
* Remove the last of the share map code: struct vm_map::is_main_map.  [alc, 1999-03-02, 2 files, -15/+10]
    Reviewed by: Matthew Dillon <dillon@apollo.backplane.com>
* mincore doesn't modify the vm_map. Therefore, it doesn't require  [alc, 1999-03-01, 1 file, -6/+6]
    an exclusive lock. A read lock will suffice.
* Reviewed by: "John S. Dyson" <dyson@iquest.net>  [alc, 1999-02-27, 1 file, -1/+20]
    Submitted by: Matthew Dillon <dillon@apollo.backplane.com>
    To prevent a deadlock, if we are extremely low on memory, force
    synchronous operation by the VOP_PUTPAGES in vnode_pager_putpages.
* Reviewed by: Matthew Dillon <dillon@apollo.backplane.com>  [alc, 1999-02-25, 1 file, -2/+3]
    Corrected the computation of cnt.v_ozfod in vm_fault: vm_fault
    was counting the number of unoptimized rather than optimized
    zero-fill faults.
* Comment swstrategy() routine.  [dillon, 1999-02-25, 1 file, -1/+9]
* Remove unnecessary page protects on map_split and collapse operations.  [dillon, 1999-02-24, 3 files, -6/+16]
    Fix bug where an object's OBJ_WRITEABLE/OBJ_MIGHTBEDIRTY flags do
    not get set under certain circumstances (page rename case).
    Reviewed by: Alan Cox <alc@cs.rice.edu>, John Dyson
* Removed ENOMEM error on swap_pager_full condition which ignored the  [dillon, 1999-02-22, 1 file, -4/+2]
    availability of physical memory. As per original bug report by
    Bruce.
    Reviewed by: Alan Cox <alc@cs.rice.edu>
* Remove conditional sysctl's  [dillon, 1999-02-21, 1 file, -46/+4]
    Leave swap_async_max sysctl intact, remove swap_cluster_max sysctl.
    Reviewed by: Alan Cox <alc@cs.rice.edu>
* Reviewed by: Alan Cox <alc@cs.rice.edu>  [dillon, 1999-02-21, 1 file, -9/+15]
    Fix problem w/ low-swap/low-memory handling as reported by Bruce
    Evans.
* Eliminate a possible numerical overflow.  [luoqi, 1999-02-19, 1 file, -7/+7]
* Hide access to vmspace:vm_pmap with inline function vmspace_pmap(). This  [luoqi, 1999-02-19, 5 files, -19/+30]
    is the preparation step for moving pmap storage out of vmspace
    proper.
    Reviewed by: Alan Cox <alc@cs.rice.edu>
                 Matthew Dillon <dillon@apollo.backplane.com>
* Submitted by: Alan Cox <alc@cs.rice.edu>  [dillon, 1999-02-19, 1 file, -57/+8]
    Remove remaining share map garbage from vm_map_lookup() and clean
    out old #if 0 stuff.
* Limit number of simultaneous asynchronous swap pager I/Os that can  [dillon, 1999-02-18, 1 file, -13/+109]
    be in progress at any given moment.
    Add two swap tunables to sysctl:
        vm.swap_async_max: 4
        vm.swap_cluster_max: 16
    Recommended values are a cluster size of 8 or 16 pages. async_max
    is about right for 1-4 swap devices. Reduce to 2 if swap is eating
    too much bandwidth, or even 1 if swap is both eating too much
    bandwidth and sitting on a slow network (10BaseT). The defaults
    work well across a broad range of configurations and should
    normally be left alone.
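The throttling idea behind vm.swap_async_max can be sketched as a simple in-flight counter. This is a hedged userland illustration, not the real swap pager code; the names (throttle, throttle_start, throttle_done) are invented here:

```c
/*
 * Sketch: cap the number of asynchronous I/Os in flight at a tunable
 * maximum (cf. the vm.swap_async_max sysctl).  Requests beyond the cap
 * are queued (in the kernel, the requester would sleep) until a
 * completion frees a slot.
 */
struct throttle {
	int inflight;	/* async I/Os currently in progress */
	int max;	/* the tunable cap */
	int queued;	/* requests waiting for a free slot */
};

/* Try to start an async I/O; returns 1 if started, 0 if queued. */
static int
throttle_start(struct throttle *t)
{
	if (t->inflight >= t->max) {
		t->queued++;	/* would sleep/queue in the kernel */
		return (0);
	}
	t->inflight++;
	return (1);
}

/* An async I/O completed; hand the freed slot to a waiter, if any. */
static void
throttle_done(struct throttle *t)
{
	t->inflight--;
	if (t->queued > 0) {
		t->queued--;
		t->inflight++;
	}
}
```

With a cap of 4, a fifth request queues instead of starting, and the first completion immediately promotes it, so at most `max` I/Os are ever outstanding.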
* Submitted by: Luoqi Chen <luoqi@watermarkgroup.com>  [dillon, 1999-02-17, 1 file, -1/+12]
    Unlock vnode before messing with map to avoid deadlock between map
    and vnode (e.g. with exec_map and underlying program binary vnode).
    Solves a deadlock that most often occurs during a large -j#
    buildworld reported by three people.
* Minor reorganization of vm_page_alloc(). No functional changes have  [dillon, 1999-02-15, 2 files, -114/+84]
    been made but the code has been reorganized and documented to make
    it more readable, reduce the size of the code, and optimize the
    branch path caching capabilities that most modern processors have.
* Fix a bug in the new madvise() code that would possibly (improperly)  [dillon, 1999-02-15, 1 file, -24/+12]
    free swap space out from under a busy page. This is not legal
    because the swap may be reallocated and I/O issued while I/O is
    still in progress on the same swap page from the madvise()'d
    object. This bug could only occur under extreme paging conditions
    but might not cause an error until much later. As a side-benefit,
    madvise() is now even smaller.
* Minor optimization to madvise() MADV_FREE to make the page as freeable as  [dillon, 1999-02-12, 1 file, -1/+7]
    possible without actually unmapping it from the process. As of
    now, I declare madvise() on OBJT_DEFAULT/OBJT_SWAP objects to be
    'working and complete'.
* Fix non-fatal bug in vm_map_insert() which improperly cleared  [dillon, 1999-02-12, 2 files, -60/+49]
    OBJ_ONEMAPPING in the case where an object is extended and an
    additional vm_map_entry must be allocated.
    In vm_object_madvise(), remove call to vm_page_cache() in
    MADV_FREE case in order to avoid a page fault on page reuse.
    However, we still mark the page as clean and destroy any swap
    backing store.
    Submitted by: Alan Cox <alc@cs.rice.edu>
* Addendum to vm_map coalesce optimization. Also, this was backed out  [dillon, 1999-02-09, 1 file, -1/+1]
    because there was a consensus on current in regards to leaving bss
    r+w+x instead of r+w. This is in order to maintain reasonable
    compatibility with existing JIT compilers (e.g. kaffe) and
    possibly other programs.
* Revamp vm_object_[q]collapse(). Despite the complexity of this patch,  [dillon, 1999-02-08, 2 files, -210/+236]
    no major operational changes were made. The three core
    object->memq loops were moved into a single inline procedure and
    various operational characteristics of the collapse function were
    documented.
* General cleanup. Remove #if 0's and remove useless register qualifiers.  [dillon, 1999-02-08, 1 file, -79/+34]
* Rip out PQ_ZERO queue. PQ_ZERO functionality is now combined in with  [dillon, 1999-02-08, 3 files, -107/+64]
    PQ_FREE. There is little operational difference other than the
    kernel being a few kilobytes smaller and the code being more
    readable.
    * vm_page_select_free() has been *greatly* simplified.
    * The PQ_ZERO page queue and supporting structures have been
      removed.
    * vm_page_zero_idle() revamped (see below).
    PG_ZERO setting and clearing has been migrated from
    vm_page_alloc() to vm_page_free[_zero]() and will eventually be
    guaranteed to remain tracked throughout a page's life (if it isn't
    already).
    When a page is freed, PG_ZERO pages are appended to the
    appropriate tailq in the PQ_FREE queue while non-PG_ZERO pages are
    prepended.
    When locating a new free page, PG_ZERO selection operates from
    within vm_page_list_find() (get page from end of queue instead of
    beginning of queue) and then only occurs in the nominal critical
    path case. If the nominal case misses, both normal and zero-page
    allocation devolve into the same _vm_page_list_find() select code
    without any specific zero-page optimizations.
    Additionally, vm_page_zero_idle() has been revamped. Hysteresis
    has been added and zero-page tracking adjusted to conform with the
    other changes. Currently hysteresis is set at 1/3 (lo) and 1/2
    (hi) the number of free pages. We may wish to increase both
    parameters as time permits. The hysteresis is designed to avoid
    silly zeroing in borderline allocation/free situations.
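The 1/3 (lo) / 1/2 (hi) hysteresis described above boils down to a two-threshold decision. A minimal sketch, assuming the marks are computed from the free-page count as stated in the log (the function name should_zero and its signature are invented for illustration):

```c
/*
 * Sketch of two-threshold hysteresis for the idle zero-page loop:
 * start zeroing only when pre-zeroed pages drop below lo (1/3 of the
 * free pages), and once running, continue until hi (1/2) is reached.
 * The gap between lo and hi prevents flapping in borderline
 * allocation/free situations.
 */
static int
should_zero(int zero_count, int free_count, int running)
{
	int lo = free_count / 3;
	int hi = free_count / 2;

	if (running)
		return (zero_count < hi);	/* run up to the high mark */
	return (zero_count < lo);		/* start below the low mark */
}
```

Between lo and hi the answer depends on whether the loop was already running, which is exactly what keeps it from toggling on every page allocated or freed near a single threshold.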
* Backed out vm_map coalesce optimization - it resulted in 22% more page  [dillon, 1999-02-08, 1 file, -2/+2]
    faults for reasons unknown (under investigation).
    /usr/bin/time -l make in /usr/src/bin went from 67000 faults to
    90000 faults.
* Remove MAP_ENTRY_IS_A_MAP 'share' maps. These maps were once used to  [dillon, 1999-02-07, 7 files, -108/+43]
    attempt to optimize forks but were essentially given up on due to
    problems and replaced with an explicit dup of the vm_map_entry
    structure. Prior to the removal, they were entirely unused.
* Remove L1 cache coloring optimization (leave L2 cache coloring opt).  [dillon, 1999-02-07, 2 files, -196/+91]
    Rewrite vm_page_list_find() and vm_page_select_free() - make
    inline out of nominal case.