path: root/sys/vm/vm_contig.c
Commit message | Author | Age | Files | Lines
* Enable the new physical memory allocator. | alc | 2007-06-16 | 1 | -374/+42

  This allocator uses a binary buddy system with a twist. First and
  foremost, this allocator is required to support the implementation of
  superpages. As a side effect, it enables a more robust implementation
  of contigmalloc(9). Moreover, this reimplementation of contigmalloc(9)
  eliminates the acquisition of Giant by contigmalloc(..., M_NOWAIT, ...).

  The twist is that this allocator tries to reduce the number of TLB
  misses incurred by accesses through a direct map to small, UMA-managed
  objects and page table pages. Roughly speaking, the physical pages that
  are allocated for such purposes are clustered together in the physical
  address space. The performance benefits vary. In the most extreme case,
  a uniprocessor kernel running on an Opteron, I measured an 18% reduction
  in system time during a buildworld.

  This allocator does not implement page coloring. The reason is that
  superpages have much the same effect. The contiguous physical memory
  allocation necessary for a superpage is inherently colored.

  Finally, the one caveat is that this allocator does not effectively
  support prezeroed pages. I hope this is temporary. On i386, this is a
  slight pessimization. However, on amd64, the beneficial effects of the
  direct-map optimization outweigh the ill effects. I speculate that this
  is true in general of machines with a direct map.

  Approved by: re
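As background for the commit above: a binary buddy allocator keeps free runs
of 2^k pages on per-order free lists. An allocation of order k splits a larger
run in half repeatedly, and a free coalesces a run with its equal-sized buddy
(the run whose start differs only in bit k) whenever that buddy is also free.
Below is a minimal userspace sketch of that mechanism only, under toy
assumptions (a flat page-index space, malloc'd list nodes); the actual FreeBSD
code differs substantially and adds the clustering twist described above.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_ORDER 10                    /* largest run: 2^10 pages */

    struct run {
        size_t start;                       /* first page index of the run */
        struct run *next;
    };

    static struct run *freelist[MAX_ORDER + 1];

    static void
    push(int order, size_t start)
    {
        struct run *r = malloc(sizeof(*r));

        r->start = start;
        r->next = freelist[order];
        freelist[order] = r;
    }

    /* Allocate 2^order contiguous pages; return the start index or -1. */
    static long
    buddy_alloc(int order)
    {
        int o;

        for (o = order; o <= MAX_ORDER && freelist[o] == NULL; o++)
            ;
        if (o > MAX_ORDER)
            return (-1);
        struct run *r = freelist[o];
        size_t start = r->start;
        freelist[o] = r->next;
        free(r);
        /* Split repeatedly, pushing each upper half back on a free list. */
        while (o > order) {
            o--;
            push(o, start + ((size_t)1 << o));
        }
        return ((long)start);
    }

    /* Free 2^order pages at 'start', coalescing with free buddies. */
    static void
    buddy_free(size_t start, int order)
    {
        while (order < MAX_ORDER) {
            size_t buddy = start ^ ((size_t)1 << order);
            struct run **rp;

            for (rp = &freelist[order]; *rp != NULL; rp = &(*rp)->next)
                if ((*rp)->start == buddy)
                    break;
            if (*rp == NULL)
                break;                      /* buddy busy: stop coalescing */
            struct run *r = *rp;
            *rp = r->next;                  /* unlink the buddy ... */
            free(r);
            if (buddy < start)
                start = buddy;              /* ... and merge into one run */
            order++;
        }
        push(order, start);
    }

    int
    main(void)
    {
        push(MAX_ORDER, 0);                 /* seed: one max-order free run */
        long a = buddy_alloc(2);            /* 4 contiguous pages */
        long b = buddy_alloc(0);            /* 1 page */
        printf("a=%ld b=%ld\n", a, b);
        buddy_free((size_t)b, 0);
        buddy_free((size_t)a, 2);           /* everything coalesces back */
        return (0);
    }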
* Conditionally acquire Giant in vm_contig_launder_page(). | alc | 2007-06-11 | 1 | -0/+4
* Revert VMCNT_* operations introduction. | attilio | 2007-05-31 | 1 | -2/+2

  Probably, a general approach is not the best solution here, so we should
  solve the sched_lock protection problems separately.

  Requested by: alc
  Approved by: jeff (mentor)
* - define and use VMCNT_{GET,SET,ADD,SUB,PTR} macros for manipulating vmcnts. | jeff | 2007-05-18 | 1 | -2/+2

  This can be used to abstract away pcpu details, but it also changes all
  counters to use atomics now. This means the sched lock is no longer
  responsible for protecting counts in the switch routines.

  Contributed by: Attilio Rao <attilio@FreeBSD.org>
* Correct contigmalloc2()'s implementation of M_ZERO. | alc | 2007-04-19 | 1 | -1/+1

  Specifically, contigmalloc2() was always testing the first physical page
  for PG_ZERO, not the current page of interest.

  Submitted by: Michael Plass
  PR: 81301
  MFC after: 1 week
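The defect class is easy to state in miniature: the zero-fill test indexed
the first page of the run rather than the page the loop was visiting. A
self-contained illustration with made-up names, not the actual vm_contig.c
code:

    #include <stdio.h>

    #define PG_ZERO 0x0008

    struct vpage { int flags; };

    /* Zero every page in the run that is not already prezeroed. */
    static void
    zero_run(struct vpage *m, int npages)
    {
        for (int i = 0; i < npages; i++) {
            /* The bug: (m->flags & PG_ZERO) tested m[0] on every pass. */
            if ((m[i].flags & PG_ZERO) == 0) {
                printf("zeroing page %d\n", i); /* pmap_zero_page() stand-in */
                m[i].flags |= PG_ZERO;
            }
        }
    }

    int
    main(void)
    {
        struct vpage run[3] = { { PG_ZERO }, { 0 }, { PG_ZERO } };

        zero_run(run, 3);   /* only page 1 needs zeroing */
        return (0);
    }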
* Change the free page queue lock from a spin mutex to a default (blocking) mutex. | alc | 2007-02-05 | 1 | -7/+7

  With the demise of Alpha support, there is no longer a reason for it to
  be a spin mutex.
* Ensure that the page's oflags field is initialized by contigmalloc(). | alc | 2006-11-08 | 1 | -0/+1
* Replace PG_BUSY with VPO_BUSY. | alc | 2006-10-22 | 1 | -2/+2

  In other words, changes to the page's busy flag, i.e., VPO_BUSY, are now
  synchronized by the per-vm object lock instead of the global page queues
  lock.
* sun4v requires TSBs (translation storage buffers) to be contiguous and size-aligned. | kmacy | 2006-10-12 | 1 | -4/+6

  This requires heavy usage of vm_page_alloc_contig(). This change makes
  vm_page_alloc_contig() SMP safe.

  Approved by: scottl (acting as backup for mentor rwatson)
* Make vm_page_release_contig() static. | alc | 2006-09-03 | 1 | -1/+1
* Prevent a call to contigmalloc() that asks for more physical memory than the machine has from causing a panic. | alc | 2006-08-26 | 1 | -1/+1

  Submitted by: Michael Plass
  PR: 101668
  MFC after: 3 days
* Ignore dirty pages owned by "dead" objects. | tegge | 2006-03-08 | 1 | -0/+4
* Eliminate a deadlock when creating snapshots. | tegge | 2006-03-02 | 1 | -0/+3

  Blocking vn_start_write() must be called without any vnode locks held.
  Remove calls to vn_start_write() and vn_finished_write() in
  vnode_pager_putpages() and add these calls before the vnode lock is
  obtained to most of the callers that don't already have them.
* Hold extra reference to vm object while cleaning pages. | tegge | 2006-03-02 | 1 | -0/+2
* The change a few years ago of having contigmalloc start its scan at the top of physical RAM instead of the bottom was a sound idea, but the implementation left a lot to be desired. | scottl | 2006-01-29 | 1 | -2/+19

  Scans would spend considerable time looking at pages that are above the
  address range given by the caller, and multiple calls (like what happens
  in busdma) would spend more time on top of that rescanning the same
  pages over and over.

  Solve this, at least for now, with two simple optimizations. The first
  is to not bother scanning high ordered pages that are outside of the
  provided address range. Second is to cache the page index from the last
  successful operation so that subsequent scans don't have to restart from
  the top. This is conditional on the numpages argument being the same or
  greater between calls.

  MFC After: 2 weeks
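A userspace sketch of those two optimizations, under assumptions of my own (a
flat free-page bitmap and a bottom-up scan; the kernel code differs and scans
from the top): restrict the search to the caller's [low, high) window and
remember where the last successful search ended, reusing that hint only when
the request is the same size or greater.

    #include <stddef.h>
    #include <stdio.h>

    #define NPAGES 1024

    static int page_free[NPAGES];       /* 1 if page i is free */
    static size_t scan_hint;            /* cached start from last success */
    static size_t hint_npages;          /* request size the hint is valid for */

    static long
    find_contig(size_t npages, size_t low, size_t high)
    {
        size_t start = low;

        /* Reuse the hint only for same-or-larger requests. */
        if (npages >= hint_npages && scan_hint >= low && scan_hint < high)
            start = scan_hint;

        for (size_t i = start; i + npages <= high; i++) {
            size_t run;

            for (run = 0; run < npages && page_free[i + run]; run++)
                ;
            if (run == npages) {
                scan_hint = i + npages; /* next scan starts past this run */
                hint_npages = npages;
                return ((long)i);
            }
            i += run;                   /* skip the page that ended the run */
        }
        return (-1);
    }

    int
    main(void)
    {
        for (size_t i = 0; i < NPAGES; i++)
            page_free[i] = (i % 7 != 0);    /* sprinkle allocated pages */
        printf("first: %ld\n", find_contig(4, 0, NPAGES));
        printf("second: %ld\n", find_contig(4, 0, NPAGES));
        return (0);
    }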
* Plug a leak in the newer contigmalloc() implementation. | alc | 2006-01-26 | 1 | -10/+5

  Specifically, if a multipage allocation was aborted midway, the pages
  that were already allocated were not always returned to the free list.

  Submitted by: tegge
* The previous revision incorrectly changed a switch statement into an if statement. | alc | 2006-01-25 | 1 | -3/+3

  Specifically, a break statement that previously broke out of the
  enclosing switch was not changed. Consequently, the enclosing loop
  terminated prematurely. This could result in "vm_page_insert: page
  already inserted" panics.

  Submitted by: tegge
* MI changes: | netchild | 2005-12-31 | 1 | -6/+5

  MI changes:
  - provide an interface (macros) to the page coloring part of the VM
    system; this allows trying different coloring algorithms without the
    need to touch every file [1]
  - make the page queue tuning values readable: sysctl vm.stats.pagequeue
  - autotuning of the page coloring values based upon the cache size
    instead of options in the kernel config (disabling of the page
    coloring as a kernel option is still possible)

  MD changes:
  - detection of the cache size: only IA32 and AMD64 (untested) contain
    cache size detection code; every other arch just comes with a dummy
    function (this results in the use of default values like it was the
    case without the autotuning of the page coloring)
  - print some more info on Intel CPUs (like we do on AMD and Transmeta
    CPUs)

  Note to AMD owners (IA32 and AMD64): please run "sysctl
  vm.stats.pagequeue" and report if the cache* values are zero (= bug in
  the cache detection code) or not.

  Based upon work by: Chad David <davidc@acns.ab.ca> [1]
  Reviewed by: alc, arch (in 2004)
  Discussed with: alc, Chad David, arch (in 2004)
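For readers unfamiliar with page coloring: the color of a physical page is its
page number modulo the number of cache colors, and the allocator prefers pages
whose color matches the one implied by the object offset, so consecutive
virtual pages do not collide in the same cache bins. A hedged sketch of the
autotuning arithmetic, assuming the conventional colors = (cache size /
associativity) / page size formula; the macro names and exact formula this
commit introduces are not shown here.

    #include <stdio.h>

    #define PAGE_SHIFT 12   /* 4KB pages */

    /* Round the per-way cache footprint, in pages, up to a power of two. */
    static unsigned
    ncolors(unsigned cache_bytes, unsigned assoc)
    {
        unsigned n = (cache_bytes / assoc) >> PAGE_SHIFT;
        unsigned p = 1;

        while (p < n)
            p <<= 1;
        return (p);
    }

    int
    main(void)
    {
        unsigned colors = ncolors(512 * 1024, 8);   /* e.g. 512KB 8-way L2 */
        unsigned mask = colors - 1;

        for (unsigned pindex = 0; pindex < 4; pindex++)
            printf("object page %u prefers color %u of %u\n",
                pindex, pindex & mask, colors);
        return (0);
    }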
* Check for marker pages when scanning active and inactive page queues. | tegge | 2005-08-12 | 1 | -0/+5

  Reviewed by: alc
* The new contigmalloc(9) has a bad degenerate case where there were many regions checked again and again despite knowing the pages contained were not usable and only satisfied the alignment constraints. | green | 2005-06-11 | 1 | -11/+23

  This case was compounded, especially for large allocations, by the
  practice of looping from the top of memory so as to keep out of the
  important low-memory regions. While the old contigmalloc(9) has the same
  problem, it is not as noticeable due to looping from the low memory to
  high.

  This degenerate case is fixed, as well as reversing the sense of the
  rest of the loops within it, to provide a tremendous speed increase.
  This makes the best case O(n * VM overhead) much more likely than the
  worst case O(4 * VM overhead). For comparison, the worst case for old
  contigmalloc would be O(5 * VM overhead) in addition to its strategy of
  turning used memory into free being highly pessimal.

  Also, fix a bug in the new contigmalloc(9) that in practice most likely
  couldn't have been triggered: it walked backwards from the end of memory
  without accounting for how many pages it needed. Potentially,
  nonexistent pages could have been mapped. This hasn't occurred because
  the kernel generally requests as its first contigmalloc(9) a single
  page.

  Reported by: Nicolas Dehaine <nicko@stbernard.com>, wes
  MFC After: 1 month
  More testing by: Nicolas Dehaine <nicko@stbernard.com>, wes
* /* -> /*- for license, minor formatting changes | imp | 2005-01-07 | 1 | -2/+2
* Try to close a potential, but serious race in our VM subsystem. | delphij | 2004-11-24 | 1 | -2/+15

  Historically, our contigmalloc1() and contigmalloc2() assume that a page
  in PQ_CACHE can be unconditionally reused by busying and freeing it.
  Unfortunately, when the object happens to be non-NULL, the code will set
  m->object to NULL and disregard the fact that the page is actually in
  the VM page bucket, resulting in page bucket hash table corruption and
  finally, a filesystem corruption, or a 'page not in hash' panic.

  This commit has borrowed the idea taken from DragonFlyBSD's fix to the
  VM fix by Matthew Dillon [1]. This version of the patch does the
  following checks:

  - When scanning pages in PQ_CACHE, check hold_count and skip over pages
    that are held temporarily.
  - For pages in PQ_CACHE and selected as candidates for being freed,
    check if they are busy at that time.

  Note: it seems that this might be unrelated to kern/72539.

  Obtained from: DragonFlyBSD, sys/vm/vm_contig.c,v 1.11 and 1.12 [1]
  Reminded by: Matt Dillon
  Reworked by: alc
  MFC after: 1 week
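The two candidate checks reduce to a small predicate. A sketch with
hypothetical field names; the real vm_page fields and the locking around them
are more involved:

    #include <stdio.h>

    struct vpage {
        int hold_count;     /* transient holds by other threads */
        int busy;           /* nonzero while the page is in transition */
    };

    /* May this PQ_CACHE page be busied and freed for reuse right now? */
    static int
    cache_page_reusable(const struct vpage *m)
    {
        if (m->hold_count > 0)      /* temporarily held: skip it */
            return (0);
        if (m->busy)                /* busy at selection time: skip it */
            return (0);
        return (1);
    }

    int
    main(void)
    {
        struct vpage held = { 1, 0 }, idle = { 0, 0 };

        printf("held: %d, idle: %d\n",
            cache_page_reusable(&held), cache_page_reusable(&idle));
        return (0);
    }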
* The synchronization provided by vm object locking has eliminated the need for most calls to vm_page_busy(). | alc | 2004-11-03 | 1 | -2/+0

  Specifically, most calls to vm_page_busy() occur immediately prior to a
  call to vm_page_remove(). In such cases, the containing vm object is
  locked across both calls. Consequently, the setting of the vm page's
  PG_BUSY flag is not even visible to other threads that are following
  the synchronization protocol.

  This change (1) eliminates the calls to vm_page_busy() that immediately
  precede a call to vm_page_remove() or functions, such as vm_page_free()
  and vm_page_rename(), that call it and (2) relaxes the requirement in
  vm_page_remove() that the vm page's PG_BUSY flag is set. Now, the vm
  page's PG_BUSY flag is set only when the vm object lock is released
  while the vm page is still in transition. Typically, this is when it is
  undergoing I/O.
* Acquire the vm object lock before rather than after calling vm_page_sleep_if_busy(). | alc | 2004-10-24 | 1 | -4/+5

  (The motivation being to transition synchronization of the vm_page's
  PG_BUSY flag from the global page queues lock to the per-object lock.)
* Turn on the new contigmalloc(9) by default. | green | 2004-08-05 | 1 | -1/+1

  There should not actually be a reason to use the old contigmalloc(9),
  but if desired, the vm.old_contigmalloc setting can be tuned/sysctl'd
  back to 0 for now.
* Remove extraneous locks on the VM free page queue mutex. | green | 2004-07-19 | 1 | -2/+0

  It is not meant to be recursed upon, and could cause a deadlock inside
  the new contigmalloc (vm.old_contigmalloc=0) code.

  Submitted by: alc
* Reimplement contigmalloc(9) with an algorithm which stands a greatly improved chance of working despite pressure from running programs. | green | 2004-07-19 | 1 | -36/+270

  Instead of trying to throw a bunch of pages out to swap and hope for
  the best, only a range that can potentially fulfill contigmalloc(9)'s
  request will have its contents paged out (potentially, not forcibly) at
  a time.

  The new contigmalloc operation still operates in three passes, but it
  could potentially be tuned to more or less. The first pass only looks
  at pages in the cache and free pages, so they would be thrown out
  without having to block. If this is not enough, the subsequent passes
  page out any unwired memory. To combat memory pressure refragmenting
  the section of memory being laundered, each page is removed from the
  system's free memory queue once it has been freed so that blocking
  later doesn't cause the memory laundered so far to get reallocated.

  The page-out operations are now blocking, as it would make little sense
  to try to push out a page, then get its status immediately afterward to
  remove it from the available free pages queue, if it's unlikely to have
  been freed. Another change is that if KVA allocation fails, the
  allocated memory segment will be freed and not leaked.

  There is a sysctl/tunable, defaulting to on, which causes the old
  contigmalloc() algorithm to be used. Nonetheless, I have been using
  vm.old_contigmalloc=0 for over a month. It is safe to switch at
  run-time to see the difference it makes.

  A new interface has been used which does not require mapping the
  allocated pages into KVA: vm_page.h functions vm_page_alloc_contig()
  and vm_page_release_contig(). These are what vm.old_contigmalloc=0 uses
  internally, so the sysctl/tunable does not affect their operation.

  When using the contigmalloc(9) and contigfree(9) interfaces, memory is
  now tracked with malloc(9) stats. Several functions have been exported
  from kern_malloc.c to allow other subsystems to use these statistics,
  as well. This invalidates the BUGS section of the contigmalloc(9)
  manpage.
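For consumers, the visible interface is unchanged. A kernel-side usage sketch
of the contigmalloc(9)/contigfree(9) KPI as documented in the man page of
this era; M_MYDRV and the two driver functions are hypothetical, and this
compiles only as part of a kernel build:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    static MALLOC_DEFINE(M_MYDRV, "mydrv", "example contiguous buffers");

    static void *
    grab_dma_buffer(void)
    {
        /*
         * 256KB, physically contiguous, below 4GB, page-aligned, and not
         * crossing a 1MB boundary; M_WAITOK may sleep, M_ZERO zero-fills.
         */
        return (contigmalloc(256 * 1024, M_MYDRV, M_WAITOK | M_ZERO,
            0, 0xffffffffUL, PAGE_SIZE, 1024 * 1024));
    }

    static void
    drop_dma_buffer(void *p)
    {
        contigfree(p, 256 * 1024, M_MYDRV);
    }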
* Make contigmalloc() more reliable: | green | 2004-06-15 | 1 | -6/+25

  1. Remove a race whereby contigmalloc() would deadlock against the
     running processes in the system if they kept reinstantiating the
     memory on the active and inactive page queues that it was trying to
     flush out. The process doing the contigmalloc() would sit in "swwrt"
     forever and the swap pager would be going at full force, but never
     get anywhere. Instead of doing it until the queues are empty,
     launder for as many iterations as there are pages in the queue.
  2. Do all laundering to swap synchronously; previously, the vnode
     laundering was synchronous and the swap laundering not.
  3. Increase the number of launder-or-allocate passes to three, from
     two, while failing without bothering to do all the laundering on the
     third pass if allocation was not possible. This effectively gives
     exactly two chances to launder enough contiguous memory, helpful
     with high memory churn where a lot of memory from one pass to the
     next (and during a single laundering loop) becomes dirtied again.

  I can now reliably hot-plug hardware requiring a 256KB contigmalloc()
  without having the kldload/cbb ithread sit around failing to make
  progress, while running a busy X session. Previously, it took killing X
  to get contigmalloc() to get further (that is, quiescing the system),
  and even then contigmalloc() returned failure.
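The pass structure described in item 3 can be sketched in a few lines. This
is a userspace stand-in with stubbed helpers; the kernel versions scan page
queues and launder real pages:

    #include <stdio.h>
    #include <stdlib.h>

    #define PASSES 3

    /* Stub: satisfy the request from free + cache pages only; the 'pass'
     * argument just makes this toy succeed on the final pass. */
    static void *
    try_free_and_cache(size_t npages, int pass)
    {
        return (pass == PASSES - 1 ? malloc(npages) : NULL);
    }

    /* Stub: synchronously launder one candidate range; 1 = progress made. */
    static int
    launder_range(size_t npages)
    {
        printf("laundering a %zu-page range\n", npages);
        return (1);
    }

    static void *
    contig_alloc(size_t npages)
    {
        for (int pass = 0; pass < PASSES; pass++) {
            void *p = try_free_and_cache(npages, pass);

            if (p != NULL)
                return (p);
            /* Fail without laundering again on the last pass. */
            if (pass == PASSES - 1 || !launder_range(npages))
                break;
        }
        return (NULL);
    }

    int
    main(void)
    {
        void *p = contig_alloc(64);

        printf("contig_alloc: %s\n", p != NULL ? "ok" : "failed");
        free(p);
        return (0);
    }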
* Remove advertising clause from University of California Regent's license, per letter dated July 22, 1999. | imp | 2004-04-06 | 1 | -4/+0

  Approved by: core
* Remove GIANT_REQUIRED from contigfree(). | alc | 2004-03-13 | 1 | -1/+1
* In the last revision, I introduced a physical contiguity check that is both unnecessary and wrong. | alc | 2004-03-05 | 1 | -3/+1

  While it is necessary to verify that the page is still free after
  dropping and reacquiring the free page queue lock, the physical
  contiguity of the page can not change, making this check unnecessary.
  This check was wrong in that it could cause an out-of-bounds array
  access.

  Tested by: rwatson
* Modify contigmalloc1() so that the free page queues lock is not held when vm_page_free() is called. | alc | 2004-03-02 | 1 | -1/+13

  The problem with holding this lock is that it is a spin lock and
  vm_page_free() may attempt the acquisition of a different default-type
  lock.
* Correct a long-standing race condition in vm_contig_launder() that could result in a panic "vm_page_cache: caching a dirty page, ...". | alc | 2004-02-16 | 1 | -0/+2

  Access to the page must be restricted or removed before calling
  vm_page_cache(). This race condition is identical in nature to that
  which was addressed by vm_pageout.c's revision 1.251 and vm_page.c's
  revision 1.275.

  MFC after: 7 days
* Remove vm_page_alloc_contig(). It's now unused. | alc | 2004-01-14 | 1 | -15/+0
* - Unmanage pages allocated by contigmalloc1(). (There is no point in having PV entries for these pages.) | alc | 2004-01-10 | 1 | -6/+2

  - Remove splvm() and splx() calls.
* - Enable recursive acquisition of the mutex synchronizing access to the free pages queue. This is presently needed by contigmalloc1(). | alc | 2004-01-08 | 1 | -2/+6

  - Move a sanity check against attempted double allocation of two pages
    to the same vm object offset from vm_page_alloc() to vm_page_insert().
    This provides better protection because double allocation could occur
    through a direct call to vm_page_insert(), such as that by
    vm_page_rename().
  - Modify contigmalloc1() to hold the mutex synchronizing access to the
    free pages queue while it scans vm_page_array in search of free pages.
  - Correct a potential leak of pages by contigmalloc1() that I introduced
    in revision 1.20: We must convert all cache queue pages to free pages
    before we begin removing free pages from the free queue. Otherwise, if
    we have to restart the scan because we are unable to acquire the vm
    object lock that is necessary to convert a cache queue page to a free
    page, we leak those free pages already removed from the free queue.
* Don't bother clearing PG_ZERO in contigmalloc1(), kmem_alloc(), or kmem_malloc(). | alc | 2004-01-06 | 1 | -1/+0

  It serves no purpose.
* - Increase the object lock's scope in vm_contig_launder() so that access to the object's type field and the call to vm_pageout_flush() are synchronized. | alc | 2003-10-18 | 1 | -4/+11

  - The above change allows for the elimination of the last parameter to
    vm_pageout_flush().
  - Synchronize access to the page's valid field in vm_pageout_flush()
    using the containing object's lock.
* Add the mlockall() and munlockall() system calls. | bms | 2003-08-11 | 1 | -1/+2

  - All those diffs to syscalls.master for each architecture *are*
    necessary. This needed clarification; the stub code generation for
    mlockall() was disabled, which would prevent applications from
    linking to this API (suggested by mux).
  - Giant has been quashed. It is no longer held by the code, as the
    required locking has been pushed down within vm_map.c.
  - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES to
    express their intention explicitly.
  - Inspected at the vmstat, top and vm pager sysctl stats level.
    Paging-in activity is occurring correctly, using a test harness.
  - The RES size for a process may appear to be greater than its SIZE.
    This is believed to be due to mappings of the same shared library
    page being wired twice. Further exploration is needed.
  - Believed to back out of allocations and locks correctly (tested with
    WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).

  PR: kern/43426, standards/54223
  Reviewed by: jake, alc
  Approved by: jake (mentor)
  MFC after: 2 weeks
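From userland the new calls are straightforward. A standard usage sketch
(the call typically requires privilege or a sufficient RLIMIT_MEMLOCK):

    #include <sys/mman.h>

    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Wire every current and future page of this process into RAM. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            err(1, "mlockall");
        printf("address space wired; no page-outs from here on\n");
        /* ... latency-sensitive work ... */
        if (munlockall() != 0)
            err(1, "munlockall");
        return (0);
    }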
* Use pmap_zero_page() to zero pages instead of bzero() because they haven't been vm_map_wire()'d yet. | mux | 2003-07-27 | 1 | -1/+1
* Acquire Giant rather than asserting it is held in contigmalloc(). | alc | 2003-07-26 | 1 | -1/+2

  This is a prerequisite to removing further uses of Giant from UMA.
* Add support for the M_ZERO flag to contigmalloc(). | mux | 2003-07-25 | 1 | -1/+5

  Reviewed by: jeff
* Lock a vm object when freeing a page from it. | alc | 2003-07-05 | 1 | -0/+7
* Fix a few style(9) nits. | mux | 2003-07-02 | 1 | -13/+9
* Use __FBSDID(). | obrien | 2003-06-11 | 1 | -1/+3
* - Acquire the vm_object's lock when performing vm_object_page_clean(). | alc | 2003-04-24 | 1 | -1/+3

  - Add a parameter to vm_pageout_flush() that tells vm_pageout_flush()
    whether its caller has locked the vm_object. (This is a temporary
    measure to bootstrap vm_object locking.)
* Update locking on the kernel_object to use the new macros. | alc | 2003-04-14 | 1 | -2/+2
* - Add vm_paddr_t, a physical address type. This is required for systems where physical addresses are larger than virtual addresses, such as i386s with PAE. | jake | 2003-03-25 | 1 | -7/+8

  - Use this to represent physical addresses in the MI vm system and in
    the i386 pmap code. This also changes the paddr parameter to
    d_mmap_t.
  - Fix printf formats to handle physical addresses >4G in the i386
    memory detection code, and due to kvtop returning vm_paddr_t instead
    of u_long.

  Note that this is a name change only; vm_paddr_t is still the same as
  vm_offset_t on all currently supported platforms.

  Sponsored by: DARPA, Network Associates Laboratories
  Discussed with: re, phk (cdevsw change)
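The motivating case is easy to demonstrate: with PAE on i386, a physical
address can exceed what a 32-bit virtual-address type can hold. A userspace
illustration with stand-in typedefs; the real ones live in each
architecture's machine headers:

    #include <inttypes.h>
    #include <stdio.h>

    typedef uint32_t vm_offset_t;   /* virtual address: 32 bits on i386 */
    typedef uint64_t vm_paddr_t;    /* physical address: 64 bits with PAE */

    int
    main(void)
    {
        vm_paddr_t pa = (vm_paddr_t)1 << 35;    /* 32GB: beyond 32 bits */
        vm_offset_t va = 0xc0000000;            /* i386 kernel virtual base */

        printf("pa=%#" PRIx64 " va=%#" PRIx32 "\n", (uint64_t)pa, va);
        return (0);
    }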
* - Hold the kernel_object's lock around vm_page_insert(..., kernel_object, ...). | alc | 2002-12-23 | 1 | -0/+2
* o Extend the scope of the page queues lock in contigmalloc1(). | alc | 2002-08-04 | 1 | -8/+8

  o Replace vm_page_sleep_busy() with vm_page_sleep_if_busy() in
    vm_contig_launder().