path: root/sys/vm
* Revise the interface between vm_object_madvise() and vm_page_dontneed() so
  that pointless calls to pmap_is_modified() can be easily avoided when
  performing madvise(..., MADV_FREE).
  (alc, 2013-06-10, 3 files, -31/+30)
  Sponsored by: EMC / Isilon Storage Division
* Make the sys_mlock() function just a wrapper around the vm_mlock()
  function, which does all the work.
  (glebius, 2013-06-08, 2 files, -5/+11)
  Reviewed by: kib, jilles
  Sponsored by: Nginx, Inc.
* Complete r251452: avoid busying/unbusying a page in cases where there is
  no need to drop the vm_obj lock, most notably when the page is fully valid
  after vm_page_grab().
  (attilio, 2013-06-06, 2 files, -4/+7)
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
* In vm_object_split(), busy and consequently unbusy the pages only when
  swap_pager_copy() is invoked; otherwise there is no reason to do so. This
  eliminates the need to busy pages most of the time.
  (attilio, 2013-06-04, 1 file, -3/+4)
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
* Update a comment.
  (alc, 2013-06-04, 1 file, -2/+2)
* Relax the object locking in vm_pageout_map_deactivate_pages() and
  vm_pageout_object_deactivate_pages(). A read lock suffices.
  (alc, 2013-06-04, 1 file, -11/+11)
  Sponsored by: EMC / Isilon Storage Division
* Remove irrelevant comments.
  (kib, 2013-06-03, 1 file, -7/+0)
  Discussed with: alc
  MFC after: 3 days
* Require that the page lock is held, instead of the object lock, when
  clearing the page's PGA_REFERENCED flag. Since we are typically
  manipulating the page's act_count field when we are clearing its
  PGA_REFERENCED flag, the page lock is already held everywhere that we
  clear the PGA_REFERENCED flag. So, in fact, this revision only changes
  some comments and an assertion. Nonetheless, it will enable later changes
  to object locking in the pageout code.
  Introduce vm_page_assert_locked(), which completely hides the
  implementation details of the page lock from the caller, and use it in
  vm_page_aflag_clear(). (The existing vm_page_lock_assert() could not be
  used in vm_page_aflag_clear().) Over the coming weeks, I expect that we'll
  either eliminate or replace the various uses of vm_page_lock_assert()
  with vm_page_assert_locked().
  (alc, 2013-06-03, 2 files, -7/+16)
  Reviewed by: attilio
  Sponsored by: EMC / Isilon Storage Division
* Now that access to the page's "act_count" field is synchronized by the
  page lock instead of the object lock, there is no reason for
  vm_page_activate() to assert that the object is locked for either read or
  write access. (The "VPO_UNMANAGED" flag never changes after page
  allocation.)
  (alc, 2013-06-01, 1 file, -1/+0)
  Sponsored by: EMC / Isilon Storage Division
* Simplify the definition of vm_page_lock_assert(). There is no compelling
  reason to inline the implementation of vm_page_lock_assert() in the
  !KLD_MODULES case. Use the same implementation for both KLD_MODULES and
  !KLD_MODULES.
  (alc, 2013-05-31, 1 file, -7/+6)
  Reviewed by: kib
* After the object lock was dropped, the object's reference count could
  change. Retest the ref_count and return from the function, so as not to
  execute further code that assumes ref_count == 1 when it is not. Also, do
  not leak the vnode lock if another thread cleared the OBJ_TMPFS flag in
  the meantime.
  (kib, 2013-05-30, 1 file, -5/+5)
  Reported by: bdrewery
  Tested by: bdrewery, pho
  Sponsored by: The FreeBSD Foundation
* Remove the capitalization in the assertion message. Print the address of
  the object to get useful information from dumps of optimized kernels.
  (kib, 2013-05-30, 1 file, -1/+1)
* o Change the locking scheme for swp_bcount. It can now be accessed with a
    write lock on the object containing it OR with a read lock on the
    object containing it along with the swhash_mtx.
  o Remove some duplicate assertions for swap_pager_freespace() and
    swap_pager_unswapped(), but keep the object locking references for
    documentation.
  (attilio, 2013-05-28, 1 file, -5/+7)
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
* Acquire a read lock on the src object for vm_fault_copy_entry().
  (attilio, 2013-05-22, 1 file, -4/+4)
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
* o Relax locking assertions for vm_page_find_least().
  o Relax locking assertions for pmap_enter_object() and add them also to
    architectures that currently don't have any.
  o Introduce VM_OBJECT_LOCK_DOWNGRADE(), which is basically a downgrade
    operation on the per-object rwlock.
  o Use all the mechanisms above to make vm_map_pmap_enter() work most of
    the time with only read locks.
  (attilio, 2013-05-21, 3 files, -8/+18)
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
* Add the ddb command 'show pginfo', which provides useful information
  about a vm page, denoted either by an address of the struct vm_page, or,
  if the '/p' modifier is specified, by a physical address of the
  corresponding frame.
  (kib, 2013-05-21, 1 file, -0/+23)
  Reviewed by: jhb
  Sponsored by: The FreeBSD Foundation
  MFC after: 1 week
* Relax the object locking in vm_fault_prefault(). A read lock suffices.
  (alc, 2013-05-17, 1 file, -5/+5)
  Reviewed by: attilio
  Sponsored by: EMC / Isilon Storage Division
* Relax the object locking assertion in vm_page_lookup(). Now that a radix
  tree is used to maintain the object's collection of resident pages,
  vm_page_lookup() no longer needs an exclusive lock.
  (alc, 2013-05-17, 1 file, -1/+1)
  Reviewed by: attilio
  Sponsored by: EMC / Isilon Storage Division
* o Add accessor functions to add and remove pages from a specific
    freelist.
  o Split the pool of free page queues really by domain and do not rely on
    the definition of VM_RAW_NFREELIST.
  o For MAXMEMDOM > 1, wrap the RR allocation logic into a specific
    function that is called when calculating the allocation domain. The RR
    counter is kept, currently, per-thread. In the future it is expected
    that this function evolves into a real policy decision referee, based
    on specific information retrieved from per-thread and per-vm_object
    attributes.
  o Add the concept of "probed domains" in the form of vm_ndomains. It is
    the responsibility of every architecture willing to support multiple
    memory domains to correctly probe vm_ndomains along with the
    mem_affinity segment attributes. These two values are supposed to
    always remain consistent. Please also note that vm_ndomains and
    td_dom_rr_idx are both int because segments already store domains as
    int; ideally u_int would make much more sense. Probably this should be
    cleaned up in the future.
  o Apply RR domain selection also to vm_phys_zero_pages_idle().
  (attilio, 2013-05-13, 2 files, -198/+139)
  Sponsored by: EMC / Isilon storage division
  Partly obtained from: jeff
  Reviewed by: alc
  Tested by: jeff
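The per-thread round-robin counter described above can be sketched in a few lines. This is a userland model, not the kernel code: in the kernel the counter (td_dom_rr_idx) lives in struct thread and vm_ndomains is probed per-architecture at boot, while here both are plain globals and the function name is made up:

```c
#include <assert.h>

static int td_dom_rr_idx;	/* per-thread in the kernel; a global here */
static int vm_ndomains = 1;	/* probed at boot in the kernel */

/*
 * Pick the next memory domain for an allocation by cycling through all
 * probed domains.  Each call advances the per-thread counter, so repeated
 * allocations spread round-robin across domains.
 */
static int
rr_select_domain(void)
{
	return (td_dom_rr_idx++ % vm_ndomains);
}
```

Keeping the counter per-thread avoids cache-line contention that a single global counter would cause on every allocation.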
* Bandaid for compiling with gcc, which happens to be the default compiler
  for a number of platforms still.
  (peter, 2013-05-13, 1 file, -0/+1)
* Refactor vm_page_alloc()'s interactions with vm_reserv_alloc_page() and
  vm_page_insert() so that (1) vm_radix_lookup_le() is never called while
  the free page queues lock is held and (2) vm_radix_lookup_le() is called
  at most once. This change reduces the average time that the free page
  queues lock is held by vm_page_alloc() as well as vm_page_alloc()'s
  average overall running time.
  (alc, 2013-05-12, 3 files, -25/+63)
  Sponsored by: EMC / Isilon Storage Division
* To reduce the amount of arithmetic performed in the various radix tree
  functions, reverse the numbering scheme for the levels. The highest
  numbered level in the tree now appears near the root instead of the
  leaves.
  (alc, 2013-05-11, 1 file, -13/+12)
  Sponsored by: EMC / Isilon Storage Division
* Fix up r250338 by completing the removal of VM_NDOMAIN in favor of
  MAXMEMDOM. This unbreaks the build.
  (attilio, 2013-05-08, 1 file, -2/+2)
  Sponsored by: EMC / Isilon storage division
  Reported by: adrian, jeli
* Rename VM_NDOMAIN to MAXMEMDOM and move it into machine/param.h in order
  to match the MAXCPU concept. The change should also be useful for
  consolidation and consistency.
  (attilio, 2013-05-07, 1 file, -10/+10)
  Sponsored by: EMC / Isilon storage division
  Obtained from: jeff
  Reviewed by: alc
* Remove a redundant call to panic() from vm_radix_keydiff(). The assertion
  before the loop accomplishes the same thing.
  (alc, 2013-05-07, 1 file, -4/+2)
  Sponsored by: EMC / Isilon Storage Division
* Optimize vm_radix_lookup_ge() and vm_radix_lookup_le(). Specifically,
  change the way that these functions ascend the tree when the search for a
  matching leaf fails at an interior node. Rather than returning to the
  root of the tree and repeating the lookup with an updated key, maintain a
  stack of interior nodes that were visited during the descent and use that
  stack to resume the lookup at the closest ancestor that might have a
  matching descendant.
  (alc, 2013-05-04, 1 file, -103/+75)
  Sponsored by: EMC / Isilon Storage Division
  Reviewed by: attilio
  Tested by: pho
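The ancestor-stack technique can be illustrated on a much simpler structure than the VM radix trie. The toy below does a lookup-ge over a plain binary search tree: during the descent it pushes each visited interior node, and when the search bottoms out it pops back to the closest ancestor whose subtree can still contain a match, instead of restarting from the root. All names here are invented for the sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct node {
	int		key;
	struct node	*left, *right;
};

#define MAXDEPTH	64

/* Unbalanced insert, good enough for a demonstration. */
static struct node *
tree_insert(struct node *root, int key)
{
	if (root == NULL) {
		struct node *n = calloc(1, sizeof(*n));
		n->key = key;
		return (n);
	}
	if (key < root->key)
		root->left = tree_insert(root->left, key);
	else
		root->right = tree_insert(root->right, key);
	return (root);
}

/* Return the smallest key >= want, or -1 if none exists. */
static int
lookup_ge(struct node *root, int want)
{
	struct node *stack[MAXDEPTH];
	struct node *n = root;
	int depth = 0;

	while (n != NULL) {
		if (n->key == want)
			return (n->key);
		stack[depth++] = n;	/* remember the path of ancestors */
		n = (want < n->key) ? n->left : n->right;
	}
	/*
	 * No exact match below.  Rather than restarting from the root with
	 * an adjusted key, resume at the closest ancestor that can still
	 * hold a key >= want: the deepest node where the descent went left.
	 */
	while (depth > 0) {
		struct node *a = stack[--depth];
		if (a->key > want)
			return (a->key);
	}
	return (-1);
}
```

The payoff is the same as in the commit: a failed descent costs one walk back up the saved path instead of a second full descent from the root.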
* Fix two bugs in the current NUMA-aware allocation code:
  - vm_phys_alloc_freelist_pages() can be called by
    vm_page_alloc_freelist() to allocate a page from a specific freelist.
    In the NUMA case it did not properly map the public VM_FREELIST_*
    constants to the correct backing freelists, nor did it try all NUMA
    domains for allocations from VM_FREELIST_DEFAULT.
  - vm_phys_alloc_pages() did not pin the thread, and each call to
    vm_phys_alloc_freelist_pages() fetched the current domain to choose
    which freelist to use. If a thread migrated domains during the loop in
    vm_phys_alloc_pages(), it could skip one of the freelists. If the other
    freelists were out of memory, vm_phys_alloc_pages() could then fail to
    allocate a page even though pages were available, resulting in a panic
    in vm_page_alloc().
  (jhb, 2013-05-03, 1 file, -6/+48)
  Reviewed by: alc
  MFC after: 1 week
* Add a hint suggesting why tmpfs does not need a special case there.
  (kib, 2013-05-02, 1 file, -1/+1)
* Rework the handling of the tmpfs node backing swap object and the tmpfs
  vnode v_object to avoid double-buffering. Use the same object both as the
  backing store for the tmpfs node and as the v_object. Besides reducing
  memory use by up to 2x for the situation of mapping files from tmpfs, it
  also makes tmpfs read and write operations copy half as many bytes. The
  VM subsystem was already slightly adapted to tolerate an OBJT_SWAP object
  as v_object. Now vm_object_deallocate() is modified to not reinstantiate
  the OBJ_ONEMAPPING flag and to help the VFS correctly handle the VV_TEXT
  flag on the last dereference of the tmpfs backing object.
  (kib, 2013-04-28, 2 files, -1/+34)
  Reviewed by: alc
  Tested by: pho, bf
  MFC after: 1 month
* Make vm_object_page_clean() and vm_mmap_vnode() tolerate a vnode's
  v_object of non-OBJT_VNODE type. For vm_object_page_clean(), simply do
  not assert that the object type must be OBJT_VNODE, and add a comment
  explaining how the check for OBJ_MIGHTBEDIRTY prevents the rest of the
  function from operating on such objects. For vm_mmap_vnode(), if the
  object type is not OBJT_VNODE, require it to be for the swap pager (or
  default), handle the bypass filesystems, and correctly acquire the object
  reference in this case.
  (kib, 2013-04-28, 2 files, -3/+15)
  Reviewed by: alc
  Tested by: pho, bf
  MFC after: 1 week
* Assert that the object type for a vnode's non-NULL v_object, passed to
  vnode_pager_setsize(), is either OBJT_VNODE, or, if the vnode was already
  reclaimed, OBJT_DEAD. Note that the latter is only possible because some
  filesystems, in particular nfsiods from nfs clients, call
  vnode_pager_setsize() with an unlocked vnode. Moreover, if the object is
  terminated, do not perform the resizing operation.
  (kib, 2013-04-28, 1 file, -0/+6)
  Reviewed by: alc
  Tested by: pho, bf
  MFC after: 1 week
* Convert a panic() into a KASSERT().
  (kib, 2013-04-28, 1 file, -2/+1)
  Reviewed by: alc
  MFC after: 1 week
* Eliminate an unneeded call to vm_radix_trimkey() from
  vm_radix_lookup_le(). This call is clearing bits from the key that will
  be set again by the next line.
  (alc, 2013-04-28, 1 file, -1/+0)
  Sponsored by: EMC / Isilon Storage Division
* Avoid some lookup restarts in vm_radix_lookup_{ge,le}().
  (alc, 2013-04-27, 1 file, -22/+24)
  Sponsored by: EMC / Isilon Storage Division
* Panic if a UMA_ZONE_PCPU zone is created at early stages of boot, when
  mp_ncpus isn't yet initialized. Otherwise we would panic on the first
  allocation later.
  (glebius, 2013-04-22, 1 file, -0/+1)
  Sponsored by: Nginx, Inc.
* Simplify vm_radix_{add,dec}lev().
  (alc, 2013-04-22, 1 file, -8/+13)
  Sponsored by: EMC / Isilon Storage Division
* When calculating the number of reserved nodes, discount the pages that
  will be used to store the nodes.
  (alc, 2013-04-18, 1 file, -2/+9)
  Sponsored by: EMC / Isilon Storage Division
* Although we perform path compression to reduce the height of the trie and
  the number of interior nodes, we have previously created a level zero
  interior node at the root of every non-empty trie, even when that node is
  not strictly necessary, i.e., it has only one child. This change is the
  second (and final) step in eliminating those unnecessary level zero
  interior nodes. Specifically, it updates the deletion and insertion
  functions so that they do not require a level zero interior node at the
  root of the trie. For a "buildworld" workload, this change results in a
  16.8% reduction in the number of interior nodes allocated and a similar
  reduction in the average execution time for lookup functions. For
  example, the average execution time for a call to vm_radix_lookup_ge()
  is reduced by 22.9%.
  (alc, 2013-04-15, 1 file, -26/+32)
  Reviewed by: attilio, jeff (an earlier version)
  Sponsored by: EMC / Isilon Storage Division
* Although we perform path compression to reduce the height of the trie and
  the number of interior nodes, we always create a level zero interior node
  at the root of every non-empty trie, even when that node is not strictly
  necessary, i.e., it has only one child. This change is the first step in
  eliminating those unnecessary level zero interior nodes. Specifically, it
  updates all of the lookup functions so that they do not require a level
  zero interior node at the root.
  (alc, 2013-04-12, 1 file, -20/+33)
  Reviewed by: attilio, jeff (an earlier version)
  Sponsored by: EMC / Isilon Storage Division
* Convert UMA code to C99 uintXX_t types.
  (glebius, 2013-04-09, 4 files, -97/+97)
* Swap us_freecount and us_flags, achieving the same structure size as
  before the previous commit.
  (glebius, 2013-04-09, 1 file, -2/+2)
  Submitted by: alc
* Now that we support 256 items per slab, we need more bits for
  us_freecount. This grows uma_slab_head on 32-bit arches, but the growth
  isn't significant. Taking the kmem zones as an example, only the 32 byte
  zone is affected: ipers is reduced from 113 to 112.
  (glebius, 2013-04-09, 1 file, -1/+1)
  In collaboration with: kib
* Fix KASSERTs: the maximum number of items per slab is 256.
  (glebius, 2013-04-09, 1 file, -3/+3)
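The reason the two commits above go together is a plain integer-width issue: a free-item count that must reach 256 cannot live in an 8-bit field, since a uint8_t only represents 0..255 and 256 wraps to 0, making a completely free slab look full. A tiny sketch (the variable names model the old and new `us_freecount` widths and are invented here):

```c
#include <assert.h>
#include <stdint.h>

static uint8_t  old_freecount;	/* models the former 8-bit counter */
static uint16_t new_freecount;	/* models the widened counter */

/* Store the same item count into both counter widths. */
void
set_counts(unsigned int items)
{
	old_freecount = (uint8_t)items;	/* truncates modulo 256 */
	new_freecount = (uint16_t)items;
}
```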
* Fix the assertions for the state of the object under a map entry with the
  MAP_ENTRY_VN_WRITECNT flag:
  - Move the assertion that verifies the state of v_writecount and
    vnp.writecount under the block where the object is locked.
  - Check that the object type is OBJT_VNODE before asserting.
  (kib, 2013-04-09, 1 file, -6/+16)
  Reported by: avg
  Reviewed by: alc
  MFC after: 1 week
* The per-page act_count can very easily be protected by the per-page lock
  rather than the vm_object lock, without any further overhead. Make the
  formal switch.
  (attilio, 2013-04-08, 2 files, -5/+5)
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
  Tested by: pho
* Merge from projects/counters: UMA_ZONE_PCPU zones.
  These zones have slab size == sizeof(struct pcpu), but request from the
  VM enough pages to fit (uk_slabsize * mp_ncpus). An item allocated from
  such a zone has a separate twin for each CPU in the system, and these
  twins are at a distance of sizeof(struct pcpu) from each other. This
  magic value of distance will allow us to make some optimizations later.
  To address a private item from a CPU, simple arithmetic should be used:
      item = (type *)((char *)base + sizeof(struct pcpu) * curcpu)
  This arithmetic is available as the zpcpu_get() macro in pcpu.h.
  To introduce non-page-size slabs, a new field, uk_slabsize, has been
  added to uma_keg. This shifted some frequently used fields of uma_keg to
  the fourth cache line on amd64. To mitigate this pessimization, the
  uma_keg fields were rearranged a bit, and the least frequently used
  uk_name and uk_link were moved down to the fourth cache line. All other
  frequently dereferenced fields fit into the first three cache lines.
  (glebius, 2013-04-08, 3 files, -38/+68)
  Sponsored by: Nginx, Inc.
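The twin-addressing arithmetic from the commit message can be modeled in userland. In this sketch "struct pcpu" is just a fixed-size stand-in, not the kernel's, and `zpcpu_get_cpu`/`pcpu_item_alloc` are invented names (the real macro, zpcpu_get() in pcpu.h, reads the CPU id from curcpu instead of taking it as a parameter):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the kernel's struct pcpu; only its size matters here. */
struct pcpu {
	char	pad[256];
};

/*
 * The addressing rule: CPU n's twin of an item lives at
 * base + n * sizeof(struct pcpu).
 */
#define zpcpu_get_cpu(base, cpu) \
	((void *)((char *)(base) + sizeof(struct pcpu) * (cpu)))

/* One "slab" large enough to hold a twin for every CPU. */
static void *
pcpu_item_alloc(int ncpus)
{
	return (calloc(ncpus, sizeof(struct pcpu)));
}
```

Because every twin sits at a fixed stride from the base pointer, a per-CPU counter can be bumped with no locking at all: each CPU only ever touches its own copy.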
* Micro-optimize the order of struct vm_radix_node's fields. Specifically,
  arrange for all of the fields to start at a short offset from the
  beginning of the structure. Eliminate unnecessary masking of
  VM_RADIX_FLAGS from the root pointer in vm_radix_getroot().
  (alc, 2013-04-07, 1 file, -2/+2)
  Sponsored by: EMC / Isilon Storage Division
* Prepare to replace the buf splay with a trie:
  - Don't insert BKGRDMARKER bufs into the splay or dirty/clean buf lists.
    No consumers need to find them there and it complicates the tree. These
    flags are all FFS-specific and could be moved out of the buf cache.
  - Use pbgetvp() and pbrelvp() to associate the background and journal
    bufs with the vp. Not only is this much cheaper, it makes more sense
    for these transient bufs.
  - Fix the assertions in pbget* and pbrel*. It's not safe to check list
    pointers which were never initialized. Use the BX flags instead. We
    also check B_PAGING in reassignbuf(), so this should cover all cases.
  (jeff, 2013-04-06, 1 file, -20/+4)
  Discussed with: kib, mckusick, attilio
  Sponsored by: EMC / Isilon Storage Division
* Simplify vm_radix_keybarr().
  (alc, 2013-04-06, 1 file, -3/+1)
  Sponsored by: EMC / Isilon Storage Division
* Simplify vm_radix_insert().
  (alc, 2013-04-06, 1 file, -29/+8)
  Reviewed by: attilio
  Tested by: pho
  Sponsored by: EMC / Isilon Storage Division