path: root/sys/vm/vm_radix.c
Commit history for this file, most recent first.  Each entry gives the date,
the author, and the per-commit diffstat (files changed, lines -/+), followed
by the commit message.

* 2014-05-23  alc  (1 file, -7/+5)
      MFC r259107:
      Eliminate a redundant parameter to vm_radix_replace().  Improve the
      wording of the comment describing vm_radix_replace().

* 2013-08-23  alc  (1 file, -0/+15)
      Addendum to r254141: The call to vm_radix_insert() in vm_page_cache()
      can reclaim the last preexisting cached page in the object, resulting
      in a call to vdrop().  Detect this scenario so that the vnode's hold
      count is correctly maintained.  Otherwise, we panic.
      Reported by:    scottl
      Tested by:      pho
      Discussed with: attilio, jeff, kib

* 2013-08-09  attilio  (1 file, -48/+126)
      On all architectures, stop preallocating the physical memory for the
      nodes used in vm_radix.  On architectures supporting direct mapping,
      also stop preallocating the KVA for such nodes.  To make this
      possible, allow the operations derived from vm_radix_insert() to fail
      and handle those failures in the callers.  Within vm_radix, introduce
      a new function, vm_radix_replace(), which can replace an already
      present leaf with a new one, and account for the possibility that,
      while vm_radix_insert() allocates memory, the operations on the radix
      trie can recurse; if they do, vm_radix_insert() starts over from
      scratch.
      Sponsored by:   EMC / Isilon storage division
      Reviewed by:    alc (older version)
      Reviewed by:    jeff
      Tested by:      pho, scottl

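      Illustration (not from the commit): once insertion can fail for lack
      of memory, callers have to be prepared for an error return.  A minimal
      userland sketch of that caller pattern follows; the types and names
      are illustrative stand-ins, not the kernel interfaces.

      #include <stddef.h>

      /* Illustrative stand-ins, not the kernel's types. */
      struct rtree { void *root; };
      struct page  { unsigned long pindex; };

      /* Stub insert; a real implementation may return ENOMEM instead. */
      static int
      radix_insert(struct rtree *t, struct page *m)
      {
              (void)t; (void)m;
              return (0);
      }

      /*
       * Caller pattern once insertion may fail: propagate the error so the
       * caller can release locks, wait for memory, and retry.
       */
      static int
      object_add_page(struct rtree *t, struct page *m, int *resident)
      {
              int error;

              error = radix_insert(t, m);
              if (error != 0)
                      return (error);
              (*resident)++;
              return (0);
      }

      int
      main(void)
      {
              struct rtree t = { NULL };
              struct page m = { 7 };
              int resident = 0;

              return (object_add_page(&t, &m, &resident) != 0);
      }
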
* 2013-05-11  alc  (1 file, -13/+12)
      To reduce the amount of arithmetic performed in the various radix tree
      functions, reverse the numbering scheme for the levels.  The highest
      numbered level in the tree now appears near the root instead of the
      leaves.
      Sponsored by:   EMC / Isilon Storage Division

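      Illustration (not from the commit): with the levels numbered this way,
      the child slot a key selects at a given level is one shift and one
      mask, with no subtraction against the maximum level on every step.
      The width and level count below are arbitrary, not the kernel's
      constants.

      #include <stdio.h>

      #define WIDTH 4                   /* bits consumed per level */
      #define MASK  ((1 << WIDTH) - 1)

      /*
       * With the highest-numbered level at the root, a node at level "lev"
       * selects its child using bits [lev*WIDTH, (lev+1)*WIDTH) of the key.
       */
      static unsigned
      slot(unsigned long key, int lev)
      {
              return ((key >> (lev * WIDTH)) & MASK);
      }

      int
      main(void)
      {
              unsigned long key = 0x2a13;

              for (int lev = 3; lev >= 0; lev--)  /* root toward the leaves */
                      printf("level %d -> slot %u\n", lev, slot(key, lev));
              return (0);
      }
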
* 2013-05-07  alc  (1 file, -4/+2)
      Remove a redundant call to panic() from vm_radix_keydiff().  The
      assertion before the loop accomplishes the same thing.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-05-04  alc  (1 file, -103/+75)
      Optimize vm_radix_lookup_ge() and vm_radix_lookup_le().  Specifically,
      change the way that these functions ascend the tree when the search
      for a matching leaf fails at an interior node.  Rather than returning
      to the root of the tree and repeating the lookup with an updated key,
      maintain a stack of interior nodes that were visited during the
      descent and use that stack to resume the lookup at the closest
      ancestor that might have a matching descendant.
      Sponsored by:   EMC / Isilon Storage Division
      Reviewed by:    attilio
      Tested by:      pho

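      Illustration (not from the commit): a self-contained userland sketch
      of the technique on a tiny fixed-height, uncompressed radix trie.  Its
      lookup_ge() keeps the ancestors visited on the way down and resumes
      from the deepest one that still has an untried greater slot instead of
      restarting at the root.  Every name and constant is illustrative; this
      is not the kernel code.

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define WIDTH  4                  /* bits per level                  */
      #define COUNT  (1 << WIDTH)       /* children per interior node      */
      #define MASK   (COUNT - 1)
      #define LEVELS 3                  /* keys are LEVELS * WIDTH bits    */

      struct node {
              struct node *child[COUNT]; /* level 0 slots hold tagged leaves */
      };

      static unsigned
      slot(unsigned key, int lev)
      {
              return ((key >> (lev * WIDTH)) & MASK);
      }

      static void
      insert(struct node *root, unsigned key)
      {
              struct node *n = root;

              for (int lev = LEVELS - 1; lev > 0; lev--) {
                      unsigned s = slot(key, lev);
                      if (n->child[s] == NULL)
                              n->child[s] = calloc(1, sizeof(struct node));
                      n = n->child[s];
              }
              /* Encode the leaf as a tagged value: (key << 1) | 1. */
              n->child[slot(key, 0)] =
                  (struct node *)(((uintptr_t)key << 1) | 1);
      }

      /* Return the smallest stored key >= start, or -1 if there is none. */
      static long
      lookup_ge(struct node *root, unsigned start)
      {
              struct node *stack[LEVELS]; /* ancestors seen while descending */
              struct node *n = root;
              unsigned key = start;
              int lev = LEVELS - 1;

              for (;;) {
                      /* Descend as far as the current key allows. */
                      while (n != NULL) {
                              unsigned s = slot(key, lev);
                              stack[lev] = n;
                              if (lev == 0) {
                                      if (n->child[s] != NULL)
                                              return ((uintptr_t)n->child[s] >> 1);
                                      break;  /* no leaf in this exact slot */
                              }
                              if (n->child[s] == NULL)
                                      break;  /* hole: resume below */
                              n = n->child[s];
                              lev--;
                      }
                      /*
                       * Resume from the deepest visited ancestor that still
                       * has an untried greater slot; no restart from the root.
                       */
                      for (;; lev++) {
                              unsigned s;
                              if (lev == LEVELS)
                                      return (-1);    /* trie exhausted */
                              n = stack[lev];
                              for (s = slot(key, lev) + 1; s < COUNT; s++)
                                      if (n->child[s] != NULL)
                                              break;
                              if (s == COUNT)
                                      continue;       /* go up one level */
                              /* Zero the lower levels of the key, take slot s. */
                              key &= ~((1u << ((lev + 1) * WIDTH)) - 1);
                              key |= s << (lev * WIDTH);
                              if (lev == 0)
                                      return ((uintptr_t)n->child[s] >> 1);
                              n = n->child[s];
                              lev--;
                              break;
                      }
              }
      }

      int
      main(void)
      {
              static struct node root;
              unsigned keys[] = { 5, 42, 43, 1000, 4000 };

              for (int i = 0; i < 5; i++)
                      insert(&root, keys[i]);
              printf("%ld %ld %ld\n", lookup_ge(&root, 40),     /* 42   */
                  lookup_ge(&root, 44),                         /* 1000 */
                  lookup_ge(&root, 4001));                      /* -1   */
              return (0);
      }
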
* 2013-04-28  alc  (1 file, -1/+0)
      Eliminate an unneeded call to vm_radix_trimkey() from
      vm_radix_lookup_le().  This call is clearing bits from the key that
      will be set again by the next line.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-27  alc  (1 file, -22/+24)
      Avoid some lookup restarts in vm_radix_lookup_{ge,le}().
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-22  alc  (1 file, -8/+13)
      Simplify vm_radix_{add,dec}lev().
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-18  alc  (1 file, -2/+9)
      When calculating the number of reserved nodes, discount the pages that
      will be used to store the nodes.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-15  alc  (1 file, -26/+32)
      Although we perform path compression to reduce the height of the trie
      and the number of interior nodes, we have previously created a level
      zero interior node at the root of every non-empty trie, even when that
      node is not strictly necessary, i.e., it has only one child.  This
      change is the second (and final) step in eliminating those unnecessary
      level zero interior nodes.  Specifically, it updates the deletion and
      insertion functions so that they do not require a level zero interior
      node at the root of the trie.  For a "buildworld" workload, this
      change results in a 16.8% reduction in the number of interior nodes
      allocated and a similar reduction in the average execution time for
      lookup functions.  For example, the average execution time for a call
      to vm_radix_lookup_ge() is reduced by 22.9%.
      Reviewed by:    attilio, jeff (an earlier version)
      Sponsored by:   EMC / Isilon Storage Division

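      Illustration (not from the commit): the combined effect of this change
      and the 2013-04-12 change below is that a trie holding a single page
      needs no interior node at all; the root pointer can reference the leaf
      directly.  A hedged sketch of a lookup that copes with the three
      possible root states, using a made-up tagged-pointer encoding.

      #include <stdint.h>
      #include <stdio.h>

      #define IS_LEAF(p)      (((uintptr_t)(p) & 1) != 0)
      #define LEAF_KEY(p)     ((unsigned long)((uintptr_t)(p) >> 1))
      #define MAKE_LEAF(key)  ((void *)(((uintptr_t)(key) << 1) | 1))

      /* Look up "key" when the root may be NULL, a leaf, or an interior node. */
      static int
      root_lookup(void *root, unsigned long key)
      {
              if (root == NULL)
                      return (0);                     /* empty trie       */
              if (IS_LEAF(root))
                      return (LEAF_KEY(root) == key); /* single-page trie */
              /* ... otherwise descend through interior nodes as usual ... */
              return (0);
      }

      int
      main(void)
      {
              void *root = MAKE_LEAF(42UL);   /* one page, no interior node */

              printf("%d %d\n", root_lookup(root, 42), root_lookup(root, 7));
              return (0);
      }
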
* 2013-04-12  alc  (1 file, -20/+33)
      Although we perform path compression to reduce the height of the trie
      and the number of interior nodes, we always create a level zero
      interior node at the root of every non-empty trie, even when that node
      is not strictly necessary, i.e., it has only one child.  This change
      is the first step in eliminating those unnecessary level zero interior
      nodes.  Specifically, it updates all of the lookup functions so that
      they do not require a level zero interior node at the root.
      Reviewed by:    attilio, jeff (an earlier version)
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-07  alc  (1 file, -2/+2)
      Micro-optimize the order of struct vm_radix_node's fields.
      Specifically, arrange for all of the fields to start at a short offset
      from the beginning of the structure.  Eliminate unnecessary masking of
      VM_RADIX_FLAGS from the root pointer in vm_radix_getroot().
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-06  alc  (1 file, -3/+1)
      Simplify vm_radix_keybarr().
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-06  alc  (1 file, -29/+8)
      Simplify vm_radix_insert().
      Reviewed by:    attilio
      Tested by:      pho
      Sponsored by:   EMC / Isilon Storage Division

* 2013-04-03  alc  (1 file, -65/+67)
      Replace the remaining uses of vm_radix_node_page() by
      vm_radix_isleaf() and vm_radix_topage().  This transformation
      eliminates some unnecessary conditional branches from the inner loops
      of vm_radix_insert(), vm_radix_lookup{,_ge,_le}(), and
      vm_radix_remove().  Simplify the control flow of
      vm_radix_lookup_{ge,le}().
      Reviewed by:    attilio (an earlier version)
      Tested by:      pho
      Sponsored by:   EMC / Isilon Storage Division

* 2013-03-26  alc  (1 file, -2/+12)
      Introduce vm_radix_isleaf() and use it in a couple places.  As
      compared to using vm_radix_node_page() == NULL, the compiler is able
      to generate one less conditional branch when vm_radix_isleaf() is
      used.  More use cases involving the inner loops of vm_radix_insert(),
      vm_radix_lookup{,_ge,_le}(), and vm_radix_remove() will follow.
      Reviewed by:    attilio
      Sponsored by:   EMC / Isilon Storage Division

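      Illustration (not from the commit): vm_radix distinguishes leaves from
      interior nodes by tagging child pointers; the exact flag value and the
      names below are assumptions of this sketch.  Asking "is this a leaf?"
      is then a single bit test, whereas decoding the pointer to a page and
      comparing the result against NULL gives the compiler an extra
      comparison to emit, which is the branch the commit message refers to.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define FLAG_LEAF 0x1     /* assumed: low bit set means "leaf (page)" */

      struct page { unsigned long pindex; };

      /* Old style: decode to a page (or NULL); callers compare to NULL. */
      static struct page *
      node_page(void *child)
      {
              if (((uintptr_t)child & FLAG_LEAF) != 0)
                      return ((struct page *)
                          ((uintptr_t)child & ~(uintptr_t)FLAG_LEAF));
              return (NULL);
      }

      /* New style: answer the yes/no question directly with one bit test. */
      static bool
      isleaf(void *child)
      {
              return (((uintptr_t)child & FLAG_LEAF) != 0);
      }

      int
      main(void)
      {
              struct page pg = { 7 };
              void *leaf = (void *)((uintptr_t)&pg | FLAG_LEAF);

              printf("%d %d\n", node_page(leaf) != NULL, isleaf(leaf));
              return (0);
      }
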
* 2013-03-24  alc  (1 file, -8/+6)
      Micro-optimize the control flow in a few places.  Eliminate a panic
      call that could never be reached in vm_radix_insert().  (If the
      pointer being checked by the panic call were ever NULL, the
      immediately preceding loop would have already crashed on a NULL
      pointer dereference.)
      Reviewed by:    attilio (an earlier version)
      Sponsored by:   EMC / Isilon Storage Division

* 2013-03-17  attilio  (1 file, -0/+1)
      Commit new file FreeBSD tags.
      Sponsored by:   EMC / Isilon storage division

* 2013-03-17  alc  (1 file, -2/+2)
      Fix a couple of typos.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-03-17  alc  (1 file, -3/+22)
      The M_ZERO can be eliminated from the uma_zalloc() call in
      vm_radix_node_get() with a small change to
      vm_radix_reclaim_allnodes_int().  This change further reduced the
      average number of cycles per vm_page_insert() call from 532 to 519.
      Reviewed by:    attilio
      Sponsored by:   EMC / Isilon Storage Division

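      Illustration (not from the commit): the underlying invariant is that
      every path returning a node to the zone leaves its child pointers
      already cleared, so a freshly allocated node is known to be clean and
      the allocation no longer has to request zeroed memory.  A minimal
      userland sketch of that free-side invariant; a toy free list stands in
      for the UMA zone and all names are illustrative.

      #include <assert.h>
      #include <stdlib.h>
      #include <string.h>

      #define NCHILD 16

      struct node {
              struct node *child[NCHILD];
              struct node *freelink;    /* toy free list, stands in for UMA */
      };

      static struct node *freelist;

      /* Free side: a node goes back on the list only with cleared children. */
      static void
      node_put(struct node *n)
      {
              memset(n->child, 0, sizeof(n->child));
              n->freelink = freelist;
              freelist = n;
      }

      /* Allocation side: no equivalent of M_ZERO is needed for reused nodes. */
      static struct node *
      node_get(void)
      {
              struct node *n = freelist;

              if (n == NULL)
                      return (calloc(1, sizeof(*n)));  /* first use: zeroed */
              freelist = n->freelink;
              return (n);                              /* already clean    */
      }

      int
      main(void)
      {
              struct node *n = node_get();

              n->child[3] = n;          /* dirty the node, then recycle it */
              node_put(n);
              n = node_get();
              for (int i = 0; i < NCHILD; i++)
                      assert(n->child[i] == NULL);
              free(n);
              return (0);
      }
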
* 2013-03-17  alc  (1 file, -4/+3)
      Simplify the interface to vm_radix_insert() by eliminating the
      parameter "index".  The content of a radix tree leaf, or at least its
      "key", is not opaque to the other radix tree operations.
      Specifically, they know how to extract the "key" from a leaf.  So,
      eliminating the parameter "index" isn't breaking the abstraction.
      Moreover, eliminating the parameter "index" effectively prevents the
      caller from passing an inconsistent "index" and leaf to
      vm_radix_insert().
      Reviewed by:    attilio
      Sponsored by:   EMC / Isilon Storage Division

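      Illustration (not from the commit): the leaf (a vm_page) already
      carries its own key, its pindex, so the insert routine can read it
      instead of trusting a second argument.  The sketch below shows the
      before/after shape of such an interface; the structures and function
      names are illustrative, not the kernel's declarations.

      #include <assert.h>
      #include <stdio.h>

      struct page  { unsigned long pindex; };  /* the leaf carries its key */
      struct rtree { struct page *root; };     /* toy single-slot "trie"   */

      /* Before: the caller repeats the key and could pass a stale one. */
      static void
      insert_old(struct rtree *t, unsigned long index, struct page *m)
      {
              assert(index == m->pindex);  /* nothing enforces this upstream */
              t->root = m;
      }

      /* After: the key is read from the leaf itself, so it cannot disagree. */
      static void
      insert_new(struct rtree *t, struct page *m)
      {
              unsigned long index = m->pindex;  /* would guide the descent */

              (void)index;
              t->root = m;
      }

      int
      main(void)
      {
              struct rtree t = { NULL };
              struct page m = { 42 };

              insert_old(&t, m.pindex, &m);
              insert_new(&t, &m);
              printf("resident pindex: %lu\n", t.root->pindex);
              return (0);
      }
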
* 2013-03-17  attilio  (1 file, -5/+11)
      Expand ambiguous comments some more.
      Requested by:   alc

* 2013-03-13  attilio  (1 file, -1/+2)
      Fix compilation.
      Sponsored by:   EMC / Isilon storage division

* 2013-03-13  attilio  (1 file, -0/+2)
      Add a further safety belt to prevent inconsistencies.
      Sponsored by:   EMC / Isilon storage division
      Submitted by:   alc

* 2013-03-13  attilio  (1 file, -1/+1)
      For uniformity, use the user-provided index.
      Sponsored by:   EMC / Isilon storage division
      Reviewed and reported by: alc

* 2013-03-07  attilio  (1 file, -9/+9)
      Improve comments.
      Sponsored by:   EMC / Isilon storage division
      Submitted by:   mdf

* 2013-03-04  alc  (1 file, -1/+1)
      Fix a typo.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-03-04  alc  (1 file, -26/+25)
      Make a pass over most of the comments.

* 2013-03-04  alc  (1 file, -6/+6)
      Simplify Boolean expressions.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-03-04  alc  (1 file, -1/+1)
      Fix spelling.
      Sponsored by:   EMC / Isilon Storage Division

* 2013-03-04  attilio  (1 file, -54/+32)
      Remove the boot-time cache support and rely on the UMA boot-time slab
      cache for allocating the nodes before it is possible to carve them
      directly from the UMA subsystem.
      Sponsored by:   EMC / Isilon storage division
      Reviewed by:    alc

* 2013-02-24  attilio  (1 file, -1/+1)
      Missing semicolon.
      Sponsored by:   EMC / Isilon storage division
      Submitted by:   alc
      Pointy hat to:  me

* 2013-02-24  attilio  (1 file, -6/+2)
      Simplify return logic.
      Sponsored by:   EMC / Isilon storage division
      Submitted by:   alc

* 2013-02-24  attilio  (1 file, -6/+4)
      Retire the old UMA primitive uma_zone_set_obj() and replace it with
      the more modern uma_zone_reserve_kva().  The difference is that the
      latter no longer relies on a VM object to allocate pages, and the slab
      allocator uses atomic operations rather than any specific locking to
      complete the operation.  Where possible, uma_small_alloc() is used
      instead and the uk_kva member becomes unused.
      The subsequent cleanups also bring the removal of the
      VM_OBJECT_LOCK_INIT() macro, which is no longer used as the code can
      easily be cleaned up to perform a single mtx_init(), private to
      vm_object.c.  For the same reason, _vm_object_allocate() becomes
      private as well.
      Sponsored by:   EMC / Isilon storage division
      Reviewed by:    alc

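      Illustration (not from the commit): a hedged, kernel-context sketch of
      the newer primitive's usage pattern; it is not a standalone program,
      and the zone name, item size, and count are invented for the example
      rather than taken from vm_radix.c.

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <vm/uma.h>

      static uma_zone_t example_zone;

      static void
      example_zone_init(int nitems)
      {
              example_zone = uma_zcreate("example nodes", 256,
                  NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
              /*
               * Instead of backing the zone with a dedicated VM object (the
               * retired uma_zone_set_obj()), ask UMA to reserve KVA for up
               * to nitems items up front.
               */
              uma_zone_reserve_kva(example_zone, nitems);
      }
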
* 2013-02-24  attilio  (1 file, -1/+1)
      Fix an inverted check that was wrongly reporting indexes as wrapped.
      Sponsored by:   EMC / Isilon storage division
      Reported by:    alc

* 2013-02-15  attilio  (1 file, -2/+4)
      On arches with VM_PHYSSEG_DENSE the vm_page_array is larger than the
      actual number of vm_page_t entries that will be derived from it, so
      v_page_count should be used appropriately.  Besides that, add a panic
      condition in case UMA fails to properly restrict the area in a way
      that keeps all the desired objects.
      Sponsored by:   EMC / Isilon storage division
      Reported by:    alc

* 2013-02-15  attilio  (1 file, -6/+0)
      Remove unused headers.

* 2013-02-15  attilio  (1 file, -1/+1)
      Fix comment.

* 2013-02-15  attilio  (1 file, -16/+16)
      Move the radix node zone destructor definition closer to the
      vm_radix_init() definition.
      Sponsored by:   EMC / Isilon storage division

* 2013-02-15  attilio  (1 file, -1/+5)
      - When panicking for the "too small boot cache" reason, print the
        actual cache size value.
      - Add a way to specify the size of the boot cache at compile time.
      Sponsored by:   EMC / Isilon storage division

* 2013-02-15  attilio  (1 file, -7/+17)
      Improve dynamic branch prediction and i-cache utilization:
      - Use predict_false() to tag boot-time cache decisions.
      - Compact boot-time cache allocation into a separate, non-inline
        function that won't be called most of the time.
      Sponsored by:   EMC / Isilon storage division

* 2013-02-14  attilio  (1 file, -4/+4)
      Fix style.

* 2013-02-14  attilio  (1 file, -28/+41)
      The radix preallocation pages can overflow the biggestone segment, so
      use a different scheme for preallocation: reserve a few KB of nodes to
      be used to satisfy page allocations before the memory can be
      efficiently pre-allocated by UMA.  This effectively removes the
      additional carving of boot_pages, and with it the modifications to the
      boot_pages allocation system and the need to initialize the UMA zone
      before pmap_init().
      Reported by:    pho, jhb

* 2013-02-13  attilio  (1 file, -1/+1)
      Grammar.
      Sponsored by:   EMC / Isilon storage division

* 2013-02-13  attilio  (1 file, -522/+516)
      Implement a new algorithm for managing the radix trie which also
      includes path compression.  This greatly helps with sparsely populated
      tries, where an uncompressed trie may end up with a lot of
      intermediate nodes for very few leaves.

      The new algorithm introduces 2 main concepts: the node level and the
      node owner.  Every node represents a branch point where the leaves
      share the key up to the level specified in the node level (current
      level excluded, of course).  The partly shared key is the one
      contained in the owner.  The root branch, of course, is exempt from
      keeping a valid owner, because theoretically all the keys are
      contained in the space covered by the root branch node.  The search
      algorithm is quite intuitive, and that is where one should start
      reading to understand the full approach.

      In the end, the algorithm demands at most one new node per insert,
      and not even that in all cases.  To stay safe, we basically
      preallocate as many nodes as the number of physical pages in the
      system, using uma_preallocate().  However, this raises 2 concerns:
      * As pmap_init() needs to kmem_alloc(), the nodes must be
        pre-allocated when vm_radix_init() is currently called, which is
        much before UMA is fully initialized.  This means that
        uma_prealloc() will dig into the UMA_BOOT_PAGES pool of pages, which
        is often not enough to keep track of such large allocations.  In
        order to fix this, change the concept of UMA_BOOT_PAGES and
        vm.boot_pages a bit.  More specifically, make UMA_BOOT_PAGES an
        initial "value", as is vm.boot_pages, and extend the boot_pages
        physical area by as many bytes as needed with the information
        returned by vm_radix_allocphys_size().
      * A small number of pages will be held in per-cpu buckets and won't
        be accessible from curcpu, so vm_radix_node_get() could really
        panic when the pre-allocation pool is close to being exhausted.  In
        theory we could pre-allocate more pages than the number of physical
        frames to satisfy such requests, but as many inserts would happen
        without a node allocation anyway, I think it is safe to assume that
        the over-allocation already compensates for this problem.  Field
        testing can prove me wrong, of course.  This could be further
        helped by allowing a single-page insert to not require a complete
        root node.

      The use of pre-allocation gets rid of all the non-direct-mapping
      trickery and the lock recursion allowance that was introduced for
      vm_page_free_queue.  The number of node children is reduced from
      32 -> 16 and from 16 -> 8 (for 64-bit and 32-bit architectures,
      respectively).  This makes the children fit into a cache line in the
      amd64 case, for example, and in general span fewer cache lines, which
      may be helpful in the lookup_ge() case.  Also, path compression comes
      to help in cases where there are many levels, making the fallout of
      such a change less hurtful.
      Sponsored by:   EMC / Isilon storage division
      Reviewed by:    jeff (partially)
      Tested by:      flo

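      Illustration (not from the commit): a small userland demonstration of
      the node level / node owner idea.  A compressed interior node at level
      "lev" stands for all keys that agree with its owner on the bits above
      that level, so one comparison of the high bits decides whether a
      search key can possibly live in the node's subtree, which is what lets
      whole chains of one-child nodes be skipped.  The width, the level
      numbering, and the function name are illustrative, not the kernel's.

      #include <stdbool.h>
      #include <stdio.h>

      #define WIDTH 4                   /* bits per level (illustrative) */

      /*
       * A node at level "lev" with the given owner covers exactly the keys
       * that share the owner's bits above that level.  Return true when
       * "key" lies outside that subtree, i.e. the search can stop or back
       * up here.
       */
      static bool
      keybarr(unsigned long owner, int lev, unsigned long key)
      {
              int shift = (lev + 1) * WIDTH;  /* first bit above the span */

              return ((key >> shift) != (owner >> shift));
      }

      int
      main(void)
      {
              /*
               * A level-1 node owning 0x2a30 covers keys 0x2a00-0x2aff;
               * thanks to path compression it can hang directly off the
               * root even though an uncompressed trie would interpose
               * one-child nodes above it.
               */
              printf("0x2a91 outside? %d\n", keybarr(0x2a30, 1, 0x2a91));
              printf("0x2b00 outside? %d\n", keybarr(0x2a30, 1, 0x2b00));
              return (0);
      }
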
* 2013-02-06  attilio  (1 file, -34/+33)
      Clean up the vm_radix KPI:
      - Drop the return value of vm_radix_insert().
      - Name the function arguments per style(9).
      - Avoid getting and returning opaque objects; use vm_page_t instead,
        as vm_radix is not meant to be general-purpose code but to cater
        specifically to the page cache and resident page cache.

* 2013-01-10  attilio  (1 file, -49/+18)
      Remove vm_radix_lookupn() and its usage in the kernel.

* 2012-07-08  attilio  (1 file, -120/+22)
      - Split the cached and resident pages tree into 2 distinct ones.
        This makes the RED/BLACK support go away and simplifies a lot of the
        vm_radix functions used here.  This happens because with patricia
        trie support the trie will be small enough that keeping 2 different
        tries will be efficient too.
      - Reduce differences with head, in places like the backing scan, where
        the optimizations used had shuffled the code around a little bit.
      Tested by:      flo, Andrea Barberio

* 2012-06-08  attilio  (1 file, -61/+7)
      Revert r231027 and fix the prototype for vm_radix_remove().
      The goal here is to get to the point where the recovery path is
      completely removed, as we can count on pre-allocation once the
      path-compressed trie is implemented.