path: root/sys/vm
Commit message  [author, date, files changed, lines -deleted/+added]
* MFC r325530 (jeff), r325566 (kib), r325588 (kib):  [markj, 2018-02-21, 10 files, -47/+130]
    Replace many instances of VM_WAIT with blocking page allocation flags.
    (cherry picked from commit 2069f0080fbdcf49b623bc3c1eda76524a4d1a77)
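
    The pattern being replaced is the hand-rolled "allocate, drop locks, sleep in VM_WAIT, retry" loop. A minimal before/after sketch, assuming the VM_ALLOC_WAITFAIL semantics implied by the commit (the allocator sleeps once and returns NULL so the caller can revalidate and retry); the object/pindex arguments are placeholders and lock revalidation details are omitted:

        #include <sys/param.h>
        #include <sys/lock.h>
        #include <sys/rwlock.h>
        #include <vm/vm.h>
        #include <vm/vm_object.h>
        #include <vm/vm_page.h>
        #include <vm/vm_pageout.h>

        /* Old pattern: the caller drops the object lock and sleeps in VM_WAIT. */
        static vm_page_t
        alloc_page_old(vm_object_t object, vm_pindex_t pindex)
        {
            vm_page_t m;

            while ((m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL)) == NULL) {
                VM_OBJECT_WUNLOCK(object);
                VM_WAIT;
                VM_OBJECT_WLOCK(object);
            }
            return (m);
        }

        /* New pattern: a blocking allocation flag lets the allocator sleep for us. */
        static vm_page_t
        alloc_page_new(vm_object_t object, vm_pindex_t pindex)
        {
            vm_page_t m;

            do {
                m = vm_page_alloc(object, pindex,
                    VM_ALLOC_NORMAL | VM_ALLOC_WAITFAIL);
                /* NULL after a sleep: revalidate state before retrying. */
            } while (m == NULL);
            return (m);
        }
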
* MFC r321247:  [kib, 2018-02-21, 4 files, -14/+18]
    Add pctrie_init() and vm_radix_init() to initialize generic pctrie and vm_radix trie.
    (cherry picked from commit 449ea22b392f956d4af1311d01e6ca647ebda976)
* MFC r322913:  [kib, 2018-02-21, 4 files, -296/+330]
    Replace global swhash in swap pager with per-object trie to track swap blocks assigned to the object pages.
    MFC r322970 (by alc): Do not call vm_pager_page_unswapped() on the fast fault path.
    MFC r322971 (by alc): Update a couple vm_object lock assertions in the swap pager.
    MFC r323224: In swp_pager_meta_build(), handle a race with other thread allocating swapblk for our index while we dropped the object lock.
    MFC r323226: Do not leak empty swblk.
    (cherry picked from commit 36d113490a64de94e4172f3d916e74d8eff5b7db)
* MFC r323018:  [kib, 2018-02-21, 1 file, -7/+6]
    Adjust interface of swapon_check_swzone() to its actual usage.
    PR: 221356
    (cherry picked from commit 2481224bb101ec60b11dc294c29ba3fbbc176659)
* MFC r323017:  [kib, 2018-02-21, 2 files, -2/+1]
    Make the swap_pager_full variable static.
    PR: 221356
    (cherry picked from commit ba21942ce28b39691547ca8cd966f6304b5ce025)
* MFC r321217:  [kib, 2018-02-21, 2 files, -38/+0]
    Remove unused function swap_pager_isswapped().
    (cherry picked from commit c6e718d6cb8b7c73f74c2910fc47637f436573d2)
* MFC r320319  [alc, 2018-02-21, 2 files, -4/+3]
    Increase the pageout cluster size to 32 pages.
    Decouple the pageout cluster size from the size of the hash table entry used by the swap pager for mapping (object, pindex) to a block on the swap device(s), and keep the size of a hash table entry at its current size.
    Eliminate a pointless macro.
    (cherry picked from commit 90fed17dafd94f9a34e74086f35e7e8a540e00a7)
* MFC r320049  [alc, 2018-02-21, 1 file, -1/+1]
    Pages that are passed to swap_pager_putpages() should already be fully dirty. Assert that they are fully dirty rather than redundantly calling vm_page_dirty() on them.
    (cherry picked from commit 804e94da8f1b60ea3d603f65d73c1fc9e6f6729f)
* MFC r320181  [alc, 2018-02-21, 1 file, -1/+0]
    Eliminate an unused macro.
    (cherry picked from commit 7c32a320e0086c6cf5712bd48c9e8fa6d4ca6d5c)
* MFC r322547:  [markj, 2018-02-21, 3 files, -21/+42]
    Add vm_page_alloc_after().
    (cherry picked from commit 8e264f308c8b33afa7e707ce0f70254f4e1bea1b)
* MFC r323234,r323305,r323306,r324044:  [mjg, 2018-02-21, 3 files, -4/+4]
    Start annotating global _padalign locks with __exclusive_cache_line.
    While these locks are guaranteed not to share their respective cache lines, their current placement leaves unnecessary holes in the lines which preceded them. For instance, the annotation of vm_page_queue_free_mtx allows 2 neighbouring cachelines (previously separated by the lock) to be collapsed into 1.
    The annotation is only effective on architectures which have it implemented in their linker script (currently only amd64). Thus locks are not converted to their not-padaligned variants, so as not to affect the rest.
    Annotate global process locks with __exclusive_cache_line.
    Annotate Giant with __exclusive_cache_line.
    Annotate sysctlmemlock with __exclusive_cache_line.
    (cherry picked from commit dc9eed165c25d9af290b93f577ad7ac9d7b3788c)
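
    A minimal sketch of what the annotation looks like on a global lock; "example_mtx" is invented for illustration, and, as the commit notes, the dedicated cache line only materializes on architectures whose linker script implements the section (amd64 at the time):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/lock.h>
        #include <sys/mutex.h>

        /* The annotation places the lock in its own cache-line-aligned section. */
        static struct mtx example_mtx __exclusive_cache_line;

        static void
        example_mtx_setup(void)
        {
            mtx_init(&example_mtx, "example", NULL, MTX_DEF);
        }
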
* MFC r326234, r326235, r326284:  [markj, 2018-02-21, 3 files, -47/+26]
    vm_page_array initialization improvements.
    (cherry picked from commit 15927ea545dd5119e164043a24026155250f8a2b)
* MFC r326055:  [markj, 2018-02-21, 1 file, -2/+4]
    Allow for fictitious physical pages in vm_page_scan_contig().
    (cherry picked from commit 332a8c368c824313c1bcac21a4ad1c73666818ae)
* MFC r324824:  [kib, 2018-02-21, 1 file, -0/+10]
    Check that the page which is freed as zeroed, indeed has all-zero content.
    (cherry picked from commit 0519574f8cf9e9258b0499d6f2833990b377c5d7)
* MFC r324793:  [kib, 2018-02-21, 1 file, -0/+2]
    In vm_page_free_phys_pglist(), do not take vm_page_queue_free_mtx if there is nothing to do.
    (cherry picked from commit d054fc982f42ac1c95e784cabcf25c437b0dc81c)
* MFC r320980,321377  [alc, 2018-02-21, 3 files, -11/+42]
    Generalize vm_page_ps_is_valid() to support testing other predicates on the (super)page, renaming the function to vm_page_ps_test().
    In vm_page_ps_test(), always check that the base pages within the specified superpage all belong to the same object. To date, that check has not been needed, but upcoming changes require it.
    (cherry picked from commit 8df894b522e2199c482090bcc1064dadc3259a72)
* MFC r323973,324087  [alc, 2018-02-21, 3 files, -14/+18]
    Optimize vm_page_try_to_free(). Specifically, the call to pmap_remove_all() can be avoided when the page's containing object has a reference count of zero. (If the object has a reference count of zero, then none of its pages can possibly be mapped.) Address nearby style issues in vm_page_try_to_free(), and change its return type to "bool".
    Optimize vm_object_page_remove() by eliminating pointless calls to pmap_remove_all(). If the object to which a page belongs has no references, then that page cannot possibly be mapped.
    (cherry picked from commit 2d2427db5b735ecdb6fe8ad9251f524b4260bb6a)
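
    A hedged sketch of the reasoning, not the committed code: an object with a zero reference count cannot have any of its pages mapped, so the relatively expensive pmap_remove_all() can be skipped. The helper name is invented; the field names follow the stock vm_page/vm_object structures:

        #include <sys/param.h>
        #include <vm/vm.h>
        #include <vm/pmap.h>
        #include <vm/vm_object.h>
        #include <vm/vm_page.h>

        static void
        remove_mappings_if_needed(vm_page_t m)
        {
            /* No references on the owning object: nothing can be mapped. */
            if (m->object == NULL || m->object->ref_count == 0)
                return;
            pmap_remove_all(m);
        }
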
* MFC r323290:  [markj, 2018-02-21, 3 files, -43/+61]
    Speed up vm_page_array initialization.
    (cherry picked from commit 7c7c98c4dc6d7946663f0050ac2155d81bc4542a)
* MFC r323544:  [markj, 2018-02-21, 2 files, -7/+13]
    Fix a logic error in the item size calculation for internal UMA zones.
    (cherry picked from commit 81448270d4454329f3302889a4d99f3bbca26f4e)
* MFC r323562:  [kib, 2018-02-21, 1 file, -1/+1]
    Remove the inline specifier from vm_page_free_wakeup(); do not micro-manage the compiler.
    (cherry picked from commit c31a8a35798f25fb0758d0e64c1013c203f75ca9)
* MFC r323559:  [kib, 2018-02-21, 2 files, -39/+81]
    Split vm_page_free_toq().
    (cherry picked from commit c8dd21ff3bde9b30fa86dd16c8dae3c2c34e1250)
* MFC r322405, r322406:  [markj, 2018-02-21, 3 files, -7/+14]
    Modify vm_page_grab_pages() to handle VM_ALLOC_NOWAIT, use it in sendfile_swapin().
    (cherry picked from commit 00ffd58e267b0466241a684db7dbfd7f2fecbf80)
* MFC r322296  [alc, 2018-02-21, 3 files, -18/+112]
    Introduce vm_page_grab_pages(), which is intended to replace loops calling vm_page_grab() on consecutive page indices. Besides simplifying the code in the caller, vm_page_grab_pages() allows for batching optimizations. For example, the current implementation replaces calls to vm_page_lookup() on consecutive page indices by cheaper calls to vm_page_next().
    (cherry picked from commit 9d710dfe3f1905122f3d9e3c84da8e4dc03363ee)
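
    A before/after sketch of the caller-side pattern described above; the flags, page array, and count are placeholders, and the exact vm_page_grab_pages() signature (and its return value, if any) is an assumption based on the commit text rather than a quotation of the interface:

        #include <sys/param.h>
        #include <vm/vm.h>
        #include <vm/vm_object.h>
        #include <vm/vm_page.h>

        /* Before: grab each consecutive page index individually. */
        static void
        grab_run_old(vm_object_t object, vm_pindex_t pindex, vm_page_t *ma, int count)
        {
            int i;

            for (i = 0; i < count; i++)
                ma[i] = vm_page_grab(object, pindex + i,
                    VM_ALLOC_NORMAL | VM_ALLOC_WIRED);
        }

        /* After: a single batched call covering the whole run. */
        static void
        grab_run_new(vm_object_t object, vm_pindex_t pindex, vm_page_t *ma, int count)
        {
            vm_page_grab_pages(object, pindex,
                VM_ALLOC_NORMAL | VM_ALLOC_WIRED, ma, count);
        }
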
* MFC r323368:  [kib, 2018-02-21, 3 files, -53/+34]
    Add a vm_page_change_lock() helper.
    (cherry picked from commit e44297aa7c8b20f74352986ad5c27fed648542cc)
* MFC r322383:  [markj, 2018-02-21, 1 file, -0/+1]
    Make vm_page_sunbusy() assert that the page is unlocked.
    (cherry picked from commit 8a00dc568742c6a3e32ef33b446a660cefa790f1)
* MFC r312208, r312994:  [markj, 2018-02-21, 2 files, -67/+110]
    Optimize vm_object_madvise().
    (cherry picked from commit 7093b6d4b52a9bc798ae8b86f7ef56f1d1fd2b03)
* MFS r320889:  [kib, 2017-07-12, 1 file, -0/+1]
    Restore layout of struct vm_map_entry.
    Approved by: re (delphij)
* MFC r320843 MFS r320903:  [kib, 2017-07-12, 1 file, -1/+1]
    Fix loop termination in vm_map_find_min().
    Approved by: re (delphij)
* Add MAP_GUARD and use it for stack grow area protection.  [kib, 2017-07-07, 5 files, -246/+263]
    Bump __FreeBSD_version. This is an MFS of stable/11 r320666.
    MFC r320317: Implement address space guards.
    MFC r320338: Remove stale part of the comment.
    MFC r320339: Correctly handle small MAP_STACK requests.
    MFC r320344: For now, allow mprotect(2) over the guards to succeed regardless of the requested protection.
    MFC r320430: Treat the addr argument for mmap(2) request without MAP_FIXED flag as a hint.
    MFC r320560 (by alc): Modify vm_map_growstack() to protect itself from the possibility of the gap entry in the vm map being smaller than the sysctl-derived stack guard size.
    Approved by: re (delphij)
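
    A small userspace sketch of the MAP_GUARD flag introduced here, following the mmap(2) constraints that a guard mapping takes PROT_NONE, fd == -1 and offset == 0; the size is arbitrary:

        #include <sys/mman.h>
        #include <err.h>
        #include <stdlib.h>

        int
        main(void)
        {
            size_t len = 16 * 4096;
            void *guard;

            /*
             * Reserve address space that faults on access and is not
             * transparently replaced by later non-MAP_FIXED mappings.
             */
            guard = mmap(NULL, len, PROT_NONE, MAP_GUARD, -1, 0);
            if (guard == MAP_FAILED)
                err(1, "mmap(MAP_GUARD)");

            /* ... place a stack or allocation next to the guard region ... */

            if (munmap(guard, len) == -1)
                err(1, "munmap");
            return (0);
        }
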
* MFS r320605, r320610: MFC r303052, r309017 (by alc):  [markj, 2017-07-05, 5 files, -23/+19]
    Omit v_cache_count when computing the number of free pages, since its value is always 0.
    Approved by: re (gjb, kib)
* MFC r320316:  [kib, 2017-07-01, 1 file, -4/+4]
    Do not try to unmark MAP_ENTRY_IN_TRANSITION marked by another thread.
    Approved by: re (gjb)
* MFC r320202:  [kib, 2017-06-28, 1 file, -3/+4]
    Call pmap_copy() only for map entries which have the backing object instantiated.
    Approved by: re (delphij)
* MFC r320201:  [kib, 2017-06-28, 1 file, -0/+2]
    Assert that the protection of a new map entry is a subset of the max protection.
    Approved by: re (delphij)
* MFC r320121:  [kib, 2017-06-26, 1 file, -3/+1]
    Ignore the P_SYSTEM process flag, and do not request VM_MAP_WIRE_SYSTEM mode when wiring the newly grown stack.
    Approved by: re (marius)
* MFC r319975:  [kib, 2017-06-22, 1 file, -25/+21]
    Some minor improvements to vnode_pager_generic_putpages().
    Approved by: re (marius)
* MFC 319702: Fix an off-by-one error in the VM page array on some systems.  [jhb, 2017-06-21, 1 file, -7/+25]
    r313186 changed how the size of the VM page array was calculated to be less wasteful. For most systems, the amount of memory is divided by the overhead required by each page (a page of data plus a struct vm_page) to determine the maximum number of available pages. However, if the remainder for the first non-available page was at least a page of data (so that the only memory missing was a struct vm_page), this last page was left in phys_avail[] but was not allocated an entry in the VM page array. Handle this case by explicitly excluding the page from phys_avail[].
    Approved by: re (kib)
* MFC r318995  [alc, 2017-06-15, 1 file, -20/+7]
    In r118390, the swap pager's approach to striping swap allocation over multiple devices was changed. However, swapoff_one() was not fully and correctly converted. In particular, with r118390's introduction of a per-device blist, the maximum swap block size, "dmmax", became irrelevant to swapoff_one()'s operation. Moreover, swapoff_one() was performing out-of-range operations on the per-device blist that were silently ignored by blist_fill(). This change corrects both of these problems with swapoff_one(), which will allow us to potentially increase MAX_PAGEOUT_CLUSTER. Previously, swapoff_one() would panic inside of blist_fill() if you increased MAX_PAGEOUT_CLUSTER.
    MFC r319001: After r118390, the variable "dmmax" was neither the correct stripe size nor the correct maximum block size. Moreover, after r318995, it serves no purpose except to provide information to user space through a read-only sysctl. This change eliminates the variable "dmmax" but retains the sysctl. It also corrects the value returned by the sysctl.
    MFC r319604: Halve the memory being internally allocated by the blist allocator. In short, half of the memory that is allocated to implement the radix tree is wasted because we did not change "u_daddr_t" to be a 64-bit unsigned int when we changed "daddr_t" to be a 64-bit (signed) int. (See r96849 and r96851.)
    MFC r319612: When the function blist_fill() was added to the kernel in r107913, the swap pager used a different scheme for striping the allocation of swap space across multiple devices. And, although blist_fill() was intended to support fill operations with large counts, the old striping scheme never performed a fill larger than the stripe size. Consequently, the misplacement of a sanity check in blst_meta_fill() went undetected. Now, moving forward in time to r118390, a new scheme for striping was introduced that maintained a blist allocator per device, but as noted in r318995, swapoff_one() was not fully and correctly converted to the new scheme. This change completes what was started in r318995 by fixing the underlying bug in blst_meta_fill() that stops swapoff_one() from simply performing a single blist_fill() operation.
    MFC r319627: Starting in r118390, swaponsomething() began to reserve the blocks at the beginning of a swap area for a disk label. However, neither r118390 nor r118544, which increased the reservation from one to two blocks, correctly accounted for these blocks when updating the variable "swap_pager_avail". This change corrects that error.
    MFC r319655: Originally, this file could be compiled as a user-space application for testing purposes. However, over the years, various changes to the kernel have broken this feature. This revision applies some fixes to get user-space compilation working again. There are no changes in this revision to code that is used by the kernel.
    Approved by: re (kib)
* Null pointer must be checked before use.  [jkim, 2017-06-15, 1 file, -5/+5]
    This fixes a regression introduced in r318716. Note it is a direct commit to stable/11 because head removed support for idle page zeroing in r305362.
    PR: 219994
    Reviewed by: markj
    Approved by: re (gjb)
* MFC r315272, r315370  [delphij, 2017-05-31, 2 files, -0/+30]
    r315272: Implement INHERIT_ZERO for minherit(2). INHERIT_ZERO is an OpenBSD feature. When a page is marked as such, it would be zeroed upon fork(). This would be used in new arc4random(3) functions.
    PR: 182610
    Reviewed by: kib (earlier version)
    Differential Revision: https://reviews.freebsd.org/D427
    r315370: The adj_free and max_free values of new_entry will be calculated and assigned by subsequent vm_map_entry_link(), therefore, remove the pointless copying.
    Submitted by: alc
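
    A small userspace sketch of the INHERIT_ZERO behavior described above: after minherit(2) marks the region, a forked child observes a zero-filled copy while the parent keeps its data. The buffer size and fill pattern are arbitrary:

        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <err.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            size_t len = 4096;
            char *p;
            pid_t pid;

            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                err(1, "mmap");
            memset(p, 0xa5, len);

            if (minherit(p, len, INHERIT_ZERO) == -1)
                err(1, "minherit");

            pid = fork();
            if (pid == -1)
                err(1, "fork");
            if (pid == 0) {
                /* Child: the region reads back as zeroes. */
                printf("child sees 0x%02x\n", (unsigned char)p[0]);
                _exit(0);
            }
            waitpid(pid, NULL, 0);
            /* Parent: the original contents are untouched. */
            printf("parent sees 0x%02x\n", (unsigned char)p[0]);
            return (0);
        }
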
* MFC r308474, r308691, r309203, r309365, r309703, r309898, r310720, r308489, r308706  [markj, 2017-05-23, 17 files, -1000/+876]
    Add PQ_LAUNDRY and remove PG_CACHED pages.
* MFC 316493: Assert that the align parameter to uma_zcreate() is valid.  [jhb, 2017-05-11, 1 file, -0/+3]
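
    A hedged sketch of a uma_zcreate() call with a conventionally valid align argument (an alignment mask such as UMA_ALIGN_PTR, i.e. alignment - 1), which is the kind of value the new assertion is meant to verify; the zone name and item type are invented:

        #include <sys/param.h>
        #include <vm/uma.h>

        struct foo_item {
            uint64_t    f_key;
            void        *f_data;
        };

        static uma_zone_t foo_zone;

        static void
        foo_zone_setup(void)
        {
            foo_zone = uma_zcreate("foo_item", sizeof(struct foo_item),
                NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
        }
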
* MFC r316288:  [dchagin, 2017-04-29, 1 file, -8/+9]
    Add kern_mincore() helper for mincore() syscall.
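
    A small userspace sketch of the mincore(2) interface that the new kern_mincore() helper backs (the helper itself is kernel-internal); sizes here are arbitrary:

        #include <sys/mman.h>
        #include <err.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int
        main(void)
        {
            long psz = sysconf(_SC_PAGESIZE);
            size_t npages = 4, len = npages * (size_t)psz;
            char *vec, *p;
            size_t i;

            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                err(1, "mmap");
            p[0] = 1;                       /* fault in the first page */

            if ((vec = malloc(npages)) == NULL)
                err(1, "malloc");
            if (mincore(p, len, vec) == -1)
                err(1, "mincore");
            for (i = 0; i < npages; i++)
                printf("page %zu: %sresident\n", i,
                    (vec[i] & MINCORE_INCORE) ? "" : "not ");
            return (0);
        }
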
* MFC r316686, r316687, r316689  [markj, 2017-04-17, 1 file, -9/+18]
    Fix a race between vm_map_wire() and vm_map_protect().
* MFC r315078: uma: fix pages <-> items conversions at several places  [avg, 2017-04-14, 1 file, -6/+8]
* MFC r315077: uma: eliminate uk_slabsize field  [avg, 2017-04-14, 2 files, -14/+12]
* MFC r316526:  [kib, 2017-04-12, 2 files, -18/+30]
    Extract calculation of ioflags from the vm_pager_putpages flags into a helper.
* MFC r316525:  [kib, 2017-04-12, 1 file, -7/+2]
    Some style fixes for vnode_pager_generic_putpages(), in the local declaration block.
* MFC r316524:  [kib, 2017-04-12, 1 file, -2/+1]
    Use int instead of boolean_t for flags argument type in vnode_pager_generic_putpages() prototype; change the argument name to reflect that it is flags.
* MFC r315281:  [kib, 2017-03-28, 5 files, -28/+27]
    Use atop() instead of OFF_TO_IDX() for conversion of addresses or address offsets, as intended.
    MFC r315580 (by alc): Simplify the logic for clipping the range returned by the pager to fit within the map entry. Use atop() rather than OFF_TO_IDX() on addresses.
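
    A hedged fragment illustrating the distinction the commit is about, with made-up variable names: atop() converts byte addresses or byte spans to page counts, while OFF_TO_IDX() converts a vm_ooffset_t object offset to a page index:

        #include <sys/param.h>
        #include <vm/vm.h>
        #include <vm/vm_param.h>
        #include <vm/vm_object.h>

        static void
        example_conversions(vm_offset_t start, vm_offset_t end, vm_ooffset_t offset)
        {
            vm_pindex_t npages, pindex;

            npages = atop(end - start);     /* byte span -> page count     */
            pindex = OFF_TO_IDX(offset);    /* object offset -> page index */
            (void)npages;
            (void)pindex;
        }
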
* MFC r315552:  [kib, 2017-03-26, 1 file, -1/+1]
    Fix off-by-one in the vm_fault_populate() code.