| Commit message | Author | Age | Files | Lines |
Replace many instances of VM_WAIT with blocking page allocation flags.
(cherry picked from commit 2069f0080fbdcf49b623bc3c1eda76524a4d1a77)
Add pctrie_init() and vm_radix_init() to initialize generic pctrie and
vm_radix trie.
(cherry picked from commit 449ea22b392f956d4af1311d01e6ca647ebda976)
Replace the global swhash in the swap pager with a per-object trie to track
swap blocks assigned to the object's pages.
MFC r322970 (by alc):
Do not call vm_pager_page_unswapped() on the fast fault path.
MFC r322971 (by alc):
Update a couple of vm_object lock assertions in the swap pager.
MFC r323224:
In swp_pager_meta_build(), handle a race with another thread allocating a
swapblk for our index while the object lock was dropped.
MFC r323226:
Do not leak an empty swblk.
(cherry picked from commit 36d113490a64de94e4172f3d916e74d8eff5b7db)
Adjust interface of swapon_check_swzone() to its actual usage.
PR: 221356
(cherry picked from commit 2481224bb101ec60b11dc294c29ba3fbbc176659)
Make the swap_pager_full variable static.
PR: 221356
(cherry picked from commit ba21942ce28b39691547ca8cd966f6304b5ce025)
Remove unused function swap_pager_isswapped().
(cherry picked from commit c6e718d6cb8b7c73f74c2910fc47637f436573d2)
Increase the pageout cluster size to 32 pages.
Decouple the pageout cluster size from the size of the hash table entry
used by the swap pager for mapping (object, pindex) to a block on the
swap device(s), and keep the size of a hash table entry at its current
size.
Eliminate a pointless macro.
(cherry picked from commit 90fed17dafd94f9a34e74086f35e7e8a540e00a7)
Pages that are passed to swap_pager_putpages() should already be fully
dirty. Assert that they are fully dirty rather than redundantly calling
vm_page_dirty() on them.
(cherry picked from commit 804e94da8f1b60ea3d603f65d73c1fc9e6f6729f)
Eliminate an unused macro.
(cherry picked from commit 7c32a320e0086c6cf5712bd48c9e8fa6d4ca6d5c)
Add vm_page_alloc_after().
(cherry picked from commit 8e264f308c8b33afa7e707ce0f70254f4e1bea1b)
Start annotating global _padalign locks with __exclusive_cache_line.
While these locks are guaranteed not to share their respective cache lines,
their current placement leaves unnecessary holes in the lines which preceded
them. For instance, the annotation of vm_page_queue_free_mtx allows two
neighbouring cache lines (previously separated by the lock) to be collapsed
into one.
The annotation is only effective on architectures which have it implemented
in their linker script (currently only amd64). Thus locks are not converted
to their non-padaligned variants, so as not to affect the rest.
=============
Annotate global process locks with __exclusive_cache_line
=============
Annotate Giant with __exclusive_cache_line
=============
Annotate sysctlmemlock with __exclusive_cache_line.
(cherry picked from commit dc9eed165c25d9af290b93f577ad7ac9d7b3788c)
vm_page_array initialization improvements.
(cherry picked from commit 15927ea545dd5119e164043a24026155250f8a2b)
Allow for fictitious physical pages in vm_page_scan_contig().
(cherry picked from commit 332a8c368c824313c1bcac21a4ad1c73666818ae)
Check that a page which is freed as zeroed indeed has all-zero content.
(cherry picked from commit 0519574f8cf9e9258b0499d6f2833990b377c5d7)
In vm_page_free_phys_pglist(), do not take vm_page_queue_free_mtx if
there is nothing to do.
(cherry picked from commit d054fc982f42ac1c95e784cabcf25c437b0dc81c)
Generalize vm_page_ps_is_valid() to support testing other predicates on
the (super)page, renaming the function to vm_page_ps_test().
In vm_page_ps_test(), always check that the base pages within the specified
superpage all belong to the same object. To date, that check has not been
needed, but upcoming changes require it.
(cherry picked from commit 8df894b522e2199c482090bcc1064dadc3259a72)
Optimize vm_page_try_to_free(). Specifically, the call to pmap_remove_all()
can be avoided when the page's containing object has a reference count of
zero. (If the object has a reference count of zero, then none of its pages
can possibly be mapped.)
Address nearby style issues in vm_page_try_to_free(), and change its
return type to "bool".
Optimize vm_object_page_remove() by eliminating pointless calls to
pmap_remove_all(). If the object to which a page belongs has no
references, then that page cannot possibly be mapped.
(cherry picked from commit 2d2427db5b735ecdb6fe8ad9251f524b4260bb6a)
Speed up vm_page_array initialization.
(cherry picked from commit 7c7c98c4dc6d7946663f0050ac2155d81bc4542a)
Fix a logic error in the item size calculation for internal UMA zones.
(cherry picked from commit 81448270d4454329f3302889a4d99f3bbca26f4e)
Remove inline specifier from vm_page_free_wakeup(), do not
micro-manage compiler.
(cherry picked from commit c31a8a35798f25fb0758d0e64c1013c203f75ca9)
Split vm_page_free_toq().
(cherry picked from commit c8dd21ff3bde9b30fa86dd16c8dae3c2c34e1250)
Modify vm_page_grab_pages() to handle VM_ALLOC_NOWAIT, use it in
sendfile_swapin().
(cherry picked from commit 00ffd58e267b0466241a684db7dbfd7f2fecbf80)
Introduce vm_page_grab_pages(), which is intended to replace loops calling
vm_page_grab() on consecutive page indices. Besides simplifying the code
in the caller, vm_page_grab_pages() allows for batching optimizations.
For example, the current implementation replaces calls to vm_page_lookup()
on consecutive page indices by cheaper calls to vm_page_next().
(cherry picked from commit 9d710dfe3f1905122f3d9e3c84da8e4dc03363ee)
Add a vm_page_change_lock() helper.
(cherry picked from commit e44297aa7c8b20f74352986ad5c27fed648542cc)
Make vm_page_sunbusy() assert that the page is unlocked.
(cherry picked from commit 8a00dc568742c6a3e32ef33b446a660cefa790f1)
Optimize vm_object_madvise().
(cherry picked from commit 7093b6d4b52a9bc798ae8b86f7ef56f1d1fd2b03)
Restore layout of struct vm_map_entry.
Approved by: re (delphij)
Fix loop termination in vm_map_find_min().
Approved by: re (delphij)
Bump __FreeBSD_version. This is an MFS of stable/11 r320666.
MFC r320317:
Implement address space guards.
MFC r320338:
Remove stale part of the comment.
MFC r320339:
Correctly handle small MAP_STACK requests.
MFC r320344:
For now, allow mprotect(2) over the guards to succeed regardless of
the requested protection.
MFC r320430:
Treat the addr argument for mmap(2) request without MAP_FIXED flag as
a hint.
MFC r320560 (by alc):
Modify vm_map_growstack() to protect itself from the possibility of the
gap entry in the vm map being smaller than the sysctl-derived stack guard
size.
Approved by: re (delphij)
Omit v_cache_count when computing the number of free pages, since its
value is always 0.
Approved by: re (gjb, kib)
Do not try to unmark MAP_ENTRY_IN_TRANSITION marked by other thread.
Approved by: re (gjb)
Call pmap_copy() only for map entries which have the backing object
instantiated.
Approved by: re (delphij)
Assert that the protection of a new map entry is a subset of the max
protection.
Approved by: re (delphij)
Ignore the P_SYSTEM process flag, and do not request
VM_MAP_WIRE_SYSTEM mode when wiring the newly grown stack.
Approved by: re (marius)
Some minor improvements to vnode_pager_generic_putpages().
Approved by: re (marius)
r313186 changed how the size of the VM page array was calculated to be
less wasteful. For most systems, the amount of memory is divided by
the overhead required by each page (a page of data plus a struct vm_page)
to determine the maximum number of available pages. However, if the
remainder for the first non-available page was at least a page of data
(so that the only memory missing was a struct vm_page), this last page
was left in phys_avail[] but was not allocated an entry in the VM page
array. Handle this case by explicitly excluding the page from
phys_avail[].
Approved by: re (kib)
In r118390, the swap pager's approach to striping swap allocation over
multiple devices was changed. However, swapoff_one() was not fully and
correctly converted. In particular, with r118390's introduction of a per-
device blist, the maximum swap block size, "dmmax", became irrelevant to
swapoff_one()'s operation. Moreover, swapoff_one() was performing out-of-
range operations on the per-device blist that were silently ignored by
blist_fill().
This change corrects both of these problems with swapoff_one(), which will
allow us to potentially increase MAX_PAGEOUT_CLUSTER. Previously,
swapoff_one() would panic inside of blist_fill() if you increased
MAX_PAGEOUT_CLUSTER.
MFC r319001
After r118390, the variable "dmmax" was neither the correct stripe size
nor the correct maximum block size. Moreover, after r318995, it serves
no purpose except to provide information to user space through a
read-only sysctl.
This change eliminates the variable "dmmax" but retains the sysctl. It
also corrects the value returned by the sysctl.
MFC r319604
Halve the memory being internally allocated by the blist allocator. In
short, half of the memory that is allocated to implement the radix tree is
wasted because we did not change "u_daddr_t" to be a 64-bit unsigned int
when we changed "daddr_t" to be a 64-bit (signed) int. (See r96849 and
r96851.)
MFC r319612
When the function blist_fill() was added to the kernel in r107913, the swap
pager used a different scheme for striping the allocation of swap space
across multiple devices. And, although blist_fill() was intended to support
fill operations with large counts, the old striping scheme never performed a
fill larger than the stripe size. Consequently, the misplacement of a
sanity check in blst_meta_fill() went undetected. Now, moving forward in
time to r118390, a new scheme for striping was introduced that maintained a
blist allocator per device, but as noted in r318995, swapoff_one() was not
fully and correctly converted to the new scheme. This change completes what
was started in r318995 by fixing the underlying bug in blst_meta_fill() that
stops swapoff_one() from simply performing a single blist_fill() operation.
MFC r319627
Starting in r118390, swaponsomething() began to reserve the blocks at the
beginning of a swap area for a disk label. However, neither r118390 nor
r118544, which increased the reservation from one to two blocks, correctly
accounted for these blocks when updating the variable "swap_pager_avail".
This change corrects that error.
MFC r319655
Originally, this file could be compiled as a user-space application for
testing purposes. However, over the years, various changes to the kernel
have broken this feature. This revision applies some fixes to get user-
space compilation working again. There are no changes in this revision
to code that is used by the kernel.
Approved by: re (kib)
in r318716.
Note it is a direct commit to stable/11 because head removed support for
idle page zeroing in r305362.
PR: 219994
Reviewed by: markj
Approved by: re (gjb)
r315272:
Implement INHERIT_ZERO for minherit(2).
INHERIT_ZERO is an OpenBSD feature.
When a page is marked as such, it is zeroed upon fork().
This is intended for use in new arc4random(3) functions.
PR: 182610
Reviewed by: kib (earlier version)
Differential Revision: https://reviews.freebsd.org/D427
r315370:
The adj_free and max_free values of new_entry will be calculated and
assigned by subsequent vm_map_entry_link(), therefore, remove the
pointless copying.
Submitted by: alc
r308489, r308706:
Add PQ_LAUNDRY and remove PG_CACHED pages.
Add kern_mincore() helper for mincore() syscall.
Fix a race between vm_map_wire() and vm_map_protect().
Extract calculation of ioflags from the vm_pager_putpages flags into a
helper.
Some style fixes for vnode_pager_generic_putpages(), in the local
declaration block.
Use int instead of boolean_t for flags argument type in
vnode_pager_generic_putpages() prototype; change the argument name to
reflect that it is flags.
Use atop() instead of OFF_TO_IDX() for conversion of addresses or
address offsets, as intended.
MFC r315580 (by alc):
Simplify the logic for clipping the range returned by the pager to fit
within the map entry.
Use atop() rather than OFF_TO_IDX() on addresses.
Fix off-by-one in the vm_fault_populate() code.