Clear the MAP_WIREFUTURE flag on the vm map in exec_new_vmspace() when it
recycles the current vm space. Otherwise, an mlockall(MCL_FUTURE) could
still be in effect on the process after an execve(2), which violates the
specification for mlockall(2).
It's pointless for vm_map_stack() to check the MEMLOCK limit. It will
never be asked to wire the stack. Moreover, it doesn't even implement
wiring of the stack.
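
A userland sketch of the guarantee this restores (illustrative; any exec'ed
image would do): after mlockall(MCL_FUTURE), the new image must start with
the flag cleared, so mappings it creates are not implicitly wired.

    #include <sys/mman.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* Wire all future mappings of this process. */
            if (mlockall(MCL_FUTURE) == -1)
                    err(1, "mlockall");
            /*
             * Per mlockall(2), MCL_FUTURE must not survive execve();
             * the new image starts without implicit wiring.
             */
            execl("/usr/bin/true", "true", (char *)NULL);
            err(1, "execl");
    }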
Style.
Do not try to unmark MAP_ENTRY_IN_TRANSITION when it was marked by another thread.
Call pmap_copy() only for map entries which have the backing object
instantiated.
Assert that the protection of a new map entry is a subset of the max
protection.
r315272:
Implement INHERIT_ZERO for minherit(2).
INHERIT_ZERO is an OpenBSD feature: when a mapping is marked as such,
its pages are provided zero-filled to the child upon fork().
This will be used by the new arc4random(3) functions.
PR: 182610
Reviewed by: kib (earlier version)
Differential Revision: https://reviews.freebsd.org/D427
r315370:
The adj_free and max_free values of new_entry will be calculated and
assigned by the subsequent vm_map_entry_link(), so remove the
pointless copying.
Submitted by: alc
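
A minimal userland sketch of the semantics (error handling trimmed):
pages in a region marked INHERIT_ZERO read back as zeroes in the child.

    #include <sys/mman.h>
    #include <assert.h>
    #include <err.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            size_t len = getpagesize();
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);

            if (p == MAP_FAILED)
                    err(1, "mmap");
            memset(p, 0xa5, len);
            if (minherit(p, len, INHERIT_ZERO) == -1)
                    err(1, "minherit");
            if (fork() == 0) {
                    /* The child sees freshly zeroed pages, not 0xa5. */
                    assert(p[0] == 0);
                    _exit(0);
            }
            return (0);
    }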
This allows processes to be identified by PID as well as by a pointer address.
Tidy up the early parts of vm_map_insert().
MFC r267645 (by alc):
When MAP_STACK_GROWS_{DOWN,UP} are passed to vm_map_insert() set the
corresponding flag(s) in the new map entry.
Pass MAP_STACK_GROWS_DOWN to vm_map_insert() from vm_map_growstack() when
extending the stack in the downward direction.
MFC r267850 (by alc):
Place the check that blocks map entry coalescing on stack entries in
vm_map_simplify_entry().
MFC r267917 (by alc):
Delay the call to crhold() in vm_map_insert() until we know that we won't
have to undo it by calling crfree().
Eliminate an unnecessary variable from vm_map_insert().
MFC r311014:
Style fixes for vm_map_insert().
Tested by: pho
Style.
There is no reason in the current kernel to disallow write access to
the COW wired entry if the entry permissions allow it. Remove the check.
Do not pretend that vm_fault(9) supports unwiring the address.
Account for the main process stack being one page below the highest
user address when the ABI uses a shared page.
Add kern.racct.enable tunable and RACCT_DISABLED config option.
The point of this is to be able to add RACCT (with RACCT_DISABLED)
to GENERIC, to avoid having to rebuild the kernel to use rctl(8).
MFC r282901:
Build GENERIC with RACCT/RCTL support by default. Note that it still
needs to be enabled by adding "kern.racct.enable=1" to /boot/loader.conf.
Note that those two are MFCed together because the latter one changes the
name of the RACCT_DISABLED option to RACCT_DEFAULT_TO_DISABLED. Should have
committed the renaming separately...
Relnotes: yes
Sponsored by: The FreeBSD Foundation
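
For example, on such a kernel (the rctl(8) rule below is illustrative):

    # /boot/loader.conf
    kern.racct.enable=1

    # After reboot, rctl(8) rules work as usual, e.g.:
    # rctl -a user:www:memoryuse:deny=512m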
vmspace_release() may sleep if the last reference is being released,
so add a WITNESS_WARN() to catch cases where it is called with a
non-sleepable lock held.
MFC after: 1 month
Sponsored by: Sandvine Inc.
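
The check is a one-liner of roughly this shape (the message string here is
illustrative, not the committed text):

    /*
     * Dropping the last reference may sleep, so complain (under
     * WITNESS) if a non-sleepable lock is held across the call.
     */
    WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
        "vmspace_release() may sleep");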
Avoid calling vm_map_pmap_enter() for MADV_WILLNEED on a wired
entry; the pages must already be mapped.
Approved by: re (gjb)
wired region. Rework the handling of unwire to do it in batch,
both at the pmap and object level.
All commits below are by alc.
MFC r268327:
Introduce pmap_unwire().
MFC r268591:
Implement pmap_unwire() for powerpc.
MFC r268776:
Implement pmap_unwire() for arm.
MFC r268806:
pmap_unwire(9) man page.
MFC r269134:
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap. This fixes a leak
of wired pages when unwiring a region that was mapped with no access
allowed.
MFC r269339:
In the implementation of the new function pmap_unwire(), the call to
MOEA64_PVO_TO_PTE() must be performed before any changes are made to the
PVO. Otherwise, MOEA64_PVO_TO_PTE() will panic.
MFC r269365:
Correct a long-standing problem in moea{,64}_pvo_enter() that was revealed
by the combination of r268591 and r269134: When we attempt to add the
wired attribute to an existing mapping, moea{,64}_pvo_enter() do nothing.
(They only set the wired attribute on newly created mappings.)
MFC r269433:
Handle wiring failures in vm_map_wire() with the new functions
pmap_unwire() and vm_object_unwire().
Retire vm_fault_{un,}wire(), since they are no longer used.
MFC r269438:
Rewrite a loop in vm_map_wire() so that gcc doesn't think that the variable
"rv" is uninitialized.
MFC r269485:
Retire pmap_change_wiring().
Reviewed by: alc
Add a page size field to struct vm_page.
Approved by: alc
Assert that the new entry is inserted into the right location in the
map entries list, and that it does not overlap with the previous and
next entries.
Add MAP_EXCL flag for mmap(2).
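
Usage sketch: together with MAP_FIXED, MAP_EXCL makes mmap(2) fail
instead of silently replacing an existing mapping at the given address.

    #include <sys/mman.h>

    /*
     * Map 'len' bytes exactly at 'hint', failing (MAP_FAILED) rather
     * than clobbering anything already mapped in that range.
     */
    static void *
    map_exact(void *hint, size_t len)
    {
            return (mmap(hint, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE | MAP_FIXED | MAP_EXCL, -1, 0));
    }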
Use correct names for the flags.
Make mmap(MAP_STACK) search for the available address space.
MFC r267497 (by alc):
Use local variable instead of sgrowsiz.
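
After this change a grow-down stack region can be requested without
supplying a hint; a sketch:

    #include <sys/mman.h>

    static void *
    alloc_stack(size_t len)
    {
            /* addr == NULL: the kernel now searches for free space. */
            return (mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_STACK | MAP_ANON | MAP_PRIVATE, -1, 0));
    }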
Remove the assert, which can be triggered from userspace.
With the new-and-improved vm_fault_copy_entry() (r265843), we can always
avoid soft page faults when adding write access to user wired entries in
vm_map_protect(). Previously, we only avoided the soft page fault when
the underlying pages were copy-on-write. In other words, we avoided the
page faults that might sleep on page allocation, but not the trivial
page faults to update the physical map.
On a fork allow read-only wired pages to be copy-on-write shared between
the parent and child processes. Previously, we copied these pages even
though they are read only. However, the reason for copying them is
historical and no longer exists. In recent times, vm_map_protect() has
developed the ability to copy pages when write access is added to wired
copy-on-write pages. So, in this case, copy-on-write sharing of wired
pages is not to be feared. It is not going to lead to copy-on-write
faults on wired memory.
In execve(2), postpone freeing the old vmspace until the threads are
resumed and exited.
About 9% of the pmap_protect() calls being performed by
vm_map_copy_entry() are unnecessary.
Eliminate the unnecessary calls.
When printing the map with the ddb 'show procvm' command, do not dump
page queues for the backing objects.
Print the entry address in addition to the object.
Initialize the vm_map_entry member wiring_thread on map entry creation.
Do not coalesce stack entries. Pass the MAP_STACK_GROWS_DOWN and
MAP_STACK_GROWS_UP flags to vm_map_insert() from vm_map_stack().
Check for zero-length requests and treat them as always successful,
without performing any action on the address space.
Add assertions to cover all places in the wiring and unwiring code
where MAP_ENTRY_IN_TRANSITION is set or cleared.
point in defining a fini function for these zones.
Reviewed by: kib
Approved by: re (glebius)
Sponsored by: EMC / Isilon Storage Division
- add fields to 'struct pmap' that are required to manage nested page tables.
- add a parameter to 'vmspace_alloc()' that can be used to override the
default pmap initialization routine 'pmap_pinit()'.
These changes are pushed ahead of the remaining changes in 'bhyve_npt_pmap'
in anticipation of the upcoming KBI freeze for 10.0.
Reviewed by: kib@, alc@
Approved by: re (glebius)
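
The resulting interface looks roughly like this (sketch; the authoritative
prototype is in the tree, and 'ept_pinit' below is a hypothetical override):

    /* A NULL 'pinit' selects the default pmap_pinit(). */
    struct vmspace *vmspace_alloc(vm_offset_t min, vm_offset_t max,
        pmap_pinit_t pinit);

    /* bhyve can then install an EPT-aware pmap initializer: */
    struct vmspace *vs = vmspace_alloc(0, VM_MAXUSER_ADDRESS, ept_pinit);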
an address in the first 2GB of the process's address space. This flag should
have the same semantics as it does on Linux.
To facilitate this, add a new parameter to vm_map_find() that specifies an
optional maximum virtual address. While here, fix several callers of
vm_map_find() to use a VMFS_* constant for the findspace argument instead of
TRUE and FALSE.
Reviewed by: alc
Approved by: re (kib)
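
Userland sketch of the new flag; the assertion reflects the 2GB bound
described above.

    #include <sys/mman.h>
    #include <assert.h>
    #include <stdint.h>

    int
    main(void)
    {
            size_t len = 1 << 20;
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE | MAP_32BIT, -1, 0);

            assert(p != MAP_FAILED);
            /* The mapping lands in the low 2GB of the address space. */
            assert((uintptr_t)p + len <= (1UL << 31));
            return (0);
    }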
MADV_DONTNEED) and madvise(..., MADV_FREE). Specifically, introduce a new
pmap function, pmap_advise(), that operates on a range of virtual addresses
within the specified pmap, allowing for a more efficient implementation of
MADV_DONTNEED and MADV_FREE. Previously, the implementation of
MADV_DONTNEED and MADV_FREE relied on per-page pmap operations, such as
pmap_clear_reference(). Intuitively, the problem with this implementation
is that the pmap-level locks are acquired and released and the page table
traversed repeatedly, once for each resident page in the range
that was specified to madvise(2). A more subtle flaw with the previous
implementation is that pmap_clear_reference() would clear the reference bit
on all mappings to the specified page, not just the mapping in the range
specified to madvise(2).
Since our malloc(3) makes heavy use of madvise(2), this change can have a
measurable impact. For example, the system time for completing a parallel
"buildworld" on a 6-core amd64 machine was reduced by about 1.5% to 2.0%.
Note: This change only contains pmap_advise() implementations for a subset
of our supported architectures. I will commit implementations for the
remaining architectures after further testing. For now, a stub function is
sufficient because of the advisory nature of pmap_advise().
Discussed with: jeff, jhb, kib
Tested by: pho (i386), marcel (ia64)
Sponsored by: EMC / Isilon Storage Division
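
The new pmap(9) entry point, roughly:

    /*
     * Apply madvise(2)-style advice (e.g. MADV_DONTNEED, MADV_FREE)
     * to the virtual address range [sva, eva) in a single pass over
     * the page table, rather than one pmap call per resident page.
     */
    void pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
        int advice);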
which is part of struct vmspace, allocated from the UMA_ZONE_NOFREE
zone. Initialize the pmap lock in the vmspace zone init function, and
remove pmap lock initialization and destruction from pmap_pinit() and
pmap_release().
Suggested and reviewed by: alc (previous version)
Tested by: pho
Sponsored by: The FreeBSD Foundation
address alignment of mappings.
- MAP_ALIGNED(n) requests a mapping aligned on a boundary of (1 << n).
Requests for n >= the number of bits in a pointer, or for an alignment
smaller than the page size, fail with EINVAL. This matches the API
provided by NetBSD.
- MAP_ALIGNED_SUPER is a special case of MAP_ALIGNED. It can be used
to optimize the chances of using large pages. By default it will align
the mapping on a large page boundary (the system is free to choose any
large page size to align to that seems best for the mapping request).
However, if the object being mapped is already using large pages, then
it will align the virtual mapping to match the existing large pages in
the object instead.
- Internally, VMFS_ALIGNED_SPACE is now renamed to VMFS_SUPER_SPACE, and
VMFS_ALIGNED_SPACE(n) is repurposed for specifying a specific alignment.
MAP_ALIGNED(n) maps to using VMFS_ALIGNED_SPACE(n), while
MAP_ALIGNED_SUPER maps to VMFS_SUPER_SPACE.
- mmap() of a device object now uses VMFS_OPTIMAL_SPACE rather than
explicitly using VMFS_SUPER_SPACE. All device objects are forced to
use a specific color on creation, so VMFS_OPTIMAL_SPACE is effectively
equivalent.
Reviewed by: alc
MFC after: 1 month
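
Userland sketch: requesting a 2MB-aligned anonymous mapping
(n = 21, since 1 << 21 is 2MB):

    #include <sys/mman.h>
    #include <assert.h>
    #include <stdint.h>

    int
    main(void)
    {
            size_t len = 4 * 1024 * 1024;
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE | MAP_ALIGNED(21), -1, 0);

            assert(p != MAP_FAILED);
            assert(((uintptr_t)p & ((1UL << 21) - 1)) == 0);
            return (0);
    }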
transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_*
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
Reviewed by: alc
Tested by: pho
Sponsored by: EMC / Isilon Storage Division
locks don't accidentally appear to have been already
initialized.
In particular, this fixes a consistent kernel crash on
armv6 with:
panic: lock "vm map (user)" 0xc09cc050 already initialized
that appeared with r251709.
PR: arm/180820
- Add a new address space allocation method (VMFS_OPTIMAL_SPACE) for
vm_map_find() that will try to alter the alignment of a mapping to match
any existing superpage mappings of the object being mapped. If no
suitable address range is found with the necessary alignment,
vm_map_find() will fall back to using the simple first-fit strategy
(VMFS_ANY_SPACE).
- Change mmap() without MAP_FIXED, shmat(), and the GEM mapping ioctl to
use VMFS_OPTIMAL_SPACE instead of VMFS_ANY_SPACE.
Reviewed by: alc (earlier version)
MFC after: 2 weeks
parallel creation of the map entries, e.g. by mmap() or stack growing.
It also breaks when another entry is wired in parallel.
vm_map_wire() iterates over the map entries in the region and assumes
that the map entries it finds were already marked as in transition,
and that any entry so marked was marked by the current invocation of
vm_map_wire(). This is not true for new entries in the holes.
Add the thread owner of the MAP_ENTRY_IN_TRANSITION flag to struct
vm_map_entry. In vm_map_wire() and vm_map_unwire(), only process the
entries whose transition owner is the current thread.
Reported and tested by: pho
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
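
The ownership test in the wiring loops then has roughly this shape
(simplified sketch, not the committed code):

    /*
     * Only the thread that marked an entry as in transition may
     * process it and clear the flag; skip entries owned by others.
     */
    if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0 &&
        entry->wiring_thread != curthread)
            continue;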
to a memory-mapped file in the traced process's address space
even if neither the traced process nor the tracing process had
write access to that file.
Security: CVE-2013-2171
Security: FreeBSD-SA-13:06.mmap
Approved by: so
o Relax locking assertions for pmap_enter_object() and add them also
to architectures that currently don't have any
o Introduce VM_OBJECT_LOCK_DOWNGRADE() which is basically a downgrade
operation on the per-object rwlock
o Use all the mechanisms above to make vm_map_pmap_enter() work,
most of the time, with only read locks.
Sponsored by: EMC / Isilon storage division
Reviewed by: alc
with the MAP_ENTRY_VN_WRITECNT flag:
- Move the assertion that verifies the state of the v_writecount and
vnp.writecount, under the block where the object is locked.
- Check that the object type is OBJT_VNODE before asserting.
Reported by: avg
Reviewed by: alc
MFC after: 1 week
their "write" versions.
Sponsored by: EMC / Isilon storage division
* VM_OBJECT_LOCK and VM_OBJECT_UNLOCK are mapped to write operations
* VM_OBJECT_SLEEP() is introduced as a general purpose primitive to
get a sleep operation using a VM_OBJECT_LOCK() as protection
* The approach must bear with vm_pager.h namespace pollution, so many
files need to include rwlock.h directly
Reviewed by: alc
Approved by: kib (mentor)
MFC after: 1 week
The GENERIC kernel size is reduced by 16 bytes and the RACCT kernel by 336 bytes.
Suggested by: alc
Reviewed by: alc
Approved by: kib (mentor)
MFC after: 1 week
- Add sysctl vm.old_mlock which may turn such accounting off.
Reviewed by: avg, trasz
Approved by: kib (mentor)
MFC after: 1 week