Reported by: ia64 tinderbox
is now synchronized by a mutex, whereas access to user maps is still
synchronized by a lockmgr()-based lock. Why? No single type of lock,
including sx locks, meets the requirements of both types of vm map.
Sometimes we sleep while holding the lock on a user map. Thus, a
mutex isn't appropriate. On the other hand, both lockmgr()-based
and sx locks release Giant when a thread/process blocks during
contention for a lock. This could lead to a race condition in a legacy
driver (that relies on Giant for synchronization) if it attempts to
kmem_malloc() and fails to immediately obtain the lock. Fortunately,
we never sleep while holding a system map lock.
- Correct a cast.
- Introduce map_sleep_mtx and use it to replace Giant in
vm_map_unlock_and_wait() and vm_map_wakeup(). (Original
version by: tegge.)
- Add a mtx_destroy() to vm_object_collapse(). (This allows a bzero()
to migrate from _vm_object_allocate() to vm_object_zinit(), where it
will be performed less often.)
lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.
vm_page_flag_clear().
so happens that OBJPC_SYNC has the same value as VM_PAGER_PUT_SYNC so no
harm done. But fix it :-)
No operational changes.
MFC after: 1 day
comes along and flushes a file which has been mmap()'d SHARED/RW, with
dirty pages, it was flushing the underlying VM object asynchronously,
resulting in thousands of 8K writes. With this change the VM Object flushing
code will cluster dirty pages in 64K blocks.
Note that until the low memory deadlock issue is reviewed, it is not safe
to allow the pageout daemon to use this feature. Forced pageouts still
use fs-block-sized ops for the moment.
MFC after: 3 days
- Use VM_ALLOC_WIRED.
- Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
acquire the page queues lock.
- Acquire the page queues lock in vm_object_page_clean().
vm/vm_page.c revision 1.220.)
Submitted by: bde
sysctl code. This makes 'systat -vm 1's syscall count work again.
Submitted by: Michal Mertl <mime@traveller.cz>
Note: also slated for 5.0
at appropriate times. For the moment, the mutex is only used on
the kmem_object.
vm/vm_page.c, it is unused.
which incorporates page queue and field locking, is complete.
- Assert that the page queue lock rather than Giant is held in
vm_page_flag_set().
vm_page_flag_set().
- Replace vm_page_sleep_busy() with proper page queues locking
and vm_page_sleep_if_busy().
- Replace vm_page_sleep_busy() with proper page queues locking
and vm_page_sleep_if_busy().
skipping read-only pages, which can result in valuable non-text-related
data not getting dumped, the ELF loader and the dynamic loader now mark
read-only text pages NOCORE and the coredump code only checks (primarily) for
complete inaccessibility of the page or NOCORE being set.
Certain applications which map large amounts of read-only data will
produce much larger cores. A new sysctl has been added,
debug.elf_legacy_coredump, which will revert to the old behavior.
This commit represents collaborative work by all parties involved.
The PR contains a program demonstrating the problem.
PR: kern/45994
Submitted by: "Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
Reviewed by: jdp, dillon
MFC after: 7 days
around vm_page_lookup() and vm_page_free().
This should be considered highly experimental for the moment.
Submitted by: David Schultz <dschultz@uclink.Berkeley.EDU>
MFC after: 3 weeks
resource starvation, we clean up as much of the vmspace structure as we
can when the last process using it exits. The rest of the structure
is cleaned up when it is reaped. But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.
This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'. The normal reference count is decremented on exit
and vm_exitingcnt is incremented. vm_exitingcnt is decremented when the
process is reaped. When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.
MFC after: 3 weeks
the object (i.e., acquire Giant).
vm_object_page_remove().
vm_object_page_remove().
vm_page_remove(), and vm_page_free_toq().
of the vm_page structure. Make the style of the pmap_protect() calls
consistent.
Approved by: re (blanket)
of the vm_page structure. Nearby, remove an unnecessary semicolon and
return statement.
Approved by: re (blanket)
Approved by: re (blanket)
Approved by: re (blanket)
Approved by: re (blanket)
Approved by: re (blanket)
because it updates flags within the vm page.
Approved by: re (blanket)
to cover pmap_remove_all().
Approved by: re
Approved by: re
vm_pageout_page_free().
Approved by: re
Approved by: re
intended to be used by significant memory consumers so that they may drain
some of their caches.
Inspired by: phk
Approved by: re
Tested on: x86, alpha
Spotted by: jake
Reported by: grehan
use it directly.
in various revisions of vm/vm_map.c between 1.148 and 1.153.