| Commit message | Author | Age | Files | Lines |

for accessing a vm object's shadows.

changes are required.)
- Remove special-case macros for kmem object locking. They are
no longer used.

(&kmem_object_store), respectively. This allows the address of these
objects to be resolved at link-time rather than run-time.

to a LIST.
Approved by: re (rwatson)

to avoid false reports of lock-order reversal with a system map mutex.
Approved by: re (jhb)

trustworthy for vnode-backed objects.
- Restore the old behavior of vm_object_page_remove() when the end
of the given range is zero. Add a comment to vm_object_page_remove()
regarding this behavior.
Reported by: iedowse

- Convert some dead code in vm_object_reference() into a comment.

- Avoid repeatedly mtx_init()ing and mtx_destroy()ing the vm_object's lock
using UMA's uminit callback, in this case, vm_object_zinit().
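The pattern above can be sketched in userland C. This is a hypothetical illustration, not FreeBSD's UMA code: an object cache whose one-time init hook (standing in for UMA's uminit / vm_object_zinit()) initializes the lock once, so objects recycled through the free list skip the repeated mtx_init()/mtx_destroy() pair. All names below are made up for the sketch.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical object cache illustrating the uminit idea: the lock is
 * initialized once, when the item first enters the cache, and survives
 * free/alloc cycles instead of being torn down and rebuilt each time. */
struct obj {
	pthread_mutex_t lock;	/* set up once by obj_zinit() */
	int initialized;
};

static struct obj *free_list[16];
static int nfree;

static void
obj_zinit(struct obj *o)	/* analogous role to vm_object_zinit() */
{
	pthread_mutex_init(&o->lock, NULL);
	o->initialized = 1;
}

static struct obj *
obj_alloc(void)
{
	if (nfree > 0)
		return (free_list[--nfree]);	/* lock already valid */
	struct obj *o = calloc(1, sizeof(*o));
	if (o != NULL)
		obj_zinit(o);			/* one-time init path */
	return (o);
}

static void
obj_free(struct obj *o)
{
	if (nfree < 16)
		free_list[nfree++] = o;	/* no mutex destroy on this path */
	else
		free(o);		/* (a fini hook would destroy here) */
}
```

A recycled allocation comes back with a ready-to-use lock, which is the saving the commit describes.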

- In vm_object_deallocate(), lock the child when removing the parent
from the child's shadow list.

(2) remove a pointless assertion, and (3) make a trivial change to a
comment.

- Eliminate an odd, special-case feature:
if start == end == 0 then all pages are removed. Only one caller
used this feature and that caller can trivially pass the object's
size.
- Assert that the vm_object is locked on entry; don't bother testing
for a NULL vm_object.
- Style: Fix lines that are longer than 80 characters.
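The eliminated special case can be shown with a small hypothetical before/after sketch (toy names, not the real vm_object_page_remove() code): rather than treating start == end == 0 as "remove everything", the one caller that relied on the sentinel now passes the object's size explicitly.

```c
#include <assert.h>

/* Toy stand-in for a pager object, for illustration only. */
struct toy_object {
	int size;	/* page count */
	int removed;	/* pages removed so far */
};

/* After the change: no sentinel; [start, end) is always literal. */
static void
toy_page_remove(struct toy_object *obj, int start, int end)
{
	obj->removed += end - start;
}

/* The former sentinel caller now states the full range itself. */
static void
toy_remove_all(struct toy_object *obj)
{
	toy_page_remove(obj, 0, obj->size);
}
```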

- Make vm_object_pip_sleep() static.
- Lock the vm_object when performing vm_object_pip_wait().

swap_pager_freespace().

- Add a parameter to vm_pageout_flush() that tells vm_pageout_flush()
whether its caller has locked the vm_object. (This is a temporary
measure to bootstrap vm_object locking.)

vm_object_pip_add() and vm_object_pip_wakeup().
- Remove GIANT_REQUIRED from vm_object_pip_subtract().
- Lock the vm_object when performing vm_object_page_remove().

vm_object_pip_wakeup().

- Assert that the vm_object lock is held in vm_object_pip_subtract().

- Assert that the vm_object lock is held in vm_object_pip_wakeupn().
- Add a new macro VM_OBJECT_LOCK_ASSERT().

without Giant held.

where physical addresses are larger than virtual addresses, such as i386s
with PAE.
- Use this to represent physical addresses in the MI vm system and in the
i386 pmap code. This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
detection code, and due to kvtop returning vm_paddr_t instead of u_long.
Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.
Sponsored by: DARPA, Network Associates Laboratories
Discussed with: re, phk (cdevsw change)
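The printf fixes mentioned above can be illustrated with a hedged userland sketch: once a physical address can exceed 4G while long is 32-bit, formatting must go through uintmax_t and the %j length modifier. The 64-bit typedef here is for illustration only; as the commit notes, vm_paddr_t was still the same as vm_offset_t on the platforms supported at the time, and format_paddr() is a made-up helper.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the kernel's vm_paddr_t: on PAE i386 a
 * physical address can exceed the 32-bit virtual address range. */
typedef uint64_t vm_paddr_t;

/* Format a physical address by casting to uintmax_t and using %jx,
 * so the format string works regardless of vm_paddr_t's width. */
static int
format_paddr(char *buf, size_t len, vm_paddr_t pa)
{
	return (snprintf(buf, len, "0x%jx", (uintmax_t)pa));
}
```

A plain %lx would truncate any address above 4G on an ILP32 platform; the uintmax_t cast avoids that without a per-platform format string.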

%j in printfs, so put a nested include in <sys/systm.h> where the printf
prototype lives and save everybody else the trouble.

Discussed on: arch@

Approved by: trb

two cases that existed before for performance optimization purposes can
be reduced to one.

Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.

(This procedure needs further work, but this change is sufficient for
locking the kmem_object.)

kmem_object without Giant. In that case, assert that the kmem_object's
mutex is held.

especially in troff files.

- Add a mtx_destroy() to vm_object_collapse(). (This allows a bzero()
to migrate from _vm_object_allocate() to vm_object_zinit(), where it
will be performed less often.)

lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.

comes along and flushes a file which has been mmap()'d SHARED/RW, with
dirty pages, it was flushing the underlying VM object asynchronously,
resulting in thousands of 8K writes. With this change the VM Object flushing
code will cluster dirty pages in 64K blocks.
Note that until the low memory deadlock issue is reviewed, it is not safe
to allow the pageout daemon to use this feature. Forced pageouts still
use fs block size'd ops for the moment.
MFC after: 3 days
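The clustering described above can be illustrated with a small hypothetical userland helper (not the committed vm_object_page_clean()/vm_pageout_flush() code): walk a sorted list of dirty page indices and emit maximal contiguous runs, capped at 64K worth of 4K pages, so one large write replaces many small ones.

```c
#include <assert.h>

#define PAGE_SIZE	4096
#define CLUSTER_MAX	(65536 / PAGE_SIZE)	/* 16 pages == 64K */

/* Hypothetical sketch: given a sorted array of dirty page indices,
 * store each cluster's length in lens[] and return the cluster count.
 * A run ends at a gap in the indices or at the 64K cap, whichever
 * comes first. */
static int
cluster_dirty_pages(const int *pages, int npages, int *lens)
{
	int nclusters = 0;
	int i = 0;

	while (i < npages) {
		int run = 1;

		while (i + run < npages &&
		    pages[i + run] == pages[i + run - 1] + 1 &&
		    run < CLUSTER_MAX)
			run++;
		lens[nclusters++] = run;
		i += run;
	}
	return (nclusters);
}
```

Twenty contiguous dirty pages become two writes (16 pages, then 4) instead of twenty page-sized ones, which is the effect the commit describes for mmap()'d SHARED/RW files.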

acquire the page queues lock.
- Acquire the page queues lock in vm_object_page_clean().

at appropriate times. For the moment, the mutex is only used on
the kmem_object.

vm/vm_page.c, it is unused.