- Remove the Giant requirement from vm_page_free_toq(). (Any locking
errors will be caught by vm_page_remove().)
This remedies a panic that occurred when a kmem_malloc(NOWAIT) call
performed without Giant failed to allocate the necessary pages.
Reported by: phk
- Weaken the assertion in vm_page_insert() to require Giant only if the
vm_object isn't locked.
Reported by: "Ilmar S. Habibulin" <ilmar@watson.org>
called without Giant; and obj_alloc() in turn calls vm_page_alloc()
without Giant. This causes an assertion failure in vm_page_alloc().
Fortunately, obj_alloc() is now MPSAFE. So, we need only clean up
some assertions.
- Weaken the assertion in vm_page_lookup() to require Giant only
if the vm_object isn't locked.
- Remove an assertion from vm_page_alloc() that duplicates a check
performed in vm_page_lookup().
In collaboration with: gallatin, jake, jeff
in rev 1.61 of pmap.c.
- Now that pmap_page_is_free() is empty and since it is just a hack for
the Alpha pmap, remove it.
where physical addresses are larger than virtual addresses, such as
i386 with PAE.
- Use this to represent physical addresses in the MI vm system and in the
i386 pmap code. This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
detection code, and to account for kvtop() returning vm_paddr_t instead
of u_long.
Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.
Sponsored by: DARPA, Network Associates Laboratories
Discussed with: re, phk (cdevsw change)
after mapping it. This makes it possible to determine if a physical
page has a backing vm_page or not.
with the page queue lock.
- Assert that the page queue lock is held in vm_page_free_wakeup().
- Assert that the page queues lock rather than Giant is held in
vm_page_hold().
counter outside the scope of existing locks.
- Eliminate a redundant clearing of vm_pageout_deficit.
The objective being to eliminate some cases of page queues locking.
(See, for example, vm/vm_fault.c revision 1.160.)
Reviewed by: tegge
(Also, pointed out by tegge that I changed vm_fault.c before changing
vm_page.c. Oops.)
expression.
requests when the number of free pages is below the reserved threshold.
Previously, VM_ALLOC_ZERO was only honored when the number of free pages
was above the reserved threshold. Honoring it in all cases generally
makes sense, does no harm, and simplifies the code.
cnt.v_wire_count.
locking of the kmem_object.
lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.
vm_page_flag_clear().
which incorporates page queue and field locking, is complete.
- Assert that the page queue lock rather than Giant is held in
vm_page_flag_set().
vm_page_remove(), and vm_page_free_toq().
Approved by: re
use it directly.
indirectly through vm_page_protect(). The one remaining page flag that
is updated by vm_page_protect() is already being updated by our various
pmap implementations.
Note: A later commit will similarly change the VM_PROT_READ case and
eliminate vm_page_protect().
is empty.
because it's no longer used. (See revision 1.215.)
- Fix a harmless bug: the number of vm_page structures allocated wasn't
properly adjusted when uma_bootstrap() was introduced. Consequently,
we were allocating 30 unused vm_page structures.
- Wrap a long line.
it is unused.
vm_page_alloc not to insert this page into an object. The pindex is
still used for colorization.
- Rework vm_page_select_* to accept a color instead of an object and
pindex to work with VM_PAGE_NOOBJ.
- Document other VM_ALLOC_ flags.
Reviewed by: peter, jake
a part of vm_page.h revision 1.87 and vm_page.c revision 1.167.)
on-write (COW) mechanism. (This mechanism is used by the zero-copy
TCP/IP implementation.)
- Extend the scope of the page queues lock in vm_fault()
to cover vm_page_cowfault().
- Modify vm_page_cowfault() to release the page queues lock
if it sleeps.
be no major change in performance from this change at this time but this
will allow other work to progress: Giant lock removal around VM system
in favor of per-object mutexes, ranged fsyncs, more optimal COMMIT rpc's for
NFS, partial filesystem syncs by the syncer, more optimal object flushing,
etc. Note that the buffer cache is already using a similar splay tree
mechanism.
Note that a good chunk of the old hash table code is still in the tree.
Alan or I will remove it prior to the release if the new code does not
introduce unsolvable bugs, else we can revert more easily.
Submitted by: alc (this is Alan's code)
Approved by: re
pmap_zero_page() and pmap_zero_page_area() were modified to accept
a struct vm_page * instead of a physical address, vm_page_zero_fill()
and vm_page_zero_fill_area() have served no purpose.
obsolete.)
flag. (This is the only place in the entire kernel where the PG_MAPPED
flag is tested. It will be removed soon.)
in vm_page_grab(). Also, replace the nearby tsleep() with an msleep()
on the page queues lock.
- v_vflag is protected by the vnode lock and is used when synchronization
with VOP calls is needed.
- v_iflag is protected by interlock and is used for dealing with vnode
management issues. These flags include X/O LOCK, FREE, DOOMED, etc.
- All accesses to v_iflag and v_vflag have either been locked or marked with
mp_fixme's.
- Many ASSERT_VOP_LOCKED calls have been added where the locking was not
clear.
- Many functions in vfs_subr.c were restructured to provide for stronger
locking.
Idea stolen from: BSD/OS
vm_page_alloc(VM_ALLOC_WIRED).
o Assert that the page queues lock is held in vm_page_deactivate().
o Assert that the page queues lock is held in vm_page_io_finish().
o Assert that the page queues lock is held in vm_page_io_start().
vm_page_sleep_busy(). vm_page_sleep_if_busy() uses the page
queues lock.
o Assert that the page queue lock is held in vm_page_dontneed().
in kern/vfs_bio.c are already locked.)
o Assert that the page queues lock is held in vm_page_try_to_cache().