Reviewed by: jeff

2032 bytes and has no measurable impact on performance.

Gcc has silently not been doing this for a long time.

gcc (quite rightly) unhappy. Move it earlier.

Use the macro for the maximum number of swap devices rather than caching
the constant in a variable.
Avoid a (now) pointless variable.
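
In code terms the cleanup amounts to something like the following sketch
(all identifiers here are hypothetical stand-ins, not the swap pager's
real names):

    #define MAX_SWAP_DEVICES 4          /* stand-in for the real macro */

    /* Before: the macro's value was cached in a variable ... */
    static int nswapdevices = MAX_SWAP_DEVICES;     /* the pointless copy */

    /* After: use the macro directly; the variable can go away. */
    void
    scan_swap_devices(void)
    {
            int i;

            for (i = 0; i < MAX_SWAP_DEVICES; i++) {
                    /* ... inspect swap device i ... */
            }
    }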

uma_zalloc) is called with exactly one of either M_WAITOK or M_NOWAIT and
that it is called with neither M_TRYWAIT nor M_DONTWAIT. Print a warning
if anything is wrong. Default to M_WAITOK if no flag is given. This is the
same test as in malloc(9).
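
A stand-alone sketch of that test (flag values stubbed out here; the
kernel takes them from its malloc(9) flag definitions):

    #include <stdio.h>

    #define M_NOWAIT   0x0001   /* stub values for illustration */
    #define M_WAITOK   0x0002
    #define M_TRYWAIT  0x0004   /* obsolete flags being screened out */
    #define M_DONTWAIT 0x0008

    static int
    check_alloc_flags(int flags)
    {
            int wait = flags & (M_WAITOK | M_NOWAIT);

            /* Exactly one of M_WAITOK and M_NOWAIT ... */
            if (wait == (M_WAITOK | M_NOWAIT))
                    printf("bad flags: M_WAITOK and M_NOWAIT are exclusive\n");
            /* ... and neither M_TRYWAIT nor M_DONTWAIT. */
            if (flags & (M_TRYWAIT | M_DONTWAIT))
                    printf("bad flags: M_TRYWAIT/M_DONTWAIT not accepted\n");
            /* Default to M_WAITOK if no wait behaviour was given. */
            if (wait == 0)
                    flags |= M_WAITOK;
            return (flags);
    }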

radix-tree code imposes, truncate the device instead of rejecting it.

the "toss the largest process" emergency handling) from vm_map.c to
swap_pager.c.
The quantity calculated depends strongly on the internals of the
swap_pager and by moving it, we no longer need to expose the
internal metrics of the swap_pager to the world.

paging space and how much of it is in use (in pages).
Use this interface from the Linuxolator instead of groping around in the
internals of the swap_pager.
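
The shape of such an accessor is presumably a small two-out-parameter
function along these lines (a sketch; the function and counter names are
illustrative, not taken from the source):

    static int swap_total_pages;        /* invented internal counters */
    static int swap_used_pages;

    /*
     * Report how much paging space is configured and how much is in
     * use, both in pages, without exposing the counters themselves.
     */
    void
    swap_pager_status(int *total, int *used)
    {
            *total = swap_total_pages;
            *used = swap_used_pages;
    }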

is not natural and needlessly exposes a lot of dirty laundry.
Move private interfaces between the two from swap_pager.h to swap_pager.c
and staticize as much as possible.
No functional change.

files, so that SWAP_META_PAGES does not vary either.
swap_pager.c ended up with a value of 16, everybody else 8. Go with
the 16 for now.
This should only have any effect in the "kill processes because we
are out of swap" scenario, where it will make some sort of estimate
of something more precise.

`firstaddr' from `v' if we know that the former equals zero.

- Change vm_pageout_object_deactivate_pages()'s first parameter from a
vm_map_t to a pmap_t.
- Change vm_pageout_object_deactivate_pages()'s and
vm_pageout_map_deactivate_pages()'s last parameter from a vm_pindex_t
to a long. Since the number of pages in an address space doesn't
require 64 bits on an i386, vm_pindex_t is overkill.
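
Together the two changes give the function roughly this prototype
(parameter names are assumptions; the typedefs are illustrative stubs
for the kernel's own):

    typedef struct pmap *pmap_t;            /* stubs for illustration */
    typedef struct vm_object *vm_object_t;

    /*
     * Before:
     * void vm_pageout_object_deactivate_pages(vm_map_t map,
     *         vm_object_t object, vm_pindex_t desired);
     *
     * After:
     */
    void vm_pageout_object_deactivate_pages(pmap_t pmap,
        vm_object_t object, long desired);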

order to avoid the overhead of later page faults. In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects. Only the device-backed case is really
machine-dependent, belonging in the pmap.
This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter(). On amd64 and i386, this commit only
amounts to code rearrangement. On alpha and ia64, the new machine
independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations. (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.) On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
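
If the new MI function follows the naming above, its declaration is
roughly this shape (the exact parameter list is an assumption, not taken
from the source; the typedefs are illustrative stubs):

    typedef struct vm_map *vm_map_t;        /* stubs for illustration */
    typedef struct vm_object *vm_object_t;
    typedef unsigned long vm_offset_t;
    typedef unsigned long vm_size_t;
    typedef unsigned long vm_pindex_t;

    /* Pre-fault the pages backing [addr, addr + size) from the object. */
    void vm_map_pmap_enter(vm_map_t map, vm_offset_t addr,
        vm_object_t object, vm_pindex_t pindex, vm_size_t size);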

is returned by vm_page_select_cache() to belong to the object that is
already locked by the caller to vm_page_alloc().

returning an error if the address is too high.

pmap_object_init_pt().

implementation of a largely MI pmap_object_init_pt() for vnode-backed
objects. pmap_enter_quick() is implemented via pmap_enter() on sparc64
and powerpc.
- Correct a mismatch between pmap_object_init_pt()'s prototype and its
various implementations. (I plan to keep pmap_object_init_pt() as
the MD hook for device-backed objects on i386 and amd64.)
- Correct an error in ia64's pmap_enter_quick() and adjust its interface
to match the other versions.

Discussed with: marcel
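
The MI vnode-backed implementation built on pmap_enter_quick() plausibly
reduces to a loop of this shape (kernel-style sketch: it leans on real VM
primitives such as vm_page_lookup(), atop()/ptoa() and pmap_enter_quick(),
but the surrounding function is invented and the argument lists are
simplified):

    /*
     * Walk the requested pindex range and enter each resident page
     * with the cheap, fault-free primitive.
     */
    static void
    object_init_pt_sketch(pmap_t pmap, vm_offset_t addr,
        vm_object_t object, vm_pindex_t pindex, vm_size_t size)
    {
            vm_page_t m;
            vm_pindex_t i;

            for (i = pindex; i < pindex + atop(size); i++) {
                    m = vm_page_lookup(object, i);  /* resident page, if any */
                    if (m != NULL)
                            pmap_enter_quick(pmap, addr + ptoa(i - pindex), m);
            }
    }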

color in vm_page_alloc(). (This also has small performance benefits.)
- Eliminate vm_page_select_free(); vm_page_alloc() might as well
call vm_pageq_find() directly.
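
Inside vm_page_alloc(), removing the wrapper leaves a direct call of
roughly this form (a one-line sketch; the exact arguments in the real
code may differ):

    /*
     * Pull a free page of the computed color straight off the free
     * queues, preferring a pre-zeroed page when the caller asked for one.
     */
    m = vm_pageq_find(PQ_FREE, color, (req & VM_ALLOC_ZERO) != 0);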

shadow_count can now rely on its mutex for synchronization. Remove one use
of Giant from vm_map_insert().

page. Add the ability to lock the containing object to those conditions.

a NULL pointer; remove some now unused code.

of pcpu locks. This makes uma_zone somewhat smaller (by (LOCKNAME_LEN *
sizeof(char) + sizeof(struct mtx) * maxcpu) bytes, to be exact).
No objections from jeff.
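
Structurally the change looks something like this (userspace stand-ins;
all field and array names here are invented):

    #define MAXCPU 32                   /* stand-in for the kernel's maxcpu */

    struct mtx { int mtx_dummy; };      /* stub for struct mtx */

    struct uma_cache {
            /* struct mtx uc_lock;      -- per-CPU lock removed ... */
            int uc_buckets;             /* ... leaving only the cache data */
    };

    struct uma_zone {
            struct uma_cache uz_cpu[MAXCPU];  /* one lock smaller per CPU */
    };

    static struct mtx pcpu_locks[MAXCPU];     /* the single shared set */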

certain free paths.

releasing the lock only if we are about to sleep (e.g., vm_pager_get_pages()
or vm_pager_has_pages()). If we sleep, we have marked the vm object with
the paging-in-progress flag.
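
At a sleep point the convention presumably looks like this (a
fragment-style sketch using the VM object-lock and paging-in-progress
primitives of this era, not a literal excerpt):

    VM_OBJECT_LOCK_ASSERT(object, MA_OWNED); /* caller entered with the lock */

    vm_object_pip_add(object, 1);    /* mark paging-in-progress first ... */
    VM_OBJECT_UNLOCK(object);        /* ... only then drop the lock to sleep */
    /* ... sleepable work, e.g. waiting for pager I/O ... */
    VM_OBJECT_LOCK(object);
    vm_object_pip_wakeup(object);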

Several of the subtypes have an associated vnode which is used for
stuff like the f*() functions.
By giving the vnode a separate field, a number of checks for the specific
subtype can be replaced simply with a check for f_vnode != NULL, and
we can later free f_data up for subtype-specific use.
At this point in time, f_data still points to the vnode, so any code I
might have overlooked will still work.
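
An illustration of the simplified check (fragment-style sketch; fp is a
struct file pointer and DTYPE_VNODE the vnode subtype constant):

    /* Before: test the descriptor subtype, then dig the vnode out of f_data. */
    if (fp->f_type == DTYPE_VNODE)
            vp = fp->f_data;

    /* After: a single test covers every subtype that carries a vnode. */
    if (fp->f_vnode != NULL)
            vp = fp->f_vnode;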

progress on the next object before clearing it on the current object.

deal with the case where we need to sleep on a busy page with two vm object
locks held.

vm_fault(); remove a pointless assignment statement from that path.

of indentation a bit.

- Assert that the vm object and its backing vm object are both locked in
vm_object_qcollapse().
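
With the object-lock macros of this era, the asserts would take roughly
this form (sketch):

    VM_OBJECT_LOCK_ASSERT(object, MA_OWNED);
    VM_OBJECT_LOCK_ASSERT(object->backing_object, MA_OWNED);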

the vm map lock. What's really needed is vm object locking, which
is (for the moment) provided by Giant.
Reviewed by: tegge

used by the zero-copy sockets implementation.
Reviewed by: gallatin

Remove pointless initialization of b_spc field, which now no longer
exists.

to check that the buffer points to the correct vnode.
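
Such a check is typically a one-line kernel assertion of this shape
(sketch; bp is the buffer, vp the vnode it should belong to):

    KASSERT(bp->b_vp == vp,
        ("buffer %p does not belong to vnode %p", bp, vp));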