Commit messages
|
or user vm_maps. In accordance with the standards for munlock(2),
and in contrast to vm_map_user_pageable(), this implementation does not
allow holes in the specified region. This implementation uses the
"in transition" flag described below.
o Introduce a new flag, "in transition," to the vm_map_entry.
Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
this flag by deallocating in-transition vm_map_entrys, allowing
the vm_map lock to be safely released in vm_map_unwire() and (the
forthcoming) vm_map_wire().
o Modify vm_map_simplify_entry() to respect the in-transition flag.
In collaboration with: tegge
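A minimal userland sketch of the marking idea described above, with a pthread mutex standing in for the vm_map lock; all names here (map_entry, in_transition, map_unwire_entry) are illustrative and this is not the actual FreeBSD vm_map code:

    #include <pthread.h>
    #include <stdbool.h>

    /* Hypothetical entry: only the fields needed to show the pattern. */
    struct map_entry {
        bool in_transition;     /* set while another thread operates on it */
        /* start, end, wired_count, ... */
    };

    struct map {
        pthread_mutex_t lock;
    };

    static void
    map_unwire_entry(struct map *map, struct map_entry *e)
    {
        pthread_mutex_lock(&map->lock);
        e->in_transition = true;        /* claim the entry */
        pthread_mutex_unlock(&map->lock);

        /*
         * The slow work (e.g. unwiring pages) runs without the map lock
         * held.  Concurrent deleters and simplifiers must skip or defer
         * entries that have in_transition set, so the entry cannot be
         * freed or merged away underneath this thread.
         */

        pthread_mutex_lock(&map->lock);
        e->in_transition = false;       /* release the claim */
        pthread_mutex_unlock(&map->lock);
    }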
|
Submitted by: Mark Santcroos <marks@ripe.net>
|
in obj_alloc.
|
to vm_object_split(). Its interface should still be changed
to resemble vm_object_shadow().
|
declaration that shadows another.
Note: This function should really be vm_object_split(), not vm_map_split().
Reviewed by: md5
|
option ENABLE_VFS_IOOPT. Unless this option is in effect,
vm_object_pmap_copy_1() is not used.
|
vm_map_create(), and vm_map_submap().
o Make further use of a local variable in vm_map_entry_splay()
that caches a reference to one of a vm_map_entry's children.
(This reduces code size somewhat.)
o Revert a part of revision 1.66, deinlining vmspace_pmap().
(This function is MPSAFE.)
|
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
actually increases the kernel's size.
o Make vm_map_entry_set_behavior() static and add a comment describing
its purpose.
o Remove an unnecessary initialization statement from vm_map_entry_splay().
|
Sponsored by: DARPA, NAI Labs
|
into the vm_object layer:
o Acquire and release Giant in vm_object_shadow() and
vm_object_page_remove().
o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
to vm_object_page_remove().
o Remove the acquisition and release of Giant around vm_map_lookup()'s
call to vm_object_shadow().
|
will be updated to only define(__i386__) for ANSI cleanliness.
|
and vm_map_delete(). Assert GIANT_REQUIRED in vm_map_delete()
only if operating on the kernel_object or the kmem_object.
o Remove GIANT_REQUIRED from vm_map_remove().
o Remove the acquisition and release of Giant from munmap().
|
the last accessed datum is moved to the root of the splay tree.
Therefore, on lookups in which the hint resulted in O(1) access,
the splay tree still achieves O(1) access. In contrast, on lookups
in which the hint failed miserably, the splay tree achieves amortized
logarithmic complexity, resulting in dramatic improvements on vm_maps
with a large number of entries. For example, the execution time
for replaying an access log from www.cs.rice.edu against the thttpd
web server was reduced by 23.5% due to the large number of files
simultaneously mmap()ed by this server. (The machine in question has
enough memory to cache most of this workload.)
Nothing comes for free: At present, I see a 0.2% slowdown on "buildworld"
due to the overhead of maintaining the splay tree. I believe that
some or all of this can be eliminated through optimizations
to the code.
Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
Reviewed by: jeff
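For illustration, here is a generic top-down splay on integer keys in the classic Sleator-Tarjan form; it shows the "last accessed datum moves to the root" behavior described above, but it is only a sketch and not the vm_map_entry_splay() code itself:

    struct node {
        int          key;
        struct node *left, *right;
    };

    /*
     * Splay the tree rooted at "root" around "key": the node with that key
     * (or the last node touched on the search path) becomes the new root,
     * so repeating the same lookup immediately afterwards is O(1).
     */
    static struct node *
    splay(struct node *root, int key)
    {
        struct node N, *l, *r, *y;

        if (root == NULL)
            return (NULL);
        N.left = N.right = NULL;
        l = r = &N;
        for (;;) {
            if (key < root->key) {
                if (root->left == NULL)
                    break;
                if (key < root->left->key) {    /* rotate right */
                    y = root->left;
                    root->left = y->right;
                    y->right = root;
                    root = y;
                    if (root->left == NULL)
                        break;
                }
                r->left = root;                 /* link right */
                r = root;
                root = root->left;
            } else if (key > root->key) {
                if (root->right == NULL)
                    break;
                if (key > root->right->key) {   /* rotate left */
                    y = root->right;
                    root->right = y->left;
                    y->left = root;
                    root = y;
                    if (root->right == NULL)
                        break;
                }
                l->right = root;                /* link left */
                l = root;
                root = root->right;
            } else
                break;
        }
        l->right = root->left;                  /* reassemble */
        r->left = root->right;
        root->left = N.right;
        root->right = N.left;
        return (root);
    }

A lookup then simply splays on the requested key and checks whether the new root carries it, which is what gives the amortized logarithmic bound when the hint misses.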
|
that td_intr_nesting_level is 0 (like malloc() does). Since malloc() calls
uma, we can probably remove the check in malloc() for this now. Also,
perform an extra witness check in that case to make sure we don't hold
any locks when performing a M_WAITOK allocation.
|
(vm_map_inherit() no longer requires Giant to be held.)
|
release Giant around vm_map_madvise()'s call to pmap_object_init_pt().
o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition
and release of Giant.
o Remove the acquisition and release of Giant from madvise().
|
Retire daddr64_t and use daddr_t instead.
Sponsored by: DARPA & NAI Labs.
|
to lock order reversals. uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.
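The pattern, sketched in plain userland C with a pthread mutex standing in for the zone lock (illustrative only, not the actual uma_reclaim() code): detach the reclaimable slabs onto a private list while the lock is held, then drop the lock before freeing them.

    #include <pthread.h>
    #include <stdlib.h>

    struct slab {
        struct slab *next;
        /* ... */
    };

    struct zone {
        pthread_mutex_t  lock;
        struct slab     *free_slabs;    /* slabs with no allocated items */
    };

    static void
    zone_reclaim(struct zone *z)
    {
        struct slab *list, *s;

        pthread_mutex_lock(&z->lock);
        list = z->free_slabs;           /* detach the whole free list */
        z->free_slabs = NULL;
        pthread_mutex_unlock(&z->lock);

        /*
         * Free with no zone lock held, so the free path may take other
         * locks without creating a lock order reversal against the zone.
         */
        while ((s = list) != NULL) {
            list = s->next;
            free(s);
        }
    }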
|
several reasons before. Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.
|
from vm_map_inherit(). (minherit() need not acquire Giant
anymore.)
|
vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
o Acquire and release Giant around vm_map_protect()'s call to pmap_protect().
Altogether, these changes eliminate the need for mprotect() to acquire
and release Giant.
|
for uiomoveco(), uioread(), and vm_uiomove() regardless
of whether ENABLE_VFS_IOOPT is defined or not.
Submitted by: bde
|
on ENABLE_VFS_IOOPT.
|
for shadow objects.
Submitted by: bde
|
an operation on a vm_object and belongs in the latter place.
|
on ENABLE_VFS_IOOPT.
o Add a comment to the effect that this code is experimental
support for zero-copy I/O.
|
be used.
|
o Acquire and release Giant around vm_map_lookup()'s call
to vm_object_shadow().
|
creating the vm_object. This was broken after the code was rearranged to
grab Giant itself.
Spotted by: alc
|
without holding Giant.
o Begin documenting the trivial cases of the locking protocol
on vm_object.
|
vm_map_check_protection().
o Call vm_map_check_protection() without Giant held in munmap().
|
exclusively. The interface still, however, distinguishes
between a shared lock and an exclusive lock.
|
this memory is modified after it has been freed, we can now report its
previous owner.
|
weird potential race if we were preempted right as we were doing the dbg
checks.
|
- Changed uma_zcreate to accept the size argument as a size_t instead of
int.
Approved by: jeff
|
mallochash. Mallochash is going to go away as soon as I introduce the
kfree/kmalloc api and partially overhaul the malloc wrapper. This can't happen
until all users of the malloc api that expect memory to be aligned on the size
of the allocation are fixed.
|
Implement the following checks on freed memory in the bucket path:
- Slab membership
- Alignment
- Duplicate free
This previously was only done if we skipped the buckets. This code will slow
down INVARIANTS a bit, but it is SMP safe. The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
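A self-contained sketch of the three checks against a hypothetical per-slab debug structure; the names and layout are invented for illustration and do not reproduce the real uma_dbg hooks:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct dbg_slab {
        uintptr_t base;          /* address of the first item */
        size_t    itemsize;      /* size of each item */
        size_t    nitems;        /* items per slab (<= 256 here) */
        uint8_t   freebits[32];  /* one bit per item, set while free */
    };

    /* Run the three sanity checks on an item being freed back to a bucket. */
    static void
    dbg_check_free(struct dbg_slab *slab, void *item)
    {
        uintptr_t addr = (uintptr_t)item;
        size_t idx;

        /* Slab membership: the pointer must fall inside this slab. */
        assert(addr >= slab->base &&
            addr < slab->base + slab->itemsize * slab->nitems);

        /* Alignment: the pointer must sit on an item boundary. */
        assert((addr - slab->base) % slab->itemsize == 0);

        /* Duplicate free: the item must not already be marked free. */
        idx = (addr - slab->base) / slab->itemsize;
        assert((slab->freebits[idx / 8] & (1 << (idx % 8))) == 0);
        slab->freebits[idx / 8] |= 1 << (idx % 8);
    }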
|
an issue on the Alpha platform found by jeff@.)
o Simplify vm_page_lookup().
Reviewed by: jhb
|
0xdeadc0de and then check for it just before memory is handed off as part
of a new request. This will catch any post free/pre alloc modification of
memory, as well as introduce errors for anything that tries to dereference
it as a pointer.
This code takes the form of special init, fini, ctor and dtor routines that
are specifically used by malloc. It is in a separate file because additional
debugging aids will want to live here as well.
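Roughly, the trash routines amount to the following; this is a hedged sketch with hypothetical names (trash_fini, trash_check), not the actual malloc ctor/dtor code:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TRASH_PATTERN 0xdeadc0de

    /* Fill a freed item with the trash pattern (a "fini"-style hook).
     * For simplicity this sketch assumes size is a multiple of 4. */
    static void
    trash_fini(void *mem, size_t size)
    {
        uint32_t *p = mem;
        size_t i;

        for (i = 0; i < size / sizeof(*p); i++)
            p[i] = TRASH_PATTERN;
    }

    /* Verify the pattern just before the item is handed out again (an
     * "init"-style hook); any mismatch means the memory was modified
     * after it was freed.  Returns 0 on corruption. */
    static int
    trash_check(void *mem, size_t size)
    {
        uint32_t *p = mem;
        size_t i;

        for (i = 0; i < size / sizeof(*p); i++) {
            if (p[i] != TRASH_PATTERN) {
                fprintf(stderr, "modified after free at offset %zu\n",
                    i * sizeof(*p));
                return (0);
            }
        }
        return (1);
    }

Anything that dereferences freed memory as a pointer will also fault on 0xdeadc0de, which is the second effect the commit describes.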
|
uma_zalloc and friends. Remove this functionality from the malloc wrapper.
Document this change in uma.h and adjust variable names in uma_core.
|
that took its place for the purposes of acquiring and releasing Giant.
|
mutex class. Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory. This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
|
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
(this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
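The speedup comes from turning the kernel pmap "pointer" into a link-time constant; in miniature (illustrative declarations only, hedged rather than the exact header text):

    /*
     * Before: kernel_pmap was an ordinary pointer variable that had to be
     * loaded at run time on every use.
     * After: export the storage itself and make the "pointer" a constant
     * that ld can resolve at compile/link time.
     */
    struct pmap;
    extern struct pmap kernel_pmap_store;
    #define kernel_pmap (&kernel_pmap_store)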