Retire daddr64_t and use daddr_t instead.
Sponsored by: DARPA & NAI Labs.
|
to lock order reversals. uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.
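
This is the usual "collect under the lock, free after dropping it" pattern. A minimal user-space sketch of the idea, with a pthread mutex and illustrative names standing in for the real UMA zone and slab structures:

#include <pthread.h>
#include <stdlib.h>
#include <sys/queue.h>

struct slab {
    LIST_ENTRY(slab) link;
    /* ... slab bookkeeping ... */
};

LIST_HEAD(slablist, slab);

static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;
static struct slablist free_slabs = LIST_HEAD_INITIALIZER(free_slabs);

/*
 * Reclaim pass: move freeable slabs onto a private list while the zone
 * lock is held, then drop the lock before releasing each slab, so the
 * release path is never entered with the zone lock held.
 */
static void
zone_drain(void)
{
    struct slablist doomed = LIST_HEAD_INITIALIZER(doomed);
    struct slab *s;

    pthread_mutex_lock(&zone_lock);
    while ((s = LIST_FIRST(&free_slabs)) != NULL) {
        LIST_REMOVE(s, link);
        LIST_INSERT_HEAD(&doomed, s, link);
    }
    pthread_mutex_unlock(&zone_lock);

    /* The actual frees happen with no zone lock held. */
    while ((s = LIST_FIRST(&doomed)) != NULL) {
        LIST_REMOVE(s, link);
        free(s);
    }
}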
|
several reasons before. Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.
|
from vm_map_inherit(). (minherit() need not acquire Giant
anymore.)
|
vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
o Acquire and release Giant around vm_map_protect()'s call to pmap_protect().
Altogether, these changes eliminate the need for mprotect() to acquire
and release Giant.
|
for uiomoveco(), uioread(), and vm_uiomove() regardless
of whether ENABLE_VFS_IOOPT is defined or not.
Submitted by: bde
|
on ENABLE_VFS_IOOPT.
|
for shadow objects.
Submitted by: bde
|
an operation on a vm_object and belongs in the latter place.
|
on ENABLE_VFS_IOOPT.
o Add a comment to the effect that this code is experimental
support for zero-copy I/O.
|
be used.
|
o Acquire and release Giant around vm_map_lookup()'s call
to vm_object_shadow().
|
creating the vm_object. This was broken after the code was rearranged to
grab Giant itself.
Spotted by: alc
|
without holding Giant.
o Begin documenting the trivial cases of the locking protocol
on vm_object.
|
vm_map_check_protection().
o Call vm_map_check_protection() without Giant held in munmap().
|
exclusively. The interface still, however, distinguishes
between a shared lock and an exclusive lock.
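
A self-contained sketch of the state described, with a pthread mutex standing in for the map lock: both the read and write entry points take the same exclusive lock underneath, but callers keep using the read/write names so a real shared lock can be substituted later without touching them.

#include <pthread.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Both lock flavours are exclusive underneath for now. */
#define vm_map_lock(map)        pthread_mutex_lock(&map_lock)
#define vm_map_unlock(map)      pthread_mutex_unlock(&map_lock)
#define vm_map_lock_read(map)   pthread_mutex_lock(&map_lock)
#define vm_map_unlock_read(map) pthread_mutex_unlock(&map_lock)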
|
this memory is modified after it has been freed, we can now report its
previous owner.
|
weird potential race if we were preempted right as we were doing the dbg
checks.
|
- Changed uma_zcreate to accept the size argument as a size_t instead of
int.
Approved by: jeff
|
mallochash. Mallochash is going to go away as soon as I introduce the
kfree/kmalloc api and partially overhaul the malloc wrapper. This can't happen
until all users of the malloc api that expect memory to be aligned on the size
of the allocation are fixed.
|
Implement the following checks on freed memory in the bucket path:
- Slab membership
- Alignment
- Duplicate free
This previously was only done if we skipped the buckets. This code will slow
down INVARIANTS a bit, but it is SMP safe. The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
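
A user-space model of the three checks, with a deliberately simplified slab layout; the real hooks operate on the actual UMA slab structures, so the names and layout here are illustrative only:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ITEMS_PER_SLAB 32

struct slab {
    char     data[ITEMS_PER_SLAB * 64]; /* 64-byte items */
    size_t   item_size;                 /* 64 here */
    uint32_t free_bitmap;               /* 1 bit per already-freed item */
};

static void
debug_free_check(struct slab *s, void *item)
{
    uintptr_t base = (uintptr_t)s->data;
    uintptr_t addr = (uintptr_t)item;
    size_t idx;

    /* Slab membership: the pointer must fall inside this slab. */
    assert(addr >= base && addr < base + sizeof(s->data));

    /* Alignment: it must sit on an item boundary. */
    assert((addr - base) % s->item_size == 0);

    /* Duplicate free: the item must not already be marked free. */
    idx = (addr - base) / s->item_size;
    assert((s->free_bitmap & (1u << idx)) == 0);
    s->free_bitmap |= 1u << idx;
}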
|
an issue on the Alpha platform found by jeff@.)
o Simplify vm_page_lookup().
Reviewed by: jhb
|
0xdeadc0de and then check for it just before memory is handed off as part
of a new request. This will catch any post-free/pre-alloc modification of
memory, as well as introduce errors for anything that tries to dereference
it as a pointer.
This code takes the form of special init, fini, ctor and dtor routines that
are specifically used by malloc. It is in a separate file because additional
debugging aids will want to live here as well.
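
A user-space sketch of the trash idea: the dtor fills freed memory with the junk pattern and the ctor verifies the pattern before the memory is reused. Function names and signatures here are illustrative, not the exact kernel routines.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define JUNK_PATTERN 0xdeadc0deU

static void
trash_dtor(void *mem, size_t size)
{
    uint32_t *p = mem;
    size_t n;

    /* Poison the freed memory with a recognizable pattern. */
    for (n = 0; n < size / sizeof(*p); n++)
        p[n] = JUNK_PATTERN;
}

static void
trash_ctor(void *mem, size_t size)
{
    uint32_t *p = mem;
    size_t n;

    /* Any word that no longer holds the pattern was written after free. */
    for (n = 0; n < size / sizeof(*p); n++)
        assert(p[n] == JUNK_PATTERN);
}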
|
uma_zalloc and friends. Remove this functionality from the malloc wrapper.
Document this change in uma.h and adjust variable names in uma_core.
|
that took its place for the purposes of acquiring and releasing Giant.
|
mutex class. Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory. This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
|
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
(this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
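
A sketch of the kernel_pmap change from the second bullet; the struct body is a placeholder, and only the declaration shape matters:

struct pmap {
    int pm_placeholder;     /* stand-in for the real pmap contents */
};

#ifdef OLD_WAY
/* Before: a pointer variable, filled in during pmap bootstrap. */
struct pmap *kernel_pmap;
#else
/*
 * After: the storage itself has a link-time-constant address and
 * kernel_pmap expands to that address, so ld can resolve every
 * reference at link time instead of loading a pointer at run time.
 */
struct pmap kernel_pmap_store;
#define kernel_pmap (&kernel_pmap_store)
#endif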
|
of lockmgr().
o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
on the vm_map before traversing it.
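
A sketch of the added synchronization, with a pthread rwlock and a toy entry list standing in for the real vm_map:

#include <pthread.h>

struct map_entry {
    struct map_entry *next;
    long              pages;
};

struct map {
    pthread_rwlock_t  lock;
    struct map_entry *entries;
};

static long
swap_count(struct map *m)
{
    struct map_entry *e;
    long count = 0;

    pthread_rwlock_rdlock(&m->lock);    /* read lock before traversing */
    for (e = m->entries; e != NULL; e = e->next)
        count += e->pages;
    pthread_rwlock_unlock(&m->lock);

    return (count);
}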
|
On systems where physical memory is also direct mapped (alpha, sparc,
ia64, etc.) this is slightly harmful.
|
in the same style as sys/proc.h.
o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
(Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
revision 1.206, de-inlining these methods increased the kernel's size.)
|
o Fix some style(9) bugs.
|
Submitted by: jeff@
|
after initialization in vm_fault1().
o Fix some style problems in vm_fault1().
|
modification is made to the vm_map while only a read lock is held.
|
due to conditions that suggest the possible need for stack growth.
This has two beneficial effects: (1) we can
now remove calls to vm_map_growstack() from the MD trap handlers and (2)
simple page faults are faster because we no longer unnecessarily perform
vm_map_growstack() on every page fault.
o Remove vm_map_growstack() from the i386's trap_pfault().
o Remove the acquisition and release of Giant from i386's trap_pfault().
(vm_fault() still acquires it.)
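
A simplified model of the new control flow; all names and signatures are illustrative, and only the decision logic is shown:

#include <stdbool.h>
#include <stdint.h>

struct map;

bool map_lookup(struct map *m, uintptr_t addr);    /* does addr resolve? */
bool map_growstack(struct map *m, uintptr_t addr); /* try to grow a stack region */

int
handle_fault(struct map *m, uintptr_t addr)
{
    bool grew = false;

retry:
    if (map_lookup(m, addr))
        return (0);             /* fault handled */

    /*
     * The lookup failed; the address may lie just below an existing
     * stack region.  Try growing once, then retry the lookup.  The
     * common case (a normal fault) no longer pays for a growth check
     * in the MD trap handler.
     */
    if (!grew && map_growstack(m, addr)) {
        grew = true;
        goto retry;
    }
    return (-1);                /* genuine fault */
}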
|
statclock can access it in the tail end of statclock_process() at an
unfortunate time. This bit me several times on an SMP alpha (UP2000)
and the problem went away with this change. I'm not sure why it doesn't
break x86 as well. Maybe it's because the clocks are much faster
on alpha (HZ=1024 by default).
|
and pmap_copy_page(). This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap. (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances. Leaving this to pmap itself gives more flexibility.)
Reviewed by: jake
Tested on: i386, ia64 and (I believe) sparc64. (my alpha was hosed)
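
The interface change, sketched as prototypes; the _old suffixes and the stand-in types are only there to show before and after:

#include <stdint.h>

struct vm_page { int placeholder; };    /* stand-in for the real vm_page */
typedef struct vm_page *vm_page_t;
typedef uint64_t phys_addr_t;           /* stand-in for a physical address */

/* Before: callers passed physical addresses around. */
void pmap_zero_page_old(phys_addr_t pa);
void pmap_copy_page_old(phys_addr_t src, phys_addr_t dst);

/*
 * After: callers hand over the vm_page itself, so knowledge of physical
 * addresses (and any future widening of them for PAE) stays inside pmap.
 */
void pmap_zero_page(vm_page_t m);
void pmap_copy_page(vm_page_t src, vm_page_t dst);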
|
hash while holding the lock on a zone. Fix this by doing the allocation
separately from the actual hash expansion.
The lock is dropped before the allocation and reacquired before the expansion.
The expansion code checks to see if we lost the race and frees the new hash
if we do. We really never will lose this race because the hash expansion is
single threaded via the timeout mechanism.
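
A user-space sketch of the drop-the-lock-to-allocate pattern, including the lost-race check; structures and names are illustrative:

#include <pthread.h>
#include <stdlib.h>

struct zone {
    pthread_mutex_t lock;
    void          **hash;
    size_t          hash_size;
};

static void
zone_hash_expand(struct zone *z, size_t new_size)
{
    void **new_hash;

    /* Allocate with no zone lock held (the allocation may sleep). */
    new_hash = calloc(new_size, sizeof(*new_hash));
    if (new_hash == NULL)
        return;

    pthread_mutex_lock(&z->lock);
    if (z->hash_size >= new_size) {
        /* Lost the race: someone else expanded while we were away. */
        pthread_mutex_unlock(&z->lock);
        free(new_hash);
        return;
    }
    /* ... rehash the old entries into new_hash ... */
    free(z->hash);
    z->hash = new_hash;
    z->hash_size = new_size;
    pthread_mutex_unlock(&z->lock);
}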
|
Fortunately we have no large zones with maximums specified yet, so it wasn't
breaking anything.
Implement blocking when a zone exceeds the maximum and M_WAITOK is specified.
Previously this just failed like the old zone allocator did. The old zone
allocator didn't support WAITOK/NOWAIT, though, so we should do what we
advertise.
While I was in there I cleaned up some more zalloc logic to further simplify
that code path and reduce redundant code. This was needed to make the blocking
work properly anyway.
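
A user-space sketch of the advertised behaviour: M_NOWAIT fails immediately when the zone is at its maximum, while M_WAITOK sleeps until a free makes room. Flag values and structures here are illustrative, and allocation-failure handling is elided.

#include <pthread.h>
#include <stdlib.h>

#define M_NOWAIT 0x0001
#define M_WAITOK 0x0002

struct zone {
    pthread_mutex_t lock;
    pthread_cond_t  item_freed;
    int             count;      /* items currently allocated */
    int             max;        /* configured maximum, 0 = none */
};

static void *
zone_alloc(struct zone *z, size_t size, int flags)
{
    pthread_mutex_lock(&z->lock);
    while (z->max != 0 && z->count >= z->max) {
        if (flags & M_NOWAIT) {
            pthread_mutex_unlock(&z->lock);
            return (NULL);      /* fail, as advertised */
        }
        /* M_WAITOK: sleep until zone_free() signals. */
        pthread_cond_wait(&z->item_freed, &z->lock);
    }
    z->count++;
    pthread_mutex_unlock(&z->lock);

    return (malloc(size));
}

static void
zone_free(struct zone *z, void *item)
{
    free(item);
    pthread_mutex_lock(&z->lock);
    z->count--;
    pthread_cond_signal(&z->item_freed);
    pthread_mutex_unlock(&z->lock);
}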
|
Pointed out by: pete, jake, jhb
|
this happens it is OK.
|
v_free_min. This should help performance in memory starved situations.
|
can tell this could not have caused any problems yet because UMA is still
called with Giant.
Pointy hat to: jeff
Noticed by: jake