vm/vm_page.c revision 1.220.)
Submitted by: bde
sysctl code. This makes 'systat -vm 1's syscall count work again.
Submitted by: Michal Mertl <mime@traveller.cz>
Note: also slated for 5.0
at appropriate times. For the moment, the mutex is only used on
the kmem_object.
vm/vm_page.c, it is unused.
which incorporates page queue and field locking, is complete.
- Assert that the page queue lock rather than Giant is held in
vm_page_flag_set().
vm_page_flag_set().
- Replace vm_page_sleep_busy() with proper page queues locking
and vm_page_sleep_if_busy().
- Replace vm_page_sleep_busy() with proper page queues locking
and vm_page_sleep_if_busy().
skipping read-only pages, which can result in valuable non-text-related
data not getting dumped, the ELF loader and the dynamic loader now mark
read-only text pages NOCORE and the coredump code only checks (primarily) for
complete inaccessibility of the page or NOCORE being set.
Certain applications which map large amounts of read-only data will
produce much larger cores. A new sysctl has been added,
debug.elf_legacy_coredump, which will revert to the old behavior.
This commit represents collaborative work by all parties involved.
The PR contains a program demonstrating the problem.
PR: kern/45994
Submitted by: "Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
Reviewed by: jdp, dillon
MFC after: 7 days
around vm_page_lookup() and vm_page_free().
This should be considered highly experimental for the moment.
Submitted by: David Schultz <dschultz@uclink.Berkeley.EDU>
MFC after: 3 weeks
resource starvation we clean up as much of the vmspace structure as we
can when the last process using it exits. The rest of the structure
is cleaned up when it is reaped. But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.
This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'. The normal reference count is decremented on exit
and vm_exitingcnt is incremented. vm_exitingcnt is decremented when the
process is reaped. When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.
MFC after: 3 weeks
the object (i.e., acquire Giant).
vm_object_page_remove().
vm_object_page_remove().
vm_page_remove(), and vm_page_free_toq().
of the vm_page structure. Make the style of the pmap_protect() calls
consistent.
Approved by: re (blanket)
of the vm_page structure. Nearby, remove an unnecessary semicolon and
return statement.
Approved by: re (blanket)
Approved by: re (blanket)
Approved by: re (blanket)
Approved by: re (blanket)
Approved by: re (blanket)
because it updates flags within the vm page.
Approved by: re (blanket)
to cover pmap_remove_all().
Approved by: re
Approved by: re
vm_pageout_page_free().
Approved by: re
Approved by: re
intended to be used by significant memory consumers so that they may drain
some of their caches.
Inspired by: phk
Approved by: re
Tested on: x86, alpha
Spotted by: jake
Reported by: grehan
use it directly.
in various revisions of vm/vm_map.c between 1.148 and 1.153.
to reflect its new location, and add page queue and flag locking.
Notes: (1) alpha, i386, and ia64 had identical implementations
of pmap_collect() in terms of machine-independent interfaces;
(2) sparc64 doesn't require it; (3) powerpc had it as a TODO.
ZONE_LOCK.
if we're removing write access from the page's PTEs.
- Export pmap_remove_all() on alpha, i386, and ia64. (It's already
exported on sparc64.)
protected. Furthermore, in some RISC architectures with no normal
byte operations, the surrounding 3 bytes are also affected by the
read-modify-write that has to occur.
indirectly through vm_page_protect(). The one remaining page flag that
is updated by vm_page_protect() is already being updated by our various
pmap implementations.
Note: A later commit will similarly change the VM_PROT_READ case and
eliminate vm_page_protect().
after a user wire error fails when the entry is already system wired.
Reported by: tegge
is empty.
sysctls to MI code; this reduces code duplication and makes all of them
available on sparc64, and the latter two on powerpc.
The semantics of hw.availpages on i386 and pc98 are slightly changed:
previously, holes between ranges of available pages would be included,
while they are excluded now. The new behaviour should be more correct
and brings i386 in line with the other architectures.
Move physmem to vm/vm_init.c, where this variable is used in MI code.
because it's no longer used. (See revision 1.215.)
- Fix a harmless bug: the number of vm_page structures allocated wasn't
properly adjusted when uma_bootstrap() was introduced. Consequently,
we were allocating 30 unused vm_page structures.
- Wrap a long line.
it is unused.