critical and should not be killed when pageout is looking for more
memory pages in all the wrong places.
Reviewed by: arch@
Sponsored by: St. Bernard Software
Reviewed by: peter
kmem_free if Giant isn't already held.
where physical addresses are larger than virtual addresses, such as i386s
with PAE.
- Use this to represent physical addresses in the MI vm system and in the
i386 pmap code. This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
detection code, and to cope with kvtop() now returning vm_paddr_t instead
of u_long.
Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.
Sponsored by: DARPA, Network Associates Laboratories
Discussed with: re, phk (cdevsw change)
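
For illustration, a minimal sketch (with hypothetical *_sketch names, not
the committed definitions) of why a separate physical-address type is
needed under PAE:

    #include <stdint.h>

    typedef uint32_t vm_offset_t_sketch;   /* virtual addresses stay 32 bits */
    #ifdef PAE
    typedef uint64_t vm_paddr_t_sketch;    /* 36-bit physical addresses need 64 bits */
    #else
    typedef uint32_t vm_paddr_t_sketch;    /* otherwise identical to vm_offset_t */
    #endif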
%j in printfs, so put a nested include in <sys/systm.h> where the printf
prototype lives and save everybody else the trouble.
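
A small userland sketch of the %j idiom this caters to; in the kernel the
required type comes in via <sys/systm.h>, here <stdint.h> stands in:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t paddr = 0x123456789ULL;    /* wider than a 32-bit long */

        /* %j consumes an intmax_t/uintmax_t, so cast explicitly. */
        printf("paddr = %#jx\n", (uintmax_t)paddr);
        return (0);
    }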
after mapping it. This makes it possible to determine if a physical
page has a backing vm_page or not.
are machine dependent because they are not required to update the tlb when
mappings are added or removed, and doing so is machine dependent.
In addition, an implementation may require that pages mapped with pmap_kenter
have a backing vm_page_t, which is not necessarily true of all physical
pages, and so may choose to pass the vm_page_t to pmap_kenter instead of the
physical address in order to make this requirement clear.
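
To make the interface choice concrete, a sketch of the two shapes the
entry point can take; pmap_kenter() matches existing practice, while
pmap_kenter_page() is a hypothetical name for the vm_page_t-based variant
described above:

    /* Illustrative prototypes only; real declarations live in MD pmap code. */
    void pmap_kenter(vm_offset_t va, vm_paddr_t pa);     /* raw physical address */
    void pmap_kenter_page(vm_offset_t va, vm_page_t m);  /* hypothetical: makes the
                                                            backing-page requirement
                                                            explicit in the type */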
process to kill, don't block on a map lock while holding the
process lock. Instead, skip processes whose map locks are held
and find something else to kill.
- Add vm_map_trylock_read() to support the above.
Reviewed by: alc, mike (mentor)
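
A minimal sketch of the pattern, assuming the surrounding victim-selection
loop with p (the candidate process) and vm (its vmspace) in scope:

    if (!vm_map_trylock_read(&vm->vm_map)) {
        PROC_UNLOCK(p);
        continue;    /* map lock is held by someone; pick another victim */
    }
    /* ... size up the candidate while the map is stable ... */
    vm_map_unlock_read(&vm->vm_map);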
- On receive, vm_map_lookup() needs to trigger the creation of a shadow
object. To make that happen, call vm_map_lookup() with PROT_WRITE
instead of PROT_READ in vm_pgmoveco().
- On send, a shadow object will be created by the vm_map_lookup() in
vm_fault(), but vm_page_cowfault() will delete the original page from
the backing object rather than simply letting the legacy COW mechanism
take over. In other words, the new page should be added to the shadow
object rather than replacing the old page in the backing object. (i.e.
vm_page_cowfault() should not be called in this case.) We accomplish
this by making sure fs.object == fs.first_object before calling
vm_page_cowfault() in vm_fault().
Submitted by: gallatin, alc
Tested by: ken
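
A minimal sketch of the resulting guard in vm_fault(), not the literal
diff:

    if (fs.object == fs.first_object)
        /* No shadow object was interposed, so cowfaulting the page
         * will not steal it out from under the legacy COW path. */
        vm_page_cowfault(fs.m);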
Discussed on: arch@
modules to authorize disabling of swap against a particular vnode.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
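
Sketched usage at the swapoff path; the entry-point name
mac_check_system_swapoff() is an assumption inferred from this
description, with td and vp taken from the calling context:

    error = mac_check_system_swapoff(td->td_ucred, vp);  /* name assumed */
    if (error)
        return (error);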
to WITNESS_WARN().
Use VOP_IOCTL(DIOCGMEDIASIZE) to check the size of a potential swap device
instead of the cdevsw->d_psize() method.
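
A minimal sketch of the new size check, assuming vp, td, and nblks from
the swapon path; error handling around the conversion is elided:

    off_t mediasize;
    int error;

    error = VOP_IOCTL(vp, DIOCGMEDIASIZE, (caddr_t)&mediasize,
        FREAD, td->td_ucred, td);
    if (error == 0)
        nblks = mediasize / DEV_BSIZE;    /* size in device blocks */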
- Remove the buftimelock mutex and acquire the buf's interlock to protect
these fields instead.
- Hold the vnode interlock while locking bufs on the clean/dirty queues.
This reduces some cases from one BUF_LOCK with a LK_NOWAIT and another
BUF_LOCK with a LK_TIMEFAIL to a single lock.
Reviewed by: arch, mckusick
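
A sketch of the single-attempt locking this enables; the exact flag
spelling follows the lockmgr interface of that era and is an assumption:

    /* One BUF_LOCK call: take the buf lock or fail immediately, with
     * the vnode interlock released for us either way. */
    if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT | LK_INTERLOCK, VI_MTX(vp)))
        continue;    /* buf was busy */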
- Get rid of the useless atop() / pmap_phys_address() detour. The
device mmap handlers must now give back the physical address
without atop()'ing it.
- Don't borrow the physical address of the mapping in the returned
int. Now we properly pass a vm_offset_t * and expect it to be
filled by the mmap handler when the mapping was successful. The
mmap handler must now return 0 when successful, any other value
is considered as an error. Previously, returning -1 was the only
way to fail. This change thus accidentally fixes some devices
which were bogusly returning errno constants which would have been
considered as addresses by the device pager.
- Garbage collect the poorly named pmap_phys_address() now that it's
no longer used.
- Convert all the d_mmap_t consumers to the new API.
I'm still not sure whether we need a __FreeBSD_version bump for this,
since we didn't guarantee API/ABI stability until 5.1-RELEASE.
Discussed with: alc, phk, jake
Reviewed by: peter
Compile-tested on: LINT (i386), GENERIC (alpha and sparc64)
Runtime-tested on: i386
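
A sketch of a device mmap handler under the new contract; mydev_mmap(),
MYDEV_SIZE, and MYDEV_PHYS_BASE are hypothetical:

    static int
    mydev_mmap(dev_t dev, vm_offset_t offset, vm_offset_t *paddr, int nprot)
    {
        if (offset >= MYDEV_SIZE)
            return (EINVAL);    /* any non-zero value now means failure */
        *paddr = MYDEV_PHYS_BASE + offset;   /* raw physical address, no atop() */
        return (0);
    }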
Approved by: trb
It's unnecessary for two reasons: (1) Giant is at present already held in
such cases and (2) our various implementations of pmap_growkernel() look to
be MP safe. (For example, for sparc64 the proof of (2) is trivial.)
synchronization of access to kernel_vm_end.
synchronized.
Suggested by: tegge
than a positive number.
- In pagedaemon_wakeup(), set vm_pages_needed to 1 rather than
incrementing it to accomplish the same.
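
A minimal sketch of the simplified wakeup, treating vm_pages_needed as a
flag rather than a counter:

    if (vm_pages_needed == 0) {
        vm_pages_needed = 1;    /* was: vm_pages_needed++ */
        wakeup(&vm_pages_needed);
    }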
- Style changes to vm_pageout(): declarations and white-space.
with the page queue lock.
- Assert that the page queue lock is held in vm_page_free_wakeup().
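
The assertion pattern, sketched:

    mtx_assert(&vm_page_queue_mtx, MA_OWNED);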
two cases that existed before for performance optimization purposes can
be reduced to one.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
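
For old source that still spells M_DONTWAIT, a compatibility alias of
this shape would keep it compiling; whether the commit kept such an alias
is not stated here:

    #define M_DONTWAIT    M_NOWAIT    /* hypothetical compat shim */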
I/O, CAM, and AIO. Still TODO: streamline useracc() checks.
Reviewed by: alc, tegge
MFC after: 7 days
- Assert that the page queues lock rather than Giant is held in
vm_page_hold().
Submitted by: tmm
Pointy hat to: jeff
So add a VM_METER compat define.
Submitted by: Andy Fawcett <andy@athame.co.uk>
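
Sketched shape of the compat define; VM_TOTAL as the new spelling is an
assumption based on the vmtotal sysctl rename:

    #define VM_METER    VM_TOTAL    /* compat: struct vmtotal sysctl */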
portable copy.
counter outside the scope of existing locks.
- Eliminate a redundant clearing of vm_pageout_deficit.
dereferenced when a process exits due to the vmspace ref-count being
bumped. Change shmexit() and shmexit_myhook() to take a vmspace instead
of a process and call it in vmspace_dofree(). This way if it is missed
in exit1()'s early-resource-free it will still be caught when the zombie is
reaped.
Also fix a potential race in shmexit_myhook() by NULLing out
vmspace->vm_shm prior to calling shm_delete_mapping() and free().
MFC after: 7 days
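
A sketch of the interface change and of the ordering that closes the
race; base is an illustrative local for the shmmap state, and the
per-segment teardown loop is elided:

    void shmexit(struct vmspace *vm);   /* was: void shmexit(struct proc *p) */

    /* In the hook, detach the state before tearing it down: */
    base = vm->vm_shm;
    vm->vm_shm = NULL;                  /* the race window closes here */
    shm_delete_mapping(vm, base);       /* per-segment loop elided */
    free(base, M_SHM);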
removal of unnecessary casts and throw in some minor cleanups to see if
anyone complains, just for the hell of it.
The objective being to eliminate some cases of page queues locking.
(See, for example, vm/vm_fault.c revision 1.160.)
Reviewed by: tegge
(Also, pointed out by tegge that I changed vm_fault.c before changing
vm_page.c. Oops.)
VM_ALLOC_ZERO to vm_page_alloc().
pointer types, and remove a huge number of casts from code using it.
Change struct xfile xf_data to xun_data (ABI is still compatible).
If we need to add a #define for f_data and xf_data we can, but I don't
think it will be necessary. There are no operational changes in this
commit.
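
With f_data typed as void *, consumers assign without the cast; a
one-line sketch with a hypothetical softc type:

    struct mydev_softc *sc = fp->f_data;   /* was: (struct mydev_softc *)fp->f_data */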
expression.
(the patch in the PR was stale).
PR: kern/5689
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
requests when the number of free pages is below the reserved threshold.
Previously, VM_ALLOC_ZERO was only honored when the number of free pages
was above the reserved threshold. Honoring it in all cases generally
makes sense, does no harm, and simplifies the code.
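
The caller-side idiom, sketched: VM_ALLOC_ZERO remains a hint, so callers
still test PG_ZERO and zero the page themselves when necessary:

    m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_ZERO);
    if (m != NULL && (m->flags & PG_ZERO) == 0)
        pmap_zero_page(m);    /* allocator handed back a dirty page */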