| Commit message | Author | Age | Files | Lines |
Switch eventtimers(9) from using struct bintime to sbintime_t.
Even before this change, not a single driver supported the full dynamic
range of struct bintime even in theory, let alone in practice.
This change legitimizes the status quo and cleans up the code.
The VM_OBJECT_LOCKED() macro is currently only used to implement a custom
version of the lock assertions (which likely spread through copy and
paste).
Remove it and implement actual assertions.
Sponsored by: EMC / Isilon storage division
Reviewed by: alc
Tested by: pho
Remove unused inclusion of vm/vm_pager.h and vm/vnode_pager.h.
Sponsored by: EMC / Isilon storage division
Tested by: pho
Reviewed by: alc
Rename current VM_OBJECT_LOCK_ASSERT(foo, RA_WLOCKED) into
VM_OBJECT_ASSERT_WLOCKED(foo)
Sponsored by: EMC / Isilon storage division
Requested by: alc
Sponsored by: EMC / Isilon storage division
make the check available in any case, even if INVARIANTS is switched off.
Remove VM_OBJECT_LOCKED().
* VM_OBJECT_LOCK and VM_OBJECT_UNLOCK are mapped to write operations
* VM_OBJECT_SLEEP() is introduced as a general-purpose primitive to
get a sleep operation using a VM_OBJECT_LOCK() as protection
* The approach must bear with vm_pager.h namespace pollution, so many
files need to include rwlock.h directly
This is the missing piece for FreeBSD/Wii, but there's still a lot of
work ahead. We have to reset the MMU in locore before continuing
the boot process because we don't know how the boot loaders might
have set up the BATs. We also disable the PCI BAT because there's no PCI
bus on the Wii.
Thanks to Nathan Whitehorn and Peter Grehan for their help.
Submitted by: Margarida Gouveia
sleep, and perform the page allocations with VM_ALLOC_SYSTEM
class. Previously, the allocation was also allowed to completely drain
the reserve of the free pages, being translated to VM_ALLOC_INTERRUPT
request class for vm_page_alloc() and similar functions.
Allow the caller of malloc* to request the 'deep drain' semantic by
providing M_USE_RESERVE flag, now translated to VM_ALLOC_INTERRUPT
class. Previously, it resulted in less aggressive VM_ALLOC_SYSTEM
allocation class.
Centralize the translation of the M_* malloc(9) flags into a single
inline function, malloc2vm_flags().
Discussion started by: "Sears, Steven" <Steven.Sears@netapp.com>
Reviewed by: alc, mdf (previous version)
Tested by: pho (previous version)
MFC after: 2 weeks
There is one known issue: Some probes will display an error message along the
lines of: "Invalid address (0)"
I tested this with both a simple dtrace probe and dtruss on a few different
binaries on 32-bit. I only compiled the 64-bit version and did not run it,
but I don't expect problems as long as the modules are not loaded.
Volunteers are welcome.
MFC after: 1 month
cache line in order to avoid manual frobbing, by instead using
struct rwlock_padalign.
Reviewed by: alc, jimharris
Mach that doesn't exist in FreeBSD.
there is no need to check if Giant is acquired after it.
Reviewed by: kib
MFC after: 1 week
so as to not confuse things.
Submitted by: Margarida Gouveia
Correct the return type of the pmap_ts_referenced() implementations.
Reported by: jhibbits [1]
Tested by: andreast
to this pmap.
Tested by: andreast, jhibbits
relative to the start address (unless the start address is 0, which is
not the case).
This is currently not a problem because all powerpc architectures are
using loader(8) which passes metadata to the kernel including the
correct `endkernel' address. If we don't use loader(8), register 4
and 5 will have the size of the kernel ELF file, not its end address.
We fix that simply by adding `kernel_text' to `end' to compute
`endkernel'.
Discussed with: nathanw
range operations like pmap_remove() and pmap_protect() as well as allowing
simple operations like pmap_extract() not to involve any global state.
This substantially reduces lock coverage for the global table lock and
improves concurrency.
- Use isync/lwsync unconditionally for acquire/release. Use of isync
guarantees a complete memory barrier, which is important for serialization
of bus space accesses with mutexes on multi-processor systems.
- Go back to using sync as the I/O memory barrier, which solves the same
problem as above with respect to mutex release using lwsync, while not
penalizing non-I/O operations like a return to sync on the atomic release
operations would.
- Place an acquisition barrier around thread lock acquisition in
cpu_switchin().
guarantees on acquire for the tlbie mutex. Conversely, the TLB invalidation
sequence provides guarantees that do not need to be redundantly applied on
release. Roll a small custom lock that is just right. Simultaneously,
convert the SLB tree changes back to lwsync, as changing them to sync
was a misdiagnosis of the tlbie barrier problem this commit actually fixes.
a heavyweight sync instead of a lightweight sync to function properly.
Thanks to mdf for the clarification.
of sync (lwsync is an alternate encoding of sync on systems that do not
support it, providing graceful fallback). This provides more than an order
of magnitude reduction in the time required to acquire or release a mutex.
MFC after: 2 months
the page. This PMAP requires an additional lock besides the PMAP lock
in pmap_extract_and_hold(), which vm_page_pa_tryrelock() did not release.
Suggested by: kib
MFC after: 4 days
before (potentially) migrating it to a different CPU.
MFC after: 5 days
remove it.
MFC after: 2 weeks
for whether the page is physical. On dense phys mem systems (32-bit),
VM_PHYS_TO_PAGE will not return NULL for device memory pages if device
memory is above physical memory even if there is no allocated vm_page.
Attempting to use the returned page could then cause either memory
corruption or a page fault.
not to be a good idea, and of course the PV entry list for a page is never
empty after the page has been mapped.
invalidated, as opposed to a ref/changed bit update.
improves concurrency slightly.
caches, by invalidating kernel icaches only when needed and not flushing
user caches for shared pages.
Suggested by: kib
MFC after: 2 weeks
are not mapped during ranged operations, and reduce the scope of the
tlbie lock only to the actual tlbie instruction instead of the entire
sequence. There are a few more optimization possibilities here as well.
prevent a race where another process could conclude the page was clean.
Submitted by: alc
uses of the page queues mutex with a new rwlock that protects the page
table and the PV lists. This reduces system time during a parallel
buildworld by 35%.
Reviewed by: alc
with exceptions enabled, leave them enabled and use a regular mutex to
guard TLB invalidations instead of a spinlock.
confuse the VM.
and remove moea64_attr_*() in favor of direct calls to vm_page_dirty()
and friends.
manipulation of the pvo_vlink and pvo_olink entries is already protected
by the table lock, so most remaining instances of the acquisition of the
page queues lock can likely be replaced with the table lock, or removed
if the table lock is already held.
Reviewed by: alc
module.
Suggested by: alc
or look them up individually in pmap_remove() and apply the same logic
in the other ranged operation (pmap_protect). This speeds up make
installworld by a factor of 2 on powerpc64.
MFC after: 1 week
the point of this loop is to remove elements. This worked by accident before.
MFC after: 2 days
profiled too now.
MFC after: 2 weeks
MFC after: 2 months
and 32-bit ABIs. Also try to enable nxstacks for PAE/i386 when supported,
and some variants of powerpc32.
MFC after: 2 months (if ever)
profiling and kernel profiling. To enable kernel profiling one has to build
kgmon(8). I will enable the build once I have managed to build and test
powerpc (32-bit) kernels with profiling support.
- add a powerpc64 PROF_PROLOGUE for _mcount.
- add macros to avoid adding the PROF_PROLOGUE in certain assembly entries.
- apply these macros where needed.
- add size information to the MCOUNT function.
MFC after: 3 weeks, together with r230291