| Commit message | Author | Age | Files | Lines |
|
include the comments with CONFIGARGS
|
after r199762.
|
VM_FAULT_DIRTY. The information provided by this flag can be trivially
inferred by vm_fault().
Discussed with: kib
|
if_watchdog and if_timer from the first port.
Reviewed by: gonzo
|
function cpu_set_syscall_retval().
Suggested by: marcel
Reviewed by: marcel, davidxu
PowerPC, ARM, ia64 changes: marcel
Sparc64 tested and reviewed by: marius; sun4v also reviewed
MIPS tested by: gonzo
MFC after: 1 month
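For illustration, a minimal sketch of what such an MD hook can look like, using MIPS-style trapframe fields; the field names and the restart adjustment are assumptions, not the committed code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>

    void
    cpu_set_syscall_retval(struct thread *td, int error)
    {
        struct trapframe *tf = td->td_frame;

        switch (error) {
        case 0:
            tf->v0 = td->td_retval[0];  /* primary return value */
            tf->v1 = td->td_retval[1];  /* secondary return value */
            tf->a3 = 0;                 /* a3 == 0 flags success */
            break;
        case ERESTART:
            tf->pc -= 4;                /* re-execute the syscall instruction */
            break;
        case EJUSTRETURN:
            break;                      /* frame already set up, e.g. sigreturn */
        default:
            tf->v0 = error;             /* errno value */
            tf->a3 = 1;                 /* a3 != 0 flags an error */
            break;
        }
    }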
|
- Move dpcpu initialization to mips_proc0_init. It's a
more appropriate place for it. Besides, dpcpu_init
requires the pmap module to be initialized, and calling it
in pmap.c hangs the system.
|
while in kernel mode, and later changing the signal mask to block the
signal, was fixed for sigprocmask(2) and pthread_exit(3). The same race
exists for sigreturn(2), setcontext(2) and swapcontext(2) syscalls.
Use kern_sigprocmask() instead of direct manipulation of td_sigmask to
reschedule newly blocked signals, closing the race.
Reviewed by: davidxu
Tested by: pho
MFC after: 1 month
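For illustration, the pattern the fix amounts to, sketched against the sigreturn(2) path; the surrounding context is assumed, only kern_sigprocmask() and SIG_CANTMASK() are the real interfaces:

    /* Before (racy): the mask changes without rescheduling pending signals. */
    td->td_sigmask = ucp->uc_sigmask;
    SIG_CANTMASK(td->td_sigmask);

    /* After: kern_sigprocmask() updates the mask and closes the race. */
    kern_sigprocmask(td, SIG_SETMASK, &ucp->uc_sigmask, NULL, 0);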
|
the memory or D-cache, depending on the semantics of the platform.
vm_sync_icache() is basically a wrapper around pmap_sync_icache()
that translates the vm_map_t argument to a pmap_t.
o Introduce pmap_sync_icache() to all PMAP implementations. For powerpc
it replaces the pmap_page_executable() function, added to solve
the I-cache problem in uiomove_fromphys().
o In proc_rwmem() call vm_sync_icache() when writing to a page that
has execute permissions. This assures that when breakpoints are
written, the I-cache will be coherent and the process will actually
hit the breakpoint.
o This also fixes the Book-E PMAP implementation that was missing
necessary locking while trying to deal with the I-cache coherency
in pmap_enter() (read: mmu_booke_enter_locked).
The key property of this change is that the I-cache is made coherent
*after* writes have been done. Doing it in the PMAP layer when adding
or changing a mapping means that the I-cache is made coherent *before*
any writes happen. The difference is key when the I-cache prefetches.
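A minimal sketch of the wrapper and of the proc_rwmem() call site described above; the local variable names are illustrative:

    /* MI wrapper: translate the map to its pmap and defer to the MD code. */
    void
    vm_sync_icache(vm_map_t map, vm_offset_t va, vm_offset_t sz)
    {
        pmap_sync_icache(vm_map_pmap(map), va, sz);
    }

    /* In proc_rwmem(): keep the I-cache coherent after writing a breakpoint. */
    if (writing && (prot & VM_PROT_EXECUTE) != 0)
        vm_sync_icache(map, uva, len);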
|
by looking at the bases used for non-relocatable executables by gnu ld(1),
and adjusting it slightly.
Discussed with: bz
Reviewed by: kan
Tested by: bz (i386, amd64), bsam (linux)
MFC after: some time
|
first and the native ia32 compat as middle (before other things).
Old brandinfo, as well as third-party brands like linux, kfreebsd, etc.,
stays at SI_ORDER_ANY, coming last.
The reason for this is only to make sure that, even in case we
overflow the MAX_BRANDS-sized array, the native FreeBSD brandinfo
would still be there and the system would be operational.
Reviewed by: kib
MFC after: 1 month
|
Reviewed by: jhb
MFC after: 3 weeks
|
architecture-specific include file containing the _ALIGN*
stuff which <sys/socket.h> needs.
|
has proven to have a good effect when entering KDB by using an NMI,
but it completely violates all the good rules about keeping interrupts
disabled while holding a spinlock on other occasions. This can be the
cause of deadlocks on events where a normal IPI_STOP is expected.
* Add a new IPI called IPI_STOP_HARD on all the supported architectures.
This IPI is responsible for sending a stop message among CPUs using a
privileged channel when available. Otherwise it just matches a normal
IPI_STOP.
Right now the IPI_STOP_HARD functionality uses an NMI on the ia32 and
amd64 architectures, while on the others it has the effect of a normal
IPI_STOP. It is the responsibility of maintainers to eventually
implement a hard stop where necessary and possible.
* Use the new IPI facility to implement a new public SMP kernel
function called stop_cpus_hard(). It mirrors stop_cpus() but uses the
privileged channel for the stopping facility.
* Let KDB use the newly introduced stop_cpus_hard() and leave
stop_cpus() for all the other cases.
* Disable interrupts on CPU0 when starting the process of suspending the APs.
* Style cleanup and comment additions.
This patch should fix the reboot/shutdown deadlocks that many users
have been reporting on the mailing lists.
Please don't forget to remove the STOP_NMI option from your kernel
config file.
Reviewed by: jhb
Tested by: pho, bz, rink
Approved by: re (kib)
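For illustration, how a consumer such as KDB can use the facility; the cpumask_t handling follows the era's API and is an assumption:

    cpumask_t other_cpus;

    other_cpus = PCPU_GET(other_cpus);  /* every CPU but the current one */
    stop_cpus_hard(other_cpus);         /* NMI-backed on i386/amd64 */
    /* ... debugger or shutdown work runs with the other CPUs parked ... */
    restart_cpus(other_cpus);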
|
a device pager (OBJT_DEVICE) object in that it uses fictitious pages to
provide aliases to other memory addresses. The primary difference is that
it uses an sglist(9) to determine the physical addresses for a given offset
into the object instead of invoking the d_mmap() method in a device driver.
Reviewed by: alc
Approved by: re (kensmith)
MFC after: 2 weeks
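For illustration, a sketch of building an sglist(9) and handing it to the new pager; the vm_pager_allocate() argument list shown is an approximation, as it has varied over time:

    struct sglist *sg;
    vm_object_t obj;

    sg = sglist_alloc(1, M_WAITOK);             /* room for one segment */
    sglist_append_phys(sg, paddr, len);         /* physical range to alias */
    obj = vm_pager_allocate(OBJT_SG, sg, len,
        VM_PROT_READ | VM_PROT_WRITE, 0, NULL); /* offset 0, no credentials */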
|
dependent memory attributes:
Rename vm_cache_mode_t to vm_memattr_t. The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.
Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.
Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes. Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures. The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map. The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:
kmem_alloc_contig() can now be used to allocate kernel memory with
non-default memory attributes on amd64 and i386.
vm_page_alloc() and the device pager will set the memory attributes
for the real or fictitious page according to the object's default
memory attributes.
Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.
Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386. In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.
In collaboration with: jhb
Approved by: re (kib)
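A hedged example of the resulting interfaces; the kmem_alloc_contig() parameter list shown follows the prototype of that era, and the variables are illustrative:

    vm_offset_t va;

    /* Kernel memory with a non-default attribute (write-combining). */
    va = kmem_alloc_contig(kernel_map, size, M_WAITOK, 0, ~(vm_paddr_t)0,
        PAGE_SIZE, 0, VM_MEMATTR_WRITE_COMBINING);

    /* Changing a single page's machine-dependent attribute. */
    pmap_page_set_memattr(m, VM_MEMATTR_UNCACHEABLE);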
|
o add to platforms where it was missing (arm, i386, powerpc, sparc64, sun4v)
o define as "1" on amd64 and i386 where there is no restriction
o make the type returned consistent with ALIGN
o remove _ALIGNED_POINTER
o make associated comments consistent
Reviewed by: bde, imp, marcel
Approved by: re (kensmith)
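For reference, a sketch of the typical shape of the macro on a strict-alignment platform; the exact definition varies per <machine/param.h>:

    /*
     * Strict-alignment platforms check that the pointer is a multiple of
     * sizeof(t); amd64 and i386 simply define this as 1.
     */
    #define ALIGNED_POINTER(p, t)   ((((uintptr_t)(p)) & (sizeof(t) - 1)) == 0)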
|
Approved by: re@ (rwatson)
|
Noticed by: jmallett
Approved by: re (kib)
|
IF_ADDR_UNLOCK() across network device drivers when accessing the
per-interface multicast address list, if_multiaddrs. This will
allow us to change the locking strategy without affecting our driver
programming interface or binary interface.
For two wireless drivers, remove unnecessary locking, since they
don't actually access the multicast address list.
Approved by: re (kib)
MFC after: 6 weeks
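A sketch of the driver-side pattern this standardizes; the filter-programming step is a placeholder:

    struct ifmultiaddr *ifma;

    IF_ADDR_LOCK(ifp);
    TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
        if (ifma->ifma_addr->sa_family != AF_LINK)
            continue;
        /* program LLADDR((struct sockaddr_dl *)ifma->ifma_addr)
           into the hardware multicast filter */
    }
    IF_ADDR_UNLOCK(ifp);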
|
required by video card drivers. Specifically, this change introduces
vm_cache_mode_t with an appropriate VM_CACHE_DEFAULT definition on all
architectures. In addition, this change adds a vm_cache_mode_t parameter
to kmem_alloc_contig() and vm_phys_alloc_contig(). These will be the
interfaces for allocating mapped kernel memory and physical memory,
respectively, with non-default cache modes.
In collaboration with: jhb
|
- Modules and kernel code alike may use DPCPU_DEFINE(),
DPCPU_GET(), DPCPU_SET(), etc. akin to the statically defined
PCPU_*. They require only one instruction more than PCPU_* and are
virtually the same as __thread for built-in code and much faster for
shared objects. DPCPU variables can be initialized when defined.
- Modules are supported by relocating the module's per-cpu linker set
over space reserved in the kernel. Modules may fail to load if there
is insufficient space available.
- Track space available for modules with a one-off extent allocator.
Freeing may block while allocating memory for an extent.
Reviewed by: jhb, rwatson, kan, sam, grehan, marius, marcel, stas
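A hedged example of the new interface; the counter and function are invented for illustration:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pcpu.h>

    /* One instance of this counter exists per CPU, in modules or the kernel. */
    DPCPU_DEFINE(u_long, frob_count);

    static void
    frob(void)
    {
        critical_enter();       /* stay on the current CPU */
        DPCPU_SET(frob_count, DPCPU_GET(frob_count) + 1);
        critical_exit();
    }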
|
on the directory like we have for all other target architectures.
Discussed with: imp (kind of)
|
pcib_read/write_config DEVMETHOD.
|
device shutdown method.
|
uart_8250. Remove it here since the UART on the ADM5120 isn't the
typical 16550: it's completely different.
|
signature checking was turned on.
|
mapping that permits write access. Otherwise, pmap_remove_write() will not
remove write access from any of the page's mappings.
|
i386. Otherwise, my next to last commit (r192628) to this file doesn't
actually compile.
|
configured to trap on a write access unless *all* of the page's dirty bits
are set.
|
write fault or while wiring a mapping that must support write access.
In general, this change should reduce the number of traps that occur for
the purpose of setting the modified bit. More specifically, this change
should prevent traps while holding locks in a sysctl handler. See
kern/kern_sysctl.c revisions 1.168 and 1.195 (svn r192160) for further
details.
Tested by: gonzo
|
possible future I-cache coherency operation can succeed. On ARM
for example, the L1 cache can be (is) virtually mapped, which
means that any I/O that uses temporary mappings will not see the
I-cache made coherent. On ia64 a similar behaviour has been
observed. By flushing the D-cache, execution of binaries backed
by md(4) and/or NFS works reliably.
For Book-E (powerpc), execution over NFS exhibits SIGILL once in
a while as well, though cpu_flush_dcache() hasn't been implemented
yet.
Doing an explicit D-cache flush as part of the non-DMA based I/O
read operation eliminates the need to do it as part of the
I-cache coherency operation itself and as such avoids pessimizing
the DMA-based I/O read operations for which the D-cache is already
flushed/invalidated. It also allows future optimizations whereby
the bcopy() followed by the D-cache flush can be integrated into a
single operation, which could be implemented using on-chip DMA
engines, bypassing the D-cache altogether.
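A sketch of the pattern in a non-DMA read path; the buffer handling around the call is illustrative:

    /* Copy the data into the buffer... */
    bcopy(src, bp->b_data + off, len);
    /* ...then push it out of the D-cache so that a later I-cache
       coherency operation (e.g. before executing the page) sees it. */
    cpu_flush_dcache(bp->b_data + off, len);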
|
Reimplement "kernel_pmap" in the standard way.
Eliminate unused variables. (These are mostly variables that were
discarded by the machine-independent layer after FreeBSD 4.x.)
Properly handle a vm_page_alloc() failure in pmap_init().
Eliminate dead or legacy (FreeBSD 4.x) code.
Eliminate unnecessary page queues locking.
Eliminate some excess white space.
Correct the synchronization of pmap_page_exists_quick().
Tested by: gonzo
|
a fair number of static data structures, making this an unlikely
option to try to change without also changing source code. [1]
Change default cache line size on ia64, sparc64, and sun4v to 128
bytes, as this was what rtld-elf was already using on those
platforms. [2]
Suggested by: bde [1], jhb [2]
MFC after: 2 weeks
|
Introduce pmap_try_insert_pv_entry(), a function that conditionally
creates a pv entry if the number of entries is below the high water mark
for pv entries.
Introduce pmap_enter_quick_locked() and use it to reimplement
pmap_enter_object(). The old implementation was broken. For example,
it could block while holding a mutex lock.
Change pmap_enter_quick_locked() to fail rather than wait if it is
unable to allocate a page table page. This prevents a race between
pmap_enter_object() and the page daemon. Specifically, an inactive
page that is a successor to the page that was given to
pmap_enter_quick_locked() might become a cache page while
pmap_enter_quick_locked() waits and later pmap_enter_object() maps
the cache page, violating the invariant that cache pages are never
mapped. Similarly, change
pmap_enter_quick_locked() to call pmap_try_insert_pv_entry() rather
than pmap_insert_entry(). Generally speaking,
pmap_enter_quick_locked() is used to create speculative mappings. So,
it should not try hard to allocate memory if free memory is scarce.
Tested by: gonzo
|
MFC after: 2 weeks
Suggested by: alc
|
CACHE_LINE_SIZE constant. These constants are intended to
over-estimate the cache line size, and be used at compile-time
when a run-time tuning alternative isn't appropriate or
available.
Defaults for all architectures are 64 bytes, except powerpc
where it is 128 bytes (used on G5 systems).
MFC after: 2 weeks
Discussed on: arch@
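For reference, the shape of the MD definitions and a typical compile-time use (the structure is invented for the example):

    #define CACHE_LINE_SHIFT    6
    #define CACHE_LINE_SIZE     (1 << CACHE_LINE_SHIFT)     /* 64 bytes */

    /* Keep per-CPU statistics on separate cache lines to avoid false sharing. */
    struct pcpu_stats {
        u_long  hits;
        u_long  misses;
    } __aligned(CACHE_LINE_SIZE);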
|
1) Move the new field (brand_note) to the end of the Brandinfo structure.
2) Add a new flag BI_BRAND_NOTE that indicates that the brand_note pointer
is valid.
3) Use the brand_note field only if the BI_BRAND_NOTE flag is set. Old
modules won't have the flag set, so for them the new brand_note field
will be ignored.
Suggested by: jhb
Reviewed by: jhb
Approved by: kib (mentor)
MFC after: 6 days
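A hedged sketch of how a brand entry advertises its note with the new field and flag; the initializer values are illustrative:

    static Elf32_Brandinfo freebsd_brand_info = {
        .brand          = ELFOSABI_FREEBSD,
        .machine        = EM_386,
        .compat_3_brand = "FreeBSD",
        .emul_path      = NULL,
        .interp_path    = "/libexec/ld-elf.so.1",
        .sysvec         = &elf32_freebsd_sysvec,
        .interp_newpath = NULL,
        .brand_note     = &elf32_freebsd_brandnote,
        .flags          = BI_CAN_EXEC_DYN | BI_BRAND_NOTE
    };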
|
Follow one of the two most common indent schemes in this file.
This unbreaks a few mips kernel builds.
|
to the full path of the image that is being executed.
Increase AT_COUNT.
Remove the no-longer-true comment about types used in Linux ELF binaries;
the listed types contain FreeBSD-specific entries.
Reviewed by: kan
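A userland sketch of consuming the new entry by walking the auxiliary vector; the helper function is invented for the example:

    #include <sys/types.h>
    #include <machine/elf.h>
    #include <stddef.h>

    static const char *
    find_execpath(Elf_Auxinfo *aux)
    {
        for (; aux->a_type != AT_NULL; aux++)
            if (aux->a_type == AT_EXECPATH)
                return ((const char *)aux->a_un.a_ptr);
        return (NULL);
    }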
|
".note.ABI-tag" section.
The brand search order is changed: the ".note.ABI-tag" section is now
examined first.
Move the code which fetches osreldate for an ELF binary into the
check_note() handler.
PR: 118473
Approved by: kib (mentor)
|
replace the amd64-ish version with a blank version.
|
kernel. Rather than just kick off the page daemon, we actively retire
more mappings. The inner loop now looks a lot like the inner loop of
pmap_remove_all.
Also, get_pv_entry() can't return NULL now, so remove the panic for that case.
Reviewed by: alc@
|
here, but inconsistently. Change all instances of it to kernel_pmap
to match the rest of the code.
|