Commit message log
|
config handler. Tidy up various local APIC initialization.
|
cpu_exit() as this is already performed in cpu_thread_exit() and the
debug state is per-thread rather than per-process.
|
cuts to the chase and fills in a provided s/g list. This is meant to optimize
out the cost of the callback, which serves little purpose for mbufs since
mbuf loads will never be deferred. This is just for amd64 and i386 at the
moment; other arches will be coming shortly.
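
As a rough usage sketch, assuming the bus_dmamap_load_mbuf_sg()
interface implied above (tag, map, mbuf chain, caller-provided segment
array, returned segment count, flags); sc->dmat, sc->map, m, and
MAX_SEGS are hypothetical driver-local names:

    bus_dma_segment_t segs[MAX_SEGS];
    int error, nsegs;

    /* Fill the caller-provided S/G list directly; since mbuf loads
     * are never deferred, no callback is needed. */
    error = bus_dmamap_load_mbuf_sg(sc->dmat, sc->map, m, segs,
        &nsegs, BUS_DMA_NOWAIT);
    if (error == 0) {
            /* hand segs[0 .. nsegs-1] to the hardware */
    }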
|
from 4.x kernel config files. Users wishing to upgrade from 4.x to 6
will need to go through 5.x, or grab this script from there. These
scripts will remain in RELENG_5...
|
o Use capitalized "Ethernet" for consistency.
|
Export minimal symbols to allow this to happen.
|
on entry and it assumes the responsibility for releasing the page queues
lock if it must sleep.
Remove a bogus comment from pmap_enter_quick().
Using the first change, modify vm_map_pmap_enter() so that the page queues
lock is acquired and released once, rather than each time that a page
is mapped.
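
Schematically, the vm_map_pmap_enter() loop then looks something like
the following (a simplified sketch with illustrative names, not the
actual code):

    vm_page_lock_queues();
    for (p = p_start; p != NULL; p = TAILQ_NEXT(p, listq)) {
            /* Page queues are locked on entry; pmap_enter_quick()
             * drops the lock itself only if it must sleep. */
            pmap_enter_quick(map->pmap, addr, p, prot);
            addr += PAGE_SIZE;
    }
    vm_page_unlock_queues();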
|
and faster in cases such as pmap_kextract(), where the pde is known to
exist.
|
In such cases, the busying of the page and the unlocking of the
containing object by vm_map_pmap_enter() and vm_fault_prefault() is
unnecessary overhead. To eliminate this overhead, this change
modifies pmap_enter_quick() so that it expects the object to be locked
on entry and it assumes the responsibility for busying the page and
unlocking the object if it must sleep. Note: alpha, amd64, i386 and
ia64 are the only implementations optimized by this change; arm,
powerpc, and sparc64 still conservatively busy the page and unlock the
object within every pmap_enter_quick() call.
Additionally, this change is the first case where we synchronize
access to the page's PG_BUSY flag and busy field using the containing
object's lock rather than the global page queues lock. (Modifications
to the page's PG_BUSY flag and busy field have asserted both locks for
several weeks, enabling an incremental transition.)
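
A hedged sketch of the resulting calling convention on the optimized
pmaps (macro names of that era; details simplified):

    VM_OBJECT_LOCK(object);
    /* The object lock, rather than the page queues lock, now covers
     * the page's PG_BUSY flag and busy field.  pmap_enter_quick()
     * busies the page and drops the object lock only in the rare
     * case that it must sleep for a page-table page. */
    pmap_enter_quick(pmap, addr, m, prot);
    VM_OBJECT_UNLOCK(object);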
|
possible, such as the inner loop of pmap_copy().
Remove two comments that apply to i386 but not amd64.
|
pmap_remove()'s inner loop. Instead, call pmap_pde_to_pte(), a new
function, prior to the inner loop.
Reviewed by: peter@, tegge@
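
In outline, the change replaces a full page-table walk per iteration
with one PDE lookup followed by pointer arithmetic (sketch only; the
loop bodies are elided):

    /* before: pmap_pte() re-walks PML4/PDP/PD on every iteration */
    for (; sva < eva; sva += PAGE_SIZE) {
            pte = pmap_pte(pmap, sva);
            /* ... operate on *pte ... */
    }

    /* after: locate the PDE once, then step through its PT page */
    pte = pmap_pde_to_pte(pde, sva);
    for (; sva < eva; pte++, sva += PAGE_SIZE) {
            /* ... operate on *pte ... */
    }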
|
specified register, but a pointer to the in-memory representation of
that value. The reason for this is twofold:
1. Not all registers can be represented by a register_t. In particular
FP registers fall in that category. Passing the new register value
by reference instead of by value makes this point moot.
2. When we receive a G or P packet (both write a register), the packet
will have the register value in target-byte order and in the memory
representation (modulo the fact that bytes are sent as 2 printable
hexadecimal numbers of course). We only need to decode the packet to
have a pointer to the register value.
This change fixes the bug of extracting the register value of the P
packet as a hexadecimal number instead of as a bit array. The quick
(and dirty) fix of byte-swapping the register value in gdb_cpu_setreg(),
as had been added on i386 and amd64, is therefore obsolete and has in
fact been removed.
Tested on: alpha, amd64, i386, ia64, sparc64
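
The distinction matters because decoding the packet's hex pairs
straight into a buffer yields the target's in-memory byte order,
whereas parsing them as one hexadecimal number (e.g. with strtoul())
yields a host-order value that would need byte-swapping on
little-endian targets. A standalone illustration, not the gdb(4)
stub's actual code:

    #include <stddef.h>
    #include <stdio.h>

    /* Decode "aabbccdd" into buf[] = {0xaa, 0xbb, 0xcc, 0xdd}: the
     * value exactly as laid out in target memory, no byte swap. */
    static void
    hex2mem(const char *hex, unsigned char *buf, size_t len)
    {
            for (size_t i = 0; i < len; i++)
                    sscanf(hex + 2 * i, "%2hhx", &buf[i]);
    }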
|
lines while here.
|
possible, like on i386. Registers are handled differently for
caller- vs. callee-saved registers.
|
eliminate the evil cpu_reset_proxy code now that it will never be
activated. i386 should pick this up as well.
|
trigger for other misbehaviour in the sym driver that was causing freezes at
boot. Thanks to phk@ for reporting and testing this.
|
including other headers.
|
Allocate the bounce zone at either tag creation or map creation to help
avoid null-pointer derefs later on. Track total pages per zone so that
each zone can get a minimum allocation at tag creation time instead of
being defeated by misbehaving tags that suck up the max amount.
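
A sketch of the per-zone accounting this implies; the structure and
names below are illustrative, not the actual busdma internals:

    struct bounce_zone {
            int     total_bpages;   /* pages this zone owns */
            int     free_bpages;    /* pages currently unused */
    };

    /* At tag (or map) creation, top the zone up to a guaranteed
     * minimum so one misbehaving tag cannot consume the global
     * maximum and starve the others. */
    if (bz->total_bpages < MIN_BPAGES)
            alloc_bounce_pages(bz, MIN_BPAGES - bz->total_bpages);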
|
Reviewed by: arch@
|
Reviewed by: arch@
|
Reviewed by: arch@
|
not through bouncing.
|
Use tag-specific pools of bounce pages instead of a single global pool.
|
rev 1.61 (scottl): Add KTR tracing
rev 1.62 (scottl): Optimize (td->pmap, inlines, etc)
|
amd64 and i386 anyway. The stats are only kept for informational purposes.
|
Discussed on: -current
|
control the number of lines per page rather than a constant. The variable
can be examined and changed in ddb as '$lines'. Setting the variable to
0 will effectively turn off paging.
- Change db_putchar() to force out pending whitespace before outputting
newlines and carriage returns so that one can rub out content on the
current line via '\r \r' type strings.
- Change the simple pager to rub out the --More-- prompt explicitly when
the routine exits.
- Add some aliases to the simple pager to make it more compatible with
more(1): 'e' and 'j' do a single line, 'd' does half a page, and
'f' does a full page.
MFC after: 1 month
Inspired by: kris
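
For example, 'set $lines 0' at the db> prompt turns paging off and
'set $lines 24' pages every 24 lines (syntax assumed to follow
ddb(4)'s set command):

    db> set $lines 0
    db> set $lines 24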
|
hw.pci.host_mem_start tunable. Add comments to TUNABLE_INT and
TUNABLE_QUAD recommending against their use.
MFC after: 3 weeks
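
Presumably the concern is width: an int (or a quad on 32-bit
platforms) is the wrong size for address-valued tunables, while a
long always matches the pointer width. A hedged example, with an
illustrative default value:

    static u_long host_mem_start = 0x80000000ul;
    TUNABLE_ULONG("hw.pci.host_mem_start", &host_mem_start);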
|
that was greater than 4G. I originally used the same values as i386 in
order to save opening a new PML4 page slot, but in the day of gigabytes
of memory, worrying about a 4K page seems futile. Moving from 8 to 32G
moves the page to a different index; it doesn't increase the number of
pages used.
|
This removes the last MD portion of acpi_cpu.c.
MFC after: 2 weeks
|
Restructure pmap_enter() to prevent the loss of a page modified (PG_M) bit
in a race between processors. (This restructuring assumes the newly atomic
pte_load_store() for correct operation.)
Reviewed by: tegge@
PR: i386/61852
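
The race: between a separate load and store of the PTE, another CPU
can set PG_M, and the subsequent store would silently overwrite it.
An atomic exchange returns the concurrent update instead. A minimal
sketch built on atomic_cmpset_long(); the real pte_load_store() may
be implemented differently:

    static __inline pt_entry_t
    pte_load_store(pt_entry_t *ptep, pt_entry_t newpte)
    {
            pt_entry_t oldpte;

            /* Retry until the swap hits the value we actually saw,
             * so a concurrent PG_M update ends up in oldpte. */
            do {
                    oldpte = *ptep;
            } while (!atomic_cmpset_long(ptep, oldpte, newpte));
            return (oldpte);
    }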
|
the raw values including for child process statistics and only compute the
system and user timevals on demand.
- Fix the various kern_wait() syscall wrappers to only pass in a rusage
pointer if they are going to use the result.
- Add a kern_getrusage() function for the ABI syscalls to use so that they
don't have to play stackgap games to call getrusage().
- Fix the svr4_sys_times() syscall to just call calcru() to calculate the
times it needs rather than calling getrusage() twice with associated
stackgap, etc.
- Add a new rusage_ext structure to store raw time stats such as tick counts
for user, system, and interrupt time as well as a bintime of the total
runtime. A new p_rux field in struct proc replaces the same inline fields
from struct proc (i.e. p_[isu]ticks, p_[isu]u, and p_runtime). A new p_crux
field in struct proc contains the "raw" child time usage statistics.
ruadd() has been changed to handle adding the associated rusage_ext
structures as well as the values in rusage. Effectively, the values in
rusage_ext replace the ru_utime and ru_stime values in struct rusage. These
two fields in struct rusage are no longer used in the kernel.
- calcru() has been split into a static worker function calcru1() that
calculates appropriate timevals for user and system time as well as updating
the rux_[isu]u fields of a passed-in rusage_ext structure. calcru() uses a
copy of the process' p_rux structure to compute the timevals after updating
the runtime appropriately if any of the threads in that process are
currently executing. It also now only locks sched_lock internally while
doing the rux_runtime fixup. calcru() now only requires the caller to
hold the proc lock and calcru1() only requires the proc lock internally.
calcru() also no longer allows callers to ask for an interrupt timeval
since none of them actually did.
- calcru() now correctly handles threads executing on other CPUs.
- A new calccru() function computes the child system and user timevals by
calling calcru1() on p_crux. Note that this means that any code that wants
child times must now call this function rather than reading from p_cru
directly. This function also requires the proc lock.
- This finishes the locking for rusage and friends so some of the Giant locks
in exit1() and kern_wait() are now gone.
- The locking in ttyinfo() has been tweaked so that a shared lock of the
proctree lock is used to protect the process group rather than the process
group lock. By holding this lock until the end of the function we now
ensure that the process/thread that we pick to dump info about will no
longer vanish while we are trying to output its info to the console.
Submitted by: bde (mostly)
MFC after: 1 month
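
From this description, the new structure has roughly the following
shape (field names inferred from the rux_* references above, not
necessarily the exact definition):

    struct rusage_ext {
            struct bintime  rux_runtime;    /* total runtime */
            u_int64_t       rux_uticks;     /* statclock user ticks */
            u_int64_t       rux_sticks;     /* statclock system ticks */
            u_int64_t       rux_iticks;     /* statclock intr ticks */
            u_int64_t       rux_uu;         /* computed user time */
            u_int64_t       rux_su;         /* computed system time */
            u_int64_t       rux_iu;         /* computed intr time */
    };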
|
that is no longer required. (In fact, it is not clear that it was ever
required in HEAD or RELENG_4; only RELENG_3 required a work-around.) Now,
as before revision 1.251, if the preexisting PTE is invalid, pmap_enter()
does not call pmap_invalidate_page() to update the TLB(s).
Note: Even with this change, the handling of a copy-on-write fault is
inefficient; in such cases pmap_enter() calls pmap_invalidate_page() twice.
Discussed with: bde@
PR: kern/16568
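
In effect, the TLB shootdown at the end of pmap_enter() becomes
conditional on the old PTE having been valid; schematically (pmap-style
names, sketch only):

    /* Only a previously valid mapping can be cached by a TLB, so
     * only then is an invalidation required. */
    if ((origpte & PG_V) != 0)
            pmap_invalidate_page(pmap, va);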