path: root/sys/amd64
Commit message | Author | Date | Files | Lines (-/+)
* Jumbo MFi386: use bitmapped IPI handler. Update elcr and default mptable
  config handler. Tidy up various local apic initialization.
  peter | 2005-01-21 | 13 files | -138/+298
* MFi386: handle PSL_T properly across fork. Typo fix.
  peter | 2005-01-21 | 1 file | -1/+13
* MFi386: whitespace, copyright header, etc updates
  peter | 2005-01-21 | 6 files | -7/+3
* MFi386: use %rip - 1 for the symbol search address (for noreturn funcs)
  peter | 2005-01-21 | 1 file | -2/+8
* Remove redundant code to drop per-thread debug register state from
  cpu_exit() as this is already performed in cpu_thread_exit() and the
  debug state is per-thread rather than per-process.
  jhb | 2005-01-14 | 1 file | -7/+0
* There are no PC98 amd64 machines, so gc a few stray ifdefs.
  imp | 2005-01-11 | 2 files | -9/+0
* Introduce bus_dmamap_load_mbuf_sg(). Instead of taking a callback arg,
  this cuts to the chase and fills in a provided s/g list. This is meant to
  optimize out the cost of the callback, which serves little purpose for
  mbufs because mbuf loads will never be deferred. This is just for amd64
  and i386 at the moment; other arches will be coming shortly.
  scottl | 2005-01-07 | 2 files | -13/+53
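A minimal sketch of how a driver transmit path might use the s/g-list variant described in the commit above. The softc layout, mydev_encap(), mydev_post_descriptor() and MYDEV_MAXSEGS are hypothetical driver names; the exact prototype should be checked against machine/bus.h of that era.

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <machine/bus.h>

    #define MYDEV_MAXSEGS   32                      /* illustrative limit */

    struct mydev_softc {
            bus_dma_tag_t   tx_tag;
            bus_dmamap_t    tx_map;
    };

    static void mydev_post_descriptor(struct mydev_softc *, bus_addr_t,
        bus_size_t);

    /* Map an mbuf chain for transmit without a deferred-load callback;
     * the segment array is filled in directly by the load call. */
    static int
    mydev_encap(struct mydev_softc *sc, struct mbuf *m)
    {
            bus_dma_segment_t segs[MYDEV_MAXSEGS];
            int error, i, nsegs;

            error = bus_dmamap_load_mbuf_sg(sc->tx_tag, sc->tx_map, m,
                segs, &nsegs, BUS_DMA_NOWAIT);
            if (error != 0)
                    return (error); /* mbuf loads are never deferred */
            for (i = 0; i < nsegs; i++)
                    mydev_post_descriptor(sc, segs[i].ds_addr,
                        segs[i].ds_len);
            return (0);
    }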
* These are no longer relevant. They are scripts for extracting hints from
  4.x kernel config files. Users wishing to upgrade from 4.x to 6 will need
  to go through 5.x, or grab this script from there. These scripts will
  remain in RELENG_5...
  imp | 2005-01-07 | 1 file | -115/+0
* Begin all license/copyright comments with /*-
  imp | 2005-01-05 | 34 files | -34/+34
* PC98 will never be defined for amd64
  imp | 2005-01-05 | 1 file | -5/+0
* o Use tab instead of spaces for puc(4) line.
  o Use capitalized "Ethernet" for consistency.
  kuriyama | 2005-01-05 | 1 file | -4/+4
* Minor sync to i386 GENERIC in the form of comments and whitespace.
  jhb | 2004-12-30 | 1 file | -2/+3
* MFi386: Restore cpu_reset proxy code to enable reset from ddb on an AP.
  njl | 2004-12-27 | 1 file | -4/+39
* Reduce diffs to i386.
  njl | 2004-12-27 | 1 file | -16/+11
* Get rid of #ifdef for legacy system. Move that into the MD code.
  Export minimal symbols to allow this to happen.
  imp | 2004-12-24 | 1 file | -0/+7
* Modify pmap_enter_quick() so that it expects the page queues to be locked
  on entry and it assumes the responsibility for releasing the page queues
  lock if it must sleep. Remove a bogus comment from pmap_enter_quick().
  Using the first change, modify vm_map_pmap_enter() so that the page queues
  lock is acquired and released once, rather than each time that a page is
  mapped.
  alc | 2004-12-23 | 1 file | -3/+2
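The caller-side change is essentially a lock-hoisting pattern. A minimal sketch of the idea with simplified argument lists and loop bounds, not the actual vm_map_pmap_enter() code:

    /* Before: lock and unlock around every single mapping. */
    for (va = start; m != NULL; m = TAILQ_NEXT(m, listq), va += PAGE_SIZE) {
            vm_page_lock_queues();
            mpte = pmap_enter_quick(pmap, va, m, mpte);
            vm_page_unlock_queues();
    }

    /* After: one acquisition for the whole run; pmap_enter_quick() itself
     * releases the page queues lock if it has to sleep. */
    vm_page_lock_queues();
    for (va = start; m != NULL; m = TAILQ_NEXT(m, listq), va += PAGE_SIZE)
            mpte = pmap_enter_quick(pmap, va, m, mpte);
    vm_page_unlock_queues();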
* Use vtopde() instead of pmap_pde() in pmap_kextract(); vtopde() is smaller
  and faster in cases, such as pmap_kextract(), where the pde is known to
  exist.
  alc | 2004-12-21 | 1 file | -1/+1
* In the common case, pmap_enter_quick() completes without sleeping. In such
  cases, the busying of the page and the unlocking of the containing object
  by vm_map_pmap_enter() and vm_fault_prefault() is unnecessary overhead.
  To eliminate this overhead, this change modifies pmap_enter_quick() so
  that it expects the object to be locked on entry and it assumes the
  responsibility for busying the page and unlocking the object if it must
  sleep.
  Note: alpha, amd64, i386 and ia64 are the only implementations optimized
  by this change; arm, powerpc, and sparc64 still conservatively busy the
  page and unlock the object within every pmap_enter_quick() call.
  Additionally, this change is the first case where we synchronize access to
  the page's PG_BUSY flag and busy field using the containing object's lock
  rather than the global page queues lock. (Modifications to the page's
  PG_BUSY flag and busy field have asserted both locks for several weeks,
  enabling an incremental transition.)
  alc | 2004-12-15 | 1 file | -2/+12
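The new convention on the optimized architectures can be summarized as an interface contract. The prototype below follows the pmap interface of the time but is quoted from memory, not from the tree:

    /*
     * Caller (vm_map_pmap_enter(), vm_fault_prefault()): holds the VM
     * object lock and the page queues lock, and no longer busies the page
     * before the call.
     *
     * pmap_enter_quick(): only in the rare case that it must sleep (e.g.
     * to allocate a page table page) does it busy the page and unlock the
     * object around the sleep; in the common, non-sleeping case neither
     * lock is touched.
     */
    vm_page_t pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m,
        vm_page_t mpte);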
* MFi386: rev 1.12: re-allow fast interrupts to cause preemption
  peter | 2004-12-06 | 1 file | -2/+0
* Replace (inlined) pmap_pte() calls with smaller, faster code where
  possible, such as the inner loop of pmap_copy(). Remove two comments that
  apply to i386 but not amd64.
  alc | 2004-12-04 | 1 file | -7/+7
* For efficiency, eliminate the call to pmap_pte() from pmap_protect()'s and
  pmap_remove()'s inner loop. Instead, call pmap_pde_to_pte(), a new
  function, prior to the inner loop.
  Reviewed by: peter@, tegge@
  alc | 2004-12-02 | 1 file | -14/+18
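The optimization is a classic hoisting of the page-table walk out of the per-page loop. A minimal sketch with illustrative variable names; the real loops in pmap_protect()/pmap_remove() carry additional checks (2MB pages, wrap-around, PV handling):

    for (; sva < eva; sva = va_next) {
            va_next = (sva + NBPDR) & ~PDRMASK;     /* end of this 2MB run */
            pde = pmap_pde(pmap, sva);              /* one walk per run... */
            if (pde == NULL || (*pde & PG_V) == 0)
                    continue;
            pte = pmap_pde_to_pte(pde, sva);        /* ...then a simple stride */
            for (va = sva; va < va_next; va += PAGE_SIZE, pte++) {
                    if ((*pte & PG_V) == 0)
                            continue;
                    /* per-page protect/remove work */
            }
    }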
* Change gdb_cpu_setreg() to not take the value to which to set the
  specified register, but a pointer to the in-memory representation of that
  value. The reason for this is twofold:
  1. Not all registers can be represented by a register_t. In particular,
     FP registers fall in that category. Passing the new register value by
     reference instead of by value makes this point moot.
  2. When we receive a G or P packet (both are for writing a register), the
     packet will have the register value in target byte order and in the
     memory representation (modulo the fact that bytes are sent as 2
     printable hexadecimal numbers, of course). We only need to decode the
     packet to have a pointer to the register value.
  This change fixes the bug of extracting the register value of the P packet
  as a hexadecimal number instead of as a bit array. The quick (and dirty)
  fix to bswap the register value in gdb_cpu_setreg(), as it had been added
  on i386 and amd64, can therefore be removed, and in fact has been.
  Tested on: alpha, amd64, i386, ia64, sparc64
  marcel | 2004-12-01 | 2 files | -5/+4
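The change boils down to a prototype change. The sketch below is illustrative only; the exact declarations live in machine/gdb_machdep.h and may differ in qualifiers:

    /* Old form (illustrative): the value was passed by value, which cannot
     * represent FP registers and forced a byte-swap of the hex-decoded
     * packet data on i386/amd64:
     *
     *     void gdb_cpu_setreg(int regnum, register_t val);
     */

    /* New form: the caller hands over a pointer into the decoded packet
     * buffer, already in target byte order and memory representation. */
    void gdb_cpu_setreg(int regnum, void *val);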
* Remove unused cnt variable for the SMP case. Trim some excessive blank
  lines while here.
  peter | 2004-11-30 | 1 file | -5/+1
* Update the gdb register extraction support to use the pcb wherever
  possible, like on i386. Registers are handled differently for caller- vs.
  callee-saved registers.
  peter | 2004-11-30 | 1 file | -23/+33
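The caller/callee distinction shows up in a register-lookup switch of roughly the following shape. This is an illustrative fragment only: the GDB_REG_* names are placeholders for the raw gdb register numbers used in the real code, and kdb_thrctx stands for the pcb of the thread being inspected.

    switch (regnum) {
    case GDB_REG_RBX:               /* callee-saved: preserved in the pcb */
            return (&kdb_thrctx->pcb_rbx);          /* across cpu_switch() */
    case GDB_REG_RBP:
            return (&kdb_thrctx->pcb_rbp);
    case GDB_REG_RSP:
            return (&kdb_thrctx->pcb_rsp);
    case GDB_REG_PC:
            return (&kdb_thrctx->pcb_rip);
    default:
            return (NULL);  /* caller-saved: not preserved at switch time */
    }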
* MFi386: join the %cr0 setup line now that i386 has lost the I386 ifdefs.
  peter | 2004-11-29 | 1 file | -2/+1
* Take advantage of the shutdown processing being wired to the BSP and
  eliminate the evil cpu_reset_proxy code now that it will never be
  activated. i386 should pick this up as well.
  peter | 2004-11-29 | 1 file | -45/+3
* Don't flag alignment constraints as a reason for bouncing. This fixes the
  trigger for other misbehaviour in the sym driver that was causing freezes
  at boot. Thanks to phk@ for reporting and testing this.
  scottl | 2004-11-29 | 1 file | -1/+1
* Don't include sys/user.h merely for its side-effect of recursively
  including other headers.
  das | 2004-11-27 | 5 files | -5/+5
* Remove an extra #include
  scottl | 2004-11-21 | 1 file | -1/+0
* Consolidate all of the bounce tests into the BUS_DMA_COULD_BOUNCE flag.
  Allocate the bounce zone at either tag creation or map creation to help
  avoid null-pointer derefs later on. Track total pages per zone so that
  each zone can get a minimum allocation at tag creation time instead of
  being defeated by mis-behaving tags that suck up the max amount.
  scottl | 2004-11-21 | 1 file | -29/+45
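A hedged sketch of the kind of consolidation the commit describes, not the literal busdma_machdep.c code: newtag, alloc_bounce_zone() and the exact bounce condition follow the i386/amd64 busdma code of the period and may differ in detail.

    /* At bus_dma_tag_create() time: record once whether this tag could
     * ever need bounce pages... */
    if (newtag->lowaddr < ptoa((vm_paddr_t)Maxmem))
            newtag->flags |= BUS_DMA_COULD_BOUNCE;

    /* ...and if so, set up its bounce zone immediately so that later map
     * creation and map loads never dereference a NULL zone pointer. */
    if ((newtag->flags & BUS_DMA_COULD_BOUNCE) != 0)
            error = alloc_bounce_zone(newtag);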
* Remove references to U area and garbage collect includes.
  Reviewed by: arch@
  das | 2004-11-20 | 1 file | -4/+1
* Remove UAREA_PAGES.
  Reviewed by: arch@
  das | 2004-11-20 | 1 file | -1/+0
* U areas are going away, so don't allocate one for process 0.
  Reviewed by: arch@
  das | 2004-11-20 | 1 file | -3/+0
* Revert part of rev 1.56. Tag boundaries are handled by splitting segments,
  not through bouncing.
  scottl | 2004-11-19 | 1 file | -9/+5
* MFi386 rev 1.63-1.64:
  Use tag-specific pools of bounce pages instead of a single global pool.
  scottl | 2004-11-10 | 1 file | -37/+142
* MFi386 1.238 (jhb): Allow hints to disable cpus
  peter | 2004-11-05 | 1 file | -1/+16
* MFi386:
  rev 1.61 (scottl): Add KTR tracing
  rev 1.62 (scottl): Optimize (td->pmap, inlines, etc)
  peter | 2004-11-05 | 1 file | -28/+79
* Don't use atomic ops to increment interrupt stats. This was only done on
  amd64 and i386 anyway. The stats are only kept for informational purposes.
  scottl | 2004-11-03 | 1 file | -3/+3
* Reduce annoying SCSI probing delay from 15 to 5 seconds in all GENERIC
  kernels.
  Discussed on: -current
  andre | 2004-11-02 | 1 file | -1/+1
* - Change the ddb paging "support" to use a variable (db_lines_per_page) to
    control the number of lines per page rather than a constant. The
    variable can be examined and changed in ddb as '$lines'. Setting the
    variable to 0 will effectively turn off paging.
  - Change db_putchar() to force out pending whitespace before outputting
    newlines and carriage returns so that one can rub out content on the
    current line via '\r \r' type strings.
  - Change the simple pager to rub out the --More-- prompt explicitly when
    the routine exits.
  - Add some aliases to the simple pager to make it more compatible with
    more(1): 'e' and 'j' do a single line. 'd' does half a page, and 'f'
    does a full page.
  MFC after: 1 month
  Inspired by: kris
  jhb | 2004-11-01 | 2 files | -2/+2
* Add TUNABLE_LONG and TUNABLE_ULONG, and use the latter for the
  hw.pci.host_mem_start tunable. Add comments to TUNABLE_INT and
  TUNABLE_QUAD recommending against their use.
  MFC after: 3 weeks
  des | 2004-10-31 | 1 file | -4/+3
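Usage of the new macro is a one-liner. A minimal sketch, where the variable name and default value are illustrative rather than the ones actually used for hw.pci.host_mem_start:

    #include <sys/param.h>
    #include <sys/kernel.h>

    /* Loader-settable default; overridden by setting
     * hw.pci.host_mem_start in /boot/loader.conf. */
    static u_long pci_host_mem_start = 0x80000000UL;
    TUNABLE_ULONG("hw.pci.host_mem_start", &pci_host_mem_start);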
* Whitespace cleanup
  des | 2004-10-31 | 1 file | -5/+5
* MFi386: preserve dcons buffer passed by loader.
  simokawa | 2004-10-28 | 1 file | -0/+16
* Raise MAXDSIZ from 8G to 32G. The old limit was just an arbitrary choice
  that was greater than 4G. I originally used the same values as i386 in
  order to save opening a new PML4 page slot, but in the day of gigabytes of
  memory, worrying about a 4K page seems futile. Moving from 8 to 32G moves
  the page to a different index; it doesn't increase the number of pages
  used.
  peter | 2004-10-27 | 1 file | -1/+1
* Print flags in the nexus for child devices.
  njl | 2004-10-14 | 1 file | -0/+2
* MFi386: sync with latest updates
  peter | 2004-10-11 | 1 file | -3/+36
* Move the code for halting the CPU (acpi_cpu_c1) into machdep files.
  This removes the last MD portion of acpi_cpu.c.
  MFC after: 2 weeks
  njl | 2004-10-11 | 2 files | -1/+8
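The routine being moved is tiny; it plausibly looks like the sketch below (enable interrupts, then halt until the next one), though the in-tree version may differ cosmetically.

    void
    acpi_cpu_c1(void)
    {

            __asm __volatile("sti; hlt");
    }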
* Make pte_load_store() an atomic operation in all cases, not just i386 PAE.
  Restructure pmap_enter() to prevent the loss of a page modified (PG_M) bit
  in a race between processors. (This restructuring assumes the newly atomic
  pte_load_store() for correct operation.)
  Reviewed by: tegge@
  PR: i386/61852
  alc | 2004-10-08 | 2 files | -11/+34
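An atomic pte_load_store() is essentially an atomic exchange. A minimal sketch, assuming it is built from a compare-and-set loop; the in-tree primitive and casts may differ:

    static __inline pt_entry_t
    pte_load_store(pt_entry_t *ptep, pt_entry_t newpte)
    {
            pt_entry_t oldpte;

            /* Retry until the swap happens atomically, so a PG_M or PG_A
             * bit set by another CPU between the load and the store is
             * never silently discarded. */
            do {
                    oldpte = *ptep;
            } while (!atomic_cmpset_long((volatile u_long *)ptep, oldpte,
                newpte));
            return (oldpte);
    }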
* Rework how we store process times in the kernel such that we always store
  the raw values, including for child process statistics, and only compute
  the system and user timevals on demand.
  - Fix the various kern_wait() syscall wrappers to only pass in a rusage
    pointer if they are going to use the result.
  - Add a kern_getrusage() function for the ABI syscalls to use so that they
    don't have to play stackgap games to call getrusage().
  - Fix the svr4_sys_times() syscall to just call calcru() to calculate the
    times it needs rather than calling getrusage() twice with associated
    stackgap, etc.
  - Add a new rusage_ext structure to store raw time stats such as tick
    counts for user, system, and interrupt time as well as a bintime of the
    total runtime. A new p_rux field in struct proc replaces the same inline
    fields from struct proc (i.e. p_[isu]ticks, p_[isu]u, and p_runtime). A
    new p_crux field in struct proc contains the "raw" child time usage
    statistics. ruadd() has been changed to handle adding the associated
    rusage_ext structures as well as the values in rusage. Effectively, the
    values in rusage_ext replace the ru_utime and ru_stime values in struct
    rusage. These two fields in struct rusage are no longer used in the
    kernel.
  - calcru() has been split into a static worker function calcru1() that
    calculates appropriate timevals for user and system time as well as
    updating the rux_[isu]u fields of a passed-in rusage_ext structure.
    calcru() uses a copy of the process' p_rux structure to compute the
    timevals after updating the runtime appropriately if any of the threads
    in that process are currently executing. It also now only locks
    sched_lock internally while doing the rux_runtime fixup. calcru() now
    only requires the caller to hold the proc lock and calcru1() only
    requires the proc lock internally. calcru() also no longer allows
    callers to ask for an interrupt timeval since none of them actually did.
  - calcru() now correctly handles threads executing on other CPUs.
  - A new calccru() function computes the child system and user timevals by
    calling calcru1() on p_crux. Note that this means that any code that
    wants child times must now call this function rather than reading from
    p_cru directly. This function also requires the proc lock.
  - This finishes the locking for rusage and friends so some of the Giant
    locks in exit1() and kern_wait() are now gone.
  - The locking in ttyinfo() has been tweaked so that a shared lock of the
    proctree lock is used to protect the process group rather than the
    process group lock. By holding this lock until the end of the function
    we now ensure that the process/thread that we pick to dump info about
    will no longer vanish while we are trying to output its info to the
    console.
  Submitted by: bde (mostly)
  MFC after: 1 month
  jhb | 2004-10-05 | 1 file | -16/+6
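The shape of the new structure can be inferred from the description above. The layout below is illustrative, with field names taken from the commit text rather than from sys/resource.h:

    struct rusage_ext {
            struct bintime  rux_runtime;   /* total real time the process ran */
            u_int64_t       rux_uticks;    /* statclock ticks in user mode */
            u_int64_t       rux_sticks;    /* statclock ticks in system mode */
            u_int64_t       rux_iticks;    /* statclock ticks in interrupts */
            u_int64_t       rux_uu;        /* computed user time, microseconds */
            u_int64_t       rux_su;        /* computed system time, microseconds */
            u_int64_t       rux_iu;        /* computed interrupt time, microseconds */
    };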
* Undo revision 1.251. This change was a performance-pessimizing work-around
  that is no longer required. (In fact, it is not clear that it was ever
  required in HEAD or RELENG_4; only RELENG_3 required a work-around.) Now,
  as before revision 1.251, if the preexisting PTE is invalid, pmap_enter()
  does not call pmap_invalidate_page() to update the TLB(s).
  Note: Even with this change, the handling of a copy-on-write fault is
  inefficient; in such cases pmap_enter() calls pmap_invalidate_page()
  twice.
  Discussed with: bde@
  PR: kern/16568
  alc | 2004-10-03 | 1 file | -1/+1