path: root/sys/amd64
Commit log (most recent first); each entry lists author, date, files changed and lines removed/added.
* Since DELAY() was moved, most <machine/clock.h> #includes have been unnecessary.  (phk, 2006-05-16, 1 file, -1/+0)
* Kill more references to lnc(4).  (ru, 2006-05-16, 1 file, -3/+1)
  Submitted by: grep(1)
* Remove some remnants of lnc(4).  (marius, 2006-05-14, 1 file, -4/+0)
* Clean out sysctl machdep.* related defines.  (phk, 2006-05-11, 2 files, -12/+3)
  The CMOS clock related stuff should really be in MI code.
* regen (linux rt_sigpending)  (netchild, 2006-05-10, 3 files, -5/+6)
* Implement rt_sigpending in the linuxolator.  (netchild, 2006-05-10, 2 files, -2/+2)
  PR: 92671
  Submitted by: Markus Niemistö <markus.niemisto@gmx.net>
* Add in linsysfs, a Linux 2.6-like sys filesystem to pacify the Linux LSI MegaRAID SAS utility.  (ambrisko, 2006-05-09, 2 files, -0/+5)
  Sponsored by: IronPort Systems
  Man page help from: brueffer
* Forgot the amd64/linux32 part since sys/*/linux didn't match :-(  (ambrisko, 2006-05-06, 1 file, -0/+6)
  Pointed out by: Alexander (thanks)
* Add ath and wlan crypto support.  (sam, 2006-05-03, 1 file, -0/+6)
  MFC after: 1 month
* Allow bus_dmamap_load() to pass ENOMEM back to the caller. This puts it into conformance with the mbuf and uio load routines.  (scottl, 2006-05-03, 1 file, -4/+10)
  ENOMEM can only happen when BUS_DMA_NOWAIT is passed in, since that disables deferrals. I don't like doing this, but fixing this fixes assumptions in other important drivers, which is a net benefit for now.
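A minimal sketch of the calling pattern the change above enables. Only bus_dmamap_load() and BUS_DMA_NOWAIT come from the commit; the wrapper and its names are hypothetical:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    /*
     * With BUS_DMA_NOWAIT the load is never deferred, so ENOMEM now comes
     * straight back to the caller, just as with the mbuf and uio loaders.
     */
    static int
    example_load_buffer(bus_dma_tag_t tag, bus_dmamap_t map, void *buf,
        bus_size_t len, bus_dmamap_callback_t *cb, void *cbarg)
    {
        int error;

        error = bus_dmamap_load(tag, map, buf, len, cb, cbarg, BUS_DMA_NOWAIT);
        if (error == ENOMEM) {
            /* No DMA resources right now: retry later or fail the I/O. */
            return (ENOMEM);
        }
        return (error);
    }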
* Add various constants for the PAT MSR and the PAT PTE and PDE flags.  (jhb, 2006-05-01, 4 files, -0/+63)
  Initialize the PAT MSR during boot to map PAT type 2 to Write-Combining (WC) instead of Uncached (UC-).
  MFC after: 1 month
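A sketch of the kind of boot-time setup described above, assuming the usual MSR_PAT and PAT_* constant names from <machine/specialreg.h>; the exact constants and code added by the commit may differ:

    #include <sys/param.h>
    #include <machine/cpufunc.h>
    #include <machine/specialreg.h>

    static void
    example_setup_pat(void)
    {
        uint64_t pat;

        /* Power-on layout is WB, WT, UC-, UC (twice); remap index 2 to WC. */
        pat = PAT_VALUE(0, PAT_WRITE_BACK) |
            PAT_VALUE(1, PAT_WRITE_THROUGH) |
            PAT_VALUE(2, PAT_WRITE_COMBINING) |   /* was UC- by default */
            PAT_VALUE(3, PAT_UNCACHEABLE) |
            PAT_VALUE(4, PAT_WRITE_BACK) |
            PAT_VALUE(5, PAT_WRITE_THROUGH) |
            PAT_VALUE(6, PAT_UNCACHED) |
            PAT_VALUE(7, PAT_UNCACHEABLE);
        wrmsr(MSR_PAT, pat);
    }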
* Add a new 'pmap_invalidate_cache()' to flush the CPU caches via the wbinvd() instruction. This includes a new IPI so that all CPU caches on all CPUs are flushed for the SMP case.  (jhb, 2006-05-01, 6 files, -2/+64)
  MFC after: 1 month
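The idea in miniature. The wbinvd() intrinsic is standard; the smp_cache_flush() broadcast name is an assumption, not a detail taken from the commit:

    #include <sys/param.h>
    #include <machine/cpufunc.h>
    #ifdef SMP
    #include <machine/smp.h>
    #endif

    void
    example_invalidate_cache(void)
    {
    #ifdef SMP
        /* Have every CPU, including this one, execute wbinvd() via IPI. */
        smp_cache_flush();    /* assumed name for the broadcast helper */
    #else
        wbinvd();
    #endif
    }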
* Eliminate unnecessary, recursive acquisitions and releases of the page queues lock by free_pv_entry() and pmap_remove_pages(). Reduce the scope of the page queues lock in pmap_remove_pages().  (alc, 2006-04-29, 1 file, -5/+2)
* Rewrite of puc(4). Significant changes are:  (marcel, 2006-04-28, 1 file, -2/+0)
  o Properly use rman(9) to manage resources. This eliminates the need for puc-specific hacks to rman. It also allows devinfo(8) to be used to find out the specific assignment of resources to serial/parallel ports.
  o Compress the PCI device "database" by optimizing for the common case and by using a procedural interface to handle the exceptions. The procedural interface also generalizes the need to set up the hardware (program chipsets, program clock frequencies).
  o Eliminate the need for PUC_FASTINTR. Serdev devices are fast by default and non-serdev devices are handled by the bus.
  o Use the serdev I/F to collect interrupt status and to handle interrupts across ports in priority order.
  o Sync the PCI device configuration to include devices found in NetBSD and not yet merged to FreeBSD.
  o Add support for Quatech 2, 4 and 8 port UARTs.
  o Add support for a couple dozen Timedia serial cards as found in Linux.
* Enable the rr232x driver for amd64.  (scottl, 2006-04-28, 2 files, -0/+6)
* In general, bits in the page directory entry (PDE) and the page table entry (PTE) have the same meaning. The exception to this rule is the eighth bit (0x080). It is the PS bit in a PDE and the PAT bit in a PTE. This change avoids the possibility that pmap_enter() confuses a PAT bit with a PS bit, avoiding a panic().  (alc, 2006-04-27, 1 file, -4/+8)
  Eliminate a diagnostic printf() from the i386 pmap_enter() that serves no current purpose, i.e., I've seen no bug reports in the last two years that are helped by this printf().
  Reviewed by: jhb
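To make the bit overlap concrete, here are illustrative definitions with hypothetical EX_ names; only the 0x080 value and its two meanings come from the commit message:

    #include <sys/types.h>

    /* The same bit position, two different meanings depending on level. */
    #define EX_PG_PS        0x080   /* in a PDE: 2MB superpage (PS) */
    #define EX_PG_PTE_PAT   0x080   /* in a PTE: PAT index bit */

    static int
    example_pde_is_superpage(uint64_t pde)
    {
        return ((pde & EX_PG_PS) != 0);
    }

    static int
    example_pte_selects_pat(uint64_t pte)
    {
        /* Must not be misread as PS when examining a 4K page mapping. */
        return ((pte & EX_PG_PTE_PAT) != 0);
    }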
* Move vm.pmap.pv_entry_count out from the PV_STATS ifdefs. It is always available and is a real counter, not a statistic.  (peter, 2006-04-26, 1 file, -2/+3)
* Check if reported HTT cores are physical cores. This commit does not affect AMD CPUs at all because the HTT bit is disabled earlier. Intel multicore CPUs and the ULE scheduler may be affected.  (jkim, 2006-04-25, 1 file, -0/+8)
* Add another Intel CPU feature flag, xTPR (Send Task Priority Messages).  (jkim, 2006-04-24, 1 file, -1/+1)
* Check if the deterministic cache parameters leaf is valid before use.  (jkim, 2006-04-24, 1 file, -1/+2)
* Adjust dangerous-shared-cache-detection logic from "all shared data caches are dangerous" to "a shared L1 data cache is dangerous".  (cperciva, 2006-04-24, 1 file, -2/+2)
  This is a compromise between paranoia and performance: unlike the L1 cache, nobody has publicly demonstrated a cryptographic side channel which exploits the L2 cache -- this is harder due to the larger size, lower bandwidth, and greater associativity -- and prohibiting shared L2 caches turns Intel Core Duo processors into Intel Core Solo processors. As before, the 'machdep.hyperthreading_allowed' sysctl will allow even the L1 data cache to be shared.
  Discussed with: jhb, scottl
  Security: See FreeBSD-SA-05:09.htt for background material.
* Move AHC_REG_PRETTY_PRINT and AHD_REG_PRETTY_PRINT below their corresponding devices.  (delphij, 2006-04-24, 1 file, -4/+4)
* Oops. Minidumps were developed on 6.x, without the small pv entry code.  (peter, 2006-04-21, 1 file, -0/+3)
  Add some strategic dump_add_page()/dump_drop_page() lines to include pv chunks in the minidumps - these operate in the direct map region like UMA.
* Introduce minidumps. Full physical memory crash dumps are still available via the debug.minidump sysctl and tunable.  (peter, 2006-04-21, 6 files, -3/+491)
  Traditional dumps store all physical memory. This was once a good thing when machines had a maximum of 64M of ram and 1GB of kvm. These days, machines often have many gigabytes of ram and a smaller amount of kvm. libkvm+kgdb don't have a way to access physical ram that is not mapped into kvm at the time of the crash dump, so the extra ram being dumped is mostly wasted.
  Minidumps invert the process. Instead of dumping physical memory in order to guarantee that all of kvm's backing is dumped, minidumps instead dump only memory that is actively mapped into kvm.
  amd64 has a direct map region that things like UMA use. Obviously we cannot dump all of the direct map region because that is effectively an old style all-physical-memory dump. Instead, introduce a bitmap and two helper routines (dump_add_page(pa) and dump_drop_page(pa)) that allow certain critical direct map pages to be included in the dump. uma_machdep.c's allocator is the intended consumer.
  Dumps are a custom format. At the very beginning of the file is a header, then a copy of the message buffer, then the bitmap of pages present in the dump, then the final level of the kvm page table trees (2MB mappings are expanded into 4K page mappings), then the sparse physical pages according to the bitmap. libkvm can now conveniently access the kvm page table entries.
  Booting my test 8GB machine, forcing it into ddb and forcing a dump leads to a 48MB minidump. While this is a best case, I expect minidumps to be in the 100MB-500MB range, and obviously never larger than physical memory.
  Minidumps are on by default. They would only need to be turned off to debug corrupt kernel page table management, as that would mess up minidumps as well. Both minidumps and regular dumps are supported on the same machine.
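A sketch of the two helpers named above, assuming a simple one-bit-per-4K-page bitmap; the real bitmap layout, sizing and allocation live in the amd64 minidump code and may differ in detail:

    #include <sys/param.h>
    #include <machine/atomic.h>

    extern u_long *vm_page_dump;    /* assumed: one bit per physical 4K page */

    void
    dump_add_page(vm_paddr_t pa)
    {
        u_long idx;
        int bit;

        idx = pa >> (PAGE_SHIFT + 6);      /* 64 page bits per u_long word */
        bit = (pa >> PAGE_SHIFT) & 63;
        atomic_set_long(&vm_page_dump[idx], 1ul << bit);
    }

    void
    dump_drop_page(vm_paddr_t pa)
    {
        u_long idx;
        int bit;

        idx = pa >> (PAGE_SHIFT + 6);
        bit = (pa >> PAGE_SHIFT) & 63;
        atomic_clear_long(&vm_page_dump[idx], 1ul << bit);
    }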
* Set the rid for a resource allocated with rman_reserve_resource.  (imp, 2006-04-20, 1 file, -1/+1)
* Correct a local information leakage bug affecting AMD FPUs.  (cperciva, 2006-04-19, 1 file, -0/+36)
  Security: FreeBSD-SA-06:14.fpu
* If we're doing a try-alloc of a pv entry and give up early, do not forget to reduce the pv_entry_count counter. This was found by Tor Egge.  (peter, 2006-04-18, 1 file, -0/+1)
  In the same email, Tor also pointed out the pv_stats problem in the previous commit, but I'd forgotten about it until I went looking for this email about this allocation problem.
* pv_entry_count is more than a statistic. It is used for resource limiting. Do not compile out its counter updates if pv entry stats are turned off.  (peter, 2006-04-18, 1 file, -3/+3)
* Include opt_pmap.h for PMAP_SHPGPERPROC.  (alc, 2006-04-13, 1 file, -0/+1)
  PR: 94509
* Retire pmap_track_modified(). We no longer need it because we do not create managed mappings within the clean submap. To prevent regressions, add assertions blocking the creation of managed mappings within the clean submap.  (alc, 2006-04-12, 1 file, -42/+10)
  Reviewed by: tegge
* Hook bce up to the build.  (ps, 2006-04-10, 1 file, -0/+1)
* Cache the value of the lower half of each I/O APIC redirection table entry so that we only have to do an ioapic_write() instead of an ioapic_read() followed by an ioapic_write() every time we mask and unmask level-triggered interrupts. This cuts the execution time for these operations roughly in half.  (jhb, 2006-04-05, 1 file, -6/+4)
  Profiled by: Paolo Pisati <p.pisati@oltrelinux.com>
  MFC after: 1 week
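A sketched shape of the change. The structure fields, register offset and helper below are assumptions for illustration (standard I/O APIC layout), not lifted from the diff:

    #include <sys/types.h>

    #define EX_IOART_INTMASK          0x00010000u     /* mask bit in the low word */
    #define EX_IOAPIC_REDTBL_LO(pin)  (0x10 + (pin) * 2)

    struct example_ioapic_pin {
        uint32_t io_lowreg;   /* cached low 32 bits of the redirection entry */
        int      io_intpin;
    };

    /* Stand-in for the real register accessor. */
    static void example_ioapic_write(int reg, uint32_t val);

    static void
    example_ioapic_set_mask(struct example_ioapic_pin *pin, int masked)
    {
        if (masked)
            pin->io_lowreg |= EX_IOART_INTMASK;
        else
            pin->io_lowreg &= ~EX_IOART_INTMASK;
        /* One ioapic_write() instead of ioapic_read() + ioapic_write(). */
        example_ioapic_write(EX_IOAPIC_REDTBL_LO(pin->io_intpin),
            pin->io_lowreg);
    }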
* Convert pv_entry_frees and pv_entry_allocs stats counters from int to long; they wrap way too quickly.  (peter, 2006-04-04, 1 file, -3/+4)
* Sync with i386: map exceptions to signals in gdb_cpu_signal() so that kgdb(1) gets a SIGTRAP when it needs to.  (marcel, 2006-04-04, 2 files, -6/+25)
  Pointed out by: grehan@
* The PC is register 16, not 18.  (marcel, 2006-04-04, 1 file, -1/+1)
  Pointed out by: grehan@
* Eliminate HAVE_STOPPEDPCBS. On ia64 the PCPU holds a pointer to the PCB in which the context of stopped CPUs is stored. To access this PCB from KDB, we introduce a new define, called KDB_STOPPEDPCB. The definition, when present, lives in <machine/kdb.h> and abstracts where MD code saves the context. Define KDB_STOPPEDPCB on i386, amd64, alpha and sparc64 in accordance with previous code.  (marcel, 2006-04-03, 1 file, -0/+2)
* Shrink the amd64 pv entry from 48 bytes to about 24 bytes. On a machine with large mmap files mapped into many processes, this saves hundreds of megabytes of ram.  (peter, 2006-04-03, 2 files, -151/+309)
  pv entries were individually allocated and had two tailq entries and two pointers (or addresses). Each pv entry was linked to a vm_page_t and a process's address space (pmap). It had the virtual address and a pointer to the pmap.
  This change replaces the individual allocation with a per-process allocation system. A page ("pv chunk") is allocated and this provides 168 pv entries for that process. We can now eliminate one of the 16 byte tailq entries because we can simply iterate through the pv chunks to find all the pv entries for a process. We can eliminate one of the 8 byte pointers because the location of the pv entry implies the containing pv chunk, which has the pointer. After overheads from the pv chunk bitmap and tailq linkage, this works out so that each pv entry has an effective size of 24.38 bytes.
  Future work still required, and other problems:
  * When running low on pv entries or system ram, we may need to defrag the chunk pages and free any spares. The stats (vm.pmap.*) show that this doesn't seem to be that much of a problem, but it can be done if needed.
  * Running low on pv entries is now a much bigger problem. The old get_pv_entry() routine just needed to reclaim one other pv entry. Now, since they are per-process, we can only use pv entries that are assigned to our current process, or steal an entire page worth from another process. Under normal circumstances, the pmap_collect() code should be able to dislodge some pv entries from the current process. But if needed, it can still reclaim entire pv chunk pages from other processes.
  * This should port to i386 really easily, except there it would reduce pv entries from 24 bytes to about 12 bytes.
  (I have integrated Alan's recent changes.)
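A structural sketch of the per-process "pv chunk" described above. Field names are illustrative, but the sizes follow the commit's arithmetic: a 4K page holds a small header plus 168 entries of 24 bytes each (4096/168 gives the quoted 24.38 effective bytes per entry):

    #include <sys/param.h>
    #include <sys/queue.h>
    #include <vm/vm.h>

    struct example_pv_entry {
        vm_offset_t pv_va;                        /* 8 bytes */
        TAILQ_ENTRY(example_pv_entry) pv_list;    /* 16 bytes: per-page list */
    };

    /* One page-sized chunk per allocation, owned by a single pmap. */
    struct example_pv_chunk {
        pmap_t pc_pmap;                           /* owner, shared by all entries */
        TAILQ_ENTRY(example_pv_chunk) pc_list;    /* per-pmap chunk list */
        uint64_t pc_map[3];                       /* free bitmap, one bit per entry */
        struct example_pv_entry pc_pventry[168];  /* 48 + 168 * 24 = 4080 <= 4096 */
    };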
* Remove the unused sva and eva arguments from pmap_remove_pages().  (peter, 2006-04-03, 1 file, -8/+1)
* Introduce pmap_try_insert_pv_entry(), a function that conditionally creates a pv entry if the number of entries is below the high water mark for pv entries. Use pmap_try_insert_pv_entry() in pmap_copy() instead of pmap_insert_entry(). This avoids possible recursion on a pmap lock in get_pv_entry().  (alc, 2006-04-02, 1 file, -13/+28)
  Eliminate the explicit low-memory checks in pmap_copy(). The check that the number of pv entries was below the high water mark was largely ineffective because it was located in the outer loop rather than the inner loop where pv entries were allocated. Instead of checking, we attempt the allocation and handle the failure.
  Reviewed by: tegge
  Reported by: kris
  MFC after: 5 days
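The pattern in outline: attempt a conditional allocation and report failure to the caller instead of reclaiming. Apart from the behaviour described above, all names here are assumptions:

    #include <sys/param.h>
    #include <sys/queue.h>
    #include <vm/vm.h>

    /* Illustrative types and counters; the real ones live in the amd64 pmap. */
    struct ex_pv_entry {
        vm_offset_t pv_va;
        TAILQ_ENTRY(ex_pv_entry) pv_list;
    };
    TAILQ_HEAD(ex_pv_head, ex_pv_entry);

    static int ex_pv_entry_count, ex_pv_entry_high_water;
    static struct ex_pv_entry *ex_alloc_pv_entry(pmap_t pmap);

    /* Conditionally create a pv entry; report failure rather than reclaiming. */
    static int
    example_try_insert_pv_entry(pmap_t pmap, vm_offset_t va,
        struct ex_pv_head *pvh)
    {
        struct ex_pv_entry *pv;

        if (ex_pv_entry_count >= ex_pv_entry_high_water)
            return (0);                    /* over the high-water mark */
        pv = ex_alloc_pv_entry(pmap);      /* may fail instead of reclaiming */
        if (pv == NULL)
            return (0);
        ex_pv_entry_count++;
        pv->pv_va = va;
        TAILQ_INSERT_TAIL(pvh, pv, pv_list);
        return (1);
    }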
* Add kbdmux(4) to GENERIC on amd64.  (emax, 2006-03-31, 1 file, -0/+2)
  Requested by: scottl
  Tested by: scottl
* Hook the MFI driver up to the build.  (scottl, 2006-03-29, 1 file, -0/+1)
* If the XSDT address in the RSDP for an ACPI 2.0 machine is NULL, then fall back to using the RSDT instead. ACPI-CA already follows this same strategy as a workaround for yet another instance of brain-damaged BIOS writers.  (jhb, 2006-03-27, 1 file, -4/+5)
  PR: i386/93963
  Submitted by: Masayuki FUKUI <fukui.FreeBSD@fanet.net>
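A sketch of the fallback decision. The RSDP layout follows the ACPI 2.0 specification; the function itself is illustrative, not the committed code:

    #include <sys/types.h>

    /* Minimal RSDP view (fields per ACPI 2.0); real code uses ACPICA's type. */
    struct ex_acpi_rsdp {
        char     Signature[8];
        uint8_t  Checksum;
        char     OemId[6];
        uint8_t  Revision;                 /* >= 2 means ACPI 2.0 or later */
        uint32_t RsdtPhysicalAddress;
        uint32_t Length;
        uint64_t XsdtPhysicalAddress;
        uint8_t  ExtendedChecksum;
        uint8_t  Reserved[3];
    } __packed;

    /* Prefer the XSDT, but fall back to the RSDT if the BIOS left it NULL. */
    static uint64_t
    example_sdt_address(const struct ex_acpi_rsdp *rsdp)
    {
        if (rsdp->Revision >= 2 && rsdp->XsdtPhysicalAddress != 0)
            return (rsdp->XsdtPhysicalAddress);
        return ((uint64_t)rsdp->RsdtPhysicalAddress);
    }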
* Eliminate unnecessary invalidations of the entire TLB by pmap_remove(). Specifically, on mappings with PG_G set, pmap_remove() not only performs the necessary per-page invlpg invalidations but also performs an unnecessary invalidation of the entire set of non-PG_G entries.  (alc, 2006-03-21, 1 file, -1/+7)
  Reviewed by: tegge
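For context, a minimal illustration of the two invalidation primitives involved (standard amd64 <machine/cpufunc.h> operations, not the commit's diff): per-page invlpg works even on PG_G mappings, while reloading %cr3 flushes only the non-global entries.

    #include <sys/types.h>
    #include <machine/cpufunc.h>

    static void
    example_invalidate_one(vm_offset_t va)
    {
        invlpg(va);            /* evicts this entry even if it is PG_G */
    }

    static void
    example_invalidate_nonglobal(void)
    {
        load_cr3(rcr3());      /* flushes every TLB entry except PG_G ones */
    }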
* Remove stale KSE code.  (davidxu, 2006-03-21, 1 file, -13/+1)
  Reviewed by: alc
* Drop some unneeded casts since we program the kernel in C rather than C++.  (jhb, 2006-03-20, 2 files, -2/+2)
* regen: fix of linuxolator with testing in a cross-build  (netchild, 2006-03-20, 3 files, -9/+15)
* Fix the linuxolator on amd64 (cross-build).  (netchild, 2006-03-20, 1 file, -2/+3)
* Regen.  (ru, 2006-03-19, 3 files, -3/+4)
* Unbreak COMPAT_LINUX32 option support on amd64.  (ru, 2006-03-19, 2 files, -0/+2)
  Broken by: netchild
* regen  (netchild, 2006-03-18, 1 file, -3/+0)