path: root/sys/amd64
Commit log (each entry notes author, date, files changed, and -/+ lines)
...
* Severely strip down the repocopied i386/bios.c and bios.h files. It turns
  out that bios_sigsearch() etc. is useful for finding tables in ROMs.
  (peter, 2004-09-24, 2 files, -823/+3)
* Correct a long-standing error in _pmap_unwire_pte_hold() affecting
  multiprocessors. Specifically, the error is conditioning the call to
  pmap_invalidate_page() on whether the pmap is active on the current CPU.
  This call must be unconditional. Regardless of whether the pmap is active
  on the CPU performing _pmap_unwire_pte_hold(), it could be active on
  another CPU. For example, a call to pmap_remove_all() by the page daemon
  could result in a call to _pmap_unwire_pte_hold() with the pmap inactive
  on the current CPU and active on another CPU. In such circumstances,
  failing to call pmap_invalidate_page() results in a stale TLB entry on
  the other CPU that still maps the now-deallocated page table page. What
  happens next is typically a mysterious panic in pmap_enter() by the
  other CPU, either "pmap_enter: attempted pmap_enter on 4MB page" or
  "pmap_enter: pte vanished, va: 0x%lx". Both occur because the former
  page table page has been recycled and allocated to a new purpose;
  consequently, it no longer contains zeroes. See also Peter's
  i386/i386/pmap.c revision 1.448 and the related e-mail thread last year.

  Many thanks to the engineers at Sandvine for providing clear and concise
  information until all of the pieces of the puzzle fell into place and
  for testing an earlier patch.

  MT5 Candidate
  (alc, 2004-09-22, 1 file, -7/+6)
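The shape of this fix, as a minimal sketch (the function body below is
hypothetical and heavily elided, not the committed diff):

    /*
     * Release a page table page whose last valid PTE has gone away.
     * The TLB shootdown must NOT be conditioned on whether the pmap is
     * active on the current CPU; another CPU may still hold a TLB entry
     * mapping the page table page being freed.
     */
    static int
    _pmap_unwire_pte_hold(pmap_t pmap, vm_offset_t va, vm_page_t m)
    {
            /* ... unhook the page table page from the page directory ... */

            /*
             * Old, broken form:
             *   if (pmap->pm_active & PCPU_GET(cpumask))
             *           pmap_invalidate_page(pmap, va);
             * Corrected form: always invalidate; pmap_invalidate_page()
             * handles shooting down any remote CPUs on which the pmap
             * is active.
             */
            pmap_invalidate_page(pmap, va);

            /* ... free the page table page back to the VM system ... */
            return (1);
    }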
* MFi386: adapt rev 1.19 (debugger fixes).
  (peter, 2004-09-22, 1 file, -2/+10)
* Minor sync-up with i386. Catch up on de-quoting and de-counting after
  config changes.
  (peter, 2004-09-22, 1 file, -14/+23)
* MFi386: add ispfw (but using the correct "device<tab><tab>ispfw"
  format; "<space><tab>" is for the options lines).
  (peter, 2004-09-22, 1 file, -0/+1)
* - Add support for "paging" in stack trace output. That is, when you do
    a stack trace from ddb, the output will pause with a '--More--'
    prompt every 18 lines. If you hit Enter, it will print another line
    and prompt again. If you hit space it will output another page and
    then prompt. If you hit 'q' or 'x' it will abort the rest of the
    stack trace.
  - Fix the sparc64 userland stack trace to honor the total count of
    lines to print. This is useful if your trace happens to walk back
    onto 0xdeadc0de and gets stuck in an endless loop.

  MFC after: 1 month
  Tested on: i386, alpha, sparc64
  (jhb, 2004-09-20, 1 file, -2/+4)
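The paging logic is simple enough to sketch as a stand-alone analog
(illustrative only; db_pager_check() and its exact behavior are
hypothetical names here, not the actual ddb interfaces):

    #include <stdio.h>

    #define PAGE_LINES 18

    static int lines_printed;

    /* Call after each output line; returns nonzero to abort the trace. */
    static int
    db_pager_check(void)
    {
            int c;

            if (++lines_printed < PAGE_LINES)
                    return (0);
            printf("--More--");
            fflush(stdout);
            c = getchar();
            switch (c) {
            case '\n':                      /* Enter: one more line. */
                    lines_printed = PAGE_LINES - 1;
                    return (0);
            case 'q':
            case 'x':                       /* Abort remaining output. */
                    return (1);
            default:                        /* Space, etc.: next page. */
                    lines_printed = 0;
                    return (0);
            }
    }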
* Simplify the reference counting of page table pages. Specifically, use
  the page table page's wired count rather than its hold count to contain
  the reference count. My rationale for this change is based on several
  factors:

  1. The machine-independent and pmap layers used the same hold count
     field in subtly different ways. The machine-independent layer uses
     the hold count to implement a form of ephemeral wiring that is used
     by pipes, physio, etc. In other words, subsystems where we wish to
     temporarily block a page from being swapped out while it is mapped
     into the kernel's address space. Such pages are never removed from
     the page queues. Instead, the page daemon recognizes a non-zero hold
     count to mean "hands off this page." In contrast, page table pages
     are never in the page queues; they are wired from birth to death.
     The hold count was being used as a kind of reference count,
     specifically, the number of valid page table entries within the
     page. Not surprisingly, these two different uses imply different
     synchronization rules: in the machine-independent layer access to
     the hold count requires the page queues lock, whereas in the pmap
     layer the pmap lock is required. Thus, continued use by the pmap
     layer of vm_page_unhold(), which asserts that the page queues lock
     is held, made no sense.

  2. _pmap_unwire_pte_hold() was too forgiving in its handling of the
     wired count. An unexpected wired count on a page table page was
     ignored and the underlying page leaked.

  3. In a word, microoptimization. Using the wired count exclusively,
     rather than a combination of the wired and hold counts, makes the
     code slightly smaller and faster.

  Reviewed by: tegge@
  (alc, 2004-09-19, 1 file, -27/+15)
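In outline, the new scheme counts valid PTEs in the page's wired count
under the pmap lock (a sketch with simplified names, not the committed
code):

    /* One reference per valid PTE; drop one when a PTE is destroyed. */
    static int
    pmap_unwire_pte_hold(pmap_t pmap, vm_offset_t va, vm_page_t mpte)
    {
            if (--mpte->wire_count == 0) {
                    /* Last valid PTE gone: reclaim the page table page. */
                    return (_pmap_unwire_pte_hold(pmap, va, mpte));
            }
            return (0);
    }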
* Remove an outdated assertion from _pmap_allocpte(). (When
  vm_page_alloc() succeeds, the page's queue field is unconditionally
  set to PQ_NONE by vm_pageq_remove_nowakeup().)
  (alc, 2004-09-19, 1 file, -3/+0)
* Release the page queues lock earlier in pmap_protect() and
  pmap_remove() in order to reduce contention.
  (alc, 2004-09-18, 1 file, -6/+5)
* Add a new function isa_dma_init(), which returns an errno when it
  fails and which takes an M_WAITOK/M_NOWAIT flag argument. Add a
  compatibility isa_dmainit() macro which whines loudly if
  isa_dma_init() fails.

  Problem uncovered by: tegge
  (phk, 2004-09-15, 1 file, -13/+11)
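A plausible shape for the compatibility shim (a sketch; the actual macro
in the ISA headers may differ in detail):

    /* Legacy interface: no way to return failure, so just complain. */
    #define isa_dmainit(chan, bufsize) do {                             \
            if (isa_dma_init((chan), (bufsize), M_NOWAIT) != 0)         \
                    printf("WARNING: isa_dma_init(%d, %ju) failed\n",   \
                        (int)(chan), (uintmax_t)(bufsize));             \
    } while (0)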
* Remove now-unused #include files.
  (phk, 2004-09-15, 1 file, -52/+0)
* Use an atomic op to update the pte in pmap_protect(). This is to
  prevent the loss of a page modified (PG_M) bit in a race between
  processors.

  Quoting Tor: One scenario where the old code could cause a lost PG_M
  bit is a multithreaded linux program (or FreeBSD program using the
  linuxthreads port) where one thread was starting a subprocess. The
  thread doing fork() would call vmspace_fork(), which would then call
  vm_map_copy_entry(), which would call pmap_protect() on an area
  possibly accessed by other threads.

  Additionally, make the clearing of PG_M by pmap_protect()
  unconditional if write permission is removed. Previously, PG_M could
  persist on a read-only unmanaged page. That seems inconsistent and
  confusing.

  In collaboration with: tegge@
  MT5 candidate
  PR: 61852
  (alc, 2004-09-12, 1 file, -6/+7)
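The race and its fix follow the classic compare-and-set pattern (a
sketch of the idea; variable names are illustrative, not the committed
code):

    pt_entry_t obits, pbits;

    /*
     * A plain "*pte = newbits" can lose a PG_M bit that another CPU
     * sets between our read and our write.  Retry until the stored
     * value still matches what we read.
     */
    do {
            obits = pbits = *pte;
            pbits &= ~(PG_RW | PG_M);       /* remove write permission */
    } while (!atomic_cmpset_long((u_long *)pte, obits, pbits));
    if (obits & PG_M)
            vm_page_dirty(m);               /* don't lose the modified bit */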
* Double the number of kernel page tables for amd64 and for i386/PAE.
  The old value was only enough for 8GB of RAM; the new value can do
  16GB. This still isn't optimal since it doesn't scale. Fixing this for
  amd64 looks to be fairly easy, but for i386 it will be quite
  difficult.

  Reviewed by: peter
  (scottl, 2004-09-11, 1 file, -1/+2)
* Add device driver support for the VIA Networking Technologies VT6122
  gigabit ethernet chip and integrated 10/100/1000 copper PHY. The vge
  driver has been added to GENERIC for i386, pc98 and amd64, but not to
  sparc or ia64 since I don't have the ability to test it there. The
  vge(4) driver supports VLANs, checksum offload and jumbo frames.

  Also added the lge(4) and nge(4) drivers to GENERIC for i386 and pc98
  since I was in the neighborhood. There's no reason to leave them out
  anymore.
  (wpaul, 2004-09-10, 1 file, -0/+3)
* Use atomic ops in pmap_clear_ptes() to prevent SMP races that could
  result in the loss of an accessed or modified bit from the pte.

  In collaboration with: tegge@
  MT5 candidate
  (alc, 2004-09-08, 1 file, -4/+7)
* Fix a problem with tag->boundary inheritance that has existed since
  day one and was propagated to nearly every platform. The boundary of
  the child needs to consider the boundary of the parent and pick the
  minimum of the two, not the maximum. However, if either is 0 (no
  restriction) then pick the other one. This bug was exposed by a recent
  change to ATA, which should now be fixed by this change. The alignment
  and maxsegsz tag attributes likely also need a similar review in the
  near future.

  This is an MT5 candidate.

  Reviewed by: marcel
  Submitted by: sos (in part)
  (scottl, 2004-09-08, 1 file, -5/+5)
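The corrected rule reduces to a few lines (a sketch; the helper name is
hypothetical, and 0 is taken to mean "no boundary restriction"):

    static bus_size_t
    inherit_boundary(bus_size_t parent, bus_size_t child)
    {
            if (parent == 0)
                    return (child);
            if (child == 0)
                    return (parent);
            return (parent < child ? parent : child);   /* MIN, not MAX */
    }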
* Switch the default scheduler to 4BSD to match what will go into
  RELENG_5 soon. It can be switched back once 5.3 is tested and
  released. Also turn on PREEMPTION, as many of the stability problems
  with it have been fixed.

  MT5: 3 days.
  (scottl, 2004-09-07, 1 file, -1/+3)
* Refactor a bunch of scheduler code to give basically the same
  behaviour but with slightly cleaned-up interfaces.

  The KSE structure has become the same as the "per thread scheduler
  private data" structure. In order to not make the diffs too great, one
  is #defined as the other at this time.

  The KSE (or td_sched) structure is now allocated per thread and has no
  allocation code of its own.

  Concurrency for a KSEGRP is now kept track of via a simple pair of
  counters rather than using KSE structures as tokens.

  Since the KSE structure is different in each scheduler, kern_switch.c
  is now included at the end of each scheduler. Nothing outside the
  scheduler knows the contents of the KSE (aka td_sched) structure.

  The fields in the ksegrp structure that have to do with the
  scheduler's queueing mechanisms are now moved to the kg_sched
  structure (the per-ksegrp scheduler private data structure). In other
  words, how the scheduler queues and keeps track of threads is no-one's
  business except the scheduler's. This should allow people to write
  experimental schedulers with completely different internal
  structuring.

  A scheduler call sched_set_concurrency(kg, N) has been added that
  notifies the scheduler that no more than N threads from that ksegrp
  should be allowed to be concurrently scheduled. This is also used to
  enforce 'fairness' at this time, so that a ksegrp with 10000 threads
  cannot swamp the run queue and force out a process with 1 thread: the
  current code will not set the concurrency above NCPU, and both
  schedulers will not allow more than that many onto the system run
  queue at a time. Each scheduler should eventually develop its own
  methods to do this now that they are effectively separated. (See the
  sketch after this entry.)

  Rejig libthr's kernel interface to follow the same code paths as
  libkse for scope-system threads. This has slightly hurt libthr's
  performance, but I will work to recover as much of it as I can.

  Thread exit code has been cleaned up greatly. exit and exec code now
  transitions a process back to 'standard non-threaded mode' before
  taking the next step.

  Reviewed by: scottl, peter
  MFC after: 1 week
  (julian, 2004-09-05, 1 file, -1/+1)
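A sketch of the "pair of counters" idea for per-ksegrp concurrency
(field and function bodies here are hypothetical; the real layout lives
in each scheduler's private kg_sched structure and differs between
SCHED_4BSD and SCHED_ULE):

    struct kg_sched {
            int     skg_concurrency;    /* cap on concurrently scheduled threads */
            int     skg_avail_openings; /* remaining slots on the run queue */
    };

    static void
    sched_set_concurrency(struct ksegrp *kg, int concurrency)
    {
            struct kg_sched *skg = kg->kg_sched;

            /* Never let one ksegrp swamp the system run queue. */
            if (concurrency > mp_ncpus)
                    concurrency = mp_ncpus;
            skg->skg_avail_openings += concurrency - skg->skg_concurrency;
            skg->skg_concurrency = concurrency;
    }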
* Turn PREEMPTION into a kernel option. Make sure that it's defined if
  FULL_PREEMPTION is defined. Add a runtime warning to ULE if PREEMPTION
  is enabled (code inspired by the PREEMPTION warning in kern_switch.c).
  This is a possible MT5 candidate.
  (scottl, 2004-09-02, 1 file, -5/+0)
* Give the 4bsd scheduler the ability to wake up idle processors when
  there is new work to be done.

  MFC after: 5 days
  (julian, 2004-09-01, 1 file, -2/+0)
* Give setrunqueue() and sched_add() more of a clue as to where they are
  coming from and what is expected from them.

  MFC after: 2 days
  (julian, 2004-09-01, 1 file, -1/+1)
* Remove an unneeded argument. The removed argument could trivially be
  derived from the remaining one. That in turn should be the same as
  curthread, but it is possible that curthread could be expensive to
  derive on some systems, so leave it as an argument. Having both proc
  and thread as arguments just gives an opportunity for them to get out
  of sync.

  MFC after: 3 days
  (julian, 2004-08-31, 1 file, -2/+2)
* Remove sched_free_thread(), which was only used in diagnostics. It has
  outlived its usefulness and has started causing panics for people who
  turn on DIAGNOSTIC, in what is otherwise good code.

  MFC after: 2 days
  (julian, 2004-08-31, 2 files, -9/+0)
* Add the mp_watchdog hooks, although it locks up my SMP test box. It
  might be usable to somebody.
  (peter, 2004-08-30, 1 file, -0/+9)
* Remove an unnecessary check for curthread == NULL.
  (alc, 2004-08-30, 1 file, -1/+1)
* s/smp_rv_mtx/smp_ipi_mtx/g

  Requested by: jhb
  (obrien, 2004-08-28, 2 files, -8/+8)
* Fix a comment: IA32 was renamed to COMPAT_IA32.

  Approved by: marcel
  (arved, 2004-08-27, 1 file, -1/+1)
* Move the kernel-specific logic to adjust frompc from MI to MD, for
  these two reasons:

  1. On ia64 a function pointer does not hold the address of the first
     instruction of a function's implementation. It holds the address of
     a function descriptor. Hence the user(), btrap(), eintr() and
     bintr() prototypes are wrong for getting the actual code address.

  2. The logic forces interrupt, trap and exception entry points to be
     laid out contiguously. This cannot be achieved on ia64 and is
     generally just bad programming.

  The MCOUNT_FROMPC_USER macro is used to set the frompc argument to
  some kernel address which represents any frompc that falls outside the
  kernel text range. The macro can expand to ~0U to bail out in that
  case.

  The MCOUNT_FROMPC_INTR macro is used to set the frompc argument to
  some kernel address to represent a call to a trap or interrupt
  handler. This avoids having the trap or interrupt handler appear to be
  called from everywhere in the call graph. The macro can expand to ~0U
  to prevent adjusting frompc. Note that the argument is selfpc, not
  frompc.

  This commit defines the macros on all architectures equivalently to
  the original code in sys/libkern/mcount.c. People can take it from
  here...

  Compile-tested on: alpha, amd64, i386, ia64 and sparc64
  Boot-tested on: i386
  (marcel, 2004-08-27, 1 file, -0/+13)
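In their degenerate "bail out" form, the two hooks can be as simple as
the following (a sketch of how a port might define them, not the actual
per-architecture definitions):

    /*
     * frompc falls outside the kernel text range: expand to ~0U so
     * mcount() skips the adjustment entirely.
     */
    #define MCOUNT_FROMPC_USER(pc)  (~0U)

    /*
     * selfpc identifies a trap/interrupt handler: again expand to ~0U
     * rather than crediting the call to an arbitrary caller.
     */
    #define MCOUNT_FROMPC_INTR(pc)  (~0U)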
* The machine-independent parts of the virtual memory system always pass
  a valid pmap to the pmap functions that require one. Remove the checks
  for NULL. (These checks have their origins in the Mach pmap.c that was
  integrated into BSD. None of the new code written specifically for
  FreeBSD included them.)
  (alc, 2004-08-27, 1 file, -16/+0)
* Always compile PFIL_HOOKS into the kernel and remove the associated
  kernel compile option. All FreeBSD packet filters now use the
  PFIL_HOOKS API and thus it becomes a standard part of the network
  stack. If no hooks are connected, the entire packet filter hooks
  section and related activities are jumped over. This removes any
  performance impact if no hooks are active. Both OpenBSD and
  DragonFlyBSD have integrated PFIL_HOOKS permanently as well.
  (andre, 2004-08-27, 1 file, -1/+0)
* Correct the arguments to kern_sigaltstack(), as they were reversed.

  PR: kern/68079
  Submitted by: Georg-W. Koltermann gwk at rahn-koltermann dot de
  (jhb, 2004-08-24, 1 file, -2/+2)
* Catch up with i386 nexus.c rev 1.59: add bus_get_resource_list().
  (njl, 2004-08-24, 1 file, -0/+10)
* It is now an error to call pmap_unuse_pt() without the paddr of the
  pde that contained the pte.
  (peter, 2004-08-24, 1 file, -3/+1)
* Oops, I forgot to have the idle loop call mp_grab_cpu_hlt() in the
  amd64 SMP case.
  (peter, 2004-08-24, 1 file, -0/+4)
* Commit Doug White and Alan Cox's fix for the cross-IPI SMP deadlock.

  We were obtaining different spin mutexes (which disable interrupts
  after acquisition) and spin-waiting for delivery. For example, KSE
  processes do LDT operations which use smp_rendezvous, while other
  parts of the system are doing things like TLB shootdowns with a
  different mutex.

  This patch uses the common smp_rendezvous mutex for all MD home-grown
  IPIs that spin-wait for delivery. Having the single mutex means that
  the spin loop to acquire it will enable interrupts periodically, thus
  avoiding the cross-IPI deadlock.

  Obtained from: dwhite, alc
  Reviewed by: jhb
  (peter, 2004-08-23, 2 files, -11/+8)
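The pattern, in outline (illustrative pseudocode; ipi_wait_done() is a
hypothetical stand-in for whatever completion check a given IPI uses):

    /*
     * All spin-waiting MD IPIs now serialize on the one smp_ipi_mtx.
     * While a CPU spins trying to acquire it, interrupts are re-enabled
     * periodically, so that CPU can still service an IPI sent by the
     * current holder -- which is what breaks the deadlock cycle.
     */
    mtx_lock_spin(&smp_ipi_mtx);
    ipi_selected(other_cpus, IPI_INVLPG);   /* e.g. a TLB shootdown */
    while (!ipi_wait_done())                /* hypothetical check */
            ia32_pause();
    mtx_unlock_spin(&smp_ipi_mtx);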
* Sync with i386 - optimize intr_execute_handlers() a bit, etc.
  (peter, 2004-08-16, 4 files, -27/+71)
* Sync with i386 - remove unused includes.
  (peter, 2004-08-16, 1 file, -2/+0)
* Sync with i386 - get the softc via the devclass rather than caching
  the dev.
  (peter, 2004-08-16, 1 file, -2/+1)
* Sync with i386 - add ADAPTIVE_GIANT, remove pcic.
  (peter, 2004-08-16, 1 file, -1/+1)
* Sync with i386 - add foot-shooting protection for the DDB/KDB thing.
  (peter, 2004-08-16, 1 file, -0/+5)
* Sync with i386 - set the rbp register to 0 for upcalls as a frame
  marker, not that it is guaranteed to be used in userland though.
  (peter, 2004-08-16, 1 file, -0/+1)
* Sync with i386 - trace syscall entry/exit times, and a cosmetic fix.
  (peter, 2004-08-16, 1 file, -1/+7)
* Sync with i386 - fix bounds check in lapic_create().
  (peter, 2004-08-16, 1 file, -1/+1)
* Sync with i386 - pass resource requests up to the parent.
  (peter, 2004-08-16, 1 file, -92/+2)
* Sync with i386 - s/cpu_swtch/cpu_switch/
  (peter, 2004-08-16, 1 file, -1/+1)
* Sync with i386 - don't count needed bounce pages if loading a buffer
  that was created with bus_dmamem_alloc().
  (peter, 2004-08-16, 1 file, -1/+1)
* Sync with i386 - cosmetic fixes.
  (peter, 2004-08-16, 1 file, -1/+2)
* Catch up with i386 - remove lots of no-longer-used symbolic constants.
  (peter, 2004-08-16, 1 file, -76/+1)
* Sync with i386.
  (peter, 2004-08-16, 1 file, -3/+11)
* Complete the 'IA32' -> 'COMPAT_IA32' change for the Linuxulator32.
  (obrien, 2004-08-16, 2 files, -3/+3)