path: root/sys/amd64/include/param.h
Commit history for this file, most recent first. Each entry lists the commit subject, author, date, number of files changed, and lines removed/added (-N/+N).
* MFi386: whitespace, copyright header, etc. updates (peter, 2005-01-21, 1 file, -1/+0)
* Begin all license/copyright comments with /*- (imp, 2005-01-05, 1 file, -1/+1)
* Remove UAREA_PAGES (das, 2004-11-20, 1 file, -1/+0)
  Reviewed by:  arch@
* Turn PREEMPTION into a kernel option (scottl, 2004-09-02, 1 file, -5/+0)
  Make sure that it's defined if FULL_PREEMPTION is defined. Add a runtime warning to ULE if
  PREEMPTION is enabled (code inspired by the PREEMPTION warning in kern_switch.c). This is a
  possible MT5 candidate.
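  A minimal sketch of the option coupling described above, seen through the C preprocessor.
  Everything other than the PREEMPTION and FULL_PREEMPTION names (the warning text, the demo
  program) is illustrative, not the actual FreeBSD source:

      /*
       * Sketch only: FULL_PREEMPTION is meaningless without PREEMPTION,
       * so requesting the former also turns on the latter.
       */
      #include <stdio.h>

      #define FULL_PREEMPTION                 /* pretend the kernel config set this */

      #if defined(FULL_PREEMPTION) && !defined(PREEMPTION)
      #define PREEMPTION
      #endif

      int
      main(void)
      {
      #ifdef PREEMPTION
              /* Rough analogue of the runtime warning the log says ULE gained. */
              printf("WARNING: kernel preemption is enabled while it is being debugged\n");
      #endif
              return (0);
      }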
* Turn off PREEMPTION by default while it gets debugged (scottl, 2004-08-01, 1 file, -0/+3)
  It's been causing 4 weeks of problems including deadlocks and instant panics. Note that the
  real bugs are likely in the scheduler.
* Implement preemption of kernel threads natively in the scheduler (jhb, 2004-07-02, 1 file, -0/+2)
  Implement preemption natively in the scheduler rather than as one-off hacks in various other
  parts of the kernel:
  - Add a function maybe_preempt() that is called from sched_add() to determine if a thread
    about to be added to a run queue should be preempted to directly (see the sketch after
    this entry). If it is not safe to preempt or if the new thread does not have a high enough
    priority, then the function returns false and sched_add() adds the thread to the run
    queue. If the thread should be preempted to but the current thread is in a nested critical
    section, then the flag TDF_OWEPREEMPT is set and the thread is added to the run queue.
    Otherwise, mi_switch() is called immediately and the thread is never added to the run
    queue, since it is switched to directly. When exiting an outermost critical section, if
    TDF_OWEPREEMPT is set, then clear it and call mi_switch() to perform the deferred
    preemption.
  - Remove explicit preemption from ithread_schedule(), as calling setrunqueue() now does all
    the correct work. This also removes the do_switch argument from ithread_schedule().
  - Do not use the manual preemption code in mtx_unlock if the architecture supports native
    preemption.
  - Don't call mi_switch() in a loop during shutdown to give ithreads a chance to run if the
    architecture supports native preemption, since the ithreads will just preempt DELAY().
  - Don't call mi_switch() from the page zeroing idle thread for architectures that support
    native preemption, as it is unnecessary.
  - Native preemption is enabled on the same archs that supported ithread preemption, namely
    alpha, i386, and amd64.
  This change should largely be a NOP for the default case as committed, except that we will
  do fewer context switches in a few cases and will avoid the run queues completely when
  preempting.
  Approved by:  scottl (with his re@ hat)
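  A rough, compile-only sketch of the maybe_preempt()/TDF_OWEPREEMPT decision flow described
  in the entry above. The struct fields, flag value, priority convention, and *_sketch helper
  names are simplified stand-ins, not the real FreeBSD scheduler code:

      #include <stdbool.h>

      #define TDF_OWEPREEMPT  0x0001          /* a preemption is owed at critical exit */

      struct thread {
              int     td_priority;            /* lower value means higher priority */
              int     td_critnest;            /* critical-section nesting level */
              int     td_flags;
      };

      static struct thread *curthread_sketch;         /* stand-in for curthread */

      static void mi_switch_sketch(void) { }          /* stand-in for a context switch */

      /* Called from the sched_add() path when a thread becomes runnable. */
      static bool
      maybe_preempt_sketch(struct thread *td)
      {
              struct thread *ctd = curthread_sketch;

              /* New thread is not higher priority: caller queues it normally. */
              if (td->td_priority >= ctd->td_priority)
                      return (false);
              /* Inside a critical section: record that a preemption is owed. */
              if (ctd->td_critnest > 0) {
                      ctd->td_flags |= TDF_OWEPREEMPT;
                      return (false);         /* caller still queues the thread */
              }
              /* Safe to preempt: switch now; the thread never hits the run queue. */
              mi_switch_sketch();
              return (true);
      }

      /* Called when the current thread leaves its outermost critical section. */
      static void
      critical_exit_sketch(struct thread *ctd)
      {
              if (--ctd->td_critnest == 0 && (ctd->td_flags & TDF_OWEPREEMPT) != 0) {
                      ctd->td_flags &= ~TDF_OWEPREEMPT;
                      mi_switch_sketch();     /* perform the deferred preemption */
              }
      }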
* Be a little more consistent in the naming of the PML4 defines (peter, 2004-06-07, 1 file, -3/+3)
* Since we have additional kernel virtual address space, allow the buffer cache to grow to
  400M bytes (alc, 2003-12-20, 1 file, -1/+1)
* Initial landing of SMP support for FreeBSD/amd64 (peter, 2003-11-17, 1 file, -0/+4)
  - This is heavily derived from John Baldwin's apic/pci cleanup on i386.
  - I have completely rewritten or drastically cleaned up some other parts (in particular,
    bootstrap).
  - This is still a WIP. It seems that there are some highly bogus BIOSes on nVidia
    nForce3-150 boards. I can't stress how broken these boards are. I have a workaround in
    mind, but right now the Asus SK8N is broken. The Gigabyte K8NPro (nVidia based) is also
    mind-numbingly hosed.
  - Most of my testing has been with SCHED_ULE. SCHED_4BSD works.
  - The apic and acpi components are 'standard'.
  - If you have an nVidia nForce3-150 board, you are stuck with 'device atpic' in addition,
    because they somehow managed to forget to connect the 8254 timer to the apic, even though
    it's in the same silicon! ARGH! This directly violates the ACPI spec.
* KSTACK_PAGES is a global option (peter, 2003-07-31, 1 file, -0/+2)
* Migrate the thread stack management functions from the machine-dependent to the
  machine-independent parts of the VM (alc, 2003-06-14, 1 file, -1/+1)
  At the same time, this introduces vm object locking for the non-i386 platforms.
  Two details:
  1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES. The different
     machine-dependent implementations used various combinations of KSTACK_GUARD and
     KSTACK_GUARD_PAGES. To disable guard pages, set KSTACK_GUARD_PAGES to 0 (see the sketch
     after this entry).
  2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new. In 5.x (but not 4.x),
     PG_ZERO can only be set if VM_ALLOC_ZERO is passed to vm_page_alloc() or vm_page_grab().
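  A small sketch of how a KSTACK_PAGES/KSTACK_GUARD_PAGES pair might be consumed when sizing
  a kernel stack reservation. PAGE_SIZE and the default values here are illustrative; this is
  not the actual sys/amd64/include/param.h content:

      #include <stdio.h>

      #define PAGE_SIZE               4096    /* illustrative page size */

      #ifndef KSTACK_PAGES
      #define KSTACK_PAGES            4       /* pages of usable per-thread kernel stack */
      #endif
      #ifndef KSTACK_GUARD_PAGES
      #define KSTACK_GUARD_PAGES      1       /* set to 0 to disable the guard page */
      #endif

      int
      main(void)
      {
              /* Total reservation is the usable stack plus the unmapped guard region. */
              unsigned long reserve =
                  (unsigned long)(KSTACK_PAGES + KSTACK_GUARD_PAGES) * PAGE_SIZE;

              printf("kernel stack: %d usable page(s), %d guard page(s), %lu bytes reserved\n",
                  KSTACK_PAGES, KSTACK_GUARD_PAGES, reserve);
              return (0);
      }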
* Fix ALIGNED_POINTER(); sizeof((u_int32_t)) is not legal C (peter, 2003-06-04, 1 file, -1/+1)
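  A sketch of an ALIGNED_POINTER() of the shape this fix implies: the sizeof operand must be
  written sizeof(t), since wrapping the type name in an extra pair of parentheses, as in
  sizeof((u_int32_t)), is not valid C. The uintptr_t cast and the demo in main() are
  illustrative; the historical macro text may differ:

      #include <stdint.h>
      #include <stdio.h>

      /* True if pointer p is suitably aligned to hold an object of type t. */
      #define ALIGNED_POINTER(p, t)   ((((uintptr_t)(p)) & (sizeof(t) - 1)) == 0)

      int
      main(void)
      {
              uint32_t word;
              char *cp = (char *)&word;

              /* &word is aligned for uint32_t; advancing it by one byte misaligns it. */
              printf("cp aligned:     %d\n", ALIGNED_POINTER(cp, uint32_t));
              printf("cp + 1 aligned: %d\n", ALIGNED_POINTER(cp + 1, uint32_t));
              return (0);
      }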
* Major pmap rework to take advantage of the larger address space on amd64 systems (peter, 2003-05-23, 1 file, -7/+5)
  Of note:
  - Implement a direct mapped region using 2MB pages (see the sketch after this entry). This
    eliminates the need for temporary mappings when getting ptes. This supports up to 512GB
    of physical memory for now. This should be enough for a while.
  - Implement a 4-tier page table system. Most of the infrastructure is there for 128TB of
    userland virtual address space, but only 512GB is presently enabled due to a mystery bug
    somewhere. The design of this was heavily inspired by the alpha pmap.c.
  - The kernel is moved into the negative address space(!).
  - The kernel has 2GB of KVM available.
  - Provide a uma memory allocator to use the direct map region to take advantage of the
    2MB TLBs.
  - Fixed some assumptions in the bus_space macros about the ability to fit virtual addresses
    in an 'int'.
  Notable missing things:
  - pmap_growkernel() should be able to grow to 512GB of KVM by expanding downwards below
    kernbase. The kernel must be at the top 2GB of the negative address space because of gcc
    code generation strategies.
  - Need to fix the >512GB user vm code.
  Approved by:  re (blanket)
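  A sketch of the direct-map idea from the first bullet above: once physical memory is mapped
  1:1 (with 2MB pages) at a fixed kernel virtual base, a physical address converts to a usable
  pointer with plain arithmetic instead of a temporary mapping. The base address and the
  *_SKETCH macro names are made up for illustration and are not the real amd64 values:

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Made-up kernel virtual base for the direct map, in the negative half. */
      #define DMAP_BASE_SKETCH        0xffffff0000000000ULL

      #define PHYS_TO_DMAP_SKETCH(pa) (DMAP_BASE_SKETCH + (uint64_t)(pa))
      #define DMAP_TO_PHYS_SKETCH(va) ((uint64_t)(va) - DMAP_BASE_SKETCH)

      int
      main(void)
      {
              uint64_t pa = 0x12345000ULL;            /* some physical page address */
              uint64_t va = PHYS_TO_DMAP_SKETCH(pa);  /* usable KVA, no pte fiddling */

              printf("phys 0x%" PRIx64 " <-> dmap va 0x%" PRIx64 "\n", pa, va);
              printf("round trip ok: %d\n", DMAP_TO_PHYS_SKETCH(va) == pa);
              return (0);
      }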
* Commit MD parts of a loosely functional AMD64 port (peter, 2003-05-01, 1 file, -17/+48)
  This is based on a heavily stripped down FreeBSD/i386 (brutally stripped down actually) to
  attempt to get a stable base to start from. There is a lot missing still. Worth noting:
  - The kernel runs at 1GB in order to cheat with the pmap code. pmap uses a variation of the
    PAE code in order to avoid having to worry about 4 levels of page tables yet.
  - It boots in 64 bit "long mode" with a tiny trampoline embedded in the i386 loader. This
    simplifies locore.s greatly.
  - There are still quite a few fragments of i386-specific code that have not been translated
    yet, and some that I cheated and wrote dumb C versions of (bcopy etc).
  - It has both int 0x80 for syscalls (but using registers for argument passing, as is native
    on the amd64 ABI), and the 'syscall' instruction for syscalls. int 0x80 preserves all
    registers, 'syscall' does not.
  - I have tried to minimize looking at the NetBSD code, except in a couple of places (eg: to
    find which register they use to replace the trashed %rcx register in the syscall
    instruction). As a result, there is not a lot of similarity. I did look at NetBSD a few
    times while debugging to get some ideas about what I might have done wrong in my first
    attempt.
* Repocopy from x86_64/... to amd64/... (peter, 2003-04-30, 1 file, -8/+7)
  Rename visible x86_64 references to amd64. Kill MID_MACHINE; it's a.out specific, and the
  only platform that supports it is i386. All of the other platforms should remove it too.
* Initiate deorbit burn for the i386-only a.out related support (peter, 2002-09-17, 1 file, -7/+0)
  Moves are under way to move the remnants of the a.out toolchain to ports. As the comment in
  src/Makefile said, this stuff is deprecated and one should not expect this to remain beyond
  4.0-REL. It has already lasted WAY beyond that.
  Notable exceptions:
  - gcc: I have not touched the a.out generation stuff there.
  - ldd/ldconfig: still have some code to interface with a.out rtld.
  - old as/ld/etc: I have not removed these yet, pending their move to ports.
  - some includes: necessary for ldd/ldconfig for now.
  Tested on:  i386 (extensively), alpha
* This is the start of the FreeBSD/x86_64 kernel (obrien, 2002-06-30, 1 file, -0/+138)