path: root/sys/powerpc/aim
Commit log (most recent first); each entry shows [author, date, files changed, lines -removed/+added].
* Introduce a procedure, pmap_page_init(), that initializes the
  vm_page's machine-dependent fields.  [alc, 2005-06-10, 1 file, -0/+8]

  Use this function in vm_pageq_add_new_page() so that the vm_page's
  machine-dependent and machine-independent fields are initialized at
  the same time.  Remove code from pmap_init() for initializing the
  vm_page's machine-dependent fields.  Remove stale comments from
  pmap_init().

  Eliminate the Boolean variable pmap_initialized from the alpha,
  amd64, i386, and ia64 pmap implementations.  Its use is no longer
  required because of the above changes and earlier changes that
  result in physical memory that is being mapped at initialization
  time being mapped without pv entries.

  Tested by:    cognet, kensmith, marcel
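  A minimal sketch of what such a hook can look like on powerpc,
  assuming the md_page field names (mdpg_pvoh, mdpg_attrs); not the
  committed code, which may even be an empty stub:

    #include <sys/param.h>
    #include <sys/queue.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    void
    pmap_page_init(vm_page_t m)
    {
        /* Assumed fields: an empty pvo list and cleared attribute
         * bits, so pmap_init() no longer walks every page. */
        LIST_INIT(&m->md.mdpg_pvoh);
        m->md.mdpg_attrs = 0;
    }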
* Change cpu_set_kse_upcall to a more generic style so we can reuse
  it in other code.  [davidxu, 2005-04-23, 1 file, -5/+13]

  Add cpu_set_user_tls and use it to tweak a user register and set up
  user TLS.  I wanted to merge it into cpu_set_kse_upcall, but since
  cpu_set_kse_upcall is also used by M:N threads, which may not need
  this feature, I wrote a separate cpu_set_user_tls.
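  A hedged sketch of cpu_set_user_tls on 32-bit powerpc, where the
  ABI keeps the thread pointer in r2 (the trapframe field fixreg is
  real; the TLS offset handling is omitted and is an assumption):

    int
    cpu_set_user_tls(struct thread *td, void *tls_base)
    {
        /* Point the user thread-pointer register at the TLS block. */
        td->td_frame->fixreg[2] = (register_t)tls_base;
        return (0);
    }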
* Don't enter the debugger if KDB_UNATTENDED is set or if
  debug.debugger_on_panic=0.  [ps, 2005-04-20, 1 file, -2/+0]

  MFC after:    2 weeks
* Use PCPU_LAZY_INC() for cnt.v_{intr,trap,syscalls} rather than
  atomic operations in some places and simple non-per-CPU math in
  others.  [jhb, 2005-04-12, 1 file, -2/+2]
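  Usage sketch of the tradeoff (the counter name comes from the
  commit; the surrounding lines are illustrative):

    /* Before: a full atomic read-modify-write on every trap. */
    atomic_add_int(&cnt.v_trap, 1);

    /* After: a plain increment; an occasional lost update is an
     * acceptable price for a statistics-only counter. */
    PCPU_LAZY_INC(cnt.v_trap);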
* Change an instance of md_savecrit to md_saved_msr that I missed.
  [jhb, 2005-04-08, 1 file, -1/+1]
* Divorce critical sections from spinlocks.
  [jhb, 2005-04-04, 2 files, -0/+32]

  Critical sections as denoted by critical_enter() and
  critical_exit() are now solely a mechanism for deferring kernel
  preemptions.  They no longer have any effect on interrupts.  This
  means that standalone critical sections are now very cheap, as they
  are simply unlocked integer increments and decrements for the
  common case.

  Spin mutexes now use a separate KPI implemented in MD code:
  spinlock_enter() and spinlock_exit().  This KPI is responsible for
  providing whatever MD guarantees are needed to ensure that a thread
  holding a spin lock won't be preempted by any other code that will
  try to lock the same lock.  For now all archs continue to block
  interrupts in a "spinlock section" as they did formerly in all
  critical sections.

  Note that I've also taken this opportunity to push a few things
  into MD code rather than MI.  For example, critical_fork_exit() no
  longer exists.  Instead, MD code ensures that new threads have the
  correct state when they are created.  Also, we no longer try to fix
  up the idle threads for APs in MI code.  Instead, each arch sets
  the initial curthread and adjusts the state of the idle thread it
  borrows in order to perform the initial context switch.

  This change is largely a big NOP, but the cleaner separation it
  provides will allow for more efficient alternative locking schemes
  in other parts of the kernel (bare critical sections rather than
  per-CPU spin mutexes for per-CPU data, for example).

  Reviewed by:  grehan, cognet, arch@, others
  Tested on:    i386, alpha, sparc64, powerpc, arm, possibly more
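  A minimal sketch of the new MD pair on powerpc, assuming the
  md_spinlock_count and md_saved_msr fields; the committed code may
  differ in detail:

    void
    spinlock_enter(void)
    {
        struct thread *td = curthread;
        register_t msr;

        if (td->td_md.md_spinlock_count == 0) {
            /* First entry: mask external interrupts and remember
             * the previous MSR so the last exit can restore it. */
            msr = intr_disable();
            td->td_md.md_saved_msr = msr;
        }
        td->td_md.md_spinlock_count++;
        critical_enter();       /* also defer preemption */
    }

    void
    spinlock_exit(void)
    {
        struct thread *td = curthread;

        critical_exit();
        if (--td->td_md.md_spinlock_count == 0)
            intr_restore(td->td_md.md_saved_msr);
    }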
* Include <sys/signalvar.h> for trapsignal prototype.
  [grehan, 2005-03-15, 1 file, -0/+1]
* Replaced previous hw.physmem extraction with des's mods to
  getenv_ulong() - much simpler.  [grehan, 2005-03-07, 1 file, -50/+2]

  Pointed out by:       des
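  The shape of the simplification, as a sketch (the call-site details
  are assumptions):

    /* getenv_ulong() does the string parsing and validation that
     * used to be ~50 lines of open-coded extraction. */
    u_long physmem_tunable = 0;
    getenv_ulong("hw.physmem", &physmem_tunable);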
* physmem is a much better indicator for 'real' memory on PPC than
  Maxmem, since there are often significant holes in the memory map
  due to the kernel, loader and OFW data structures not being
  included: Maxmem is the highest available address, so can be
  misleading.  [grehan, 2005-03-07, 1 file, -3/+3]
* Allow user to undersize memory with hw.physmem loader variable.
  [grehan, 2005-03-07, 1 file, -1/+62]

  Obtained from:        i386/machdep.c:getmemsize()
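  A hedged sketch of the idea borrowed from getmemsize(): clamp the
  phys_avail[] chunks so the usable total never exceeds the tunable
  (variable names are assumptions):

    u_long limit;
    vm_paddr_t total = 0;
    int i;

    if (getenv_ulong("hw.physmem", &limit)) {
        for (i = 0; phys_avail[i + 1] != 0; i += 2) {
            vm_paddr_t chunk = phys_avail[i + 1] - phys_avail[i];

            if (total + chunk >= limit) {
                /* Truncate this chunk and terminate the array. */
                phys_avail[i + 1] = phys_avail[i] + (limit - total);
                phys_avail[i + 2] = 0;
                phys_avail[i + 3] = 0;
                break;
            }
            total += chunk;
        }
    }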
* Catch up with "physical memory" sysctl change.
  [grehan, 2005-03-01, 1 file, -0/+2]

  (MFi386: rev 1.608)
* Catch the case where the idle loop is entered with interrupts
  disabled, causing a hard hang.  [grehan, 2005-02-28, 1 file, -1/+9]
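  A sketch of the defensive fix (placement and exact handling are
  assumptions): if PSL_EE is clear when the idle loop waits for work,
  no interrupt can ever wake it, so re-enable before idling:

    register_t msr = mfmsr();

    if ((msr & PSL_EE) == 0)
        mtmsr(msr | PSL_EE);    /* never idle with EE clear */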
* - switch pcpu to a struct declaration ala amd64.  It may be more
    efficient to cache-align this struct, but that's a topic for a
    far-in-the-future commit.
  - eliminate commented-out reference to a non-existent pcpu field.
  [grehan, 2005-02-28, 1 file, -3/+2]
* Correctly set kernelname for kern.bootfile sysctl.
  [grehan, 2005-02-28, 1 file, -0/+10]

  Noticed by:   gad
  Code stolen from:     sparc64
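  The sparc64 shape this was lifted from, as a sketch: the loader
  exports the booted kernel's path in the "kernelname" environment
  variable, which is copied into the MI kernelname[] buffer:

    char *env;

    if ((env = getenv("kernelname")) != NULL) {
        strlcpy(kernelname, env, sizeof(kernelname));
        freeenv(env);
    }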
* Add PVO_FAKE flag to pvo entries for PG_FICTITIOUS mappings, to
  avoid trying to reverse-map a device physical address to the
  vm_page array and walking into non-existent vm weeds.
  [grehan, 2005-02-25, 1 file, -14/+25]

  Found by:     Xorg server exiting
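  A sketch of the guard this enables (the surrounding code is an
  assumption; PTE_RPGN is the real page-number mask):

    if (pvo->pvo_vaddr & PVO_FAKE)
        return;     /* device mapping: no vm_page behind it */
    m = PHYS_TO_VM_PAGE(pvo->pvo_pte.pte_lo & PTE_RPGN);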
* Finish the job of sorting all includes and fix the build by
  including malloc.h before proc.h on sparc64.
  [njl, 2005-02-06, 1 file, -23/+27]

  Noticed by das@

  Compiled on:  alpha, amd64, i386, pc98, sparc64
* Sort includes a little so that bus.h comes before cpu.h (for
  device_t).  [njl, 2005-02-04, 1 file, -4/+4]
* Add an implementation of cpu_est_clockrate(9).  This function
  estimates the current clock frequency for the given CPU id in units
  of Hz.  [njl, 2005-02-04, 1 file, -0/+9]
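  A hedged sketch of one way to do the estimate on powerpc (the SPR
  usage is an assumption, counter setup is omitted, and the committed
  code may measure differently): count cycles with a performance
  counter across a fixed DELAY() window and scale up to Hz:

    int
    cpu_est_clockrate(int cpu_id, uint64_t *rate)
    {
        register_t msr;

        msr = mfmsr();
        mtmsr(msr & ~PSL_EE);   /* keep the window quiet */
        mtspr(SPR_PMC1, 0);     /* assumes PMC1 counts cycles */
        DELAY(1000);            /* 1000 us */
        *rate = (uint64_t)mfspr(SPR_PMC1) * 1000;
        mtmsr(msr);
        return (0);
    }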
* - add wall_cmos_clock and adjkerntz variables, required by msdosfs
  - support adjkerntz sysctl to silence NTP, though it's a null
    implementation at the moment.
  [grehan, 2005-02-04, 1 file, -0/+21]
* Fix (accidental?) lock order reversal in pmap_remove.  Found when a
  process that has mmap'd device mem exits.
  [grehan, 2005-01-21, 1 file, -1/+1]
* - Remove some OBE comments regarding cpu_exit().  cpu_exit() is no
    longer the last action of kern_exit().  Instead, it is a MD
    callout to clean up per-process state during exit.
  - Add notes of concern to Alpha and ia64 about the possible need to
    drop fp state in cpu_thread_exit() rather than in cpu_exit()
    since it is per-thread state rather than per-process.
  [jhb, 2005-01-14, 1 file, -7/+0]
* /* -> /*- for license, minor formatting changes.
  [imp, 2005-01-07, 14 files, -19/+19]
* Correctly initialise the 2nd kernel segment, and don't forget to
  actually install it in the segment register.  This may fix some of
  the weird panics seen when kernel VM is heavily used.
  [grehan, 2004-12-29, 1 file, -1/+3]
* Modify pmap_enter_quick() so that it expects the page queues to be
  locked on entry and it assumes the responsibility for releasing the
  page queues lock if it must sleep.
  [alc, 2004-12-23, 1 file, -2/+0]

  Remove a bogus comment from pmap_enter_quick().

  Using the first change, modify vm_map_pmap_enter() so that the page
  queues lock is acquired and released once, rather than each time
  that a page is mapped.
* In the common case, pmap_enter_quick() completes without sleeping.
  [alc, 2004-12-15, 1 file, -0/+8]

  In such cases, the busying of the page and the unlocking of the
  containing object by vm_map_pmap_enter() and vm_fault_prefault() is
  unnecessary overhead.  To eliminate this overhead, this change
  modifies pmap_enter_quick() so that it expects the object to be
  locked on entry and it assumes the responsibility for busying the
  page and unlocking the object if it must sleep.

  Note: alpha, amd64, i386 and ia64 are the only implementations
  optimized by this change; arm, powerpc, and sparc64 still
  conservatively busy the page and unlock the object within every
  pmap_enter_quick() call.

  Additionally, this change is the first case where we synchronize
  access to the page's PG_BUSY flag and busy field using the
  containing object's lock rather than the global page queues lock.
  (Modifications to the page's PG_BUSY flag and busy field have
  asserted both locks for several weeks, enabling an incremental
  transition.)
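  What the conservative powerpc arrangement looks like, as a sketch
  (the exact body is an assumption):

    void
    pmap_enter_quick(pmap_t pm, vm_offset_t va, vm_page_t m)
    {
        /* Busy the page and drop the locks the MI caller now
         * leaves held, then retake them before returning. */
        vm_page_busy(m);
        vm_page_unlock_queues();
        VM_OBJECT_UNLOCK(m->object);

        pmap_enter(pm, va, m, VM_PROT_READ | VM_PROT_EXECUTE, FALSE);

        VM_OBJECT_LOCK(m->object);
        vm_page_lock_queues();
        vm_page_wakeup(m);
    }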
* Don't include sys/user.h merely for its side-effect of recursively
  including other headers.  [das, 2004-11-27, 3 files, -4/+2]
* U areas are going away, so don't allocate one for process 0.
  [das, 2004-11-20, 1 file, -3/+0]

  Reviewed by:  arch@
* Lock the kernel pmap in pmap_kenter().
  [alc, 2004-09-13, 1 file, -0/+2]

  Tested by:    gallatin@
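  The two added lines were presumably the lock/unlock pair; a sketch:

    /* Serialize kernel-pmap PTE insertion. */
    PMAP_LOCK(kernel_pmap);
    /* ... existing pmap_kenter() body ... */
    PMAP_UNLOCK(kernel_pmap);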
* Refactor a bunch of scheduler code to give basically the same
  behaviour but with slightly cleaned up interfaces.
  [julian, 2004-09-05, 1 file, -1/+1]

  The KSE structure has become the same as the "per thread scheduler
  private data" structure.  In order to not make the diffs too great,
  one is #defined as the other at this time.

  The KSE (or td_sched) structure is now allocated per thread and has
  no allocation code of its own.

  Concurrency for a KSEGRP is now kept track of via a simple pair of
  counters rather than using KSE structures as tokens.

  Since the KSE structure is different in each scheduler,
  kern_switch.c is now included at the end of each scheduler.
  Nothing outside the scheduler knows the contents of the KSE (aka
  td_sched) structure.

  The fields in the ksegrp structure that are to do with the
  scheduler's queueing mechanisms are now moved to the kg_sched
  structure (the per-ksegrp scheduler private data structure).  In
  other words, how the scheduler queues and keeps track of threads is
  no-one's business except the scheduler's.  This should allow people
  to write experimental schedulers with completely different internal
  structuring.

  A scheduler call sched_set_concurrency(kg, N) has been added that
  notifies the scheduler that no more than N threads from that ksegrp
  should be allowed to be concurrently scheduled.  This is also used
  to enforce 'fairness' at this time, so that a ksegrp with 10000
  threads can not swamp the run queue and force out a process with 1
  thread, since the current code will not set the concurrency above
  NCPU, and both schedulers will not allow more than that many onto
  the system run queue at a time.  Each scheduler should eventually
  develop its own methods to do this now that they are effectively
  separated.

  Rejig libthr's kernel interface to follow the same code paths as
  libkse for scope system threads.  This has slightly hurt libthr's
  performance, but I will work to recover as much of it as I can.

  Thread exit code has been cleaned up greatly.  exit and exec code
  now transitions a process back to 'standard non-threaded mode'
  before taking the next step.

  Reviewed by:  scottl, peter
  MFC after:    1 week
* Remove an unneeded argument.
  [julian, 2004-08-31, 1 file, -1/+1]

  The removed argument could trivially be derived from the remaining
  one.  That in turn should be the same as curthread, but it is
  possible that curthread could be expensive to derive on some
  systems, so leave it as an argument.  Having both proc and thread
  as arguments just gives an opportunity for them to get out of sync.

  MFC after:    3 days
* Remove sched_free_thread(), which was only used in diagnostics.
  [julian, 2004-08-31, 1 file, -3/+0]

  It has outlived its usefulness and has started causing panics for
  people who turn on DIAGNOSTIC, in what is otherwise good code.

  MFC after:    2 days
* - Introduce a lock for synchronizing access to the pvo and pteg
    tables.
  - In pmap_enter(), only the acquisition and release of the page
    queues lock needs to check the bootstrap flag.
  [alc, 2004-08-30, 1 file, -8/+28]

  Tested by:    gallatin@
* Eliminate unnecessary indirection.
  [alc, 2004-08-28, 1 file, -2/+2]
* Add pmap locking to many of the functions.
  [alc, 2004-08-26, 1 file, -16/+44]

  Many thanks to Andrew Gallatin for resolving a powerpc-specific
  initialization problem in my original patch.

  Tested by:    gallatin@
* Instead of "OpenFirmware", "openfirmware", etc. use the official
  spelling "Open Firmware" from IEEE 1275 and OpenFirmware.org (no
  pun intended).  [marius, 2004-08-16, 5 files, -7/+7]

  Ok'ed by:     tmm
* Add /dev/mem and /dev/kmem to powerpc.
  [ssouhlal, 2004-08-16, 2 files, -0/+29]

  Approved by:  grehan (mentor)
* In pmap_page_protect, clear the vm page's PG_WRITEABLE flag if
  downgrading to read-only.  [grehan, 2004-08-05, 1 file, -1/+7]

  Found by triggering the KASSERT in vm_pageout_flush().
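  The core of the fix, as a sketch (exact placement assumed):

    /* Once no mapping can write the page, say so, or
     * vm_pageout_flush() will trip its KASSERT. */
    if ((prot & VM_PROT_WRITE) == 0)
        vm_page_flag_clear(m, PG_WRITEABLE);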
* - Push down the acquisition and release of Giant into
    pmap_enter_quick() on those architectures without pmap locking.
  - Eliminate the acquisition and release of Giant in
    vm_map_pmap_enter().
  [alc, 2004-08-04, 1 file, -0/+2]
* Kernel traps were not being passed to trap_fatal in some
  circumstances.  [grehan, 2004-08-02, 1 file, -1/+2]

  Spotted by:   gallatin
* - Push down the acquisition and release of Giant into
    pmap_protect() on those architectures without pmap locking.
  - Eliminate the acquisition and release of Giant from
    vm_map_protect().  (Translation: mprotect(2) runs to completion
    without touching Giant on alpha, amd64, i386 and ia64.)
  [alc, 2004-07-30, 1 file, -0/+4]
* Implement MD parts of ptrace.
  [ssouhlal, 2004-07-29, 1 file, -13/+43]

  Approved by:  grehan (mentor)
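  One representative MD piece, sketched with the powerpc trapframe
  and struct reg field names (the committed code may copy the block
  wholesale instead):

    int
    fill_regs(struct thread *td, struct reg *regs)
    {
        struct trapframe *tf = td->td_frame;

        /* Copy the 32 GPRs plus the special registers that
         * PT_GETREGS exposes to the debugger. */
        memcpy(regs->fixreg, tf->fixreg, sizeof(regs->fixreg));
        regs->lr = tf->lr;
        regs->cr = tf->cr;
        regs->xer = tf->xer;
        regs->ctr = tf->ctr;
        regs->pc = tf->srr0;
        return (0);
    }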
* Save DAR/DSISR in DDB regsave area when stack overflow detected.
  It's hard to work out where the problem was without these.
  [grehan, 2004-07-27, 1 file, -0/+4]
* Improve boot-time debugging with DDB by extracting the ksym
  start/end values from the loader.
  [grehan, 2004-07-27, 1 file, -0/+9]
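  A sketch of the extraction (MD_FETCH and the MODINFOMD_* keys are
  the standard loader-metadata interface; the module type string and
  surrounding code are assumptions):

    caddr_t kmdp;

    kmdp = preload_search_by_type("elf kernel");
    if (kmdp != NULL) {
        ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
        ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
    }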
* Implement the protection check required by the
  pmap_extract_and_hold() specification.
  [alc, 2004-07-26, 1 file, -3/+6]

  Reviewed and tested by:       grehan@
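  The check, sketched against the aim pvo/pte structures (details
  assumed): a write probe may only succeed through a writable PTE:

    pvo = pmap_pvo_find_va(pmap, va & ~ADDR_POFF, NULL);
    if (pvo != NULL &&
        ((prot & VM_PROT_WRITE) == 0 ||
        (pvo->pvo_pte.pte_lo & PTE_PP) == PTE_RW)) {
        m = PHYS_TO_VM_PAGE(pvo->pvo_pte.pte_lo & PTE_RPGN);
        vm_page_hold(m);
    }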
* Detect kernel stack excursion into guard pages.  Drop into KDB with
  a wired stack if this is found.
  [grehan, 2004-07-23, 1 file, -5/+36]

  Mostly obtained from: NetBSD
* Bring KDB stack size into line with thread stack size (4 pages).
  [grehan, 2004-07-23, 1 file, -1/+1]
* Allow DSI exceptions to invoke DDB.
  [grehan, 2004-07-23, 1 file, -1/+2]
* Update the callframe structure to leave space for the frame pointer
  and saved link register as per the ABI call sequence.
  [grehan, 2004-07-22, 2 files, -1/+6]

  Update code that uses this (fork_trampoline etc) to use the correct
  genassym'd offsets.  This fixes the 'invalid LR' message when
  backtracing kernel threads in DDB.
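  A sketch of the adjusted layout (field names are assumptions; the
  ABI reserves the first two words of every stack frame for the
  caller's back chain and the LR save slot):

    struct callframe {
        register_t  cf_dummy_fp;    /* back chain / frame pointer */
        register_t  cf_lr;          /* LR save word */
        register_t  cf_func;
        register_t  cf_arg0;
        register_t  cf_arg1;
    };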
* Properly obey PPC context synchronization rules when modifying the
  address translation bits of the MSR.
  [grehan, 2004-07-20, 1 file, -0/+2]

  This fixes the boot-time panic reported by Drew Gallatin.
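  The rule, as a two-line sketch: mtmsr is not context-synchronizing
  with respect to the IR/DR translation enables, so an isync must
  follow before any instruction that depends on the new translation
  state:

    mtmsr(msr | (PSL_IR | PSL_DR));
    __asm __volatile("isync");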
* Push down the acquisition and release of the page queues lock into
  pmap_protect() and pmap_remove().
  [alc, 2004-07-15, 1 file, -0/+4]

  In general, they require the lock in order to modify a page's pv
  list or flags.  In some cases, however, pmap_protect() can avoid
  acquiring the lock.