path: root/sys/ia64
Commit log (newest first):
* When switching backing store during signal delivery, do the switch before
  creating the register frame for calling the handler. Also discard that
  frame before switching back to the old backing store after the handler
  returns.
  [dfr, 2001-04-24; 2 files, -14/+16]

* Align the stack pointer and backing store pointer to a 16-byte boundary
  when delivering signals.
  [dfr, 2001-04-24; 1 file, -0/+5]

* Don't trash the user's pr on syscalls.
  [dfr, 2001-04-24; 2 files, -2/+4]

* Don't unwrap the function descriptor used as the callout argument to
  fork_exit(). The MI version of fork_exit() needs a real function
  descriptor, not a simple function pointer.
  [dfr, 2001-04-19; 1 file, -7/+2]

* Don't take the Giant mutex for clock interrupts.
  [dfr, 2001-04-19; 1 file, -2/+0]

* Don't panic when we try to modify the kernel pmap.
  [dfr, 2001-04-18; 1 file, -2/+2]

* Print an approximation of the function arguments in the stack trace.
  [dfr, 2001-04-18; 1 file, -10/+25]

* Implement a simple stack trace for DDB. This will have to be redone
  if/when we change to a more modern toolchain.
  [dfr, 2001-04-18; 3 files, -11/+68]

* Record the right value for tf_ndirty for kernel interruptions so that
  we can examine the interrupted register stack frame in DDB.
  [dfr, 2001-04-18; 2 files, -6/+6]

* Turn on kernel debugging support (DDB, INVARIANTS, INVARIANT_SUPPORT,
  WITNESS) by default while SMPng is still being developed.
  Submitted by: jhb
  [obrien, 2001-04-15; 1 file, -1/+7]

* Rename the IPI API from smp_ipi_* to ipi_* since the smp_ prefix is just
  "redundant noise" and to match the IPI constant namespace (IPI_*).
  Requested by: bde
  [jhb, 2001-04-11; 2 files, -20/+20]

* Reduce the emasculation of bounds_check_with_label() by one line, so we
  propagate a bio error condition to the caller and above.
  [obrien, 2001-03-29; 1 file, -1/+1]

* Convert the allproc and proctree locks from lockmgr locks to sx locks.
  [jhb, 2001-03-28; 1 file, -3/+5]

* Rework the witness code to work with sx locks as well as mutexes.
  - Introduce lock classes and lock objects. Each lock class specifies a
    name and set of flags (or properties) shared by all locks of a given
    type. Currently there are three lock classes: spin mutexes, sleep
    mutexes, and sx locks. A lock object specifies properties of an
    individual lock along with a lock name and all of the extra stuff
    needed to make witness work with a given lock. This abstract lock
    stuff is defined in sys/lock.h. The lockmgr constants, types, and
    prototypes have been moved to sys/lockmgr.h. For temporary backwards
    compatibility, sys/lock.h includes sys/lockmgr.h.
  - Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of
    spin locks held. By making this per-CPU, we do not have to jump
    through magic hoops to deal with sched_lock changing ownership during
    context switches.
  - Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
    proc->p_sleeplocks, which is a list of held sleep locks including
    sleep mutexes and sx locks.
  - Add helper macros for logging lock events via the KTR_LOCK KTR
    logging level so that the log messages are consistent.
  - Add some new flags that can be passed to mtx_init():
    - MTX_NOWITNESS - specifies that this lock should be ignored by
      witness. This is used for the mutex that blocks an sx lock, for
      example.
    - MTX_QUIET - this is not new, but you can pass this to mtx_init()
      now and no events will be logged for this lock, so that one doesn't
      have to change all the individual mtx_lock/unlock() operations.
  - All lock objects maintain an initialized flag. Use this flag to
    export a mtx_initialized() macro that can be safely called from
    drivers. Also, we no longer walk the all_mtx list if MUTEX_DEBUG is
    defined, as witness performs the corresponding checks using the
    initialized flag.
  - The lock order reversal messages have been improved to output
    slightly more accurate file and line numbers.
  [jhb, 2001-03-28; 3 files, -3/+3]
* Switch from save/disable/restore_intr() to critical_enter/exit().
  [jhb, 2001-03-28; 1 file, -4/+3]

* Catch up to the mtx_saveintr -> mtx_savecrit change.
  [jhb, 2001-03-28; 1 file, -1/+1]

* - Switch from using save/disable/restore_intr to using
    critical_enter/exit and change the u_int mtx_saveintr member of
    struct mtx to a critical_t mtx_savecrit.
  - On the alpha we no longer need a custom _get_spin_lock() macro to
    avoid an extra PAL call, so remove it.
  - Partially fix using mutexes with WITNESS in modules. Change all the
    _mtx_{un,}lock_{spin,}_flags() macros to accept explicit file and
    line parameters and rename them to use a prefix of two underscores.
    Inside of kern_mutex.c, generate wrapper functions for
    _mtx_{un,}lock_{spin,}_flags() (only using a prefix of one
    underscore) that are called from modules. The macros
    mtx_{un,}lock_{spin,}_flags() are mapped to the __mtx_* macros inside
    of the kernel to inline the usual case of mutex operations, and map
    to the internal _mtx_* functions in the module case, so that modules
    will use WITNESS and KTR logging if the kernel is compiled with
    support for it.
  [jhb, 2001-03-28; 1 file, -1/+1]

* - Add the new critical_t type used to save state inside of critical
    sections.
  - Add implementations of the critical_enter() and critical_exit()
    functions and remove restore_intr() and save_intr().
  - Remove the somewhat bogus disable_intr() and enable_intr() functions
    on the alpha, as the alpha actually uses a priority level and not a
    simple bit flag on the CPU.
  [jhb, 2001-03-28; 2 files, -9/+11]
* Send the remains (such as I have located) of "block major numbers" to
  the bit-bucket.
  [phk, 2001-03-26; 3 files, -4/+0]

* Unbreak build on alpha.
  - Move in_port_t to sys/types.h.
  - Nuke in_addr_t from each endian.h.
  Reported by: jhb
  [ume, 2001-03-24; 1 file, -3/+0]

* - Define and use MAXCPU like the alpha and i386 instead of NCPUS.
  - Sort the sys/mutex.h include in mp_machdep.c into a closer-to-correct
    location.
  [jhb, 2001-03-24; 2 files, -6/+10]

* Stick a prototype for handleclock() in machine/clock.h and include it
  in interrupt.c to quiet a warning.
  [jhb, 2001-03-24; 2 files, -0/+2]

* Export intrnames and intrcnt as sysctls (hw.nintr, hw.intrnames and
  hw.intrcnt).
  Approved by: rwatson
  [tmm, 2001-03-23; 1 file, -0/+2]

* Use a generic implementation of the Fowler/Noll/Vo hash (FNV hash).
  Make the name cache hash as well as the nfsnode hash use it.
  As a special tweak, create an unsigned version of register_t. This
  allows us to use a special tweak for the 64-bit versions that
  significantly speeds up the i386 version (i.e. int64 XOR int64 is
  slower than int64 XOR int32). The code layout is a little strange for
  the string function, but I was able to get between 5 and 10%
  improvement over the original version I started with. The layout
  affects gcc code generation choices and this way was fastest on x86
  and alpha. Note that 'CPUTYPE=p3' etc. makes a fair difference to
  this. It is around 45% faster with -march=pentiumpro on a p6 cpu.
  [peter, 2001-03-17; 1 file, -1/+1]
* Allow the config file to specify a root filesystem filename.
  [dfr, 2001-03-09; 1 file, -1/+6]

* Adjust a comment slightly.
  [dfr, 2001-03-09; 1 file, -2/+1]

* Fix mtx_legal2block. The only time that it is bad to block on a mutex
  is if we hold a spin mutex, since we can trivially get into deadlocks
  if we start switching out of processes that hold spinlocks. Checking
  to see if interrupts were disabled was a sort of cheap way of doing
  this, since most of the time interrupts were only disabled when
  holding a spin lock, at least on the i386. To fix this properly, use
  a per-process counter p_spinlocks that counts the number of spin locks
  currently held, and instead of checking to see if interrupts are
  disabled in the witness code, check to see if we hold any spin locks.
  Since child processes always start up with the sched lock magically
  held in fork_exit(), we initialize p_spinlocks to 1 for child
  processes. Note that proc0 doesn't go through fork_exit(), so it
  starts with no spin locks held.
  Consulting from: cp
  [jhb, 2001-03-09; 1 file, -1/+0]

* Unrevert the pmap_map() changes. They weren't broken on x86.
  Sense beaten into me by: peter
  [jhb, 2001-03-07; 1 file, -32/+26]

* - Release Giant a bit earlier on syscall exit.
  - Don't try to grab Giant before postsig() in userret() as it is no
    longer needed.
  - Don't grab Giant before psignal() in ast() but get the proc lock
    instead.
  [jhb, 2001-03-07; 1 file, -13/+8]

* Grab the process lock before and while calling psignal.
  [jhb, 2001-03-07; 1 file, -1/+1]

* Use the proc lock to protect p_pptr when waking up our parent in
  cpu_exit() and remove the mpfixme() message that is now fixed.
  [jhb, 2001-03-07; 1 file, -2/+2]

* Back out the pmap_map() change for now; it isn't completely stable on
  the i386.
  [jhb, 2001-03-07; 1 file, -26/+32]

* Don't psignal() a process from forward_hardclock() but set the
  appropriate pending flag in p_sflag instead.
  [jhb, 2001-03-06; 1 file, -2/+2]

* - Rework pmap_map() to take advantage of direct-mapped segments on
    supported architectures such as the alpha. This allows us to save
    on kernel virtual address space, TLB entries, and (on the ia64) VHPT
    entries. pmap_map() now modifies the passed-in virtual address on
    architectures that do not support direct-mapped segments to point to
    the next available virtual address. It also returns the actual
    address that the request was mapped to.
  - On the ia64 don't use a special zone of PV entries needed for early
    calls to pmap_kenter() during pmap_init(). This gets us in trouble
    because we end up trying to use the zone allocator before it is
    initialized. Instead, with the pmap_map() change, the number of
    needed PV entries is small enough that we can get by with a static
    pool that is used until pmap_init() is complete.
  Submitted by: dfr
  Debugging help: peter
  Tested by: me
  [jhb, 2001-03-06; 1 file, -32/+26]

* Fix a couple of typos which became obvious when I started to actually
  use this on real hardware.
  [dfr, 2001-03-04; 1 file, -3/+3]

* sched_swi -> swi_sched
  [jhb, 2001-02-24; 1 file, -1/+1]

* Don't include machine/mutex.h and relocate sys/mutex.h's include to be
  closer to alphabetical order and identical to that of the alpha.
  [jhb, 2001-02-24; 1 file, -2/+1]

* Clockframes have a trapframe stored in a cf_tf member, not ct_tf.
  [jhb, 2001-02-24; 1 file, -1/+1]

* Whitespace nits.
  [jhb, 2001-02-24; 1 file, -2/+2]

* Pass in the process to mark an AST on to aston().
  [jhb, 2001-02-24; 1 file, -1/+1]

* Axe pcb_schednest as it is no longer used.
  [jhb, 2001-02-22; 2 files, -2/+0]

* Rename switch_trampoline() to fork_trampoline() on the alpha and ia64.
  Suggested by: dfr
  [jhb, 2001-02-22; 3 files, -6/+6]

* Don't set the sched_lock nesting level for new processes as it is no
  longer used.
  [jhb, 2001-02-22; 1 file, -7/+0]

* Catch comments up to the child_return() -> fork_return() rename as
  well.
  [jhb, 2001-02-22; 1 file, -2/+2]

* Synch up with the other architectures:
  - Remove unneeded spl()'s around mi_switch() in userret().
  - Don't hold sched_lock across addupc_task().
  - Remove the MD function child_return() now that the MI function
    fork_return() is used instead.
  - Use TRAPF_USERMODE() instead of dinking with the trapframe directly
    to check for ASTs in kernel mode.
  - Check astpending(curproc) and resched_wanted() in ast() and return
    if neither is true.
  - Use astoff() rather than setting the non-existent per-CPU variable
    astpending to 0 to clear an AST.
  [jhb, 2001-02-22; 1 file, -39/+18]

* Use the MI fork_return() fork trampoline callout function for child
  processes instead of the MD child_return().
  [jhb, 2001-02-22; 1 file, -1/+1]

* - Don't dink with sched_lock in cpu_switch() since mi_switch() does
    this for us.
  - Change switch_trampoline() to call fork_exit(), passing in the
    required arguments, instead of calling the fork trampoline callout
    function directly.
  Warning: this hasn't been tested.
  Looked over by: dfr
  [jhb, 2001-02-22; 1 file, -23/+14]

* - Axe the now-unused ASS_* assertions for interrupt status.
  - Use ia64_get_psr() instead of save_intr() in mtx_legal2block().
  [jhb, 2001-02-22; 1 file, -10/+1]

* Add an inline function to read the psr.
  [jhb, 2001-02-22; 1 file, -0/+11]

* Add a mtx_intr_enable() macro.
  [jhb, 2001-02-22; 1 file, -0/+1]