path: root/sys/kern/kern_clock.c
* Implement the DTrace sched provider (rstone, 2012-05-15, 1 file, -0/+6)
  This implementation aims to be compatible with the sched provider
  implemented by Solaris and its open-source derivatives. Full documentation
  of the sched provider can be found on Oracle's DTrace wiki pages.

  Note that for compatibility with scripts originally written for Solaris,
  several probes are defined that will never fire. These probes are defined
  to fire when Solaris-specific features perform certain actions. As these
  features are not present in FreeBSD, the probes can never fire.

  Also, I have added two probes that are not defined in Solaris, lend-pri
  and load-change. These probes have been added to make it possible to
  collect schedgraph data with DTrace.

  Finally, a few probes are defined in Solaris to take a cpuinfo_t *
  argument. As it was not immediately clear to me how to translate that to
  FreeBSD, those probes are currently passed NULL in place of a cpuinfo_t *.

  Sponsored by: Sandvine Incorporated
  MFC after:    2 weeks
* Add software PMC support (fabient, 2012-03-28, 1 file, -1/+11)
  New kernel events can be added at various locations for sampling or
  counting. This will, for example, allow easy system profiling with known
  tools like pmcstat(8) regardless of the processor. Simultaneous use of
  software and hardware PMCs is possible, for example looking at lock
  acquisition failures and page faults while sampling on instructions.

  Sponsored by: NETASQ
  MFC after:    1 month
* Idle ticks optimization (mav, 2012-03-10, 1 file, -16/+31)
  - Pass the number of events to the statclock() and profclock() functions,
    as was already done for hardclock(), so they need not be called many
    times in a loop.
  - Rename them to statclock_cnt() and profclock_cnt().
  - Turn statclock() and profclock() into compatibility wrappers, still
    needed for arm. (A sketch of such wrappers follows below.)
  - Rename hardclock_anycpu() to hardclock_cnt() for unification.

  MFC after: 1 week
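  A minimal sketch of what such compatibility wrappers can look like. The
  _cnt names come from the commit message above; the bodies and the pc
  argument type are assumptions, not the committed code:

      void statclock_cnt(int cnt, int usermode);
      void profclock_cnt(int cnt, int usermode, unsigned long pc);

      /* The old single-event entry points just forward one event to the
       * new count-taking functions. */
      void
      statclock(int usermode)
      {
          statclock_cnt(1, usermode);
      }

      void
      profclock(int usermode, unsigned long pc)
      {
          profclock_cnt(1, usermode, pc);
      }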
* Mark all SYSCTL_NODEs static that have no corresponding SYSCTL_DECLs
  (ed, 2011-11-07, 1 file, -1/+2)
  The SYSCTL_NODE macro defines a list that stores all child elements of
  that node. If there's no SYSCTL_DECL macro anywhere else, there's no
  reason why it shouldn't be static.
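  For illustration, the resulting pattern; the node and leaf names here are
  hypothetical, not part of the commit:

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/sysctl.h>

      /* No SYSCTL_DECL() for this node exists elsewhere, so the child
       * list the macro defines can be static to this file. */
      static SYSCTL_NODE(_kern, OID_AUTO, example, CTLFLAG_RW, 0,
          "hypothetical example node");

      /* A read-only leaf hung off the now-static node. */
      static int example_value = 42;
      SYSCTL_INT(_kern_example, OID_AUTO, value, CTLFLAG_RD,
          &example_value, 0, "hypothetical example value");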
* Fix interrupt counter dumping when SW_WATCHDOG fires (mav, 2011-09-27,
  1 file, -1/+1)
* Remove the eintrcnt/eintrnames usage and introduce sintrcnt/sintrnames
  (attilio, 2011-07-18, 1 file, -1/+1)
  - Remove the eintrcnt/eintrnames usage and introduce the concept of
    sintrcnt/sintrnames, which are symbols containing the size of the two
    tables.
  - For amd64/i386, remove the storage of the intr* stuff from assembly
    files. This area can be widely improved by applying the same to other
    architectures, likely finding a unified approach among them, and moving
    the whole code to be MI. More work in this area is expected to happen
    fairly soon.

  No MFC is planned for this patch.

  Tested by:   pluknet
  Reviewed by: jhb
  Approved by: re (kib)
* Fix several places to ignore processes that are not yet fully constructed
  (jhb, 2011-04-06, 1 file, -0/+4)
  MFC after: 1 week
* After some off-list discussion, revert a number of changes to the
  DPCPU_DEFINE and VNET_DEFINE macros (dim, 2010-11-22, 1 file, -1/+1)
  These changes cause problems for various people working on the affected
  files. A better long-term solution is still being considered. This
  reversal may give some modules empty set_pcpu or set_vnet sections, but
  these are harmless.

  Changes reverted:
  - r215318 | dim | 2010-11-14: Instead of unconditionally emitting .globl's
    for the __start_set_xxx and __stop_set_xxx symbols, only emit them when
    the set_vnet or set_pcpu sections are actually defined.
  - r215317 | dim | 2010-11-14: Apply the STATIC_VNET_DEFINE and
    STATIC_DPCPU_DEFINE macros throughout the tree.
  - r215316 | dim | 2010-11-14: Add macros to define static instances of
    VNET_DEFINE and DPCPU_DEFINE.
* Apply the STATIC_VNET_DEFINE and STATIC_DPCPU_DEFINE macros throughout
  the tree (dim, 2010-11-14, 1 file, -1/+1)
* Tweak the wait channel messages for the deadlock detection kthread
  (jhb, 2010-11-02, 1 file, -2/+2)
  Use a shorter message (userland generally only sees the first 6 to 8
  characters) when waiting for the allproc lock. Use "-" when idle to match
  the behavior of other kthreads.

  Reviewed by: attilio
  MFC after:   1 week
* Make kern_tc.c provide a minimum frequency of tc_ticktock() calls
  (mav, 2010-09-14, 1 file, -2/+2)
  This minimum is required to handle current timecounter wraps. Make
  kern_clocksource.c honor that requirement, scheduling sleeps on the first
  CPU for no more than the specified period. Allow other CPUs to sleep up
  to 1/4 second (in any case).
* Replace the spin lock with a set of atomics (mav, 2010-09-14, 1 file,
  -10/+14)
  It is impractical for one tc_ticktock() call to wait for another's
  completion -- just skip it.
* Refactor timer management code with priority to one-shot operation mode
  (mav, 2010-09-13, 1 file, -52/+88)
  The main goal of this is to generate timer interrupts only when there is
  some work to do. When the CPU is busy, interrupts are generated at the
  full rate of hz + stathz to fulfill scheduler and timekeeping
  requirements. But when the CPU is idle, only the minimum set of
  interrupts (down to 8 interrupts per second per CPU now) needed to handle
  scheduled callouts is executed. This allows a significant increase in
  idle CPU sleep time, increasing the effect of static power-saving
  technologies. It should also reduce host CPU load on virtualized systems
  when the guest system is idle.

  There is a set of tunables, also available as writable sysctls, allowing
  control of the event timer subsystem behavior (see the sketch after this
  entry):
  - kern.eventtimer.timer: chooses the event timer hardware to use. On x86
    there are up to 4 different kinds of timers. Depending on whether the
    chosen timer is per-CPU, the behavior of the other options differs
    slightly.
  - kern.eventtimer.periodic: chooses between periodic and one-shot
    operation mode. In periodic mode, the current timer hardware is taken
    as the only source of time for time events. This mode is quite similar
    to the previous kernel behavior. One-shot mode instead uses the
    currently selected time counter hardware to schedule all needed events
    one by one, and programs the timer to generate an interrupt at exactly
    the specified time. The default value depends on the chosen timer's
    capabilities, but one-shot mode is preferred unless the other is forced
    by the user or hardware.
  - kern.eventtimer.singlemul: in periodic mode, specifies how many times
    higher the timer frequency should be, so as not to strictly alias the
    hardclock() and statclock() events. Default values are 2 and 4, but
    they can be reduced to 1 if extra interrupts are unwanted.
  - kern.eventtimer.idletick: makes each CPU receive every timer interrupt
    independently of whether it is busy or not. By default this option is
    disabled. If the chosen timer is per-CPU and runs in periodic mode,
    this option has no effect - all interrupts are generated anyway.

  As this patch modifies cpu_idle() on some platforms, I have also
  refactored it on x86. It now makes use of MONITOR/MWAIT instructions (if
  supported) under high sleep/wakeup rates, as a fast alternative to other
  methods. It allows the SMP scheduler to wake up sleeping CPUs much faster
  without using IPIs, significantly increasing performance on some highly
  task-switching loads.

  Tested by:      many (on i386, amd64, sparc64 and powerpc)
  H/W donated by: Gheorghe Ardelean
  Sponsored by:   iXsystems, Inc.
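  A userland sketch of inspecting and switching one of these knobs with
  sysctlbyname(3), assuming kern.eventtimer.periodic is an int sysctl as
  described above (setting it requires root):

      #include <sys/types.h>
      #include <sys/sysctl.h>
      #include <stdio.h>

      int
      main(void)
      {
          int periodic, oneshot = 0;
          size_t len = sizeof(periodic);

          /* Read the current mode: non-zero means periodic. */
          if (sysctlbyname("kern.eventtimer.periodic", &periodic, &len,
              NULL, 0) == -1) {
              perror("read");
              return (1);
          }
          printf("periodic: %d\n", periodic);

          /* Ask for one-shot mode by writing 0 back. */
          if (sysctlbyname("kern.eventtimer.periodic", NULL, NULL,
              &oneshot, sizeof(oneshot)) == -1)
              perror("write");
          return (0);
      }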
* Simplify logic in handling the ticks wrap-up (attilio, 2010-07-07,
  1 file, -13/+10)
  - Simplify the logic handling ticks counter wrap-up.
  - Fix a bug where a thread may be in the sleeping state but the wchan
    won't be set, leading to an empty container for sleepq_type(). [0]

  Sponsored by:     Sandvine Incorporated
  [0] Submitted by: Bryan Venteicher <bryanv at daemoninthecloset dot org>
  MFC after:        3 days
  X-MFC:            209577
* Fix a lock leak in the deadlock resolver in case the ticks counter
  wrapped (attilio, 2010-06-28, 1 file, -2/+6)
  Sponsored by: Sandvine Incorporated
  Submitted by: pluknet <pluknet at gmail dot com>
  Reported by:  Anton Yuzhaninov <citrin at citrin dot ru>
  Reviewed by:  jhb
  MFC after:    3 days
* Use ISO C99 integer types in sys/kern where possible (ed, 2010-06-21,
  1 file, -1/+1)
  There are only about 100 occurrences of the BSD-specific u_int*_t
  datatypes in sys/kern. The ISO C99 integer types are used here more
  often.
* Implement new event timers infrastructure (mav, 2010-06-20, 1 file,
  -6/+6)
  It provides unified APIs for writing event timer drivers, for choosing
  the best possible drivers by machine-independent code, and for operating
  them to supply the kernel with hardclock(), statclock() and profclock()
  events in a unified fashion on various hardware.

  The infrastructure supports both per-CPU (independent for every CPU core)
  and global timers, in periodic and one-shot modes. The MI management code
  at this moment uses only periodic mode, but one-shot mode use is planned
  for later, as part of the tickless kernel project.

  For the moment the infrastructure is used on the i386 and amd64
  architectures. Other archs are welcome to follow, while their current
  operation should not be affected.

  This patch updates the existing drivers (i8254, RTC and LAPIC) for the
  new order, and adds event timer support to the HPET driver. These drivers
  have different capabilities:
  - LAPIC: per-CPU timer, supports periodic and one-shot operation, may
    freeze in C3 state, calibrated on first use, so may be not exactly
    precise.
  - HPET: depending on hardware, can work as per-CPU or global; supports
    periodic and one-shot operation; usually provides several event timers.
  - i8254: global, limited to periodic mode, because the same hardware is
    also used as a time counter.
  - RTC: global, supports only periodic mode, with the set of frequencies
    in Hz limited to powers of 2.

  Depending on hardware capabilities, drivers are preferred in the
  following order: either LAPIC, HPET, i8254, RTC or HPET, LAPIC, i8254,
  RTC. The user may explicitly specify wanted timers via loader tunables or
  sysctls: kern.eventtimer.timer1 and kern.eventtimer.timer2. If the
  requested driver is unavailable or inoperational, the system will try to
  replace it. If no more timers are available, or "NONE" is specified for
  the second, the system will operate using only one timer, multiplying its
  frequency several times and using respective dividers to honor the hz,
  stathz and profhz values set during initial setup.
* Update several places that iterate over CPUs to use CPU_FOREACH()
  (jhb, 2010-06-11, 1 file, -3/+1)
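  A minimal sketch of the idiom. CPU_FOREACH() iterates over present CPU
  IDs, replacing open-coded loops that combined mp_maxid with CPU_ABSENT()
  checks; the per-CPU counter array here is hypothetical:

      #include <sys/param.h>
      #include <sys/pcpu.h>
      #include <sys/smp.h>

      static long
      total_counter(const long *percpu_counts)
      {
          long total = 0;
          int cpu;

          /* Skips CPU IDs that are absent on this machine. */
          CPU_FOREACH(cpu)
              total += percpu_counts[cpu];
          return (total);
      }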
* Implement MI helper functions for dividing timer interrupts (mav,
  2010-05-24, 1 file, -0/+52)
  - Implement MI helper functions dividing one or two timer interrupts with
    arbitrary frequencies into hardclock(), statclock() and profclock()
    calls. The same code, with minor variations, was duplicated several
    times over the tree for different timer drivers and architectures.
  - Switch all x86 archs to the new functions, simplifying the code and
    removing extra logic from timer drivers. Other archs are also welcome.
* getblk lockmgr is mostly used as a msleep() and may too easily lead to
  false positives; whitelist it (attilio, 2010-04-19, 1 file, -0/+1)
  Reported by: Erik Cederstrand <erik at cederstrand dot dk>
* Introduce a blessed list for sxlocks that prevents deadlkres from
  panicking on them (attilio, 2010-04-11, 1 file, -1/+31)
  - Introduce a blessed list for sxlocks that prevents the deadlock
    resolver from panicking on those locks. [0]
  - Fix the ticks counter wrap-up.

  Sponsored by:    Sandvine Incorporated
  [0] Reported by: jilles
  [0] Tested by:   jilles
  MFC:             1 week
* Introduce the new kernel thread called "deadlock resolver" (attilio,
  2010-01-09, 1 file, -1/+121)
  While the name is pretentious, a good explanation of its targets is given
  in this 17-month-old presentation e-mail:
  http://lists.freebsd.org/pipermail/freebsd-arch/2008-August/008452.html

  In order to implement it, the sq_type in sleepqueues is now mandatory and
  not only compiled in along with the INVARIANTS option. Additionally, a
  new sleepqueue function, sleepq_type(), is added, returning the type of
  the sleepqueue linked to a wchan.

  Three new sysctls are added in order to configure the thread:
  - debug.deadlkres.slptime_threshold
  - debug.deadlkres.blktime_threshold
  - debug.deadlkres.sleepfreq

  The first two represent the thresholds for sleep and block time that,
  when exceeded, lead to a deadlock match, while sleepfreq represents the
  number of seconds between two consecutive runs of the thread. In order to
  enable the deadlock resolver thread, recompile your kernel with the
  option DEADLKRES.

  Reviewed by:  jeff
  Tested by:    pho, Giovanni Trematerra
  Sponsored by: Nokia Incorporated, Sandvine Incorporated
  MFC after:    2 weeks
* Mark the clock sysctls as MPSAFE (ed, 2009-05-18, 1 file, -3/+4)
  These sysctls don't need any form of locking. At least cp_times is used
  by powerd very often, which means I get 50% fewer calls to non-MPSAFE
  sysctls on my system. The other 50% is consumed by dev.cpu.0.freq, but
  that one seems to need Giant for Newbus.
* Implement generic macros for producing KTR records compatible with
  schedgraph.py (jeff, 2009-01-17, 1 file, -2/+2)
  - Implement generic macros for producing KTR records that are compatible
    with src/tools/sched/schedgraph.py. This allows developers to quickly
    create a graphical view of ktr data for any resource in the system.
  - Add sched_tdname() and the pcpu field 'name' for quickly and uniformly
    identifying records associated with a thread or cpu.
  - Reimplement the KTR_SCHED traces using the new generic facility.

  Obtained from:  attilio
  Discussed with: jhb
  Sponsored by:   Nokia
* Implement per-cpu callout threads, wheels, and locks (jeff, 2008-04-02,
  1 file, -23/+2)
  - Move callout thread creation from kern_intr.c to kern_timeout.c.
  - Call callout_tick() on every processor via hardclock_cpu() rather than
    inspecting callout internal details in kern_clock.c.
  - Remove callout implementation details from callout.h.
  - Package up all of the global variables into a per-cpu callout
    structure.
  - Start one thread per cpu. Threads are not strictly bound; they prefer
    to execute on the native cpu but may migrate temporarily if interrupts
    are starving callout processing.
  - Run all callouts by default in the thread for cpu0 to maintain current
    ordering and concurrency guarantees. Many consumers may not properly
    handle concurrent execution.
  - The new callout_reset_on() api allows specifying a particular cpu to
    execute the callout on; this may migrate a callout to a new cpu.
    callout_reset() schedules on the last assigned cpu, while
    callout_reset_curcpu() schedules on the current cpu. (A sketch of the
    new api follows below.)

  Reviewed by:  phk
  Sponsored by: Nokia
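  A sketch of the new api; the callback, the rearm interval, and the choice
  of cpu 1 are hypothetical:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/callout.h>

      static struct callout my_callout;

      static void
      my_tick(void *arg)
      {
          /* ... periodic work ... */

          /* Rearm, staying pinned to cpu 1. callout_reset() would reuse
           * the last assigned cpu; callout_reset_curcpu() the current. */
          callout_reset_on(&my_callout, hz, my_tick, arg, 1);
      }

      static void
      my_start(void)
      {
          callout_init(&my_callout, CALLOUT_MPSAFE);
          callout_reset_on(&my_callout, hz, my_tick, NULL, 1);
      }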
* In keeping with style(9)'s recommendations on macros, use a ';' after
  each SYSINIT() macro invocation (rwatson, 2008-03-16, 1 file, -1/+1)
  This makes a number of lightweight C parsers much happier with the
  FreeBSD kernel source, including cflow's prcc and lxr.

  MFC after:      1 month
  Discussed with: imp, rink
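  A small example of the convention; the init function is hypothetical:

      #include <sys/param.h>
      #include <sys/kernel.h>

      static void
      example_init(void *unused)
      {
          /* one-time initialization */
      }
      /* Note the trailing ';' that keeps lightweight C parsers happy. */
      SYSINIT(example, SI_SUB_KLD, SI_ORDER_ANY, example_init, NULL);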
* Remove kernel support for M:N threading (jeff, 2008-03-12, 1 file,
  -8/+0)
  While the KSE project was quite successful in bringing threading to
  FreeBSD, the M:N approach taken by the kse library was never developed to
  its full potential. Backwards compatibility will be provided via
  libmap.conf for dynamically linked binaries; static binaries will be
  broken.
* Add a new 'why' argument to kdb_enter(), and a set of constants to use
  for that argument (rwatson, 2007-12-25, 1 file, -1/+1)
  This will allow DDB to detect the broad category of reason why the
  debugger has been entered, which it can use when deciding which DDB
  script to run. Assign approximate why values to all current consumers of
  the kdb_enter() interface.
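  A sketch of a call under the new interface; KDB_WHY_PANIC stands in for
  one of the category constants, and the surrounding function is
  hypothetical:

      #include <sys/param.h>
      #include <sys/kdb.h>

      static void
      example_fatal_path(void)
      {
          /* The 'why' argument lets DDB pick a matching script. */
          kdb_enter(KDB_WHY_PANIC, "example: entering debugger");
      }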
* Move the use of 'i' in the cp_time sysctl under SCTL_MASK32 so that it
  compiles without warnings on systems that don't define it (rwatson,
  2007-11-29, 1 file, -1/+2)
* Move the shared cp_time array (counts %sys, %user, %idle, etc.) to the
  per-cpu area (peter, 2007-11-29, 1 file, -9/+72)
  cp_time[] goes away and a new function creates a merged cp_time-like
  array for things like linprocfs, sysctl, etc. The atomic ops for updating
  cp_time[] in statclock go away, and the scope of the thread lock is
  reduced.

  sysctl kern.cp_time returns a backwards-compatible cp_time[] array. A new
  kern.cp_times sysctl returns the individual per-cpu stats (see the sketch
  after this entry). I have pending changes to make top and vmstat
  optionally show per-cpu stats.

  I'm very aware that there are something like 5 or 6 other versions "out
  there" for doing this - but none were handy when I needed them. I did
  merge my changes with John Baldwin's, and ended up replacing a few chunks
  of my stuff with his, and stealing some other code.

  Reviewed by:          jhb
  Partly obtained from: jhb
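  A userland sketch of reading the new kern.cp_times sysctl, which returns
  CPUSTATES longs per cpu; the layout follows the description above:

      #include <sys/types.h>
      #include <sys/resource.h>   /* CPUSTATES, CP_USER, CP_SYS, CP_IDLE */
      #include <sys/sysctl.h>
      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          long *times;
          size_t len = 0;
          u_int cpu, ncpu;

          /* First call sizes the buffer, second fills it. */
          if (sysctlbyname("kern.cp_times", NULL, &len, NULL, 0) == -1)
              return (1);
          if ((times = malloc(len)) == NULL ||
              sysctlbyname("kern.cp_times", times, &len, NULL, 0) == -1)
              return (1);
          ncpu = len / (sizeof(long) * CPUSTATES);
          for (cpu = 0; cpu < ncpu; cpu++)
              printf("cpu%u: user=%ld sys=%ld idle=%ld\n", cpu,
                  times[cpu * CPUSTATES + CP_USER],
                  times[cpu * CPUSTATES + CP_SYS],
                  times[cpu * CPUSTATES + CP_IDLE]);
          free(times);
          return (0);
      }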
* Generally we are interested in what thread did something, as opposed to
  what process (julian, 2007-11-14, 1 file, -1/+1)
  Since threads by default have the name of the process unless overwritten
  with more useful information, just print the thread name instead.
* Move all of the PS_ flags into either p_flag or td_flags (jeff,
  2007-09-17, 1 file, -12/+7)
  - p_sflag was mostly protected by PROC_LOCK rather than PROC_SLOCK or,
    previously, the sched_lock. These bugs have existed for some time.
  - Allow swapout to try each thread in a process individually and then
    swap in the whole process if any of these fail. This allows us to move
    most scheduler-related swap flags into td_flags.
  - Keep ki_sflag for backwards compatibility, but change all in-tree tools
    to use the new and more correct location of P_INMEM.

  Reported by: pho
  Reviewed by: attilio, kib
  Approved by: re (kensmith)
* Since locking in kern/subr_prof.c has changed a bit, we no longer need
  the time_lock spinlock exported (attilio, 2007-06-09, 1 file, -1/+1)
  Approved by: jeff (mentor)
* Commit 5/14 of the sched_lock decomposition (jeff, 2007-06-04, 1 file,
  -35/+33)
  - Protect the cp_time tick counts with atomics instead of a global lock.
    There will only be one atomic per tick, and this allows all processors
    to execute softclock concurrently.
  - In softclock, protect access to rusage and td_*tick data with
    thread_lock(), expanding the scope of the thread lock over the whole
    function.
  - Do some creative rearranging in hardclock() to avoid excess locking.
  - Protect the p_timer fields with the per-process spinlock.

  Tested by:      kris, current@
  Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Move rusage from being per-process in struct pstats to per-thread in
  td_ru (jeff, 2007-06-01, 1 file, -6/+4)
  - This removes the requirement for per-process synchronization in
    statclock() and mi_switch(), which was previously supported by
    sched_lock, which is going away. All modifications to rusage are now
    done in the context of the owning thread; reads proceed without locks.
  - Aggregate exiting threads' rusage in thread_exit() such that the
    exiting thread's rusage is not lost.
  - Provide a new routine, rufetch(), to fetch an aggregate of all rusage
    structures from all threads in a process. This routine must be used in
    any place requiring a rusage from a process prior to its exit. The
    exited process's rusage is still available via p_ru.
  - Aggregate tick statistics only on demand, via rufetch() or when a
    thread exits. Tick statistics are kept in the thread and protected by
    sched_lock until it exits.

  Initial patch by: attilio
  Reviewed by:      attilio, bde (some objections), arch (mostly silent)
* Revert 1.197 and instead avoid calling kdb_enter() if the KDB_UNATTENDED
  option is in use (emaste, 2007-05-28, 1 file, -1/+8)
* Eliminate the explicit kdb_enter in the software watchdog handler
  (emaste, 2007-05-28, 1 file, -7/+1)
  The explicit kdb_enter produced incorrect behaviour with the
  KDB_UNATTENDED option; call panic in both the KDB and non-KDB cases
  instead. This change is consistent with rwatson's current kdb/ddb work.
* Initialize time_lock before calling cpu_initclocks() (rwatson,
  2007-05-23, 1 file, -1/+1)
  This corrects a race condition in which hardclock fires before the mutex
  is initialized, leading to a "corrupt spinlock" panic.

  Submitted by: attilio
* Move clock synchronization into a separate clock lock so the global
  scheduler lock is not involved (jeff, 2007-05-20, 1 file, -10/+13)
  sched_lock still protects the sched_clock call; another patch will remedy
  this.

  Contributed by: Attilio Rao <attilio@FreeBSD.org>
  Tested by:      kris, jeff
* Instead of doing comparisons using the pcpu area to see if a thread is
  an idle thread, just see if it has the IDLETD flag set (julian,
  2007-03-08, 1 file, -1/+1)
  That flag will probably move to the pflags word, as it's permanent and
  never changes for the life of the system, so it doesn't need locking.
* Align the interfaces for the various watchdogs and make the interfaces
  behave as expected (n_hibma, 2006-12-15, 1 file, -3/+3)
  Also:
  - Return an error if WD_PASSIVE is passed in to the ioctl, as only
    WD_ACTIVE is implemented at the moment. See sys/watchdog.h for an
    explanation of the difference between WD_ACTIVE and WD_PASSIVE.
  - Remove the I_HAVE_TOTALLY_LOST_MY_SENSE_OF_HUMOR define. If you've lost
    your sense of humor, then don't add a define.

  Specific changes:
  - i80321_wdog.c: Don't roll your own passive watchdog tickle, as this
    would defeat the purpose of an active (userland) watchdog tickle.
  - ichwd.c / ipmi.c: WD_ACTIVE means active patting of the watchdog by a
    userland process, not whether the watchdog is active. See
    sys/watchdog.h. (A sketch of such a userland tickle loop follows
    below.)
  - kern_clock.c: (software watchdog) Remove a check for WD_ACTIVE, as
    this does not make sense here. This reverts r1.181.
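  A sketch of an active (userland) tickle loop, roughly what watchdogd(8)
  does; the device path and timeout constant are assumptions about the
  watchdog(4) interface of that era:

      #include <sys/types.h>
      #include <sys/ioctl.h>
      #include <sys/watchdog.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int
      main(void)
      {
          /* WD_ACTIVE: a live userland process pats the dog. */
          u_int timeout = WD_ACTIVE | WD_TO_16SEC;
          int fd = open("/dev/" _PATH_WATCHDOG, O_RDWR);

          if (fd == -1) {
              perror("open");
              return (1);
          }
          for (;;) {
              /* Each pat re-arms the timer; if we hang, it fires. */
              if (ioctl(fd, WDIOCPATPAT, &timeout) == -1) {
                  perror("ioctl");
                  return (1);
              }
              sleep(5);
          }
      }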
* Threading cleanup, part 2 of several (julian, 2006-12-06, 1 file,
  -16/+4)
  Make part of John Birrell's KSE patch permanent. Specifically, remove:
  - Any reference to the ksegrp structure. This feature was never fully
    utilised and made things overly complicated.
  - All code in the scheduler that tried to make threaded programs fair to
    unthreaded programs. Libpthread processes will already do this to some
    extent, and libthr processes already disable it.

  Also, since this makes such a big change to the scheduler(s), take the
  opportunity to rename some structures and elements that had to be moved
  anyhow. This makes the code a lot more readable.

  The ULE scheduler compiles again but I have no idea if it works. The 4BSD
  scheduler still requires a little cleaning, and some functions that now
  do ALMOST nothing will go away, but I thought I'd do that as a separate
  commit.

  Tested by: David Xu, and Dan Eischen using libthr and libpthread.
* Make KSE a kernel option, turned on by default in all GENERIC kernel
  configs except sun4v (jb, 2006-10-26, 1 file, -0/+19)
  sun4v is excluded because it doesn't process signals properly with KSE.

  Reviewed by: davidxu@
* Unexpand an instance of TAILQ_EMPTY() (delphij, 2006-06-14, 1 file,
  -1/+1)
* Add scheduler CORE (davidxu, 2006-06-13, 1 file, -0/+1)
  This is work I did half a year ago and recently picked up again. The
  scheduler is forked from ULE, but the algorithm to detect an interactive
  process is almost completely different from ULE's; it comes from the
  Linux paper "Understanding the Linux 2.6.8.1 CPU Scheduler", although I
  still use the same word "score" as a priority boost, as in the ULE
  scheduler.

  Briefly, the scheduler has the following characteristics:
  1. A timesharing process's nice value is seriously respected; the
     timeslice and interactivity detection algorithm are based on the nice
     value.
  2. Per-cpu scheduling queues and load balancing.
  3. O(1) scheduling.
  4. Some cpu affinity code in the wakeup path.
  5. Support for POSIX SCHED_FIFO and SCHED_RR.

  Unlike the 4BSD and ULE schedulers, which use the fuzzy RQ_PPQ, this
  scheduler uses 256 priority queues. Unlike ULE, which uses both pull and
  push, this scheduler uses only the pull method; the main reason is to let
  a relatively idle cpu do the work. But currently the whole scheduler is
  protected by the big sched_lock, so the benefit is not visible; it can
  really be worse than nothing, because all other cpus are locked out while
  we are doing balancing work, a problem the 4BSD scheduler does not have.

  The scheduler does not support hyperthreading very well; in fact, it
  does not distinguish between physical and logical CPUs. This should be
  improved in the future. The scheduler has a priority inversion problem
  on MP machines; it is not good for realtime scheduling and can cause
  realtime process starvation.

  As a result, it seems MySQL super-smack runs better on my Pentium-D
  machine when using libthr, whether on a UP or SMP kernel.
* Trim trailing whitespace (jhb, 2006-04-17, 1 file, -3/+3)
* CPU time accounting speedup, step 2 (phk, 2006-02-11, 1 file, -3/+3)
  - Keep accounting time (in per-cpu cputicks) and the statistics counts in
    the thread, and summarize into struct proc at context switch. Don't
    reach across CPUs in calcru().
  - Add code to calibrate the top speed of cpu_tickrate() for variable
    cpu_tick hardware (like the TSC on power-managed machines).
  - Don't enforce monotonicity (at least for now) in calcru(); while the
    calibrated cpu_tickrate ramps up, it may not hold.
  - Use the 27MHz counter on i386/Geode, the TSC on amd64 & i386 if
    present, and the tick counter on sparc64.
* Simplify system time accounting for profiling (phk, 2006-02-08, 1 file,
  -1/+1)
  Rename struct thread's td_sticks to td_pticks; we will need the other
  name shortly for a more appropriately named use. Reduce it from uint64_t
  to u_int.

  Clear td_pticks whenever we enter the kernel instead of recording its
  value as a reference for userret(). Use the absolute value of
  td->td_pticks in userret() and eliminate the third argument.
* Tweak how the MD code calls the fooclock() methods (jhb, 2005-12-22,
  1 file, -22/+15)
  Instead of passing a pointer to an opaque clockframe structure and
  requiring the MD code to supply CLKF_FOO() macros to extract needed
  values out of the opaque structure, just pass the needed values directly.
  In practice this means passing the pair (usermode, pc) to hardclock() and
  profclock(), and passing the boolean (usermode) to hardclock_cpu() and
  hardclock_process().

  Other details:
  - Axe clockframe and the CLKF_FOO() macros on all architectures.
    Basically, all the archs were taking a trapframe and converting it into
    a clockframe one way or another. Now they can just extract the PC and
    usermode values directly out of the trapframe and pass them to
    fooclock().
  - Renamed hardclock_process() to hardclock_cpu(), as the latter is more
    accurate.
  - On Alpha, we now run profclock() at hz (profhz == hz) rather than at
    the slower stathz.
  - On Alpha, for the TurboLaser machines that don't have an 8254
    timecounter, call hardclock() directly. This removes an extra
    conditional check from every clock interrupt on Alpha on the BSP. There
    is probably room for even further pruning here by changing Alpha to use
    the simplified timecounter we use on x86 with the lapic timer, since we
    don't get interrupts from the 8254 on Alpha anyway.
  - On x86, clkintr() shouldn't ever be called now unless using_lapic_timer
    is false, so add a KASSERT() to that effect and remove a condition to
    slightly optimize the non-lapic case.
  - Change the prototype of arm_handler_execute() so that its first arg is
    a trapframe pointer rather than a void pointer, for clarity.
  - Use the KCOUNT macro in profclock() to look up the kernel profiling
    bucket.

  Tested on:   alpha, amd64, arm, i386, ia64, sparc64
  Reviewed by: bde (mostly)
* Remove the KTR for hardclock completely; it seems not to be useful
  (njl, 2005-12-18, 1 file, -1/+0)
  Requested by: jhb