path: root/sys/kern/kern_timeout.c
Commit log (newest first).  Each entry shows author, date, diffstat [files changed, -deleted/+added lines], and the commit message.
* attilio, 2012-12-05 [1 file, -9/+12]:
  Fixup r243901:
  - As the comment reports, CALLOUT_LOCAL_ALLOC cannot be checked directly
    from the callout flags but must be checked via a cached value.  Hence,
    do so before actually removing the callout, when needed, in
    softclock_call_cc().
  - In softclock_call_cc(), also add a comment in the waiting and deferred
    migration case explaining that the dereference should be safe because
    of the migration dereference invariants.
  Additively:
  - In softclock_call_cc(), for the deferred migration case, move all the
    accesses to the callout structure after the comment stating the
    callout must not be destroyed.
  - For consistency with this last tweak, use the cached c_flags for the
    KASSERT() in the deferred migration case.  It is not strictly
    necessary, but this way all the callout accesses happen after the
    above mentioned comment, improving consistency.

  Pointy hat to: me
  Sponsored by: Isilon Systems / EMC Corporation
  Reviewed by: kib
  MFC after: 2 weeks
  X-MFC: 243901
* kib, 2012-12-05 [1 file, -29/+32]:
  The softclock_call_cc() is executing with the callout already removed
  from the callwheel.  Calculate cc->cc_next before removing the callout,
  otherwise the code follows invalid tailq links.  After this, make
  softclock_call_cc() return void, since it always returns cc->cc_next,
  which is immediately available to softclock() anyway.  This also allows
  eliminating a label under #ifdef SMP.

  Remove the assignment of cc->cc_next from callout_cc_del(), since the
  function is called with the callout already removed from the callwheel.

  If cancelling the migration, also clear the CALLOUT_DFRMIGRATION flag.

  Postpone the free of the timeout(9)-allocated callouts until after the
  migration checks are done.

  Add some more strict asserts about the state of the callout in
  softclock_call_cc().

  Reviewed by: attilio
  Reported and tested by: pho (previous version)
  MFC after: 2 weeks
* alfred, 2012-12-04 [1 file, -6/+4]:
  replace bit shifting loop with 1<<fls(n), improve comments.

  Reviewed by: davide
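  For illustration, a minimal sketch (not the literal diff) of the
  1 << fls(n) idiom this commit switches to: it rounds a count up to the
  next power of two, so the result can also serve as a hash mask.  The
  helper function below is made up; callwheelsize/callwheelmask mirror
  the names used in kern_timeout.c:

      #include <sys/param.h>
      #include <sys/libkern.h>        /* fls() in the kernel */

      static int callwheelsize, callwheelmask;

      static void
      size_callwheel(int ncallout)
      {
              /* Smallest power of two strictly greater than ncallout. */
              callwheelsize = 1 << fls(ncallout);
              callwheelmask = callwheelsize - 1;
      }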
* attilio, 2012-10-31 [1 file, -2/+2]:
  Rework the known mutexes to benefit from staying on their own cache
  line, avoiding manual frobbing by using struct mtx_padalign.

  The sole exception is the nvme and sxfge drivers, where the author
  redefined CACHE_LINE_SIZE manually, so they need to be analyzed and
  dealt with separately.

  Reviewed by: jimharris, alc
* jimharris, 2012-10-31 [1 file, -1/+1]:
  Pad and align the callout_cpu mtx to its own cacheline to reduce false
  sharing especially on the default CPU 0 callout_cpu structure.

  This will be followed up by attilio@ with a conversion to the new
  struct mtx_padalign but doing this manual conversion first gives an
  easy MFC candidate since mtx_padalign is a more extensive system
  change.

  Sponsored by: Intel
  Reviewed by: jeff, attilio
  MFC after: 1 week
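  As an illustration of the manual-padding idea (a sketch, not the exact
  diff, and assuming struct mtx fits within one cache line), the lock can
  be forced onto its own cache line with explicit padding and alignment:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      struct callout_cpu_padded {
              struct mtx      cc_lock;
              /* Keep the hot lock alone on its cache line. */
              char            cc_pad[CACHE_LINE_SIZE - sizeof(struct mtx)];
              /* ... remainder of the per-CPU callout state ... */
      } __aligned(CACHE_LINE_SIZE);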
* kib, 2012-05-03 [1 file, -198/+181]:
  Move the code to call the callout callback into the helper function
  softclock_call_cc().  While there, move some common code to
  callout_cc_del().

  Requested by: avg, jhb
  Reviewed by: jhb
  MFC after: 1 week
* kib, 2012-05-03 [1 file, -0/+36]:
  When callout_reset_on() cannot immediately migrate a callout since it
  is running on another CPU, the CALLOUT_PENDING flag is temporarily
  cleared.  Then callout_stop() on this, in fact active, callout fails
  because CALLOUT_PENDING is not set, and callout_stop() returns 0.

  Now, in sleepq_check_timeout(), the failed callout_stop() causes the
  sleepq code to execute mi_switch() without even setting the wmesg,
  since the switch-out is supposed to be transient.  In fact, the thread
  is put off the CPU for the full timeout interval, instead of being put
  on the runq immediately.  Until the timeout fires, the process is
  unkillable for obvious reasons.

  Fix this by marking the migrating callouts with the CALLOUT_DFRMIGRATION
  flag.  The flag is cleared by callout_stop_safe() when the function
  detects a migration, besides returning success.  The softclock()
  rechecks the flag for the migrating callout and cancels its execution
  if the flag was cleared in the meantime.

  PR: misc/166340
  Reported, debugging traces provided and tested by:
      Christian Esken <christian.esken trivago com>
  Reviewed by: avg, jhb
  MFC after: 1 week
* ed, 2011-11-07 [1 file, -1/+1]:
  Mark MALLOC_DEFINEs static that have no corresponding MALLOC_DECLAREs.
  This means that their use is restricted to a single C file.
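  A minimal sketch of the convention (M_EXAMPLE and the helper are made
  up): a malloc(9) type used by only one file is defined static there,
  while shared types get a MALLOC_DECLARE() in a header instead.

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/malloc.h>

      /* No MALLOC_DECLARE() anywhere, so keep the definition static. */
      static MALLOC_DEFINE(M_EXAMPLE, "example", "example buffers");

      static void *
      example_alloc(size_t len)
      {
              return (malloc(len, M_EXAMPLE, M_WAITOK | M_ZERO));
      }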
* attilio, 2011-08-21 [1 file, -0/+7]:
  callout_cpu_switch() allows preemption when dropping the outgoing
  callout cpu lock (and after having dropped it).  If the newly scheduled
  thread wants to acquire the old queue it will just spin forever.

  Fix this by disabling preemption and interrupts entirely (because fast
  interrupt handlers may run into the same problem too) while switching
  locks.

  Reported by: hrs, Mike Tancsa <mike AT sentex DOT net>,
      Chip Camden <sterling AT camdensoftware DOT com>
  Tested by: hrs, Mike Tancsa <mike AT sentex DOT net>,
      Chip Camden <sterling AT camdensoftware DOT com>,
      Nicholas Esborn <nick AT desert DOT net>
  Approved by: re (kib)
  MFC after: 10 days
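  A sketch of the resulting lock-handover pattern (an approximation, not
  the literal diff; CC_LOCK/CC_UNLOCK/CC_CPU and CPUBLOCK are file-local
  names in kern_timeout.c, and spinlock_enter()/spinlock_exit() are the
  machine-dependent primitives that block both interrupts and preemption):

      static struct callout_cpu *
      callout_cpu_switch(struct callout *c, struct callout_cpu *cc,
          int new_cpu)
      {
              struct callout_cpu *new_cc;

              c->c_cpu = CPUBLOCK;    /* park lookups during the handover */
              spinlock_enter();       /* no interrupts, no preemption */
              CC_UNLOCK(cc);          /* drop the outgoing CPU's lock */
              new_cc = CC_CPU(new_cpu);
              CC_LOCK(new_cc);        /* take the target CPU's lock */
              spinlock_exit();
              c->c_cpu = new_cpu;
              return (new_cc);
      }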
* attilio, 2011-04-08 [1 file, -24/+198]:
  Reintroduce the fix already discussed in r216805 (please check its
  history for a detailed explanation of the problems).

  The only difference with the previous fix is in Solution2: CPUBLOCK is
  no longer set when exiting from the callout_reset_*() functions, which
  avoids the deadlock (leading to r217161).  There is no need for
  CPUBLOCK there because the running-and-migrating assumption is strong
  enough to avoid problems there.

  Furthermore, add better !SMP compliance (leading to shrunk code and
  structures) and facility macros/functions.

  Tested by: gianni, pho, dim
  MFC after: 3 weeks
* attilio, 2011-01-08 [1 file, -119/+23]:
  Revert r216805.

  That revision introduces a bug which is more visible than the problems
  it is trying to fix.  As my time is very limited in this period, I am
  going to commit this patch back once it is fully fixed.

  Reported by: dim, Nicholas Esborn
* attilio, 2010-12-29 [1 file, -23/+119]:
  Fix several callout migration races:

  - Problem1:
    Hypothesis: thread1 is doing a callout_reset_on(), within its callout
    handler, willing to implicitly or explicitly migrate the callout.
    thread2 is draining the callout.

    Thesis:
    * thread1 calls callout_lock() and locks the old callout cpu
    * thread1 performs the checks in the first path of callout_reset_on()
    * thread1 hits this code piece:
          /*
           * If the lock must migrate we have to check the state again as
           * we can't hold both the new and old locks simultaneously.
           */
          if (c->c_cpu != cpu) {
                  c->c_cpu = cpu;
                  CC_UNLOCK(cc);
                  goto retry;
          }
      which means it will drop the lock and 'retry'
    * thread2 will callout_lock() and lock the new callout cpu.  thread1
      spins on the new lock and will not keep going for the moment.
    * thread2 checks that the callout is not pending (as the callout is
      currently running) and that it is not on cc->cc_curr (because cc
      now refers to the new callout cpu and the callout is running on the
      old callout cpu), thus it thinks it is done and returns.
    * thread1 will now acquire the lock and then add the callout to the
      new callout cpu queue.

    That is an obvious race, as callout_stop() falsely reports the
    callout stopped, or worse, callout_drain() falsely returns while the
    callout is still in use.

  - Solution1:
    Fixing this problem would require, in general, locking both callout
    cpus at once while switching the c_cpu field and avoiding cyclic
    deadlocks between callout cpu locks.  The concept of CPUBLOCK is then
    introduced (working more or less like the blocked_lock for the
    thread_lock() function), meaning: "in callout_lock(), spin until
    c->c_cpu is not different from CPUBLOCK".  That way the "original"
    callout cpu, referred to in the above mentioned code snippet, will
    remain blocked until the lock handover is over and the critical path
    will remain covered.

  - Problem2:
    Having the callout currently executed on a specific callout cpu and
    contemporarily pending on another callout cpu (as can happen with the
    current code) breaks, at least, the assumption that callout_drain()
    returns just once the callout cannot be referenced anymore.

  - Solution2:
    Callout migration is deferred if the current callout is already under
    execution.  The best place to do that is in softclock(), and new
    members are added to the callout cpu structure in order to specify
    that a pending migration is requested.  That is necessary because the
    callout cannot be trusted (not freed) 100% of the time after the
    execution of the callout handler.  CPUBLOCK will prevent, in the
    "deferred migration" case, the callout from getting freed, stopping
    any possible callout_stop() and callout_drain() activity until the
    migration is actually performed.

  - Problem3:
    There is a further race in callout_drain().  In order to avoid a race
    between the sleepqueue lock and the callout cpu spinlock, in
    _callout_stop_safe() the callout cpu lock is dropped, the sleepqueue
    lock is acquired and a new callout cpu lookup is performed.  Note
    that the channel used for locking the sleepqueue is obtained from the
    "current" callout cpu (&cc->cc_waiting).  If the callout migrated in
    the meanwhile, callout_drain() will end up using the wrong wchan for
    the sleepqueue (the locked one will be the older, while the new one
    will not really be locked), leading to a lock leak and a racy access
    to the sleepqueue.

  - Solution3:
    It is enough to check if a migration happened between the operation
    of acquiring the sleepqueue lock and the new callout cpu lock, and
    eventually unwind all those and try again.

  These problems can lead to deadly races on a moderate (4-way) SMP
  environment, leading to easy panics or deadlocks.  The 24-way machine
  of the reporter could easily panic, with a completely normal workload,
  almost daily.  gianni@ kindly wrote the following proof-of-concept,
  which can panic a FreeBSD machine in less than one hour on smaller SMP:
  http://www.freebsd.org/~attilio/callout/test.c

  Reported by: Nicholas Esborn <nick at desert dot net>, DesertNet
  In collaboration with: gianni, pho, Nicholas Esborn
  Reviewed by: jhb
  MFC after: 1 week (*)

  (*) Usually I would aim for a larger MFC timeout, but I really want
      this in before 8.2-RELEASE, thus re@ accepted a shorter timeout as
      a special case for this patch.
* jhb, 2010-11-03 [1 file, -4/+1]:
  Remove 'softclock_ih' as it is no longer used.
* mav, 2010-10-31 [1 file, -2/+1]:
  Fix callout_tickstofirst() behavior after signed integer ticks overflow.
  This should fix callout precision drop to 1/4s after 25 days of uptime
  with HZ = 1000.

  Submitted by: Taku YAMAMOTO <taku@tackymt.homeip.net>
* mav, 2010-09-14 [1 file, -1/+2]:
  Fix panic on NULL dereference possible after r212541.
* mav, 2010-09-14 [1 file, -2/+2]:
  Make kern_tc.c provide the minimum frequency of tc_ticktock() calls
  required to handle current timecounter wraps.  Make kern_clocksource.c
  honor that requirement, scheduling sleeps on the first CPU for no more
  than the specified period.  Allow other CPUs to sleep up to 1/4 second
  (in any case).
* mav, 2010-09-13 [1 file, -2/+37]:
  Refactor timer management code with priority to one-shot operation mode.

  The main goal of this is to generate timer interrupts only when there is
  some work to do.  When the CPU is busy, interrupts are generated at the
  full rate of hz + stathz to fulfill scheduler and timekeeping
  requirements.  But when the CPU is idle, only the minimum set of
  interrupts (down to 8 interrupts per second per CPU now) needed to
  handle scheduled callouts is executed.  This allows significantly
  increasing idle CPU sleep time, increasing the effect of static
  power-saving technologies.  It should also reduce host CPU load on
  virtualized systems when the guest system is idle.

  There is a set of tunables, also available as writable sysctls, allowing
  control of the desired event timer subsystem behavior:

  kern.eventtimer.timer - allows choosing the event timer hardware to use.
  On x86 there are up to 4 different kinds of timers.  Depending on
  whether the chosen timer is per-CPU, the behavior of the other options
  differs slightly.

  kern.eventtimer.periodic - allows choosing between periodic and one-shot
  operation mode.  In periodic mode, the current timer hardware is taken
  as the only source of time for time events.  This mode is quite alike to
  previous kernel behavior.  One-shot mode instead uses the currently
  selected time counter hardware to schedule all needed events one by one
  and programs the timer to generate an interrupt at exactly the specified
  time.  The default value depends on the chosen timer's capabilities, but
  one-shot mode is preferred until another is forced by the user or the
  hardware.

  kern.eventtimer.singlemul - in periodic mode, specifies how many times
  higher the timer frequency should be, so as not to strictly alias
  hardclock() and statclock() events.  Default values are 2 and 4, but
  they could be reduced to 1 if extra interrupts are unwanted.

  kern.eventtimer.idletick - makes each CPU receive every timer interrupt
  independently of whether it is busy or not.  By default this option is
  disabled.  If the chosen timer is per-CPU and runs in periodic mode,
  this option has no effect - all interrupts are generated.

  As this patch modifies cpu_idle() on some platforms, I have also
  refactored the one on x86.  It now makes use of the MONITOR/MWAIT
  instructions (if supported) under a high sleep/wakeup rate, as a fast
  alternative to other methods.  It allows the SMP scheduler to wake up
  sleeping CPUs much faster without using an IPI, significantly increasing
  performance on some highly task-switching loads.

  Tested by: many (on i386, amd64, sparc64 and powerpc)
  H/W donated by: Gheorghe Ardelean
  Sponsored by: iXsystems, Inc.
* rpaulo, 2010-08-22 [1 file, -2/+2]:
  Add an extra comment to the SDT probes definition.  This allows us to
  use '-' in probe names, matching the probe names in Solaris. [1]

  Add userland SDT probe definitions to sys/sdt.h.

  Sponsored by: The FreeBSD Foundation
  Discussed with: rwatson [1]
* jhb, 2010-06-11 [1 file, -3/+1]:
  Update several places that iterate over CPUs to use CPU_FOREACH().
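  A minimal sketch of the CPU_FOREACH() idiom (the helper function is
  made up): the macro visits only CPUs that are actually present, so
  callers no longer need an explicit 0..mp_maxid loop with a
  CPU_ABSENT() test.

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/smp.h>

      static void
      visit_all_cpus(void)
      {
              int cpu;

              CPU_FOREACH(cpu) {
                      /* Per-CPU initialization or inspection goes here. */
                      printf("visiting CPU %d\n", cpu);
              }
      }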
* luigi, 2009-12-14 [1 file, -3/+22]:
  Properly fix callout handling by putting all the per-cpu info in
  struct callout_cpu.  From the comment in the file:

  + * There is one struct callout_cpu per cpu, holding all relevant
  + * state for the callout processing thread on the individual CPU.
  + * In particular:
  + *     cc_ticks is incremented once per tick in callout_cpu().
  + *     It tracks the global 'ticks' but in a way that the individual
  + *     threads should not worry about races in the order in which
  + *     hardclock() and hardclock_cpu() run on the various CPUs.
  + *     cc_softclock is advanced in callout_cpu() to point to the
  + *     first entry in cc_callwheel that may need handling.  In turn,
  + *     a softclock() is scheduled so it can serve the various entries i
  + *     such that cc_softclock <= i <= cc_ticks.

  Together with a smaller patch committed in September, this fixes a bug
  that affects 8.0 with apps that rely on callouts to fire exactly in the
  number of ticks specified (qemu among them).  Right now, callouts in
  8.0 fire one tick late.

  This was discussed in September with JeffR and jhb.

  MFC after: 3 days
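  For orientation, a simplified sketch of the per-CPU state described in
  the quoted comment.  The real struct callout_cpu in kern_timeout.c
  carries more fields (the currently running callout, waiter and
  migration state), so treat the layout below as illustrative only:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/queue.h>
      #include <sys/callout.h>

      TAILQ_HEAD(cc_bucket, callout);

      struct callout_cpu_sketch {
              struct mtx       cc_lock;       /* protects this CPU's wheel */
              struct cc_bucket *cc_callwheel; /* hashed wheel buckets */
              int              cc_ticks;      /* per-CPU shadow of 'ticks' */
              int              cc_softticks;  /* next bucket to service
                                                 ("cc_softclock" in the
                                                 quoted comment) */
      };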
* luigi, 2009-09-12 [1 file, -2/+2]:
  Make sure callouts are not processed one tick late.
  The problem was introduced in SVN 180608 / rev 1.114 and affects all
  users of callout_reset() (including select, usleep, setitimer).

  A better fix probably involves replicating 'ticks' in the
  struct callout_cpu; this commit is just a temporary thing so that we
  can MFC it after a suitable test time and RE approval.

  MFC after: 3 days
* rwatson, 2009-01-24 [1 file, -0/+15]:
  Add explicit static DTrace tracing to the callout mechanism, capturing
  pointers to the callout handler just before and just after the callout
  is invoked.  I attempted to do this in a manner congruent to tracing in
  Solaris's callout mechanism, but couldn't quite use the same names due
  to convention and syntax differences.

  Example DTrace script to generate a distribution graph of callout
  execution times:

      callout_execute:::callout_start
      {
              self->cstart = timestamp;
      }

      callout_execute:::callout_end
      {
              @length = quantize(timestamp - self->cstart);
      }

  Reviewed by: jb
  MFC after: 3 days
* jhb, 2009-01-13 [1 file, -0/+1]:
  Add a new KTR tracepoint in the KTR_CALLOUT class to note when a
  callout routine finishes executing.

  MFC after: 1 week
* peter, 2008-10-28 [1 file, -1/+1]:
  After a machine has been up for a bit more than 20 days with HZ=1000,
  "ticks" goes negative.  This breaks the signed comparison in softclock.
  This causes sleep() to never wake up, tcp to stop, etc etc.  This is
  bad(TM).  Use the SEQ_LT() method from tcp's sequence number
  comparisons.
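  A small userland sketch of the wraparound-safe comparison borrowed from
  TCP (the macro mirrors SEQ_LT() in netinet/tcp_seq.h; the callout code
  open-codes the same test, and the example relies on the usual
  two's-complement conversion behavior the kernel also assumes):

      #include <limits.h>
      #include <stdio.h>

      #define TICK_LT(a, b)   ((int)((a) - (b)) < 0)

      int
      main(void)
      {
              unsigned int before = (unsigned int)INT_MAX; /* pre-wrap tick */
              unsigned int after = before + 100;           /* post-wrap tick */

              /* Subtraction-based test still orders them correctly. */
              printf("TICK_LT: %d\n", TICK_LT(before, after));    /* 1 */
              /* A plain signed compare breaks once the value wraps. */
              printf("naive:   %d\n", (int)before < (int)after);  /* 0 */
              return (0);
      }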
* sam, 2008-08-02 [1 file, -0/+15]:
  add callout_schedule; besides being useful it also improves
  compatibility with other systems

  Reviewed by: ed, battlez
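  A minimal sketch of how callout_schedule(9) is typically used (the
  handler and its names are made up): it re-arms a callout with the
  function and argument set by an earlier callout_reset(), which suits
  self-rescheduling periodic handlers.

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/callout.h>

      static struct callout tick_ch;

      static void
      tick_handler(void *arg)
      {
              /* ... periodic work ... */
              callout_schedule(&tick_ch, hz);  /* same func/arg, one second */
      }

      static void
      tick_start(void)
      {
              callout_init(&tick_ch, 1);       /* 1 = MPSAFE */
              callout_reset(&tick_ch, hz, tick_handler, NULL);
      }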
* jeff, 2008-07-19 [1 file, -6/+11]:
  Fix a race which could result in some timeout buckets being skipped.
  - When a tick occurs on a cpu, iterate from cs_softticks until ticks.
    The per-cpu tick processing happens asynchronously with the actual
    adjustment of the 'ticks' variable.  Sometimes the results may be
    visible before the local call and sometimes after.  Previously this
    could cause a one tick window where we didn't evaluate the bucket.
  - In softclock fetch curticks before incrementing cc_softticks so we
    don't skip insertions which were made for the current time.

  Sponsored by: Nokia
* jeff, 2008-04-06 [1 file, -5/+5]:
  - Correct a major error introduced in the per-cpu timeout commit.
    Sleep and wakeup require the same wait channel to function properly.

  Found by: kris
  Pointy hat: me
* jeff, 2008-04-02 [1 file, -124/+246]:
  Implement per-cpu callout threads, wheels, and locks.

  - Move callout thread creation from kern_intr.c to kern_timeout.c
  - Call callout_tick() on every processor via hardclock_cpu() rather
    than inspecting callout internal details in kern_clock.c.
  - Remove callout implementation details from callout.h
  - Package up all of the global variables into a per-cpu callout
    structure.
  - Start one thread per-cpu.  Threads are not strictly bound.  They
    prefer to execute on the native cpu but may migrate temporarily if
    interrupts are starving callout processing.
  - Run all callouts by default in the thread for cpu0 to maintain
    current ordering and concurrency guarantees.  Many consumers may not
    properly handle concurrent execution.
  - The new callout_reset_on() api allows specifying a particular cpu to
    execute the callout on.  This may migrate a callout to a new cpu.
    callout_reset() schedules on the last assigned cpu while
    callout_reset_curcpu() schedules on the current cpu.  (A usage sketch
    follows this entry.)

  Reviewed by: phk
  Sponsored by: Nokia
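  A usage sketch for the CPU-targeting API described above (the watchdog
  names and the 5-second period are made up):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/callout.h>

      static struct callout percpu_watchdog;

      static void
      watchdog_fire(void *arg)
      {
              printf("watchdog expired\n");
      }

      static void
      watchdog_arm(int cpu)
      {
              callout_init(&percpu_watchdog, 1);
              /*
               * Run the handler on 'cpu'.  callout_reset() would reuse the
               * last assigned CPU; callout_reset_curcpu() would pin it to
               * the caller's CPU.
               */
              callout_reset_on(&percpu_watchdog, 5 * hz, watchdog_fire,
                  NULL, cpu);
      }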
* alfred, 2008-03-22 [1 file, -4/+19]:
  Fix a race where timeout/untimeout could cause crashes for Giant locked
  code.

  The bug:
  There exists a race condition for timeout/untimeout(9) due to the way
  that the softclock thread dequeues timeouts.

  The softclock thread sets the c_func and c_arg of the callout to NULL
  while holding the callout lock but not Giant.  It then drops the
  callout lock and acquires Giant.

  It is at this point where untimeout(9) on another cpu/thread could be
  called.

  Since c_arg and c_func are cleared, untimeout(9) does not touch the
  callout and returns as if the callout is canceled.

  The softclock then tries to acquire Giant and likely blocks due to the
  other cpu/thread holding it.

  The other cpu/thread then likely deallocates the backing store that
  c_arg points to and finishes working and hence drops Giant.

  Softclock resumes and acquires Giant and calls the function with the
  now free'd c_arg and we have corruption/crash.

  The fix:
  We need to track curr_callout even for timeout(9) (LOCAL_ALLOC)
  callouts.  We need to free the callout after the softclock processes it
  to deal with the race here.

  Obtained from: Juniper Networks, iedowse
  Reviewed by: jhb, iedowse
  MFC After: 2 weeks
* jeff, 2008-03-12 [1 file, -1/+1]:
  - Pass the priority argument from *sleep() into sleepq and down into
    sched_sleep().  This removes extra thread_lock() acquisition and
    allows the scheduler to decide what to do with the static boost.
  - Change the priority arguments to cv_* to match sleepq/msleep/etc.
    where 0 means no priority change.  Catch -1 in cv_broadcastpri() and
    convert it to 0 for now.
  - Set a flag when sleeping in a way that is compatible with swapping
    since direct priority comparisons are meaningless now.
  - Add a sysctl to ule, kern.sched.static_boost, that defaults to on
    which controls the boost behavior.  Turning it off gives better
    performance in some workloads but needs more investigation.
  - While we're modifying sleepq, change signal and broadcast to both
    return with the lock held as the lock was held on enter.

  Reviewed by: jhb, peter
* attilio, 2008-02-06 [1 file, -2/+2]:
  Really, no explicit checks against lock_class_* objects should be done
  in consumer code: using lock properties is much more appropriate.
  Fix the current code doing these bogus checks.

  Note: Really, callouts are not usable by all !(LC_SPINLOCK |
  LC_SLEEPABLE) primitives; for example, rmlocks don't implement the
  generic lock layer functions, but they can be equipped for this, so the
  check is still valid.

  Tested by: matteo, kris (earlier version)
  Reviewed by: jhb
* attilio, 2007-11-22 [1 file, -5/+7]:
  Cache the value of c_lock as it can change, in the struct, while the
  global callout spinlock is not held, and can lead to PF#.

  Reported by: dougb, Mark Atkinson <atkin901 at yahoo dot com>
  Tested by: dougb
  Diagnosed by: jhb
* attilio, 2007-11-20 [1 file, -53/+58]:
  Add the function callout_init_rw() to the callout facility in order to
  use rwlocks in conjunction with callouts.  The function does basically
  what callout_init_mtx() already does, with the difference of using a
  rwlock as the extra argument.

  The CALLOUT_SHAREDLOCK flag can now be used in order to acquire the
  lock only in read mode when running the callout handler.  It has no
  effect when used in conjunction with a mtx.

  In order to implement this, the underlying callout functions have been
  made completely lock type-unaware, so accordingly the sysctl
  debug.to_avg_mtxcalls is now changed into the generic
  debug.to_avg_lockcalls.

  Note: currently the allowed lock classes are mutexes and rwlocks
  because callout handlers run in the softclock swi, so they cannot sleep
  and they cannot acquire sleepable locks like sx or lockmgr.

  Requested by: kmacy, pjd, rwatson
  Reviewed by: jhb
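  A sketch of the new interface (the softc and names are made up): the
  rwlock is acquired around the handler, in read mode when
  CALLOUT_SHAREDLOCK is given.

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/lock.h>
      #include <sys/rwlock.h>
      #include <sys/callout.h>

      struct foo_softc {
              struct rwlock   sc_lock;
              struct callout  sc_timer;
      };

      static void
      foo_timeout(void *arg)
      {
              struct foo_softc *sc = arg;

              /* Held in read mode because of CALLOUT_SHAREDLOCK. */
              rw_assert(&sc->sc_lock, RA_RLOCKED);
              /* ... read-mostly periodic work ... */
      }

      static void
      foo_init(struct foo_softc *sc)
      {
              rw_init(&sc->sc_lock, "foo timer lock");
              callout_init_rw(&sc->sc_timer, &sc->sc_lock,
                  CALLOUT_SHAREDLOCK);
              rw_wlock(&sc->sc_lock);
              callout_reset(&sc->sc_timer, hz, foo_timeout, sc);
              rw_wunlock(&sc->sc_lock);
      }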
* rwatson, 2007-09-15 [1 file, -11/+2]:
  Remove the definition and implementation of 'CALLOUT_NETGIANT', a now-
  (and possibly always-) unused define.

  Reported by: kmacy
  Approved by: re (kensmith)
* jhb, 2007-08-31 [1 file, -13/+27]:
  Close a race that snuck in with the recent changes to fix a LOR between
  the callout_lock spin lock and the sleepqueue spin locks.  In the fix,
  callout_drain() has to drop the callout_lock so it can acquire the
  sleepqueue lock.  The state of the callout can change while the
  callout_lock is dropped, however (for example, it can be rescheduled
  via callout_reset()).  The previous code assumed that the only state
  change that could happen is that the callout could finish executing.
  This change alters callout_drain() to effectively restart and recheck
  everything after it acquires the sleepqueue lock, thus handling all the
  possible states that the callout could be in after any changes while
  callout_lock was dropped.

  Approved by: re (kensmith)
  Tested by: kris
* attilio, 2007-06-26 [1 file, -3/+35]:
  Fix an old standing LOR between callout_lock and the sleepqueue chains
  (which could lead to a deadlock):
  - sleepq_set_timeout() acquires callout_lock (via callout_reset())
    only with the sleepq chain lock held
  - msleep_spin() in _callout_stop_safe() locks the sleepqueue chain
    with callout_lock held

  In order to solve this, don't use msleep_spin() in _callout_stop_safe()
  but use the sleepqueues directly, as inline msleep_spin code.
  Rearrange the wakeup path in order to have it consistent too.

  Reported by: kris (via stress2 test suite)
  Tested by: Timothy Redaelli <drizzt@gufi.org>
  Reviewed by: jhb
  Approved by: jeff (mentor)
  Approved by: re
* andre, 2007-05-11 [1 file, -2/+11]:
  Make the TCP timer callout obtain Giant if the network stack is marked
  as non-mpsafe.

  This change is to be removed when all protocols are mp-safe.
* glebius, 2006-10-11 [1 file, -7/+23]:
  Improve ktr(4) logging for the callout(9) subsystem.  Log all inserts
  and removals, including failures, into the callwheel.

  XXX: Most of the CTR() macros are called with the callout_lock spin
  mutex held, thus they won't be logged into a file if KTR_ALQ is used.
  Moving the CTR() macros out from the spinlocked code would require
  copying all arguments.  I'm too lazy to do this.
* jhb, 2006-02-23 [1 file, -56/+41]:
  Use the recently added msleep_spin() function to simplify the
  callout_drain() logic.  We no longer need a separate non-spin mutex to
  do sleep/wakeup with; instead we can now just use the one spin mutex to
  manage all the callout functionality.
* jhb, 2005-09-15 [1 file, -0/+1]:
  Oops, missed adding the required include.

  Pointy hat to: jhb
* jhb, 2005-09-15 [1 file, -8/+2]:
  Replace the dont_sleep_in_callout mutex hack (similar to g_x{up,down})
  with the disallow sleeping facility.
* glebius, 2005-09-08 [1 file, -3/+8]:
  Make callout_reset() return a non-zero value if a pending callout
  was rescheduled.  If there was no pending callout, then return 0.

  Reviewed by: iedowse, cperciva
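  A small sketch of checking the new return value (names made up; assumes
  the callout was initialised elsewhere with callout_init()):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/callout.h>

      static struct callout retry_ch;

      static void retry_fn(void *arg);

      static void
      retry_arm(void)
      {
              /* Non-zero: a pending retry was pushed back; 0: newly armed. */
              if (callout_reset(&retry_ch, 2 * hz, retry_fn, NULL) != 0)
                      printf("rescheduled pending retry\n");
              else
                      printf("armed new retry\n");
      }

      static void
      retry_fn(void *arg)
      {
              /* ... perform the retry ... */
      }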
* iedowse, 2005-02-11 [1 file, -1/+2]:
  When processing a timeout() callout and returning it to the free
  list, set `curr_callout' to NULL.  This ensures that we won't attempt
  to cancel the current callout if the original callout structure gets
  recycled while we wait to acquire Giant.

  This is reported to fix an intermittent syscons problem that was
  introduced by revision 1.96.
* iedowse, 2005-02-07 [1 file, -15/+106]:
  Add a mechanism for associating a mutex with a callout when the
  callout is first initialised, using a new function callout_init_mtx().
  The callout system will acquire this mutex before calling the callout
  function and release it on return.

  In addition, the callout system uses the mutex to avoid most of the
  complications and race conditions inherent in asynchronous timer
  facilities, so mutex-protected callouts have much simpler semantics.
  As long as the mutex is held when invoking callout_stop() or
  callout_reset(), then these functions will guarantee that the callout
  will be stopped, even if softclock() had already begun to process
  the callout.

  Existing Giant-locked callouts will automatically pick up the new
  race-free semantics.  This should close a number of race conditions
  in the USB code and probably other areas of the kernel too.

  There should be no change in behaviour for "MP-safe" callouts; these
  still need to use the techniques mentioned in timeout(9) to avoid
  race conditions.
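  A usage sketch of the mutex-associated callout described above (the
  softc layout and names are made up):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/callout.h>

      struct bar_softc {
              struct mtx      sc_mtx;
              struct callout  sc_watchdog;
      };

      static void
      bar_watchdog(void *arg)
      {
              struct bar_softc *sc = arg;

              /* The callout system acquired sc_mtx before calling us. */
              mtx_assert(&sc->sc_mtx, MA_OWNED);
              /* ... handle the timeout ... */
      }

      static void
      bar_attach(struct bar_softc *sc)
      {
              mtx_init(&sc->sc_mtx, "bar timer lock", NULL, MTX_DEF);
              callout_init_mtx(&sc->sc_watchdog, &sc->sc_mtx, 0);

              mtx_lock(&sc->sc_mtx);
              /* With sc_mtx held, stop/reset are race-free. */
              callout_reset(&sc->sc_watchdog, 5 * hz, bar_watchdog, sc);
              mtx_unlock(&sc->sc_mtx);
      }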
* Make "c->c_func = NULL" conditional on CALLOUT_LOCAL_ALLOC in bothcperciva2005-01-191-1/+1
| | | | | | | places where it occurs, not just one. :-) Pointed out by: glebius Pointy had to: cperciva
* Make "c->c_func = NULL" conditional on the CALLOUT_LOCAL_ALLOC flag,cperciva2005-01-191-1/+1
| | | | | | | i.e., only clear c->c_func if the callout c is being used via the old timeout(9) interface. Requested by: glebius
* cperciva, 2005-01-19 [1 file, -1/+3]:
  Clarify the description of the callout_active() macro: It is cleared by
  callout_stop, callout_drain, and callout_deactivate, but is not
  automatically cleared when a callout returns.
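  A sketch of the convention this describes, for a self-rearming handler
  that must clear the active flag itself when it decides to stop (names
  made up):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/callout.h>

      static struct mtx poll_mtx;
      static struct callout poll_ch;
      static int poll_stopping;

      static void
      poll_handler(void *arg)
      {
              /* poll_mtx is held here (callout_init_mtx below). */
              if (poll_stopping) {
                      /* Returning does not clear the active flag for us. */
                      callout_deactivate(&poll_ch);
                      return;
              }
              /* ... poll hardware ... */
              callout_schedule(&poll_ch, hz / 10);
      }

      static void
      poll_start(void)
      {
              mtx_init(&poll_mtx, "poll lock", NULL, MTX_DEF);
              callout_init_mtx(&poll_ch, &poll_mtx, 0);
              mtx_lock(&poll_mtx);
              callout_reset(&poll_ch, hz / 10, poll_handler, NULL);
              mtx_unlock(&poll_mtx);
      }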
* cperciva, 2005-01-07 [1 file, -2/+2]:
  Adjust two of my comments to the new world order: Indent protection in
  the first column is performed using /**, not /*-.
* rwatson, 2004-08-06 [1 file, -0/+4]:
  Cut a KTR record whenever a callout is invoked.  Mark whether it runs
  with Giant or not, and include the function pointer so it can be looked
  up against the kernel symbol table during trace analysis.
* cperciva, 2004-08-06 [1 file, -2/+16]:
  When resetting a pending callout, perform the deregistration in
  callout_reset rather than calling callout_stop.  This results in a few
  lines of code duplication, but it provides a significant performance
  improvement because it avoids recursing on callout_lock.

  Requested by: rwatson