path: root/sys/sys/callout.h
Commit message (author, date, files changed, lines -/+)
* MFC of r280785, r280871, r280872, r281510, r218511 - callout fixes. (rrs, 2015-04-17, 1 file, -5/+20)
  Sponsored by: Netflix Inc.
* MFC of r278469, r278623 (rrs, 2015-02-15, 1 file, -0/+1)
  278469: This fixes two conditions that can occur when migration is being done in the callout code, and harmonizes the macro use:
  1) callout_active() will lie. Basically, if a migration is occurring and the callout is about to expire and the migration has been deferred, callout_active() will no longer return true until after the migration. This confuses and breaks callers that are doing callout_init(&c, 1); such as TCP.
  2) The migration code had a bug in it where, when migrating, if two calls to callout_reset came in and they both collided with the callout on the wheel about to run, then the second call to callout_reset would corrupt the list the callout wheel uses, putting the callout thread into an endless loop.
  3) Per imp, I have fixed all the macro occurrences in the code that were for the most part being ignored.
  278623: This fixes a bug I inadvertently inserted when I updated the callout code in my last commit. The cc_exec_next is used to track the next callout when a direct call is being made from callout. It is *never* used in the indirect method. When macro-izing I made it so that it would separate out direct vs. non-direct. This is incorrect and can cause panics, as Peter Holm has found for me (thanks so much Peter for all your help in this). What this change does is restore that behavior but also get rid of cc_next from the array and instead make it part of the base callout structure. This way no one else will get confused, since we will never use it for non-direct.
  Sponsored by: Netflix Inc.
* MFC r275046: whitespace and cosmetic changes in callout_reset family of macros (avg, 2014-12-08, 1 file, -3/+4)
  Not applicable to earlier releases.
* MFC r275045: callout(9): add sbt flavors of callout_schedule (avg, 2014-12-08, 1 file, -0/+7)
  Not applicable to earlier releases.
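  A minimal sketch of the new flavor (callout name and interval are illustrative, not from the commit); callout_schedule_sbt() re-arms a callout using the function and argument stored by an earlier callout_reset*() call:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/time.h>
    #include <sys/callout.h>

    static struct callout my_co;        /* hypothetical consumer state */

    static void
    my_rearm(void)
    {
            /* Re-arm ~50 ms out, allowing 1 ms of slop for coalescing. */
            callout_schedule_sbt(&my_co, 50 * SBT_1MS, SBT_1MS, 0);
    }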
* Merge r270259 from head: (gavin, 2014-09-03, 1 file, -1/+1)
  Add a missing brace to callout_init_rm() to fix syntax.
* Fix the build and fix style. (davide, 2013-08-23, 1 file, -2/+2)
  Pointy-hat to: davide
* Introduce callout_init_rm() so that callouts can be used in conjunction (davide, 2013-08-23, 1 file, -0/+3)
  with rmlocks. This works only with non-sleepable rmlocks because handlers run in SWI context. While here, document the new KPI in the timeout(9) manpage.
  Requested by: adrian, scottl
  Reviewed by: mav, remko (manpage)
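  A minimal sketch of a consumer (lock and callout names are illustrative); the rmlock must be non-sleepable since handlers run in SWI context:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>
    #include <sys/callout.h>

    static struct rmlock my_rm;         /* hypothetical; protects handler state */
    static struct callout my_co;

    static void
    my_handler(void *arg)
    {
            /* Runs with my_rm held, read-locked due to CALLOUT_SHAREDLOCK. */
    }

    static void
    my_init(void)
    {
            rm_init(&my_rm, "my_rm");   /* non-sleepable rmlock */
            callout_init_rm(&my_co, &my_rm, CALLOUT_SHAREDLOCK);
            callout_reset(&my_co, hz, my_handler, NULL);
    }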
* Move the auto-sizing of the callout array from init_param2() to (andre, 2013-03-08, 1 file, -2/+0)
  kern_timeout_callwheel_alloc() where it is actually used.
  This is a mechanical move and no tuning parameters are changed. The pre-allocated callout array is only used for legacy timeout(9) calls and is only allocated and active on cpu0. Eventually all remaining users of timeout(9) should switch to the callout_* API.
  Reviewed by: davide
* - Make callout(9) tickless, relying on eventtimers(4) as backend for (davide, 2013-03-04, 1 file, -4/+20)
  precise time event generation. This greatly improves the granularity of callouts, which are no longer constrained to wait for the next tick to be scheduled.
  - Extend the callout KPI, introducing a set of callout_reset_sbt* functions which take a sbintime_t as the timeout argument. The new KPI also offers a way for consumers to specify the precision tolerance they allow, so that the callout code can coalesce events and reduce the number of interrupts as well as potentially avoid scheduling a SWI thread.
  - Introduce support for dispatching callouts directly from hardware interrupt context, specified with an additional flag. This feature should be used carefully, since interrupt context has some limitations (e.g. no sleeping locks can be held).
  - Enhance mechanisms to gather information about the callwheel, introducing a new sysctl to obtain stats.
  This change breaks the KBI. struct callout fields have changed; in particular, 'int ticks' (4 bytes) has been replaced with 'sbintime_t' (8 bytes) and another 'sbintime_t' field was added for precision.
  Together with: mav
  Reviewed by: attilio, bde, luigi, phk
  Sponsored by: Google Summer of Code 2012, iXsystems inc.
  Tested by: flo (amd64, sparc64), marius (sparc64), ian (arm), markj (amd64), mav, Fabian Keil
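  A minimal sketch of the new KPI (callout, handler, and interval names are illustrative): fire roughly 100 ms out and let the framework coalesce the event within 1/8 of the interval via C_PREL(3):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/time.h>
    #include <sys/callout.h>

    static struct callout my_co;        /* hypothetical consumer state */

    static void
    my_expire(void *arg)
    {
            /* Timer work; avoid sleeping locks if C_DIRECT_EXEC is used. */
    }

    static void
    my_arm(void)
    {
            callout_reset_sbt(&my_co, 100 * SBT_1MS, 0, my_expire, NULL,
                C_PREL(3));
    }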
* When callout_reset_on() cannot immediately migrate a callout since it (kib, 2012-05-03, 1 file, -0/+1)
  is running on another cpu, the CALLOUT_PENDING flag is temporarily cleared. Then, callout_stop() on this, in fact active, callout fails because CALLOUT_PENDING is not set, and callout_stop() returns 0.
  Now, in sleepq_check_timeout(), the failed callout_stop() causes the sleepq code to execute mi_switch() without even setting the wmesg, since the switch-out is supposed to be transient. In fact, the thread is put off the CPU for the full timeout interval, instead of being put on the runq immediately. Until the timeout fires, the process is unkillable for obvious reasons.
  Fix this by marking the migrating callouts with the CALLOUT_DFRMIGRATION flag. The flag is cleared by callout_stop_safe() when the function detects a migration, besides returning success. The softclock() rechecks the flag for the migrating callout and cancels its execution if the flag was cleared meantime.
  PR: misc/166340
  Reported, debugging traces provided and tested by: Christian Esken <christian.esken trivago com>
  Reviewed by: avg, jhb
  MFC after: 1 week
* Implement the delayed task execution extension to the taskqueue (kib, 2011-04-26, 1 file, -19/+1)
  mechanism. The caller may specify a timeout in ticks after which the task will be scheduled.
  Sponsored by: The FreeBSD Foundation
  Reviewed by: jeff, jhb
  MFC after: 1 month
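  A minimal sketch of the extension against the system taskqueue (task and function names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/taskqueue.h>

    static struct timeout_task my_tt;   /* hypothetical consumer state */

    static void
    my_task_fn(void *ctx, int pending)
    {
            /* Deferred work, running in a taskqueue thread. */
    }

    static void
    my_start(void)
    {
            TIMEOUT_TASK_INIT(taskqueue_thread, &my_tt, 0, my_task_fn, NULL);
            /* Schedule the task roughly two seconds from now. */
            taskqueue_enqueue_timeout(taskqueue_thread, &my_tt, 2 * hz);
    }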
* Make kern_tc.c provide minimum frequency of tc_ticktock() calls, required (mav, 2010-09-14, 1 file, -1/+1)
  to handle current timecounter wraps. Make kern_clocksource.c honor that requirement, scheduling sleeps on the first CPU for no more than the specified period. Allow other CPUs to sleep up to 1/4 second (just in case).
* Refactor timer management code with priority to one-shot operation mode. (mav, 2010-09-13, 1 file, -1/+2)
  The main goal of this is to generate timer interrupts only when there is some work to do. When the CPU is busy, interrupts are generated at the full rate of hz + stathz to fulfill scheduler and timekeeping requirements. But when the CPU is idle, only the minimum set of interrupts (down to 8 interrupts per second per CPU now) needed to handle scheduled callouts is executed. This allows a significant increase in idle CPU sleep time, increasing the effect of static power-saving technologies. It should also reduce host CPU load on virtualized systems when the guest system is idle.
  There is a set of tunables, also available as writable sysctls, allowing control over the desired event timer subsystem behavior:
  kern.eventtimer.timer - allows choosing the event timer hardware to use. On x86 there are up to 4 different kinds of timers. Depending on whether the chosen timer is per-CPU, the behavior of the other options differs slightly.
  kern.eventtimer.periodic - allows choosing between periodic and one-shot operation mode. In periodic mode, the current timer hardware is taken as the only source of time for time events. This mode is quite similar to the previous kernel behavior. One-shot mode instead uses the currently selected time counter hardware to schedule all needed events one by one and programs the timer to generate an interrupt at exactly the specified time. The default value depends on the chosen timer's capabilities, but one-shot mode is preferred, unless another is forced by the user or hardware.
  kern.eventtimer.singlemul - in periodic mode, specifies how many times higher the timer frequency should be, so as not to strictly alias the hardclock() and statclock() events. Default values are 2 and 4, but they could be reduced to 1 if extra interrupts are unwanted.
  kern.eventtimer.idletick - makes each CPU receive every timer interrupt independently of whether it is busy or not. By default this option is disabled. If the chosen timer is per-CPU and runs in periodic mode, this option has no effect - all interrupts are generated.
  As this patch modifies cpu_idle() on some platforms, I have also refactored the x86 one. It now makes use of MONITOR/MWAIT instructions (if supported) under high sleep/wakeup rates, as a fast alternative to the other methods. It allows the SMP scheduler to wake up sleeping CPUs much faster without using an IPI, significantly increasing performance on some highly task-switching loads.
  Tested by: many (on i386, amd64, sparc64 and powerpc)
  H/W donated by: Gheorghe Ardelean
  Sponsored by: iXsystems, Inc.
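  The tunables can also be set at boot; a hypothetical loader.conf fragment (values are illustrative, the defaults are normally sensible):

    # /boot/loader.conf
    kern.eventtimer.timer="HPET"        # choose event timer hardware
    kern.eventtimer.periodic="0"        # prefer one-shot operation mode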
* add callout_schedule; besides being useful it also improves (sam, 2008-08-02, 1 file, -0/+4)
  compatibility with other systems
  Reviewed by: ed, battlez
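  A minimal sketch (names are illustrative); callout_schedule() re-arms a callout with the function and argument recorded by a prior callout_reset():

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout my_co;        /* hypothetical consumer state */

    static void
    my_tick(void *arg)
    {
            /* ... periodic work ... */
            callout_schedule(&my_co, hz);       /* re-arm one second out */
    }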
* Implement per-cpu callout threads, wheels, and locks. (jeff, 2008-04-02, 1 file, -7/+9)
  - Move callout thread creation from kern_intr.c to kern_timeout.c.
  - Call callout_tick() on every processor via hardclock_cpu() rather than inspecting callout internal details in kern_clock.c.
  - Remove callout implementation details from callout.h.
  - Package up all of the global variables into a per-cpu callout structure.
  - Start one thread per cpu. Threads are not strictly bound. They prefer to execute on the native cpu but may migrate temporarily if interrupts are starving callout processing.
  - Run all callouts by default in the thread for cpu0 to maintain current ordering and concurrency guarantees. Many consumers may not properly handle concurrent execution.
  - The new callout_reset_on() api allows specifying a particular cpu to execute the callout on. This may migrate a callout to a new cpu. callout_reset() schedules on the last assigned cpu while callout_reset_curcpu() schedules on the current cpu.
  Reviewed by: phk
  Sponsored by: Nokia
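  A minimal sketch of the new api (cpu number and names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout my_co;        /* hypothetical consumer state */

    static void
    my_fn(void *arg)
    {
            /* Executes in the callout thread of the assigned cpu. */
    }

    static void
    my_arm(void)
    {
            /* Ask for execution on cpu 2; the callout may migrate there. */
            callout_reset_on(&my_co, hz / 10, my_fn, NULL, 2);
    }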
* Add the function callout_init_rw() to callout facility in order to use (attilio, 2007-11-20, 1 file, -3/+10)
  rwlocks in conjunction with callouts. The function does basically what callout_init_mtx() already does, with the difference of using a rwlock as the extra argument. The CALLOUT_SHAREDLOCK flag can now be used in order to acquire the lock only in read mode when running the callout handler. It has no effect when used in conjunction with a mtx.
  In order to implement this, the underlying callout functions have been made completely lock type-unaware, so accordingly the sysctl debug.to_avg_mtxcalls is now changed to the generic debug.to_avg_lockcalls.
  Note: currently the allowed lock classes are mutexes and rwlocks because callout handlers run in softclock swi, so they cannot sleep and they cannot acquire sleepable locks like sx or lockmgr.
  Requested by: kmacy, pjd, rwatson
  Reviewed by: jhb
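  A minimal sketch (lock and callout names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>
    #include <sys/callout.h>

    static struct rwlock my_rw;         /* hypothetical; protects handler state */
    static struct callout my_co;

    static void
    my_handler(void *arg)
    {
            /* Runs with my_rw read-locked because of CALLOUT_SHAREDLOCK. */
    }

    static void
    my_init(void)
    {
            rw_init(&my_rw, "my_rw");
            callout_init_rw(&my_co, &my_rw, CALLOUT_SHAREDLOCK);
            callout_reset(&my_co, hz, my_handler, NULL);
    }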
* Remove the definition and implementation of 'CALLOUT_NETGIANT', a now- (and (rwatson, 2007-09-15, 1 file, -1/+0)
  possibly always-) unused define.
  Reported by: kmacy
  Approved by: re (kensmith)
* Make the TCP timer callout obtain Giant if the network stack is marked (andre, 2007-05-11, 1 file, -0/+1)
  as non-mpsafe.
  This change is to be removed when all protocols are mp-safe.
* Make callout_reset() return a non-zero value if a pending callout (glebius, 2005-09-08, 1 file, -1/+1)
  was rescheduled. If there was no pending callout, then return 0.
  Reviewed by: iedowse, cperciva
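  A minimal sketch of a caller using the new return value (names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout my_co;        /* hypothetical consumer state */

    static void
    my_arm(void *arg)
    {
            /* Non-zero means a still-pending callout was rescheduled. */
            if (callout_reset(&my_co, hz, my_arm, arg) != 0)
                    printf("my_co: rescheduled a pending callout\n");
    }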
* Add a mechanism for associating a mutex with a callout when the (iedowse, 2005-02-07, 1 file, -0/+5)
  callout is first initialised, using a new function callout_init_mtx(). The callout system will acquire this mutex before calling the callout function and release it on return.
  In addition, the callout system uses the mutex to avoid most of the complications and race conditions inherent in asynchronous timer facilities, so mutex-protected callouts have much simpler semantics. As long as the mutex is held when invoking callout_stop() or callout_reset(), then these functions will guarantee that the callout will be stopped, even if softclock() had already begun to process the callout.
  Existing Giant-locked callouts will automatically pick up the new race-free semantics. This should close a number of race conditions in the USB code and probably other areas of the kernel too.
  There should be no change in behaviour for "MP-safe" callouts; these still need to use the techniques mentioned in timeout(9) to avoid race conditions.
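  A minimal sketch of the race-free pattern this enables (names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/callout.h>

    static struct mtx my_mtx;           /* hypothetical; protects handler state */
    static struct callout my_co;

    static void
    my_handler(void *arg)
    {
            mtx_assert(&my_mtx, MA_OWNED);  /* taken by the callout system */
            /* ... protected state is consistent here ... */
    }

    static void
    my_init(void)
    {
            mtx_init(&my_mtx, "my_mtx", NULL, MTX_DEF);
            callout_init_mtx(&my_co, &my_mtx, 0);
            callout_reset(&my_co, hz, my_handler, NULL);
    }

    static void
    my_stop(void)
    {
            mtx_lock(&my_mtx);
            callout_stop(&my_co);       /* guaranteed with the mutex held */
            mtx_unlock(&my_mtx);
    }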
* 1. Remove callout_stop binary compatibility. (cperciva, 2004-04-20, 1 file, -2/+1)
  2. Document that this means that kernel modules must be rebuilt.
  3. While I'm here, fix my sorting error in callout.h.
  Requested by: many [1], scottl [2], bde [3]
* Remove advertising clause from University of California Regent's license, (imp, 2004-04-07, 1 file, -4/+0)
  per letter dated July 22, 1999.
  Approved by: core
* Introduce a callout_drain() function. This acts in the same manner as (cperciva, 2004-04-06, 1 file, -0/+3)
  callout_stop(), except that if the callout being stopped is currently in progress, it blocks attempts to reset the callout and waits until the callout is completed before it returns.
  This makes it possible to clean up callout-using code safely, e.g., without potentially freeing memory which is still being used by a callout.
  Reviewed by: mux, gallatin, rwatson, jhb
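  A minimal teardown sketch (softc layout and names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <sys/callout.h>

    MALLOC_DEFINE(M_MYSC, "mysc", "hypothetical softc");

    struct my_softc {
            struct callout sc_co;
            /* ... state the callout handler touches ... */
    };

    static void
    my_detach(struct my_softc *sc)
    {
            /*
             * Sleeps until any in-progress handler finishes, so the
             * softc cannot be freed out from under it. Must not be
             * called from the handler itself.
             */
            callout_drain(&sc->sc_co);
            free(sc, M_MYSC);
    }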
* Remove __P (alfred, 2002-03-19, 1 file, -4/+4)
* style(9) the structure definitions. (obrien, 2001-09-05, 1 file, -2/+2)
* Whitespace style nits. (jhb, 2001-08-21, 1 file, -4/+4)
* Change callout_stop() to return an integer. If callout_stop() succeeds in (jhb, 2001-08-10, 1 file, -1/+1)
  removing the callout entry, return 1. If callout_stop() fails to remove the callout entry because it is currently executing or has already been executed, then the function returns 0. The idea was obtained from BSD/OS; however, BSD/OS changed untimeout(), and I've just changed callout_stop() to be more conservative.
  Obtained from: BSD/OS
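  A minimal sketch of a caller checking the new return value (names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/callout.h>

    static struct callout my_co;        /* hypothetical consumer state */

    static void
    my_cancel(void)
    {
            /* 0 means the handler is running or has already run. */
            if (callout_stop(&my_co) == 0)
                    printf("my_co: could not cancel\n");
    }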
* Revert the last commit to the callout interface, and add a flag to (jlemon, 2000-11-25, 1 file, -9/+2)
  callout_init() indicating whether the callout is safe or not. Update the callers of callout_init() to reflect the new interface.
  Okayed by: Jake
* - Rename callout_reset to _callout_reset and add a flags argument. (jake, 2000-11-25, 1 file, -1/+8)
  - Add macros callout_reset, which does the obvious, and mp_callout_reset, which passes the CALLOUT_MPSAFE flag.
* - Protect the callout wheel with a separate spin mutex, callout_lock. (jake, 2000-11-19, 1 file, -0/+2)
  - Use the mutex in hardclock to ensure no races between it and softclock.
  - Make softclock be INTR_MPSAFE and provide a flag, CALLOUT_MPSAFE, which specifies that a callout handler does not need giant. There is still no way to set this flag when registering a callout.
  Reviewed by: -smp@, jlemon
* Back out the previous change to the queue(3) interface. (jake, 2000-05-26, 1 file, -4/+4)
  It was not discussed and should probably not happen.
  Requested by: msmith and others
* Change the way that the queue(3) structures are declared; don't assume that (jake, 2000-05-23, 1 file, -4/+4)
  the type argument to *_HEAD and *_ENTRY is a struct.
  Suggested by: phk
  Reviewed by: phk
  Approved by: mdodd
* Change #ifdef KERNEL to #ifdef _KERNEL in the public headers. "KERNEL" (peter, 1999-12-29, 1 file, -2/+2)
  is an application space macro and the applications are supposed to be free to use it as they please (but cannot). This is consistent with the other BSDs, who made this change quite some time ago. More commits to come.
* Restructure TCP timeout handling: (jlemon, 1999-08-30, 1 file, -5/+5)
  - Eliminate the fast/slow timeout lists for TCP and instead use a callout entry for each timer.
  - Increase the TCP timer granularity to HZ.
  - Implement "bad retransmit" recovery, as presented in "On Estimating End-to-End Network Path Properties", by Allman and Paxson.
  Submitted by: jlemon, wollmann
* $Id$ -> $FreeBSD$ (peter, 1999-08-28, 1 file, -1/+1)
* Expose a slightly-lower-level interface to timeouts which allows callers (wollman, 1999-03-06, 1 file, -1/+14)
  to manage their own memory. Tested on my machine (make buildworld). I've made analogous changes on the alpha, but don't have a machine to test.
  Not-objected-to by: dg, gibbs
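  A minimal sketch of the caller-managed pattern this interface enables, written against the modern two-argument callout_init() (names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    /*
     * Embedding the callout in the consumer's own structure avoids the
     * fixed-size pool the legacy timeout(9) interface allocates from.
     */
    struct my_softc {
            struct callout sc_co;
    };

    static void
    my_expire(void *arg)
    {
            struct my_softc *sc = arg;
            /* ... handle the timeout for this instance ... */
            (void)sc;
    }

    static void
    my_attach(struct my_softc *sc)
    {
            callout_init(&sc->sc_co, 0);
            callout_reset(&sc->sc_co, hz, my_expire, sc);
    }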
* Fix softclock calling so we don't lose timeouts (I broke this ~10h ago) (phk, 1998-01-11, 1 file, -2/+2)
* Reorder struct callout for better cacheline behavior. (dg, 1997-12-01, 1 file, -2/+2)
* Store an absolute tick value in callout entries so that a subtraction on (gibbs, 1997-09-24, 1 file, -2/+1)
  hash chain traversal isn't needed. This also allows untimeout to recompute the hash to find the bucket that the entry to remove is stored in, so that each callout entry no longer needs to store that information.
  Reviewed by: Nate Williams <nate@mt.sri.com>
* buf.h: (gibbs, 1997-09-21, 1 file, -5/+21)
  Change the definition of a buffer queue so that bufqdisksort can properly deal with bordered writes. Add inline functions for accessing buffer queues. This should be considered an opaque data structure by clients.
  callout.h: New callout implementation.
  device.h: Add support for CAM interrupts.
  disk.h, disklabel.h: tqdisksort -> bufqdisksort.
  kernel.h: Add new configuration entries for configuration hooks and calling cpu_rootconf and cpu_dumpconf.
  param.h: Add a priority for sleeping waiting on config hooks.
  proc.h: Update for new callout implementation.
  queue.h: Add TAILQ_HEAD_INITIALIZER from NetBSD.
  systm.h: Add prototypes for cpu_root/dumpconf, splcam, splsoftcam, etc.
* Some staticized variables were still declared to be extern. (bde, 1997-09-07, 1 file, -2/+2)
* Back out part 1 of the MCFH that changed $Id$ to $FreeBSD$. We are not (peter, 1997-02-22, 1 file, -1/+1)
  ready for it yet.
* Make the long-awaited change from $Id$ to $FreeBSD$ (jkh, 1997-01-14, 1 file, -1/+1)
  This will make a number of things easier in the future, as well as (finally!) avoiding the Id-smashing problem which has plagued developers for so long.
  Boy, I'm glad we're not using sup anymore. This update would have been insane otherwise.
* Made them all idempotent. (paul, 1994-08-21, 1 file, -1/+6)
  Reviewed by:
  Submitted by:
* Fix up some sloppy coding practices: (wollman, 1994-08-18, 1 file, -3/+3)
  - Delete redundant declarations.
  - Add -Wredundant-declarations to Makefile.i386 so they don't come back.
  - Delete sloppy COMMON-style declarations of uninitialized data in header files.
  - Add a few prototypes.
  - Clean up warnings resulting from the above.
  NB: ioconf.c will still generate a redundant-declaration warning, which is unavoidable unless somebody volunteers to make `config' smarter.
* Added $Id$ (dg, 1994-08-02, 1 file, -0/+1)
* BSD 4.4 Lite Kernel Sources (rgrimes, 1994-05-24, 1 file, -0/+51)