path: root/sys/kern/kern_event.c
* Revert previous commits which I committed by mistake.  (rodrigc, 2007-07-14, 1 file, -9/+0)
    Approved by: re (implicit)
    Pointy hat to: me
* The last entry in the ext2_opts array must be NULL,  (rodrigc, 2007-07-14, 1 file, -0/+9)
    otherwise the kernel will crash in vfs_filteropt() if an invalid mount
    option is passed to ext2fs.
    Approved by: re (kensmith)
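A minimal C sketch of the point made above, assuming placeholder option names
rather than the real ext2fs table: vfs_filteropt() walks the array until it
reaches a NULL entry, so the terminator is mandatory.

    /* Option table sketch: names are illustrative only.  The NULL
     * sentinel is what vfs_filteropt() uses to stop iterating. */
    static const char *ext2_opts[] = {
            "from", "export", "acls", "noexec",
            NULL                            /* required terminator */
    };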
* In kern_kevent(), unconditionally fdrop() fp once fget() has succeeded,  (rwatson, 2007-05-28, 1 file, -2/+1)
    as we never have an opportunity to set it to NULL.
    Found with: Coverity Prevent(tm)
    CID: 2161
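A hedged sketch of the fget()/fdrop() pairing this entry refers to, assuming
the three-argument fget() of that era (kernel context; headers omitted); the
function name and body are placeholders:

    /* Once fget() succeeds it holds a reference on fp, so the matching
     * fdrop() on the way out can be unconditional. */
    static int
    kevent_on_fd_sketch(struct thread *td, int fd)
    {
            struct file *fp;
            int error;

            error = fget(td, fd, &fp);      /* takes a reference */
            if (error != 0)
                    return (error);
            /* ... do the kevent work against fp ... */
            fdrop(fp, td);                  /* always release it */
            return (error);
    }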
* Select a more appealing spelling for the word acquire.  (rwatson, 2007-05-27, 1 file, -10/+10)
* Replace custom file descriptor array sleep lock constructed using a mutex  (rwatson, 2007-04-04, 1 file, -9/+9)
    and flags with an sxlock.  This leads to a significant and measurable
    performance improvement as a result of access to shared locking for
    frequent lookup operations, reduced general overhead, and reduced
    overhead in the event of contention.  All of these are important for
    threaded applications where simultaneous access to a shared file
    descriptor array occurs frequently.  Kris has reported 2x-4x transaction
    rate improvements on 8-core MySQL benchmarks; smaller improvements can
    be expected for many workloads as a result of reduced overhead.
    - Generally eliminate the distinction between "fast" and regular
      acquisition of the filedesc lock; the plan is that they will now all
      be fast.  Change all locking instances to either shared or exclusive
      locks.
    - Correct a bug (pointed out by kib) in fdfree() where previously
      msleep() was called without the mutex held; sx_sleep() is now always
      called with the sxlock held exclusively.
    - Universally hold the struct file lock over changes to struct file,
      rather than the filedesc lock or no lock.  Always update the f_ops
      field last.  A further memory barrier is required here in the future
      (discussed with jhb).
    - Improve locking and reference management in linux_at(), which fails
      to properly acquire vnode references before using vnode pointers.
      Annotate improper use of vn_fullpath(), which will be replaced at a
      future date.
    In fcntl(), we conservatively acquire an exclusive lock, even though in
    some cases a shared lock may be sufficient, which should be revisited.
    The dropping of the filedesc lock in fdgrowtable() is no longer required
    as the sxlock can be held over the sleep operation; we should consider
    removing that (pointed out by attilio).
    Tested by: kris
    Discussed with: jhb, kris, attilio, jeff
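A minimal sx(9) sketch of the shared-versus-exclusive pattern described above;
the lock name and the function bodies are illustrative, not the actual
filedesc code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/sx.h>

    static struct sx fd_sx;                 /* illustrative name */

    static void
    table_init(void)
    {
            sx_init(&fd_sx, "filedesc structure");
    }

    static void
    table_lookup(void)
    {
            sx_slock(&fd_sx);               /* shared: concurrent lookups */
            /* ... look up a descriptor ... */
            sx_sunlock(&fd_sx);
    }

    static void
    table_modify(void)
    {
            sx_xlock(&fd_sx);               /* exclusive: mutate the table */
            /* ... install or remove a descriptor ... */
            sx_xunlock(&fd_sx);
    }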
* Remove 'MPSAFE' annotations from the comments above most system calls: all  (rwatson, 2007-03-04, 1 file, -6/+0)
    system calls now enter without Giant held, and then in some cases,
    acquire Giant explicitly.
    Remove a number of other MPSAFE annotations in the credential code and
    tweak one or two other adjacent comments.
* Save exit status of an exiting process in kn_data in the knote.  (jhb, 2006-11-20, 1 file, -0/+1)
    Submitted by: Jared Yanovich ^phirerunner at comcast.net^
    MFC after: 2 weeks
* remove unnecessary NULL check...  (jmg, 2006-09-25, 1 file, -2/+1)
    Coverity ID: 1545
* hide kqueue_register from public view, and replace it w/ kqfd_register...  (jmg, 2006-09-24, 1 file, -2/+30)
    this eliminates a possible race in aio registering a kevent..
* add KTRACE hooks into kevent...  This will help people debug their kqueue  (jmg, 2006-09-24, 1 file, -2/+38)
    programs to find out exactly which events were registered and which were
    returned...  This should be lower in kern_kevent, but that would require
    special munging due to locks and the functions used to copyin/copyout
    kevents...
    If someone wants to teach ktrace how to output pretty kevents, I have a
    kevent pretty printer that can be used...
* Use fget() in kqueue_register() instead of doing all the work by hand.  (jhb, 2006-06-12, 1 file, -17/+3)
* Don't forget to unlock kq lock in low memory situations.  (pjd, 2006-06-02, 1 file, -0/+1)
    OK'ed by: jmg
* Remove confusing done_noglobal label.  The KQ_GLOBAL_UNLOCK() macro knows  (pjd, 2006-06-02, 1 file, -2/+1)
    how to handle both situations - when kq_global lock is and is not held.
    OK'ed by: jmg
* Use SLIST_FOREACH_SAFE() macro, because knote_drop() can free an element  (pjd, 2006-06-02, 1 file, -2/+2)
    which would then be used to find the next element in the list.
    OK'ed by: jmg
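A small userspace illustration of the queue(3) idiom this entry adopts, using
a toy list rather than the kernel's knote list: the _SAFE variant caches the
next pointer before the body runs, so the current element may be freed inside
the loop.

    #include <sys/queue.h>
    #include <stdlib.h>

    struct item {
            SLIST_ENTRY(item) link;
    };
    static SLIST_HEAD(, item) head = SLIST_HEAD_INITIALIZER(head);

    static void
    drain(void)
    {
            struct item *it, *tmp;

            SLIST_FOREACH_SAFE(it, &head, link, tmp)
                    free(it);       /* safe: "tmp" already holds the next element */
            SLIST_INIT(&head);
    }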
* Drop the kqueue global mutex as soon as we are finished with it rather  (jhb, 2006-04-14, 1 file, -4/+2)
    than keeping it locked until we exit the function to optimize the case
    where the lock would be dropped and later reacquired.  The optimization
    was broken when kevents were moved from UFS to VFS and the knote list
    lock for a vnode kevent became the lockmgr vnode lock.  If one tried to
    use a kqueue that contained events for a kqueue fd followed by a vnode,
    then the kq global lock would end up being held when the vnode lock was
    acquired, which could result in sleeping with a mutex held (and
    subsequent panics) if the vnode lock was contested.
    Reviewed by: jmg
    Tested by: ps (on 6.x)
    MFC after: 3 days
* spell unlock correctly, this is relatively minor as it's rare someone would  (jmg, 2006-04-07, 1 file, -1/+1)
    provide a lock method, and want the default unlock, but it is a bug...
    PR: 95356
    Submitted by: Stephen Corteselli
    MFC after: 3 days
* mask out any action when copying the flags from the event to the knote..  (jmg, 2006-04-01, 1 file, -0/+2)
    Pointed out by: Václav Haisman
    Submitted by: Dan Nelson (slightly modified patch)
    MFC after: 3 days
* hold the list lock over the f_event and KNOTE_ACTIVATE calls...  This closes  (jmg, 2006-03-29, 1 file, -1/+1)
    a race where data could come in before we clear the INFLUX flag, and get
    skipped over by knote (and hence never be activated, though it should
    have been)...
    Found by: glebius & co.
    Reviewed by: glebius
    MFC after: 3 days
* Add in kqueue support to LIO event notification and fix how it handled  (ambrisko, 2005-10-12, 1 file, -2/+6)
    notifications when LIO operations completed.  These were the problems
    with LIO event complete notification:
    - Move all LIO/AIO event notification into one general function so we
      don't have bugs in different data paths.  This unification got rid of
      several notification bugs, one of which could send a SIGILL to the
      process when kqueue was used.
    - Change the LIO event accounting to count all AIO requests that could
      have been split across the fast path and daemon mode.  The prior
      accounting only kept track of AIO ops in that mode and not the entire
      list of operations.  This could cause a bogus LIO event complete
      notification to occur when all of the fast path AIO ops completed and
      not the AIO ops that ended up queued for the daemon.
    Suggestions from: alc
* Fix race condition that caused activation of an event to  (ups, 2005-09-15, 1 file, -2/+4)
    be ignored immediately after it was deactivated.
    Found by: Yahoo!
    MFC after: 3 days
* Fix the recent panics/LORs/hangs created by my kqueue commit by:  (ssouhlal, 2005-07-01, 1 file, -28/+83)
    - Introducing the possibility of using locks different than mutexes for
      the knlist locking.  In order to do this, we add three arguments to
      knlist_init() to specify the functions to use to lock, unlock and
      check if the lock is owned.  If these arguments are NULL, we assume
      mtx_lock, mtx_unlock and mtx_owned, respectively.
    - Using the vnode lock for the knlist locking, when doing kqueue
      operations on a vnode.  This way, we don't have to lock the vnode
      while holding a mutex, in filt_vfsread.
    Reviewed by: jmg
    Approved by: re (scottl), scottl (mentor override)
    Pointyhat to: ssouhlal
    Will be happy: everyone
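A hedged sketch of the interface change described above (kernel context;
headers omitted); the five-argument knlist_init() shape and the callback names
are assumptions reconstructed from the commit text, not copied from
sys/event.h:

    /* Lock, unlock and "is it locked?" callbacks that route knlist locking
     * through the vnode lock instead of a mutex (names are illustrative). */
    static void     vfs_knllock(void *arg);
    static void     vfs_knlunlock(void *arg);
    static int      vfs_knllocked(void *arg);

    static void
    knlist_init_sketch(struct knlist *knl, struct vnode *vp, struct mtx *m)
    {
            /* Vnode-backed knlist: supply explicit locking callbacks. */
            knlist_init(knl, vp, vfs_knllock, vfs_knlunlock, vfs_knllocked);

            /* Mutex-backed knlist: NULL callbacks fall back to mtx_lock,
             * mtx_unlock and mtx_owned on the supplied mutex. */
            knlist_init(knl, m, NULL, NULL, NULL);
    }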
* Wrap copyin/copyout for kevent so the 32bit wrapper does not have  (ps, 2005-06-03, 1 file, -44/+51)
    to malloc nchanges * sizeof(struct kevent) AND/OR nevents *
    sizeof(struct kevent) on every syscall.
    Glanced at by: peter, jmg
    Obtained from: Yahoo!
    MFC after: 2 weeks
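A hedged sketch of the wrapping approach: rather than the 32-bit compat layer
allocating and translating whole kevent arrays up front, the core code copies
events through per-caller hooks a few at a time.  The struct layout is
modelled on FreeBSD's later kevent_copyops and is an assumption here, not a
quotation of this commit:

    struct kevent_copyops {
            void    *arg;
            int     (*k_copyout)(void *arg, struct kevent *kevp, int count);
            int     (*k_copyin)(void *arg, struct kevent *kevp, int count);
    };

    /* Native callers copy directly to the user-supplied array; a 32-bit
     * wrapper would translate each entry to a 32-bit kevent layout here
     * instead, with no large malloc on every syscall. */
    static int
    kevent_copyout_sketch(void *arg, struct kevent *kevp, int count)
    {
            return (copyout(kevp, arg, count * sizeof(*kevp)));
    }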
* make stat return a zero'd struct, and be a FIFO again...  This is only  (jmg, 2005-05-24, 1 file, -1/+10)
    to fix libc_r since it requires stat to close fd's, and so commented in
    the code...
    PR: threads/75795
    Reviewed by: ps
    MFC after: 1 week
* fix aio+kq...  I've been running ambrisko's test program for much longer  (jmg, 2005-03-18, 1 file, -8/+11)
    w/o problems than I was before...
    This simply brings back the knote_delete as knlist_delete, which will
    also drop the knotes, instead of just clearing the list and seeing
    _ONESHOT...
    Fix a race where if a note was _INFLUX and _DETACHED, it could end up
    being modified... whoops..
    MFC after: 1 week
    Prodded by: ambrisko and dwhite
* Use kern_kevent instead of the stackgap for 32bit syscall wrapping.  (ps, 2005-03-01, 1 file, -33/+75)
    Submitted by: jhb
    Tested on: amd64
* When invoking callout_init(), spell '1' as "CALLOUT_MPSAFE".  (rwatson, 2005-02-22, 1 file, -1/+1)
    MFC after: 3 days
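The change is purely one of spelling, but the idiom is worth showing; a
minimal sketch, assuming the two-argument callout_init() of that era where
the second argument marks the callout as MP-safe:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/callout.h>

    static struct callout example_callout;

    static void
    example_setup(void)
    {
            /* Named flag instead of a bare constant: */
            callout_init(&example_callout, CALLOUT_MPSAFE); /* not (..., 1) */
    }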
* Make a bunch of malloc types static.  (phk, 2005-02-10, 1 file, -1/+2)
    Found by: src/tools/tools/kernxref
* Move a FILEDESC_UNLOCK upwards to silence witness.  (phk, 2004-11-16, 1 file, -1/+1)
* Introduce an alias for FILEDESC_{UN}LOCK() with the suffix _FAST.  (phk, 2004-11-13, 1 file, -4/+4)
    Use this in all the places where sleeping with the lock held is not an
    issue.  The distinction will become significant once we finalize the
    exact lock-type to use for this kind of case.
* /me gets the wrong patch out of the pr :(  (jmg, 2004-10-14, 1 file, -2/+2)
    /me had the right patch w/o comments on his test system.
    Pointed out by: kuriyama and ache
    Pointy hat to: jmg
* fix a bug where signal events didn't set the flags for attach/detach..  (jmg, 2004-10-13, 1 file, -0/+2)
    PR: 72234
    MFC after: 2 days
* unlock global lock in kqueue_scan before msleep'ing to prevent deadlock..  (jmg, 2004-09-14, 1 file, -0/+2)
    we didn't unlock global lock earlier to prevent just having to reacquire
    it again..
    Found by: peter
    Reviewed by: ps
    MFC after: 3 days
* remove giant required from kqueue_close..  (jmg, 2004-09-10, 1 file, -2/+0)
    Reported by: kuriyama
    MFC after: 3 days
* don't call f_detach if the filter has already removed the knote..  This  (jmg, 2004-09-06, 1 file, -8/+10)
    happens when a proc exits, but needs to inform the user that this has
    happened..  This also means we can remove the check for detached from
    proc and sig f_detach functions as this is done in kqueue now...
    MFC after: 5 days
* Allocate the marker, when scanning a kqueue, from the "heap" instead of the  (green, 2004-08-16, 1 file, -6/+12)
    stack.  When swapped out, a process's kernel stack would be unavailable,
    and we could get a page fault when scanning the same kqueue.
    PR: kern/61849
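A hedged sketch of the allocation change this entry describes (kernel context;
headers omitted); the malloc type, the KN_MARKER status bit and the helper
name are illustrative assumptions rather than the exact code:

    static void
    kqueue_scan_sketch(struct kqueue *kq)
    {
            struct knote *marker;

            /* A heap-allocated marker stays valid even if this thread's
             * kernel stack is swapped out in the middle of the scan. */
            marker = malloc(sizeof(*marker), M_KQUEUE, M_WAITOK | M_ZERO);
            marker->kn_status = KN_MARKER;

            /* ... queue the marker on kq->kq_head and scan up to it ... */

            free(marker, M_KQUEUE);
    }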
* Add locking to the kqueue subsystem.  This also makes the kqueue subsystem  (jmg, 2004-08-15, 1 file, -328/+956)
    a more complete subsystem, and removes the knowledge of how things are
    implemented from the drivers.  Include locking around filter ops, so a
    module like aio will know when not to be unloaded if there are
    outstanding knotes using its filter ops.
    Currently, it uses the MTX_DUPOK even though it is not always safe to
    acquire duplicate locks.  Witness currently doesn't support the ability
    to discover if a dup lock is ok (in some cases).
    Reviewed by: green, rwatson (both earlier versions)
* looks like rwatson forgot tabs... :)  (jmg, 2004-08-13, 1 file, -2/+2)
* Trim trailing white space.  (rwatson, 2004-08-12, 1 file, -11/+11)
* Push Giant acquisition down into fo_stat() from most callers.  Acquire  (rwatson, 2004-07-22, 1 file, -0/+1)
    Giant conditional on debug.mpsafenet in the socket soo_stat() routine,
    unconditionally in vn_statfile() for VFS, and otherwise don't acquire
    Giant.  Accept an unlocked read in kqueue_stat(), and cryptof_stat() is
    a no-op.  Don't acquire Giant in fstat() system call.
    Note: in fdescfs, fo_stat() is called while holding Giant due to the VFS
    stack sitting on top, and therefore there will still be Giant recursion
    in this case.
* Push acquisition of Giant from fdrop_closed() into fo_close() so that  (rwatson, 2004-07-22, 1 file, -1/+2)
    individual file object implementations can optionally acquire Giant if
    they require it:
    - soo_close(): depends on debug.mpsafenet
    - pipe_close(): Giant not acquired
    - kqueue_close(): Giant required
    - vn_close(): Giant required
    - cryptof_close(): Giant required (conservative)
    Notes: Giant is still acquired in close() even when closing MPSAFE
    objects due to kqueue requiring Giant in the calling closef() code.
    Microbenchmarks indicate that this removal of Giant cuts 3%-3% off of
    pipe create/destroy pairs from user space with SMP compiled into the
    kernel.
    The cryptodev and opencrypto code appears MPSAFE, but I'm unable to
    test it extensively and so have left Giant over fo_close().  It can
    probably be removed given some testing and review.
* Disable SIGIO for now, leave a comment as to why it's busted and hard  (alfred, 2004-07-15, 1 file, -0/+20)
    to fix.
* Make FIOASYNC, FIOSETOWN and FIOGETOWN work on kqueues.  (alfred, 2004-07-14, 1 file, -2/+29)
* Introduce a new kevent filter, EVFILT_FS, that will be used to signal  (alfred, 2004-07-04, 1 file, -0/+2)
    generic filesystem events to userspace.  Currently only mount and
    unmount of filesystems are signalled.  Soon to be added, up/down status
    of NFS.
    Introduce a sysctl node used to route requests to/from filesystems
    based on filesystem ids.
    Introduce a new vfsop, vfs_sysctl(mp, req), that is used as the
    callback/entrypoint by the sysctl code to change individual filesystems.
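A small userspace sketch of how the new filter can be consumed; using 0 as the
ident and ignoring the fflags details are simplifying assumptions:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct kevent kev, ev;
            int kq;

            if ((kq = kqueue()) == -1)
                    err(1, "kqueue");
            /* Register for generic filesystem events (mount/unmount). */
            EV_SET(&kev, 0, EVFILT_FS, EV_ADD | EV_CLEAR, 0, 0, NULL);
            if (kevent(kq, &kev, 1, NULL, 0, NULL) == -1)
                    err(1, "kevent: register");
            if (kevent(kq, NULL, 0, &ev, 1, NULL) == -1)
                    err(1, "kevent: wait");
            printf("filesystem event, fflags=%#x\n", (unsigned)ev.fflags);
            return (0);
    }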
* Add GIANT_REQUIRED to kqueue_close(), since kqueue currently requires  (rwatson, 2004-06-01, 1 file, -0/+2)
    Giant.
* Fix filt_timer* races: Finish initializing a knote before we pass it to  (cperciva, 2004-04-07, 1 file, -2/+2)
    a callout, and use the new callout_drain API to make sure that a callout
    has finished before we deallocate memory it is using.
    PR: kern/64121
    Discussed with: gallatin
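A hedged sketch of the teardown ordering this entry describes: stop and wait
for the callout with callout_drain() before freeing the memory it references
(kernel context; headers omitted).  The kn_hook layout and the malloc type
are illustrative assumptions:

    static void
    filt_timerdetach_sketch(struct knote *kn)
    {
            struct callout *calloutp = kn->kn_hook;

            callout_drain(calloutp);        /* waits for a running handler */
            free(calloutp, M_KQUEUE);       /* only now safe to free */
            kn->kn_hook = NULL;
    }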
* Make sure to wake up any select waiters when closing a kqueue (also, not  (green, 2004-02-20, 1 file, -0/+4)
    crash).  I am fairly sure that only people with SMP and multi-threaded
    apps using kqueue will be affected by this, so I have a stress-testing
    program on my web site:
    <URL:http://green.homeunix.org/~green/getaddrinfo-pthreads-stresstest.c>
* Don't TAILQ_INIT kq_head twice, once is enough.  (dwmalone, 2003-12-25, 1 file, -1/+0)
* Better fix than my previous commit:  (cognet, 2003-11-14, 1 file, -8/+2)
    in exit1(), make sure the p_klist is empty after sending NOTE_EXIT.  The
    process won't report fork() or execve() and won't be able to handle
    NOTE_SIGNAL knotes anyway.
    This fixes some race conditions with do_tdsignal() calling knote() while
    the process is exiting.
    Reported by: Stefan Farfeleder <stefan@fafoe.narf.at>
    MFC after: 1 week
* - Implement selwakeuppri() which allows raising the priority of a  (tanimura, 2003-11-09, 1 file, -1/+1)
      thread being woken up.  The thread woken up can run at a priority as
      high as after tsleep().
    - Replace selwakeup()s with selwakeuppri()s and pass appropriate
      priorities.
    - Add cv_broadcastpri() which raises the priority of the broadcast
      threads.  Used by selwakeuppri() if collision occurs.
    Not objected in: -arch, -current
* I believe kbyanc@ really meant this in rev 1.58.  (cognet, 2003-11-04, 1 file, -2/+2)
    Use zpfind() to see if the process became a zombie if pfind() doesn't
    find it and if the caller wants to know about process death, so that the
    caller knows the process died even if it happened before the kevent was
    actually registered.
    MFC after: 1 week
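A hedged sketch of the lookup order described above (kernel context; headers
omitted); the surrounding attach logic is a simplified assumption:

    static int
    filt_procattach_sketch(struct knote *kn, pid_t pid)
    {
            struct proc *p;

            p = pfind(pid);
            if (p == NULL && (kn->kn_sfflags & NOTE_EXIT) != 0)
                    p = zpfind(pid);        /* check the zombie list too */
            if (p == NULL)
                    return (ESRCH);
            /* ... attach the knote to p's klist, then PROC_UNLOCK(p) ... */
            return (0);
    }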