path: root/sys/kern/kern_subr.c
Commit message (author, date, files changed, lines -deleted/+added)
* Decompose the most lousily named file in sys/kern: kern_subr.c. (ed, 2010-02-21, 1 file, -571/+0)
  Although this file has historically been used as a dumping ground for random functions, nowadays it only contains functions related to copying bits {from,to} userspace and hash table utility functions. Behold, subr_uio.c and subr_hash.c.
* Constify prime numbers. (rpaulo, 2009-08-23, 1 file, -3/+3)
* Make ureadc() warn when holding any locks, just like uiomove(). (ed, 2008-08-28, 1 file, -0/+3)
  A couple of months ago, while writing code, I was quite surprised to discover that uiomove() would not allow any locks to be held, while ureadc() did, even though ureadc() is implemented using the same building blocks as uiomove(). Let's see if this triggers any additional witness warnings in our source tree.
  Reviewed by: attilio
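  For illustration, a minimal sketch of the kind of check being added, assuming ureadc() adopts the same WITNESS_WARN() idiom uiomove() uses (the message string here is a guess, not the committed text):

      /*
       * At the top of ureadc(), before touching userspace: have
       * witness(4) complain if the caller holds any locks, since
       * the copy may sleep.
       */
      WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
          "Calling ureadc()");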
* Make SCHED_STATS more generic by adding a wrapper to create the variables and sysctl nodes. (jeff, 2008-04-17, 1 file, -1/+1)
  - In reset, walk the children of kern_sched_stats and reset the counters via the oid_arg1 pointer. This allows us to add arbitrary counters to the tree and still reset them properly.
  - Define a set of switch types to be passed with flags to mi_switch(). These types are named SWT_*. They correspond to SCHED_STATS counters and are automatically handled in this way.
  - Make the new SWT_ types more specific than the older switch stats. There are now stats for idle switches, remote idle wakeups, remote preemption, ithreads idling, etc.
  - Add switch statistics for ULE's pickcpu algorithm. These stats include how much migration there is, how often affinity was successful, how often threads were migrated to the local cpu on wakeup, etc.
  Sponsored by: Nokia
* Commit 14/14 of sched_lock decomposition. (jeff, 2007-06-05, 1 file, -2/+2)
  - Use thread_lock() rather than sched_lock for per-thread scheduling synchronization.
  - Use the per-process spinlock rather than the sched_lock for per-process scheduling synchronization.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
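  A hedged sketch of the per-thread locking idiom this series moves to; the flag shown is just an example of per-thread state:

      /* Old world: mtx_lock_spin(&sched_lock) around all of this. */
      thread_lock(td);
      td->td_flags |= TDF_NEEDRESCHED;  /* example per-thread field */
      thread_unlock(td);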
* Remove the useless (flags |) KASSERT; the (flags ^) form actually does what we want. (rrs, 2007-01-16, 1 file, -6/+1)
  Submitted by: Li Xin delphij@delphij.net
  Reviewed by: rrs
  Approved by: gnn
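  To see why the | form was useless, reduce the two hash flags to booleans; a standalone sketch using assert(3), with hypothetical flag values:

      #include <assert.h>

      #define HASH_WAITOK 0x01  /* hypothetical values */
      #define HASH_NOWAIT 0x02

      void
      check_flags(int flags)
      {
          int waitok = (flags & HASH_WAITOK) != 0;
          int nowait = (flags & HASH_NOWAIT) != 0;

          /* (waitok | nowait) also passes when BOTH flags are set. */
          /* Exactly one of the two is what we want, which ^ expresses. */
          assert(waitok ^ nowait);
      }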
* Fix warning by adding extra parentheses. (kmacy, 2007-01-16, 1 file, -1/+1)
* Add a new function hashinit_flags() which allows NOT waiting for memory (or waiting). (rrs, 2007-01-15, 1 file, -6/+36)
  The old hashinit() function now calls hashinit_flags(..., HASH_WAITOK).
  Reviewed by: rwatson
  Approved by: gnn
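  A hedged sketch of the resulting relationship, with signatures recalled from the FreeBSD tree rather than verified against this revision:

      void *
      hashinit(int elements, struct malloc_type *type, u_long *hashmask)
      {
          /* The classic interface keeps its may-sleep semantics. */
          return (hashinit_flags(elements, type, hashmask, HASH_WAITOK));
      }

  A caller that must not sleep would instead check for failure:

      tbl = hashinit_flags(nelem, M_TEMP, &mask, HASH_NOWAIT);
      if (tbl == NULL)
          return (ENOMEM);  /* allocation failed without sleeping */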
* Threading cleanup, part 2 of several. (julian, 2006-12-06, 1 file, -4/+0)
  Make part of John Birrell's KSE patch permanent. Specifically, remove:
  - Any reference to the ksegrp structure. This feature was never fully utilised and made things overly complicated.
  - All code in the scheduler that tried to make threaded programs fair to unthreaded programs. Libpthread processes will already do this to some extent and libthr processes already disable it.
  Also, since this makes such a big change to the scheduler(s), take the opportunity to rename some structures and elements that had to be moved anyhow. This makes the code a lot more readable.
  The ULE scheduler compiles again but I have no idea if it works. The 4BSD scheduler still requires a little cleaning and some functions that now do ALMOST nothing will go away, but I thought I'd do that as a separate commit.
  Tested by: David Xu and Dan Eischen, using libthr and libpthread.
* Make KSE a kernel option, turned on by default in all GENERIC kernel configs except sun4v (which doesn't process signals properly with KSE). (jb, 2006-10-26, 1 file, -0/+4)
  Reviewed by: davidxu@
* Reduce the scope of the page queues lock in vm_pgmoveco() now that vm_page_sleep_if_busy() no longer requires the page queue lock to be held. (alc, 2006-08-12, 1 file, -2/+2)
  Also correctly spell "TRUE".
* /* -> /*- for copyright notices; minor format tweaks as necessary. (imp, 2005-01-06, 1 file, -1/+1)
* Correct the handling of two unusual cases by the zero-copy receive path, specifically vm_pgmoveco(). (alc, 2004-12-13, 1 file, -16/+26)
  1. If vm_pgmoveco() sleeps on a busy page, it must redo the lookup because the page may have been freed.
  2. If the receive buffer is copy-on-write due to, for example, a fork, then although the first vm object in the shadow chain may not contain a page, there may still be one from a backing object that is mapped. Thus, a pmap_remove() is required for the new page, rather than the backing object's page, to be seen by the application.
  Also, add some comments to vm_pgmoveco() and update some assertions.
  Tested by: ken@
* Tidy up the zero-copy receive path: remove an unneeded argument to uiomoveco() and userspaceco(). (alc, 2004-12-08, 1 file, -6/+3)
* Update the Tigon 1 and 2 driver to use the sf_buf API for implementing zero-copy receive of jumbo frames. (alc, 2004-12-06, 1 file, -6/+4)
  This eliminates the need for the jumbo frame allocator implemented in kern/uipc_jumbo.c and sys/jumbo.h. Remove it.
  Note: zero-copy receive of jumbo frames did not work without these changes; I believe there was insufficient locking on the jumbo vm object.
  Tested by: ken@
  Discussed with: gallatin@
* Eliminate an unused argument to vm_pgmoveco(). (alc, 2004-11-08, 1 file, -4/+2)
* Two changes to vm_pgmoveco(): (alc, 2004-11-05, 1 file, -3/+1)
  - Eliminate an initialized but unused variable.
  - Eliminate an unnecessary call to clear the page's PG_BUSY flag. (The call to vm_page_rename() already clears the page's PG_BUSY flag through its call to vm_page_remove().)
* The synchronization provided by vm object locking has eliminated the need for most calls to vm_page_busy(). (alc, 2004-11-03, 1 file, -2/+0)
  Specifically, most calls to vm_page_busy() occur immediately prior to a call to vm_page_remove(). In such cases, the containing vm object is locked across both calls. Consequently, the setting of the vm page's PG_BUSY flag is not even visible to other threads that are following the synchronization protocol.
  This change (1) eliminates the calls to vm_page_busy() that immediately precede a call to vm_page_remove(), or to functions such as vm_page_free() and vm_page_rename() that call it, and (2) relaxes the requirement in vm_page_remove() that the vm page's PG_BUSY flag is set.
  Now, the vm page's PG_BUSY flag is set only when the vm object lock is released while the vm page is still in transition. Typically, this is when it is undergoing I/O.
* Add a WITNESS_WARN() to uiomove() to whine if locks are held when this function is called. (jhb, 2004-10-12, 1 file, -0/+2)
  MFC after: 1 month
* Clean up and wash struct iovec and struct uio handling. (phk, 2004-07-10, 1 file, -17/+46)
  Add copyiniov(), which copies a struct iovec array in from userland into a malloc'ed struct iovec. Caller frees.
  Change uiofromiov() to malloc the uio (caller frees) and name it copyinuio(), which is more appropriate.
  Add cloneuio(), which returns a malloc'ed copy. Caller frees.
  Use them throughout.
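  A hedged sketch of how a syscall path would use the renamed helper; the consumer function is hypothetical, but the allocate-and-caller-frees contract is as described above:

      struct uio *auio;
      int error;

      /* Copy the user's iovec array in and build a malloc'ed uio. */
      error = copyinuio(uap->iovp, uap->iovcnt, &auio);
      if (error != 0)
          return (error);
      error = do_the_io(td, uap->fd, auio);  /* hypothetical consumer */
      free(auio, M_IOV);                     /* caller frees */
      return (error);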
* Change mi_switch() and sched_switch() to accept an optional thread to switch to. (jhb, 2004-07-02, 1 file, -1/+1)
  - If a non-NULL thread pointer is passed in, then the CPU will switch to that thread directly rather than calling choosethread() to pick one.
  - Make sched_switch() aware of idle threads and know to do TD_SET_CAN_RUN() instead of sticking them on the run queue, rather than requiring all callers of mi_switch() to know to do this if they can be called from an idlethread.
  - Move constants for arguments to mi_switch() and thread_single() out of the middle of the function prototypes and up above into their own section.
* Remove checks for curthread == NULL; it can't happen. (tjr, 2004-06-03, 1 file, -5/+3)
* Move TDF_DEADLKTREAT into td_pflags (and rename it accordingly) to avoid having to acquire sched_lock when manipulating it in lockmgr(), uiomove(), and uiomove_fromphys(). (tjr, 2004-06-03, 1 file, -9/+4)
  Reviewed by: jhb
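  The point of td_pflags is that only the owning thread ever touches it, so no lock is needed; a simplified, hedged sketch of the resulting uiomove() pattern using the renamed flag:

      struct thread *td = curthread;

      /* td_pflags is private to curthread: plain read-modify-write. */
      if (uio->uio_segflg == UIO_USERSPACE)
          td->td_pflags |= TDP_DEADLKTREAT;
      /* ... perform the copy ... */
      td->td_pflags &= ~TDP_DEADLKTREAT;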
* Remove advertising clause from University of California Regents' license, per letter dated July 22, 1999. (imp, 2004-04-05, 1 file, -4/+0)
  Approved by: core
* Rename iov_to_uio to uiofromiov to be more consistent with other uio* functions. (silby, 2004-02-04, 1 file, -1/+1)
  Suggested by: bde
* Style fixes. (silby, 2004-02-04, 1 file, -29/+29)
  Submitted by: bde
* Remove debugging code that slipped into the previous commit. (silby, 2004-02-02, 1 file, -3/+0)
  Spotted by: bde
* Rewrite sendfile's header support so that headers are now sent in the first packet along with data, instead of in their own packet. (silby, 2004-02-01, 1 file, -0/+42)
  When serving files of size (packetsize - headersize) or smaller, this will result in one less packet crossing the network. Quick testing with thttpd and http_load has shown a noticeable performance improvement in this case (350 vs. 330 fetches per second).
  Included in this commit are two support routines, iov_to_uio and m_uiotombuf; these routines are used by sendfile to construct the header mbuf chain that will be linked to the rest of the data in the socket buffer.
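  A hedged sketch of how the two helpers fit together when building the header chain; both argument lists are recalled from memory of this era and should be treated as assumptions:

      struct uio *hdr_uio;
      struct mbuf *m_header;

      /* Wrap the caller's header iovec array in a uio ... */
      hdr_uio = iov_to_uio(hdr_iov, hdr_cnt, &hdr_len);      /* assumed signature */
      /* ... then flatten it into an mbuf chain that is linked
       * ahead of the file data in the socket buffer. */
      m_header = m_uiotombuf(hdr_uio, M_DONTWAIT, hdr_len);  /* assumed signature */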
* Add a flags parameter to mi_switch(). (jeff, 2004-01-25, 1 file, -2/+1)
  - The value of flags may be SW_VOL or SW_INVOL. Assert that one of these is set in mi_switch() and properly adjust the rusage statistics. This simplifies the large number of users of this interface, which were previously all required to adjust the proper counter prior to calling mi_switch(). It also facilitates more switch and locking optimizations.
  - Change all callers of mi_switch() to pass the appropriate parameter and remove direct references to the process statistics.
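  A hedged before/after sketch of the caller-side simplification (the rusage field name is from memory):

      /* Before: every caller bumped the counter itself. */
      p->p_stats->p_ru.ru_nvcsw++;  /* voluntary context switch */
      mi_switch();

      /* After: the flag tells mi_switch() which counter to adjust. */
      mi_switch(SW_VOL);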
* Add __restrict qualifiers to copyinfrom, copyinstrfrom, copystr, copyinstr, copyin, and copyout. (alfred, 2003-12-26, 1 file, -2/+4)
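  For illustration, the shape such prototypes take after the change (close to, but not checked against, the committed declarations):

      /*
       * __restrict promises the compiler that the buffers do not
       * alias, which helps it optimize the copy loop.
       */
      int copyin(const void * __restrict udaddr,
              void * __restrict kaddr, size_t len);
      int copyout(const void * __restrict kaddr,
              void * __restrict udaddr, size_t len);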
* Introduce a uiomove_frombuf helper routine that handles computing and validating the offset within a given memory buffer before handing the real work off to uiomove(9). (nectar, 2003-10-02, 1 file, -0/+23)
  Use uiomove_frombuf in procfs to correct several issues with integer arithmetic that could result in underflows/overflows. As a side effect, the code is significantly simplified.
  Add additional sanity checks when computing a memory allocation size in pfs_read.
  Submitted by: rwatson (original uiomove_frombuf -- bugs are mine :-)
  Reported by: Joost Pol <joost@pine.nl> (integer underflows/overflows)
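  A sketch of the validation the helper performs before delegating, reconstructed from memory of the routine rather than from this exact revision:

      int
      uiomove_frombuf(void *buf, int buflen, struct uio *uio)
      {
          size_t offset, n;

          /* Reject negative offsets/residuals and unrepresentable offsets. */
          if (uio->uio_offset < 0 || uio->uio_resid < 0 ||
              (offset = uio->uio_offset) != uio->uio_offset)
              return (EINVAL);
          /* Reading at or past the end of the buffer is EOF, not an error. */
          if (buflen <= 0 || offset >= (size_t)buflen)
              return (0);
          n = buflen - offset;  /* cannot underflow after the checks above */
          return (uiomove((char *)buf + offset, n, uio));
      }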
* Use __FBSDID(). (obrien, 2003-06-11, 1 file, -1/+3)
* Add vm object locking to vm_pgmoveco(). (alc, 2003-06-09, 1 file, -2/+5)
  Also add a comment to vm_pgmoveco() describing what remains to be done for vm locking.
* Tweak the clearing of TDF_DEADLKTREAT so that we only bother grabbing the lock and clearing the flag if it was clear when uiomove() was called. (jhb, 2003-05-05, 1 file, -2/+2)
* Remove extraneous check. We are not going to return from copyin/out on the stack of thread A but actually be thread B instead of thread A. (jhb, 2003-03-25, 1 file, -2/+0)
* Zero-copy send and receive fixes: (ken, 2003-03-08, 1 file, -1/+1)
  - On receive, vm_map_lookup() needs to trigger the creation of a shadow object. To make that happen, call vm_map_lookup() with PROT_WRITE instead of PROT_READ in vm_pgmoveco().
  - On send, a shadow object will be created by the vm_map_lookup() in vm_fault(), but vm_page_cowfault() will delete the original page from the backing object rather than simply letting the legacy COW mechanism take over. In other words, the new page should be added to the shadow object rather than replacing the old page in the backing object (i.e., vm_page_cowfault() should not be called in this case). We accomplish this by making sure fs.object == fs.first_object before calling vm_page_cowfault() in vm_fault().
  Submitted by: gallatin, alc
  Tested by: ken
* Remove ENABLE_VFS_IOOPT. It is a long-unfinished work in progress. (alc, 2003-03-06, 1 file, -106/+2)
  Discussed on: arch@
* Convert one of our main caddr_t consumers, uiomove(9), to void *. (des, 2003-03-02, 1 file, -5/+5)
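  For illustration, the prototype change this implies:

      /* Before: callers had to cast through the BSD-ism caddr_t. */
      int uiomove(caddr_t cp, int n, struct uio *uio);
      /* After: any object pointer converts implicitly. */
      int uiomove(void *cp, int n, struct uio *uio);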
* Clean up whitespace, unregisterize, ANSIfy, and remove prototypes made superfluous by ANSIfication. (des, 2003-03-02, 1 file, -55/+19)
* Back out M_* changes, per decision of the TRB. (imp, 2003-02-19, 1 file, -2/+2)
  Approved by: trb
* Remove M_TRYWAIT/M_WAITOK/M_WAIT; callers should use 0. (alfred, 2003-01-21, 1 file, -2/+2)
  Merge M_NOWAIT/M_DONTWAIT into a single flag, M_NOWAIT.
* Reduce the number of times that we acquire and release the page queues lock by making vm_page_rename()'s caller, rather than vm_page_rename() itself, responsible for acquiring it. (alc, 2002-12-29, 1 file, -2/+0)
* Extend the scope of the page queues lock in vm_pgmoveco(). (alc, 2002-12-20, 1 file, -4/+4)
* Hold the page queues lock when performing vm_page_busy(). (alc, 2002-12-18, 1 file, -0/+2)
* Use pmap_remove_all() instead of pmap_remove() before freeing the page in vm_pgmoveco(); the page may have more than one mapping. (alc, 2002-11-28, 1 file, -5/+4)
  Hold the page queues lock when calling pmap_remove_all().
  Approved by: re (blanket)
* Create a new scheduler API that is defined in sys/sched.h. (jeff, 2002-10-12, 1 file, -1/+2)
  - Begin moving scheduler-specific functionality into sched_4bsd.c.
  - Replace direct manipulation of scheduler data with hooks provided by the new API.
  - Remove KSE-specific state modifications and single-runq assumptions from kern_switch.c.
  Reviewed by: -arch
* Change iov_base's type from `char *' to the standard `void *'. (mike, 2002-10-11, 1 file, -5/+8)
  All uses of iov_base which assume its type is `char *' (in order to do pointer arithmetic) have been updated to cast iov_base to `char *'.
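  For illustration, the struct as POSIX defines it and the cast now required at use sites, since pointer arithmetic on void * is not standard C:

      struct iovec {
          void   *iov_base;  /* was char *; void * per POSIX */
          size_t  iov_len;
      };

      /* Advancing an iovec now needs an explicit cast: */
      iov->iov_base = (char *)iov->iov_base + cnt;
      iov->iov_len -= cnt;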
* Convert a vm_page_sleep_busy() into a vm_page_sleep_if_busy() with appropriate page queue locking. (alc, 2002-08-04, 1 file, -1/+3)
* Lock page queue accesses by vm_page_free(). (alc, 2002-07-21, 1 file, -0/+2)
* Fix compilation with ENABLE_VFS_IOOPT turned on and ZERO_COPY_SOCKETS turned off. (ken, 2002-07-12, 1 file, -16/+11)
  Clean up #ifdefs and remove a bunch of unnecessary includes.
  Reviewed by: bde
  Tested by: netchild