path: root/sys/kern

Commit log (most recent first). Each entry ends with its attribution as
[author, date; files changed, -lines deleted/+lines added].
* - Remove the ksq_loads[] array.  We are only interested in three counts:
    the total load, the timeshare load, and the number of threads that can
    be migrated to another cpu.  Account for these separately.
  - Introduce a KSE_CAN_MIGRATE() macro which determines whether or not a
    KSE can be migrated to another CPU (sketched after this entry).
    Currently, this only checks to see if we're an interrupt handler.
    Eventually this will also be used to support CPU binding.
  [jeff, 2003-11-02; 1 file, -33/+50]
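  A minimal sketch of what such a macro could look like, assuming the
  interrupt-handler test is a priority-class check (the field and class
  names here are assumptions for illustration, not the committed code):

      /*
       * Hypothetical sketch: a KSE may be migrated to another CPU
       * unless it belongs to an interrupt thread.
       */
      #define KSE_CAN_MIGRATE(ke) \
          ((ke)->ke_ksegrp->kg_pri_class != PRI_ITHD)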
* Take care not to call vput() if the thread used in the corresponding
  vget() wasn't curthread, i.e. when we receive a thread pointer to use
  as a function argument.  Use VOP_UNLOCK()/vrele() in these cases.

  The only case known at the moment where td != curthread is boot()
  calling sync() with the thread0 pointer.

  This fixes the panic on shutdown people have reported.
  [kan, 2003-11-02; 1 file, -1/+2]
* - In sched_prio(), only force us onto the current queue if our priority
    is being elevated (numerically smaller).
  [jeff, 2003-11-02; 1 file, -1/+2]
* - Rename SCHED_PRI_NTHRESH to SCHED_SLICE_NTHRESH since it is only used
    in slice assignment.  Add a comment describing what it does.
  - Remove a stale XXX comment; nice should not impact interactivity, as
    nice adjustments only affect non-interactive tasks in ULE.
  - Don't allow nice -20 tasks to totally starve nice 0 tasks.  Give them
    at least SCHED_SLICE_MIN ticks.  We still allow nice 0 tasks to starve
    nice +20 tasks, as intended.
  [jeff, 2003-11-02; 1 file, -10/+11]
* - Remove uses of PRIO_TOTAL and replace them with SCHED_PRI_NRESV.
  - SCHED_PRI_NRESV does not have the off-by-one error in PRIO_TOTAL, so
    we do not have to account for it in the few places that we use it.

  Requested by: bde
  [jeff, 2003-11-02; 1 file, -5/+5]
* - Change sched_interact_update() to only accept slp+runtime values
    between 0 and SCHED_SLP_RUN_MAX * 2 (a sketch follows this entry).
    This allows us to simplify the algorithm quite a bit.  Before, it
    dealt with arbitrary values, which required nasty integer-division
    tricks that didn't quite work out correctly.
  - Change sched_wakeup() to detect conditions where the slp+runtime
    could exceed SCHED_SLP_RUN_MAX * 2.  This can happen if we go to
    sleep for longer than 6 seconds.  In this case, we'll just clear the
    runtime and set the sleep time to the max.
  - Define a new function, sched_interact_fork(), which updates the
    slp+runtime of a newly forked thread.  We want to limit the amount of
    history retained from the parent so that we learn the child's
    behavior quickly.  We don't, however, want to decay it to nothing.
    Previously, we would simply divide each parameter by 100 whenever we
    forked.  After a few forks the values would reach 0 and tasks would
    not be considered interactive.
  - Add another KTR entry, clean up some existing entries.
  - Remove a useless sched_interact_update() call from sched_priority().
    This is already done by the callers that require it.
  [jeff, 2003-11-02; 1 file, -27/+56]
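  A self-contained sketch of the clamping idea from the first item above.
  The struct, field names, and the scale of SCHED_SLP_RUN_MAX are
  stand-ins for the example, not the committed code:

      #define SCHED_SLP_RUN_MAX (6 * 1024)    /* assumed scale */

      struct ksegrp_sketch {                  /* stand-in for struct ksegrp */
              int kg_slptime;                 /* accumulated sleep time */
              int kg_runtime;                 /* accumulated run time */
      };

      /* Keep slp + run within [0, SCHED_SLP_RUN_MAX * 2] by decaying both,
       * so no arbitrary-magnitude division tricks are ever needed. */
      static void
      sched_interact_update_sketch(struct ksegrp_sketch *kg)
      {
              while (kg->kg_slptime + kg->kg_runtime > SCHED_SLP_RUN_MAX * 2) {
                      kg->kg_slptime /= 2;
                      kg->kg_runtime /= 2;
              }
      }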
* Temporarily undo parts of the struct mount locking commit by jeff.
  It is unsafe to hold a mutex across vput()/vrele() calls.  This will be
  redone when a better locking strategy is agreed upon.

  Discussed with: jeff
  [kan, 2003-11-01; 1 file, -5/+1]
* - Add static to local functions and data where it was missing.
  - Add an IPI-based mechanism for migrating kses.  This mechanism is
    broken down into several components and is intended to reduce cache
    thrashing by eliminating most cases where one cpu touches another's
    run queues.
  - kseq_notify() appends a kse to a lockless singly linked list and
    conditionally sends an IPI to the target processor (a sketch follows
    this entry).  Right now this is protected by sched_lock, but at some
    point I'd like to get rid of the global lock.  This is why I used
    something more complicated than a standard queue.
  - kseq_assign() processes our list of kses that have been assigned to
    us by other processors.  This simply calls sched_add() for each item
    on the list after clearing the new KEF_ASSIGNED flag.  This flag is
    used to indicate that we have been appended to the assigned queue but
    not added to the run queue yet.
  - In sched_add(), instead of adding a KSE to another processor's queue
    we use kseq_notify() so that we don't touch their queue.  Also in
    sched_add(), if KEF_ASSIGNED is already set, return immediately.
    This can happen if a thread is removed and readded so that the
    priority is recorded properly.
  - In sched_rem(), return immediately if KEF_ASSIGNED is set.  All
    callers immediately readd simply to adjust priorities etc.
  - In sched_choose(), if we're running an IDLE task or the per-cpu idle
    thread, set our cpumask bit in 'kseq_idle' so that other processors
    may know that we are idle.  Before this, make a single pass through
    the run queues of other processors so that we may find work more
    immediately if it is available.
  - In sched_runnable(), don't scan each processor's run queue; they will
    IPI us if they have work for us to do.
  - In sched_add(), if we're adding a thread that can be migrated and we
    have plenty of work to do, try to migrate the thread to an idle kseq.
  - Simplify the logic in sched_prio() and take the KEF_ASSIGNED flag
    into consideration.
  - No longer use kseq_choose() to steal threads; it can lose its last
    argument.
  - Create a new function runq_steal() which operates like runq_choose()
    but skips threads based on some criteria.  Currently it will not
    steal PRI_ITHD threads.  In the future this will be used for CPU
    binding.
  - Create a kseq_steal() that checks each run queue with runq_steal();
    use kseq_steal() in the places where we used kseq_choose() to steal
    with before.
  [jeff, 2003-10-31; 1 file, -78/+222]
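  A sketch of the kseq_notify() hand-off described above, written with
  C11 atomics for illustration (the kernel uses its own atomic
  primitives, and every name with a _sketch suffix is an assumption):

      #include <stdatomic.h>
      #include <stddef.h>

      #define KEF_ASSIGNED 0x0001

      struct kse_sketch {
              struct kse_sketch *ke_assign;   /* next link in assigned list */
              int ke_flags;                   /* KEF_ASSIGNED lives here */
      };

      /* Per-CPU head of the lockless singly linked "assigned" list. */
      static _Atomic(struct kse_sketch *) kseq_assigned_sketch;

      static void
      kseq_notify_sketch(struct kse_sketch *ke)
      {
              struct kse_sketch *head;

              ke->ke_flags |= KEF_ASSIGNED;
              head = atomic_load(&kseq_assigned_sketch);
              do {
                      ke->ke_assign = head;   /* push onto the list head */
              } while (!atomic_compare_exchange_weak(&kseq_assigned_sketch,
                  &head, ke));
              /* ...then conditionally IPI the target CPU so it runs
               * its kseq_assign() pass. */
      }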
* Ensure that mp_ncpus is set to 1 if mp_cpu_probe() fails.
  [jhb, 2003-10-30; 1 file, -1/+3]
* Relock mntvnode_mtx if vget() fails in vfs_stdsync().  The loop should
  always be entered with the mutex locked.
  [kan, 2003-10-30; 1 file, -0/+1]
* Try to fetch the thread mailbox address in the page fault trap, so that
  when a thread blocks in the page fault handler, an upcall thread can be
  scheduled.  This is useful if a process is doing lots of mmap-based
  I/O.
  [davidxu, 2003-10-30; 1 file, -1/+2]
* Add a temporary mechanism to disable INTR_MPSAFE from network interface
  drivers.  This is preparatory to running more parts of the network
  system without Giant.
  [sam, 2003-10-29; 1 file, -0/+13]
* Removed mostly-dead code for setting switchtime after the idle loop
  clobbers this variable.

  Long ago, when the idle loop wasn't in a process, it set
  switchtime.tv_sec to zero to indicate that the time needs to be read
  after the idle loop finishes.  The special case for this isn't needed
  now that there is an idle process (for each CPU).  The time is read in
  the normal way when the idle process is switched away from.  The
  seconds component of the time is only zero for the first second after
  the uptime is set, and the mostly-dead code was only executed during
  this time.  (This was slightly broken by using uptimes instead of times
  relative to the Epoch -- in the original version the seconds component
  of the time was only 0 for the first second after the Epoch.)

  In mi_switch(), moved the setting of switchticks to just after the
  first (and now only) setting of switchtime.  This setting used to be
  delayed since a late setting was needed for the idle case and an early
  setting was not needed.  Now the early setting is needed so that
  fork_exit() doesn't need to set either switchtime or switchticks.
  Removed the now-completely-rotted comment attached to this.  Most of
  the code described by the comment had already moved to sched_switch().
  [bde, 2003-10-29; 2 files, -11/+2]
* Removed the sched_nest variable in sched_switch().  Context switches
  always begin with sched_lock held but not recursed, so this variable
  was always 0.

  Removed the fixup of sched_lock.mtx_recurse after context switches in
  sched_switch().  Context switches always end with this variable in the
  same state that it began in, so there is no need to fix it up.  Only
  sched_lock.mtx_lock really needs a fixup.

  Replaced the fixup of sched_lock.mtx_recurse in fork_exit() by an
  assertion that sched_lock is owned and not recursed after it is fixed
  up.  This assertion must match the one in mi_switch(), and if
  sched_lock were recursed then a non-null fixup of
  sched_lock.mtx_recurse would probably be needed again, unlike in
  sched_switch(), since fork_exit() doesn't return to its caller in the
  normal way.
  [bde, 2003-10-29; 3 files, -7/+1]
* Introduce the notion of "persistent mbuf tags"; these are tags that
  stay with an mbuf until it is reclaimed.  This is in contrast to tags
  that vanish when an mbuf chain passes through an interface.  Persistent
  tags are used, for example, by MAC labels.

  Add an m_tag_delete_nonpersistent() function to strip non-persistent
  tags from mbufs, and use it to strip such tags from packets as they
  pass through the loopback interface and when turned around by icmp.
  This fixes problems with "tag leakage".

  Pointed out by: Jonathan Stone
  Reviewed by: Robert Watson
  [sam, 2003-10-29; 1 file, -0/+17]
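  A sketch of what such a strip routine amounts to, assuming a
  persistence flag on each tag; the types and flag here are stand-ins for
  the real mbuf tag structures, not the committed API:

      #include <sys/queue.h>
      #include <stdlib.h>

      #define MTAG_PERSISTENT_SKETCH 0x0001

      struct tag_sketch {
              SLIST_ENTRY(tag_sketch) link;
              int flags;
      };
      SLIST_HEAD(tag_head_sketch, tag_sketch);

      /* Delete every tag on the list that is not marked persistent. */
      static void
      delete_nonpersistent_sketch(struct tag_head_sketch *tags)
      {
              struct tag_sketch *t, *tmp;

              SLIST_FOREACH_SAFE(t, tags, link, tmp) {
                      if ((t->flags & MTAG_PERSISTENT_SKETCH) == 0) {
                              SLIST_REMOVE(tags, t, tag_sketch, link);
                              free(t);  /* kernel code would free the tag
                                         * through the mbuf tag allocator */
                      }
              }
      }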
* Speed up stream socket recv handling by tracking the tail of the mbuf
  chain instead of walking the list for each append.

  Submitted by: ps/jayanth
  Obtained from: netbsd (jason thorpe)
  [sam, 2003-10-28; 3 files, -41/+338]
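  The idea in miniature: cache a tail pointer so appending to the chain
  is O(1) rather than a full list walk.  The struct names below are
  stand-ins for the sockbuf/mbuf fields involved:

      #include <stddef.h>

      struct mbuf_sketch {
              struct mbuf_sketch *m_next;
      };

      struct sockbuf_sketch {
              struct mbuf_sketch *sb_mb;      /* head of the chain */
              struct mbuf_sketch *sb_mbtail;  /* cached tail of the chain */
      };

      static void
      sbappend_sketch(struct sockbuf_sketch *sb, struct mbuf_sketch *m)
      {
              if (sb->sb_mb == NULL)
                      sb->sb_mb = m;              /* first mbuf in buffer */
              else
                      sb->sb_mbtail->m_next = m;  /* O(1): no walk from head */
              while (m->m_next != NULL)           /* advance to the new tail */
                      m = m->m_next;
              sb->sb_mbtail = m;
      }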
* - Only change the run queue in sched_prio() if the kse is non-null;
    threads can be in the TD_ON_RUNQ state and not have an associated
    kse.
  - Remove the PRI_IDLE special case from sched_clock(); it was not
    actually necessary.
  [jeff, 2003-10-28; 1 file, -10/+2]
* Don't set td_priority directly here; use sched_prio().
  [jeff, 2003-10-27; 1 file, -1/+1]
* - Use a better algorithm in sched_pctcpu_update().
    Contributed by: Thomaswuerfl@gmx.de
  - In sched_prio(), adjust the run queue for threads which may need to
    move to the current queue due to priority propagation.
  - In sched_switch(), fix a style bug introduced when the KSE support
    went in.  Columns are 80 chars wide, not 90.
  - In sched_switch(), fix the comparison in the idle case and explicitly
    re-initialize the runq in the not-propagated case.
  - Remove dead code in sched_clock().
  - In sched_clock(), if we're an IDLE class td, set NEEDRESCHED so that
    threads that have become runnable will get a chance to.
  - In sched_runnable(), if we're not the IDLETD, we should not consider
    curthread when examining the load.  This mimics the 4BSD behavior of
    returning 0 when the only runnable thread is running.
  - In sched_userret(), remove the code for setting NEEDRESCHED entirely.
    This is not necessary and is not implemented in 4BSD.
  - Use the correct comparison in sched_add() when checking to see if an
    idle prio task has had its priority temporarily elevated.
  [jeff, 2003-10-27; 1 file, -56/+50]
* Constify the second args to timevaladd() and timevalsub().
  [alfred, 2003-10-26; 1 file, -2/+2]
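  The resulting prototypes, assuming the conventional two-argument form
  with the destination first:

      void timevaladd(struct timeval *t1, const struct timeval *t2);
      void timevalsub(struct timeval *t1, const struct timeval *t2);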
* Check (locked) before performing an advisory unlock following a failure
  of vn_start_write().  Otherwise, we may inconsistently attempt to
  release the advisory lock.

  Pointed out by: teggej
  [rwatson, 2003-10-25; 1 file, -1/+2]
* When generating a core dump, use advisory locking in an advisory way:
  if we do acquire an advisory lock, great!  We'll release it later.
  However, if we fail to acquire a lock, we perform the coredump anyway
  (see the sketch below).

  This problem became particularly visible with NFS after the
  introduction of rpc.lockd: if the lock manager isn't running, then
  locking calls will fail, aborting the core dump (resulting in a
  zero-byte dump file).

  Reported by: Yogeshwar Shenoy <ynshenoy@alumni.cs.ucsb.edu>
  [rwatson, 2003-10-25; 1 file, -6/+6]
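  The pattern, reduced to a sketch with hypothetical helper names (in the
  kernel these are vnode advisory-lock operations): remember whether the
  lock attempt succeeded, dump either way, and unlock only what was
  actually locked -- which is also the (locked) check the previous entry
  adds:

      #include <stdbool.h>

      bool advisory_lock_try(void);  /* hypothetical: true if lock obtained */
      void advisory_unlock(void);    /* hypothetical */
      void write_core_file(void);    /* hypothetical */

      static void
      coredump_sketch(void)
      {
              bool locked;

              locked = advisory_lock_try();
              write_core_file();     /* proceed even if locking failed */
              if (locked)            /* release only what we acquired */
                      advisory_unlock();
      }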
* Allow MAC policies to block/revoke kern_alq write access to a file.

  Obtained from: TrustedBSD Project
  Sponsored by: DARPA, Network Associates Laboratories
  Reviewed by: jeff
  [rwatson, 2003-10-25; 1 file, -2/+10]
* Add convenience functions to generate notifications from the kernel.
  The ACPI code will start using these shortly.

  Reviewed by: njl
  [imp, 2003-10-24; 1 file, -19/+61]
* Don't allow reading from files that haven't been opened for reading.
  [jmg, 2003-10-24; 1 file, -2/+3]
* - Add a DDB command 'show intrcnt' to show the non-zero interrupt
    counts (see the sketch after this entry).
  - Add a DDB function to dump the contents of an ithread and optionally
    details about each handler in that ithread.  This function can be
    used by MD code to implement DDB commands that display information
    about interrupt sources and their registered handlers.
  [jhb, 2003-10-24; 1 file, -0/+165]
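  A sketch of how such a command is declared with the kernel's
  DB_SHOW_COMMAND() macro; the counter tables below are hypothetical
  stand-ins (the real intrcnt/intrnames layout differs):

      #include <ddb/ddb.h>

      extern unsigned long intrcnt_sketch[];  /* hypothetical counters */
      extern const char *intrnames_sketch[];  /* hypothetical names */
      extern int nintrcnt_sketch;

      DB_SHOW_COMMAND(intrcnt, db_show_intrcnt)
      {
              int i;

              /* Print only the interrupt sources that have fired. */
              for (i = 0; i < nintrcnt_sketch; i++)
                      if (intrcnt_sketch[i] != 0)
                              db_printf("%s\t%lu\n", intrnames_sketch[i],
                                  intrcnt_sketch[i]);
      }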
* Writes to p_flag in __setugid() no longer need Giant.
  [jhb, 2003-10-23; 1 file, -4/+0]
* Move the P_COWINPROGRESS flag from being a per-process p_flag to being
  a per-thread td_pflag, which doesn't require any locks to read or write
  as it is only read or written by curthread on itself.

  Glanced at by: mckusick
  [jhb, 2003-10-23; 1 file, -1/+1]
* Add appropriate const poisoning to the assert_*locked() family so that
  I can call ASSERT_VOP_LOCKED(vp, __func__) without a diagnostic.

  Inspired by: the evil and rude OpenAFS cache manager code
  [wollman, 2003-10-23; 1 file, -8/+8]
* Finish break-out of kern_mac.c into parts:

  Include src/sys/security/mac/mac_internal.h in kern_mac.c.

  Remove redundant defines from the include: SYSCTL_DECL(), debug macros,
  composition macros.

  Unstaticize various bits now exposed to the remainder of the kernel:
  mac_init_label(), mac_destroy_label().

  Remove all the functions now implemented in
  mac_process/mac_vfs/mac_net/mac_pipe.  Also remove debug counters,
  sysctls exporting debug counters, enforcement flags, and sysctls
  exporting enforcement flags.

  Leave the module declaration, sysctl nodes, mactemp malloc type, and
  system calls.

  This should conclude MAC/LINT/NOTES breakage from the break-out
  process, but I'm running builds now to make sure I caught everything.

  Obtained from: TrustedBSD Project
  Sponsored by: DARPA, Network Associates Laboratories
  [rwatson, 2003-10-22; 1 file, -2756/+5]
* Variable cleanup following break-out of kern_mac.c into
  sys/security/mac:

  Unstaticize mac_late.

  Remove ea_warn_once, now in mac_vfs.c.

  Unstaticize mac_policy_list and mac_static_policy_list, and use struct
  mac_policy_list_head instead of LIST_HEAD() directly.

  Unstaticize and un-inline MAC policy locking functions so they can be
  referenced from mac_*.c.

  Obtained from: TrustedBSD Project
  Sponsored by: DARPA, Network Associates Laboratories
  [rwatson, 2003-10-22; 1 file, -16/+10]
* Rename error_select() to mac_error_select(), and unstaticize it so it
  can be used from src/sys/security/mac/mac_*.c.

  Obtained from: TrustedBSD Project
  Sponsored by: DARPA, Network Associates Laboratories
  [rwatson, 2003-10-22; 1 file, -5/+4]
* Change all SYSCTLs which are read-only and have a related TUNABLE from
  CTLFLAG_RD to CTLFLAG_RDTUN so that sysctl(8) can provide more useful
  error messages.
  [silby, 2003-10-21; 12 files, -24/+25]
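  In effect, declarations of this shape changed their flag; the variable
  and OID below are made up for the example.  CTLFLAG_RDTUN marks the
  value as a boot-time tunable, so sysctl(8) can point users at
  loader.conf instead of printing a generic read-only error:

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/sysctl.h>

      static int example_limit = 128;         /* hypothetical knob */
      TUNABLE_INT("kern.example_limit", &example_limit);
      SYSCTL_INT(_kern, OID_AUTO, example_limit,
          CTLFLAG_RDTUN,                      /* was CTLFLAG_RD */
          &example_limit, 0, "Hypothetical read-only tunable");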
* We need to initialize bp->b_offset and bp->b_iooffset because
  bp->b_blkno is ignored now.
  [simokawa, 2003-10-21; 1 file, -0/+2]
* Don peril-sensitive sunglasses and mark pipe(2) as MPSAFE.  I've beaten
  up on it for the last 15 hours with no signs of problems.  It gives a
  small (1%) gain on buildworld, since pipe_read/pipe_write are already
  free of Giant.
  [scottl, 2003-10-21; 3 files, -4/+4]
* Remove KASSERTs on B_PHYS for vmapbuf() and vunmapbuf(); B_PHYS is
  going away.
  [phk, 2003-10-21; 1 file, -3/+0]
* Remove md_bspstore from the MD fields of struct thread.  Now that the
  backing store is at a fixed address, there's no need for a per-thread
  variable.
  [marcel, 2003-10-21; 1 file, -1/+0]
* Revert the default for idle polling to zero until we can resolve the
  livelock problem.
  [sam, 2003-10-20; 1 file, -1/+1]
* If a thread is not bound to a kse, return 0 from sched_pctcpu().

  Reported by: pawel.worach@nordea.com
  [jeff, 2003-10-20; 1 file, -0/+2]
* Initialize the buf's b_object in pbgetvp().  Clear it in pbrelvp().
  (This facilitates synchronization of the vm page's valid field using
  the vm object's lock.)

  Suggested by: tegge
  [alc, 2003-10-20; 2 files, -1/+2]
* Mark dup as MPSAFE.  Giant was pushed into dup ages ago, but it looks
  like it was missed in syscalls.master.

  Spotted by: alc
  [dwmalone, 2003-10-20; 3 files, -4/+4]
* Synchronize access to a vm page's valid field using the containing vm
  object's lock.
  [alc, 2003-10-20; 1 file, -4/+10]
* Put the RSE backing store at a fixed address.  This change is triggered
  by libguile, which needs to know the base of the RSE backing store.  We
  currently do not export the fixed address to userland by means of a
  sysctl, so user code needs to hardcode it for now.  This will be
  revisited later.

  The RSE backing store is now at the bottom of region 4.  The memory
  stack is at the top of region 4.  This means that the whole region is
  usable for the stacks, giving a 61-bit stack space.

  Port: lang/guile (depended on by x11/gnome2)
  [marcel, 2003-10-20; 1 file, -1/+1]
* falloc() allocates a file structure and adds it to the file descriptor
  table, acquiring the necessary locks as it works.  It usually returns
  two references to the new descriptor: one in the descriptor table and
  one via a pointer argument.

  As falloc() releases the FILEDESC lock before returning, there is a
  potential for a process to close the reference in the file descriptor
  table before falloc()'s caller gets to use the file.  I don't think
  this can happen in practice at the moment, because Giant indirectly
  protects closes.

  To stop the file being completely closed in this situation, this change
  makes falloc() set the refcount to two when both references are
  returned (see the sketch below).  This makes life easier for several of
  falloc()'s callers, because the first thing they previously did was
  grab an extra reference on the file.

  Reviewed by: iedowse
  Idea run past: jhb
  [dwmalone, 2003-10-19; 6 files, -27/+34]
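  The crux of the change as a sketch (names are stand-ins): hand the file
  out with a count of two, one reference per holder, so an early close()
  of the table slot cannot free the object while the caller still holds
  its pointer.

      struct file_sketch {
              int f_count;            /* reference count */
      };

      static void
      falloc_sketch(struct file_sketch *fp)
      {
              /*
               * One reference for the descriptor-table slot and one for
               * the pointer returned to the caller, so a concurrent
               * close() of the slot leaves the caller's reference valid.
               */
              fp->f_count = 2;
              /* ...install fp in the table, return it to the caller... */
      }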
* Add vm object locking to vfs_clean_pages() and
  vfs_bio_set_validclean().  This is to synchronize access to the vm
  page's valid field by vm_page_set_validclean().
  [alc, 2003-10-19; 1 file, -2/+4]
* Tidy up loose ends in the idle process.  Call the MI cpu_idle()
  function for all platforms now.  XXX alpha/sparc64/powerpc should fill
  in the function.

  Submitted by: bde
  [peter, 2003-10-19; 1 file, -37/+5]
* Initialize b_iooffset before calling VOP_[SPEC]STRATEGY.
  [phk, 2003-10-18; 1 file, -0/+3]
* Initialize b_iooffset before calling strategy.
  [phk, 2003-10-18; 1 file, -0/+1]
* Don't report b_pblkno; it is going away.
  [phk, 2003-10-18; 1 file, -2/+2]
* Report bio_pblkno instead of bio_blkno.
  [phk, 2003-10-18; 1 file, -3/+3]