path: root/sys/vm/vm_meter.c
Commit history, most recent first. Each entry shows the commit message, then (author, date, files changed, -deleted/+added lines).
* Rename VM_OBJECT_LOCK(), VM_OBJECT_UNLOCK() and VM_OBJECT_TRYLOCK() to their "write" versions. (attilio, 2013-02-20, 1 file, -4/+4)
  Sponsored by: EMC / Isilon storage division
* Switch the vm_object lock to be a rwlock. (attilio, 2013-02-20, 1 file, -0/+1)
  * VM_OBJECT_LOCK and VM_OBJECT_UNLOCK are mapped to write operations.
  * VM_OBJECT_SLEEP() is introduced as a general-purpose primitive to get a
    sleep operation using a VM_OBJECT_LOCK() as protection.
  * The approach must bear with vm_pager.h namespace pollution, so many files
    require including rwlock.h directly.
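  A minimal sketch of what that macro mapping can look like, assuming the
  rwlock(9) API and a struct rwlock member named 'lock' inside struct
  vm_object (the real definitions live in sys/vm/vm_object.h and may differ):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/rwlock.h>

      /*
       * Hypothetical illustration: map the old mutex-style object lock macros
       * onto a struct rwlock, taking the lock in write mode by default, and
       * provide a sleep primitive that uses the object lock for protection.
       */
      #define VM_OBJECT_LOCK(object)    rw_wlock(&(object)->lock)
      #define VM_OBJECT_UNLOCK(object)  rw_wunlock(&(object)->lock)
      #define VM_OBJECT_TRYLOCK(object) rw_try_wlock(&(object)->lock)
      #define VM_OBJECT_SLEEP(object, wchan, pri, wmesg, timo) \
              rw_sleep((wchan), &(object)->lock, (pri), (wmesg), (timo))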
* Add a system-wide counter of page faults that require I/O. (zont, 2013-01-28, 1 file, -0/+1)
  Reviewed by: alc
  MFC after: 2 weeks
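  A hedged sketch of the two halves such a change implies, assuming a vmmeter
  field named v_io_faults and the VM_STATS_VM() helper used elsewhere in
  vm_meter.c (both names are assumptions here, not taken from this log):

      #include <sys/param.h>
      #include <sys/pcpu.h>
      #include <sys/vmmeter.h>

      /*
       * Hypothetical sketch:
       * 1) bump the counter where the fault path discovers it needs pager I/O,
       * 2) expose it through the per-field sysctl helper in vm_meter.c.
       */
      static void
      record_io_fault_example(void)
      {
              PCPU_INC(cnt.v_io_faults);  /* assumes a v_io_faults field */
      }

      /* In vm_meter.c, next to the other vm.stats.vm.* definitions: */
      VM_STATS_VM(v_io_faults, "Page faults requiring I/O");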
* In the past four years, we've added two new vm object types. (alc, 2012-12-09, 1 file, -1/+1)
  Each time, similar changes had to be made in various places throughout the
  machine-independent virtual memory layer to support the new vm object type.
  However, in most of these places, it's actually not the type of the vm object
  that matters to us but instead certain attributes of its pages. For example,
  OBJT_DEVICE, OBJT_MGTDEVICE, and OBJT_SG objects contain fictitious pages.
  In other words, in most of these places, we were testing the vm object's type
  to determine if it contained fictitious (or unmanaged) pages. To both
  simplify the code in these places and make the addition of future vm object
  types easier, this change introduces two new vm object flags that describe
  attributes of the vm object's pages, specifically, whether they are
  fictitious or unmanaged.
  Reviewed and tested by: kib
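  A hedged before/after sketch of the idea, assuming a flag name along the
  lines of OBJ_FICTITIOUS (the exact flag names are defined in vm_object.h):

      /* Before: every call site had to know the full list of object types. */
      if (object->type == OBJT_DEVICE || object->type == OBJT_MGTDEVICE ||
          object->type == OBJT_SG) {
              /* pages here are fictitious */
      }

      /* After: test a page-attribute flag on the object instead. */
      if (object->flags & OBJ_FICTITIOUS) {
              /* pages here are fictitious */
      }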
* The previous commit (r228449) accidentally moved the vm.stats.vm.* sysctls to vm.stats.sys. Move them back. (eadler, 2011-12-14, 1 file, -47/+50)
  Noticed by: pho
  Reviewed by: bde (earlier version)
  Approved by: bz
  MFC after: 1 week
  Pointy hat to: me
* Document a large number of currently undocumented sysctls. While here, fix some style(9) issues and reduce redundancy. (eadler, 2011-12-13, 1 file, -108/+63)
  PR: kern/155491
  PR: kern/155490
  PR: kern/155489
  Submitted by: Galimov Albert <wtfcrap@mail.ru>
  Approved by: bde
  Reviewed by: jhb
  MFC after: 1 week
* Fix some locking nits with the p_state field of struct proc: (jhb, 2011-03-24, 1 file, -3/+0)
  - Hold the proc lock while changing the state from PRS_NEW to PRS_NORMAL in
    fork to honor the locking requirements. While here, expand the scope of the
    PROC_LOCK() on the new process (p2) to avoid some LORs. Previously the code
    was locking the new child process (p2) after it had locked the parent
    process (p1). However, when locking two processes, the safe order is to
    lock the child first, then the parent.
  - Fix various places that were checking p_state against PRS_NEW without
    having the process locked to use PROC_LOCK(). Every place was already
    locking the process, just after the PRS_NEW check.
  - Remove or reduce the use of PROC_SLOCK() for places that were checking
    p_state against PRS_NEW. The PROC_LOCK() alone is sufficient for reading
    the current state.
  - Reorder fill_kinfo_proc() slightly so it only acquires PROC_SLOCK() once.
  MFC after: 1 week
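  As an illustration of the pattern, a simplified, hypothetical sketch of a
  consumer such as vmtotal() reading p_state under PROC_LOCK() only, with no
  PROC_SLOCK():

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/proc.h>
      #include <sys/sx.h>

      /*
       * Hedged sketch: walk the process list and read p_state under
       * PROC_LOCK() alone, as the commit describes.
       */
      static void
      count_normal_procs_example(void)
      {
              struct proc *p;

              sx_slock(&allproc_lock);
              FOREACH_PROC_IN_SYSTEM(p) {
                      PROC_LOCK(p);
                      if (p->p_state != PRS_NORMAL) { /* skips PRS_NEW, PRS_ZOMBIE */
                              PROC_UNLOCK(p);
                              continue;
                      }
                      /* ... account this process ... */
                      PROC_UNLOCK(p);
              }
              sx_sunlock(&allproc_lock);
      }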
* Use CPU_FOREACH rather than expecting CPUs 0 through mp_ncpus-1 to be present. (jmallett, 2011-02-12, 1 file, -5/+1)
  Don't micro-optimize the uniprocessor case; use the same loop there.
  Submitted by: Bhanu Prakash
  Reviewed by: kib, jhb
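  A small sketch of what CPU_FOREACH buys: it visits only CPUs that are
  actually present, so sparse CPU IDs are handled. The helper below is
  illustrative, not the code from this commit:

      #include <sys/param.h>
      #include <sys/pcpu.h>
      #include <sys/smp.h>

      /*
       * Hypothetical illustration: visit every CPU that is actually present.
       * CPU IDs may be sparse, so iterating 0..mp_ncpus-1 is not safe.
       */
      static void
      visit_all_cpus_example(void (*visit)(struct pcpu *))
      {
              int i;

              CPU_FOREACH(i)
                      visit(pcpu_find(i));
      }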
* Move the repeated MAXSLP definition from machine/vmparam.h to sys/vmmeter.h. (kib, 2011-01-09, 1 file, -2/+0)
  Update the outdated comments describing MAXSLP and the process selection
  algorithm for swap out.
  Comments wording and reviewed by: alc
* Add a new type of VM object: OBJT_SG. (jhb, 2009-07-24, 1 file, -1/+1)
  An OBJT_SG object is very similar to a device pager (OBJT_DEVICE) object in
  that it uses fictitious pages to provide aliases to other memory addresses.
  The primary difference is that it uses an sglist(9) to determine the physical
  addresses for a given offset into the object instead of invoking the d_mmap()
  method in a device driver.
  Reviewed by: alc
  Approved by: re (kensmith)
  MFC after: 2 weeks
* Mark all standalone INT/LONG/QUAD sysctls MPSAFE. (jhb, 2009-01-23, 1 file, -53/+53)
  This is done inside the SYSCTL() macros and thus does not need to be done for
  all of the nodes scattered across the source tree.
  - Mark the name-cache related sysctls (including debug.hashstat.*) MPSAFE.
  - Mark vm.loadavg MPSAFE.
  - Remove GIANT_REQUIRED from vmtotal() (everything in this routine already
    has sufficient locking) and mark vm.vmtotal MPSAFE.
  - Mark the vm.stats.(sys|vm).* sysctls MPSAFE.
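  As an illustration, a hypothetical proc-style sysctl marked MPSAFE so the
  framework does not take Giant around the handler (value sysctls get this
  implicitly from the SYSCTL_INT/LONG/QUAD macros). The node name and handler
  below are placeholders:

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/sysctl.h>
      #include <sys/vmmeter.h>

      /*
       * Hypothetical example: a proc-style sysctl explicitly marked MPSAFE,
       * so the sysctl framework does not acquire Giant around the handler.
       */
      static int
      example_vmtotal_handler(SYSCTL_HANDLER_ARGS)
      {
              struct vmtotal total = { 0 };

              /* ... fill in 'total' under the appropriate VM locks ... */
              return (sysctl_handle_opaque(oidp, &total, sizeof(total), req));
      }
      SYSCTL_PROC(_vm, OID_AUTO, vmtotal_example,
          CTLTYPE_OPAQUE | CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, 0,
          example_vmtotal_handler, "S,vmtotal",
          "System virtual memory statistics");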
* A bunch of formatting fixes brought to light by, or created by, the Vimage commit a few days ago. (julian, 2008-08-20, 1 file, -0/+1)
* Relax requirements for p_numthreads, p_threads, p_swtick, and p_nice from requiring the per-process spinlock to only requiring the process lock. (jeff, 2008-03-19, 1 file, -1/+4)
  - Reflect these changes in the proc.h documentation and consumers throughout
    the kernel.
  This is a substantial reduction in locking cost for these fields and was made
  possible by recent changes to threading support.
* Pass the priority argument from *sleep() into sleepq and down into sched_sleep(). (jeff, 2008-03-12, 1 file, -16/+6)
  This removes extra thread_lock() acquisition and allows the scheduler to
  decide what to do with the static boost.
  - Change the priority arguments to cv_* to match sleepq/msleep/etc., where 0
    means no priority change. Catch -1 in cv_broadcastpri() and convert it to 0
    for now.
  - Set a flag when sleeping in a way that is compatible with swapping since
    direct priority comparisons are meaningless now.
  - Add a sysctl to ule, kern.sched.static_boost, that defaults to on, which
    controls the boost behavior. Turning it off gives better performance in
    some workloads but needs more investigation.
  - While we're modifying sleepq, change signal and broadcast to both return
    with the lock held, as the lock was held on enter.
  Reviewed by: jhb, peter
* Add a counter for the total number of pages cached, and support for reporting the value of this counter in the program "vmstat". (alc, 2007-07-27, 1 file, -0/+2)
  Approved by: re (rwatson)
* Eliminate dead code. (alc, 2007-07-12, 1 file, -10/+0)
  Approved by: re (hrs)
* Commit 14/14 of the sched_lock decomposition. (jeff, 2007-06-05, 1 file, -3/+9)
  - Use thread_lock() rather than sched_lock for per-thread scheduling
    synchronization.
  - Use the per-process spinlock rather than the sched_lock for per-process
    scheduling synchronization.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
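  A hedged sketch of how this looks in a statistics routine: per-thread
  scheduling state is read under thread_lock() rather than the old global
  sched_lock (simplified, illustrative only):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/proc.h>

      /*
       * Hypothetical illustration: iterate a process's threads and read
       * scheduling state under the per-thread lock rather than sched_lock.
       */
      static int
      count_runnable_threads_example(struct proc *p)
      {
              struct thread *td;
              int nrunning = 0;

              PROC_LOCK(p);
              FOREACH_THREAD_IN_PROC(p, td) {
                      thread_lock(td);
                      if (TD_IS_RUNNING(td) || TD_ON_RUNQ(td))
                              nrunning++;
                      thread_unlock(td);
              }
              PROC_UNLOCK(p);
              return (nrunning);
      }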
* Revert the VMCNT_* operations introduction. (attilio, 2007-05-31, 1 file, -62/+60)
  Probably, a general approach is not the best solution here, so we should
  solve the sched_lock protection problems separately.
  Requested by: alc
  Approved by: jeff (mentor)
* Define and use VMCNT_{GET,SET,ADD,SUB,PTR} macros for manipulating vmcnts. (jeff, 2007-05-18, 1 file, -60/+62)
  This can be used to abstract away pcpu details, but it also changes all
  counters to use atomics now. This means sched_lock is no longer responsible
  for protecting counts in the switch routines.
  Contributed by: Attilio Rao <attilio@FreeBSD.org>
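  These macros were later reverted (see the entry above). A hypothetical
  sketch of what such an abstraction can look like over the global vmmeter
  'cnt', with the macro bodies and atomic(9) calls as assumptions:

      #include <sys/param.h>
      #include <sys/vmmeter.h>
      #include <machine/atomic.h>

      /*
       * Hypothetical sketch only: wrap access to the global vmmeter 'cnt' in
       * macros so the backing storage (global vs. per-CPU) and the update
       * discipline (atomics vs. sched_lock) can change without touching
       * callers.
       */
      #define VMCNT_GET(member)       (cnt.member)
      #define VMCNT_SET(member, val)  atomic_store_rel_int(&cnt.member, (val))
      #define VMCNT_ADD(member, val)  atomic_add_int(&cnt.member, (val))
      #define VMCNT_SUB(member, val)  atomic_subtract_int(&cnt.member, (val))
      #define VMCNT_PTR(member)       (&cnt.member)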
* Remove a redundant pointer-type variable. (ru, 2006-11-20, 1 file, -19/+18)
* When counting vm totals, skip unreferenced objects, including vnodes representing mounted file systems. (ru, 2006-11-20, 1 file, -0/+7)
  Reviewed by: alc
  MFC after: 3 days
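  A hedged sketch of the check inside the vmtotal() walk of vm_object_list
  (simplified from the real loop):

      #include <sys/param.h>
      #include <vm/vm.h>
      #include <vm/vm_object.h>

      /* Illustrative only: one object visited during the vmtotal() walk. */
      static void
      vmtotal_object_example(vm_object_t object)
      {
              /*
               * Objects with no references (for example, the VM object behind
               * a vnode that only represents a mounted file system) contribute
               * nothing to the totals, so skip them.
               */
              if (object->ref_count == 0)
                      return;
              /* ... otherwise account its resident and active pages ... */
      }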
* Retire debug.mpsafevm. None of the architectures supported in CVS require it any longer. (alc, 2006-07-21, 1 file, -8/+0)
* Set debug.mpsafevm to true on PowerPC. (alc, 2006-07-10, 1 file, -4/+0)
  (Now, by default, all architectures in CVS have debug.mpsafevm set to true.)
  Tested by: grehan@
* Enable debug.mpsafevm on arm by default. (alc, 2006-06-10, 1 file, -1/+1)
  Tested by: cognet@
* Close the race between vmspace_exitfree() and exit1(), and races between vmspace_exitfree() and vmspace_free(), which could result in the same vmspace being freed twice. (tegge, 2006-05-29, 1 file, -1/+6)
  Factor out part of exit1() into a new function, vmspace_exit(). Attach to
  vmspace0 to allow the old vmspace to be freed earlier.
  Add a new function, vmspace_acquire_ref(), for obtaining a vmspace reference
  for a vmspace belonging to another process. Avoid changing the vmspace
  refcount from 0 to 1 since that could also lead to the same vmspace being
  freed twice.
  Change vmtotal() and swapout_procs() to use vmspace_acquire_ref().
  Reviewed by: alc
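  A hedged sketch of the consumer-side pattern in a routine like vmtotal():
  take a reference on another process's vmspace before touching it, and drop
  it when done (simplified; the exact declarations live in the vm headers):

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <vm/vm.h>
      #include <vm/vm_extern.h>
      #include <vm/vm_map.h>

      /* Illustrative only: account a remote process's address space safely. */
      static void
      account_vmspace_example(struct proc *p)
      {
              struct vmspace *vm;

              /* Returns NULL if the vmspace is already being torn down. */
              vm = vmspace_acquire_ref(p);
              if (vm == NULL)
                      return;
              /* ... walk vm->vm_map entries and tally resident pages ... */
              vmspace_free(vm);       /* drop the reference we took */
      }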
* Enable debug_mpsafevm on ia64 due to the severe functional regression caused by recent locking changes when it's off. (marcel, 2005-05-08, 1 file, -1/+1)
  Revert the logic to trim down the conditional.
  Clued-in by: alc@
* Tidy vcnt() by moving a duplicated line above the #ifdef and removing a useless variable. (jhb, 2005-04-12, 1 file, -5/+2)
* Flip the switch and turn mpsafevm on by default for sparc64. (jhb, 2005-04-04, 1 file, -1/+1)
  Approved by: alc
* The sysctl node vm.stats cannot be static (for ia64 reasons). (phk, 2005-02-11, 1 file, -1/+1)
* Make three SYSCTL_NODEs static. (phk, 2005-02-10, 1 file, -3/+5)
* /* -> /*- for license, minor formatting changes. (imp, 2005-01-07, 1 file, -1/+1)
* Enable debug.mpsafevm by default on alpha. (alc, 2004-12-17, 1 file, -1/+1)
* Put on my peril-sensitive sunglasses and add a flags field to the internal sysctl routines and state. (peter, 2004-10-11, 1 file, -2/+18)
  Add some code to use it for signalling the need to downconvert a data
  structure to 32 bits on a 64-bit OS when requested by a 32-bit app.
  I tried to do this in a generic ABI wrapper that intercepted the sysctl OIDs,
  or looked up the format string, etc., but it was a real can of worms that
  turned into a fragile mess before I even got it partially working.
  With this, we can now run 'sysctl -a' on a 32-bit sysctl binary and have it
  not abort. Things like netstat, ps, etc. have a long way to go.
  This also fixes a bug in the kern.ps_strings and kern.usrstack hacks. These
  do matter very much because they are used by libc_r and other things.
* Enable debug.mpsafevm by default on amd64 and i386. (alc, 2004-09-04, 1 file, -0/+4)
  This enables copy-on-write and zero-fill faults to run without holding Giant.
  It is still possible to disable Giant-free operation by setting
  debug.mpsafevm to 0 in loader.conf.
* Introduce and use a new tunable, "debug.mpsafevm". (alc, 2004-08-16, 1 file, -0/+8)
  At present, setting "debug.mpsafevm" results in (almost) Giant-free execution
  of zero-fill page faults. (Giant is held only briefly, just long enough to
  determine if there is a vnode backing the faulting address.)
  Also, condition the acquisition and release of Giant around calls to
  pmap_remove() on "debug.mpsafevm".
  The effect on performance is significant. On my dual Opteron, I see a 3.6%
  reduction in "buildworld" time.
  - Use atomic operations to update several counters in vm_fault().
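  A hedged sketch of the tunable pattern and the conditional Giant acquisition
  it implies; the variable name and the call site below are illustrative, not
  the committed code:

      #include <sys/param.h>
      #include <sys/kernel.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/sysctl.h>
      #include <vm/vm.h>
      #include <vm/pmap.h>

      /* Tunable plus read-only sysctl: set debug.mpsafevm from loader.conf. */
      static int debug_mpsafevm = 1;
      TUNABLE_INT("debug.mpsafevm", &debug_mpsafevm);
      SYSCTL_INT(_debug, OID_AUTO, mpsafevm, CTLFLAG_RD, &debug_mpsafevm, 0,
          "Enable/disable Giant-free VM operation");

      /* Illustrative caller: take Giant around pmap_remove() only when off. */
      static void
      remove_mappings_example(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
      {
              if (!debug_mpsafevm)
                      mtx_lock(&Giant);
              pmap_remove(pmap, sva, eva);
              if (!debug_mpsafevm)
                      mtx_unlock(&Giant);
      }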
* Remove the advertising clause from the University of California Regents' license, per letter dated July 22, 1999. (imp, 2004-04-06, 1 file, -4/+0)
  Approved by: core
* Avoid lock-order reversal between the vm object list mutex and the vm object mutex. (alc, 2004-01-02, 1 file, -5/+15)
* Use __FBSDID(). (obrien, 2003-06-11, 1 file, -1/+3)
* Lock some manipulations of the vm object's flags. (alc, 2003-04-13, 1 file, -7/+7)
* Make 'sysctl vm.vmtotal' work properly using an updated patch from Hiten (the patch in the PR was stale). (dillon, 2003-01-11, 1 file, -1/+1)
  PR: kern/5689
  Submitted by: Hiten Pandya <hiten@unixdaemons.com>
* Add vm map and vm object locking to vmtotal(). (alc, 2003-01-03, 1 file, -5/+14)
* Lock the vm object when performing vm_object_clear_flag(). (alc, 2003-01-02, 1 file, -1/+4)
* Eliminate some dead code. (Any possible use for this code died with vm/vm_page.c revision 1.220.) (alc, 2002-12-23, 1 file, -4/+0)
  Submitted by: bde
* The UP -current was not properly counting the per-cpu VM stats in the sysctl code. This makes 'systat -vm 1's syscall count work again. (dillon, 2002-12-22, 1 file, -0/+3)
  Submitted by: Michal Mertl <mime@traveller.cz>
  Note: also slated for 5.0
* Rename the mutex thread and process states to use a more generic 'LOCK' name instead (e.g., SLOCK instead of SMTX, TD_ON_LOCK() instead of TD_ON_MUTEX()). (jhb, 2002-10-02, 1 file, -1/+1)
  Eventually a turnstile abstraction will be added that will be shared with
  mutexes and other types of locks. SLOCK/TDI_LOCK will be used internally by
  the turnstile code and will not be specific to mutexes. Making the change now
  ensures that turnstiles can be dropped in at a later date without affecting
  the ABI of userland applications.
* Completely redo thread states. (julian, 2002-09-11, 1 file, -23/+14)
  Reviewed by: davidxu@freebsd.org
* Part 1 of KSE-III. (julian, 2002-06-29, 1 file, -32/+39)
  The ability to schedule multiple threads per process (on one cpu) by making
  ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in tools).
  Reviewed by: almost everyone who counts (at various times: peter, jhb, matt,
  alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still beta code and contains lots of debugging stuff; expect
  slight instability in signals.
* Reintroduce locking on accesses to vm_object_list. (alc, 2002-04-20, 1 file, -0/+4)
* Embed a struct vmmeter in the per-cpu structure and add a macro, PCPU_LAZY_INC(), which increments elements in it for cases where we can afford the occasional inaccuracy. (dillon, 2002-04-04, 1 file, -96/+129)
  Use of per-cpu stats counters avoids significant cache stalls in various
  critical paths that would otherwise severely limit our cpu scalability.
  Adjust all sysctls accessing cnt.* elements to now use a procedure which
  aggregates the requested field for all cpus and for the global vmmeter.
  The global vmmeter is retained, since some stats counters, like v_free_min,
  cannot be made per-cpu. Also, this allows us to convert counters from the
  global vmmeter to the per-cpu vmmeter in a piecemeal fashion, so have at it!
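  A hedged sketch of an aggregating sysctl handler in that style: sum the
  requested vmmeter field across the global structure and every per-CPU copy.
  The pc_cnt member name and the offset-passing convention below are
  assumptions, not the committed interface:

      #include <sys/param.h>
      #include <sys/pcpu.h>
      #include <sys/smp.h>
      #include <sys/sysctl.h>
      #include <sys/vmmeter.h>

      /*
       * Hypothetical aggregation handler: arg1 carries the byte offset of a
       * u_int field within struct vmmeter; add the global value and each
       * CPU's value before copying the sum out to userland.
       */
      static int
      vcnt_example(SYSCTL_HANDLER_ARGS)
      {
              size_t offset = (size_t)arg1;
              struct pcpu *pcpu;
              u_int count;
              int i;

              count = *(u_int *)((char *)&cnt + offset);    /* global vmmeter */
              CPU_FOREACH(i) {
                      pcpu = pcpu_find(i);
                      count += *(u_int *)((char *)&pcpu->pc_cnt + offset);
              }
              return (SYSCTL_OUT(req, &count, sizeof(count)));
      }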
* In a threaded world, different priorities become properties of different entities. Make it so. (julian, 2002-02-11, 1 file, -1/+2)
  Reviewed by: jhb@freebsd.org (john baldwin)