path: root/sys/vm/vm_kern.c
Commit message  (Author, Date, Files, Lines changed)
* /* -> /*- for license, minor formatting changes  (imp, 2005-01-07, 1 file, -1/+1)
* Use VM_ALLOC_NOBUSY instead of calling vm_page_wakeup().  (alc, 2004-10-24, 1 file, -2/+1)
* Back out all behavioral changes.  (green, 2004-08-10, 1 file, -4/+0)
* Revamp VM map wiring.  (green, 2004-08-09, 1 file, -0/+4)
    * Allow no-fault wiring/unwiring to succeed for consistency; however, the
      wired count remains at zero, so it's a special case.
    * Fix issues inside vm_map_wire() and vm_map_unwire() where the exact
      state of user wiring (one or zero) and system wiring (zero or more)
      could be confused; for example, system unwiring could succeed in
      removing a user wire, instead of being an error.
    * Require all mappings to be unwired before they are deleted.  When VM
      space is still wired upon deletion, it will be waited upon for the
      following unwire.  This makes vslock(9) work rather than allowing
      kernel-locked memory to be deleted out from underneath of its consumer
      as it would before.
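Since the entry above justifies the unwire-before-delete rule by what it does for vslock(9), a minimal consumer sketch may help; the wrapper function and its pairing with copyout() are illustrative assumptions, not code from this commit.

    #include <sys/param.h>
    #include <sys/systm.h>

    /*
     * Hedged sketch: wire a user buffer, copy into it, then unwire it.
     * vslock(9)/vsunlock(9) are the real interfaces; the surrounding
     * function and its caller are hypothetical.
     */
    static int
    copy_to_wired_user_buf(void *uaddr, void *kbuf, size_t len)
    {
            int error;

            error = vslock(uaddr, len);     /* wires the pages */
            if (error != 0)
                    return (error);
            error = copyout(kbuf, uaddr, len);
            vsunlock(uaddr, len);           /* drops the wiring again */
            return (error);
    }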
* For years, kmem_alloc_pageable() has been misused.  Now that the last of  (alc, 2004-07-25, 1 file, -24/+0)
    these misuses has been corrected, remove it before new ones appear, such
    as arm/arm/pmap.c revision 1.8.
* Bring in mbuma to replace mballoc.  (bmilekic, 2004-05-31, 1 file, -10/+0)
    mbuma is an Mbuf & Cluster allocator built on top of a number of
    extensions to the UMA framework, all included herein.

    Extensions to UMA worth noting:
      - Better layering between slab <-> zone caches; introduce Keg structure
        which splits off slab cache away from the zone structure and allows
        multiple zones to be stacked on top of a single Keg (single type of
        slab cache); perhaps we should look into defining a subset API on top
        of the Keg for special use by malloc(9), for example.
      - UMA_ZONE_REFCNT zones can now be added, and reference counters
        automagically allocated for them within the end of the associated
        slab structures.  uma_find_refcnt() does a kextract to fetch the slab
        struct reference from the underlying page, and lookup the
        corresponding refcnt.

    mbuma things worth noting:
      - integrates mbuf & cluster allocations with extended UMA and provides
        caches for commonly-allocated items; defines several zones (two
        primary, one secondary) and two kegs.
      - change up certain code paths that always used to do: m_get() +
        m_clget() to instead just use m_getcl() and try to take advantage of
        the newly defined secondary Packet zone.
      - netstat(1) and systat(1) quickly hacked up to do basic stat reporting
        but additional stats work needs to be done once some other details
        within UMA have been taken care of and it becomes clearer how stats
        will work within the modified framework.

    From the user perspective, one implication is that the NMBCLUSTERS
    compile-time option is no longer used.  The maximum number of clusters is
    still capped off according to maxusers, but it can be made unlimited by
    setting the kern.ipc.nmbclusters boot-time tunable to zero.  Work should
    be done to write an appropriate sysctl handler allowing dynamic tuning of
    kern.ipc.nmbclusters at runtime.

    Additional things worth noting/known issues (READ):
      - One report of 'ips' (ServeRAID) driver acting really slow in
        conjunction with mbuma.  Need more data.  Latest report is that ips
        is equally sucking with and without mbuma.
      - Giant leak in NFS code sometimes occurs, can't reproduce but
        currently analyzing; brueffer is able to reproduce but THIS IS NOT an
        mbuma-specific problem and currently occurs even WITHOUT mbuma.
      - Issues in network locking: there is at least one code path in the rip
        code where one or more locks are acquired and we end up in
        m_prepend() with M_WAITOK, which causes WITNESS to whine from within
        UMA.  Current temporary solution: force all UMA allocations to be
        M_NOWAIT from within UMA for now to avoid deadlocks unless WITNESS is
        defined and we can determine with certainty that we're not holding
        any locks when we're M_WAITOK.
      - I've seen at least one weird socketbuffer empty-but-mbuf-still-attached
        panic.  I don't believe this to be related to mbuma but please keep
        your eyes open, turn on debugging, and capture crash dumps.

    This change removes more code than it adds.

    A paper detailing the change and considering various performance issues
    is available; it was presented at BSDCan2004:
    http://www.unixdaemons.com/~bmilekic/netbuf_bmilekic.pdf
    Please read the paper for Future Work and implementation details, as
    well as credits.

    Testing and Debugging:  rwatson, brueffer, Ketrien I. Saihr-Kesenchedra, ...
    Reviewed by:  Lots of people (for different parts)
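As a concrete illustration of the m_get() + m_clget() to m_getcl() conversion mentioned above, here is a hedged sketch; it is not code from this commit, and the wait-flag spelling (M_NOWAIT vs. the era's M_DONTWAIT) and the error handling are assumptions.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /* Old pattern: allocate the mbuf, then attach a cluster separately. */
    static struct mbuf *
    alloc_cluster_mbuf_old(void)
    {
            struct mbuf *m;

            m = m_get(M_NOWAIT, MT_DATA);
            if (m == NULL)
                    return (NULL);
            m_clget(m, M_NOWAIT);
            if ((m->m_flags & M_EXT) == 0) {        /* no cluster attached */
                    m_freem(m);
                    return (NULL);
            }
            return (m);
    }

    /* New pattern: one call, served from the secondary Packet zone. */
    static struct mbuf *
    alloc_cluster_mbuf_new(void)
    {
            return (m_getcl(M_NOWAIT, MT_DATA, 0));
    }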
* Push down the responsibility for zeroing a physical page from the  (alc, 2004-04-24, 1 file, -2/+0)
    caller to vm_page_grab().  Although this gives VM_ALLOC_ZERO a different
    meaning for vm_page_grab() than for vm_page_alloc(), I feel such a change
    is necessary to accomplish other goals.  Specifically, I want to make the
    PG_ZERO flag immutable between the time it is allocated by
    vm_page_alloc() and freed by vm_page_free() or vm_page_free_zero() to
    avoid locking overheads.  Once we gave up on the ability to automatically
    recognize a zeroed page upon entry to vm_page_free(), the ability to
    mutate the PG_ZERO flag became useless.  Instead, I would like to say
    that "Once a page becomes valid, its PG_ZERO flag must be ignored."
* Remove advertising clause from University of California Regent's license,  (imp, 2004-04-06, 1 file, -4/+0)
    per letter dated July 22, 1999.

    Approved by:  core
* Back out previous commit due to objections.  (des, 2004-02-16, 1 file, -2/+0)
* Don't panic if we fail to satisfy an M_WAITOK request; return 0 instead.  (des, 2004-02-16, 1 file, -0/+2)
    The calling code will either handle that gracefully or cause a page
    fault.
* Unmanage pages allocated by kmem_alloc().  (There is no point in having PV  (alc, 2004-01-10, 1 file, -0/+1)
    entries for these pages.)
* Don't bother clearing PG_ZERO in contigmalloc1(), kmem_alloc(), or  (alc, 2004-01-06, 1 file, -2/+0)
    kmem_malloc().  It serves no purpose.
* - Increase the scope of the kmem_object's lock in kmem_malloc().  Add a  (alc, 2004-01-01, 1 file, -2/+7)
    comment explaining why a further increase is not possible.
* Remove GIANT_REQUIRED from kmem_suballoc().  (alc, 2003-12-28, 1 file, -2/+0)
* NFC: Update stale comments.  (mini, 2003-11-10, 1 file, -3/+3)
    Reviewed by:  alc
* Synchronize access to a vm page's valid field using the containing  (alc, 2003-10-04, 1 file, -4/+4)
    vm object's lock.
* Call vm_page_unmanage() on pages belonging to the kmem_object.  This  (alc, 2003-09-14, 1 file, -0/+1)
    eliminates the unnecessary overhead of managing "PV" entries for these
    pages.
* Change clean_map from a global to an auto variable  (eivind, 2003-09-01, 1 file, -1/+0)
* Add the mlockall() and munlockall() system calls.  (bms, 2003-08-11, 1 file, -1/+2)
    - All those diffs to syscalls.master for each architecture *are*
      necessary.  This needed clarification; the stub code generation for
      mlockall() was disabled, which would prevent applications from linking
      to this API (suggested by mux).
    - Giant has been quashed.  It is no longer held by the code, as the
      required locking has been pushed down within vm_map.c.
    - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES to
      express their intention explicitly.
    - Inspected at the vmstat, top and vm pager sysctl stats level.
      Paging-in activity is occurring correctly, using a test harness.
    - The RES size for a process may appear to be greater than its SIZE.
      This is believed to be due to mappings of the same shared library page
      being wired twice.  Further exploration is needed.
    - Believed to back out of allocations and locks correctly (tested with
      WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).

    PR:  kern/43426, standards/54223
    Reviewed by:  jake, alc
    Approved by:  jake (mentor)
    MFC after:  2 weeks
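The userland-facing half of the entry above is the standard POSIX interface; a minimal usage sketch follows (the surrounding program is hypothetical).

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
            /* Wire every current and future mapping of this process. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
                    err(1, "mlockall");

            /* ... latency-sensitive work that must not page-fault ... */

            /* Drop the wiring again. */
            if (munlockall() == -1)
                    err(1, "munlockall");
            return (0);
    }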
* More pipe changes:  (silby, 2003-08-11, 1 file, -0/+1)
    From alc:
    Move pageable pipe memory to a separate kernel submap to avoid awkward
    vm map interlocking issues.  (Bad explanation provided by me.)

    From me:
    Rework pipespace accounting code to handle this new layout, and adjust
    our default values to account for the fact that we now have a solid
    limit on allocations.

    Also, remove the "maxpipes" limit, as it no longer has a purpose.  (The
    limit on kva usage solves the problem of having too many pipes.)
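A submap of the kind described above is carved out of kernel_map with kmem_suballoc(); the sketch below assumes the four-argument signature of that era, and the submap name and size limit are illustrative rather than the pipe code's actual values.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/vm_kern.h>
    #include <vm/vm_map.h>

    static vm_map_t pageable_submap;        /* hypothetical submap handle */

    static void
    create_pageable_submap(vm_size_t limit)
    {
            vm_offset_t minaddr, maxaddr;

            /*
             * Reserve 'limit' bytes of KVA inside kernel_map and hand back a
             * separate map for it, so these allocations are bounded by the
             * submap's size instead of contending for kernel_map.
             */
            pageable_submap = kmem_suballoc(kernel_map, &minaddr, &maxaddr,
                limit);
    }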
* Update the comment at the head of kmem_alloc_nofault() to describe its  (alc, 2003-08-01, 1 file, -1/+5)
    purpose and use.
* Remove GIANT_REQUIRED from kmem_alloc().  (alc, 2003-07-27, 1 file, -2/+0)
* Remove GIANT_REQUIRED from kmem_malloc().  (alc, 2003-06-28, 1 file, -3/+0)
* Use __FBSDID().  (obrien, 2003-06-11, 1 file, -2/+3)
* Lock the kernel object in kmem_alloc().  (alc, 2003-06-07, 1 file, -0/+2)
* Update locking on the kmem_object to use the new macros.  (alc, 2003-04-15, 1 file, -7/+7)
* Eliminate unnecessary gotos from kmem_malloc().  (alc, 2003-04-13, 1 file, -6/+3)
* Allow kmem_malloc() without Giant if M_NOWAIT is specified.  (alc, 2003-01-04, 1 file, -1/+2)
* - Mark the kernel_map as a system map immediately after its creation.  (alc, 2002-12-30, 1 file, -2/+2)
  - Correct a cast.
* Two changes to kmem_malloc():  (alc, 2002-12-28, 1 file, -6/+4)
    - Use VM_ALLOC_WIRED.
    - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere
      else.
* - Hold the page queues lock around calls to vm_page_flag_clear().  (alc, 2002-12-24, 1 file, -0/+2)
* - Hold the page queues lock around vm_page_wakeup().  (alc, 2002-12-24, 1 file, -0/+2)
* Increase the scope of the kmem_object locking in kmem_malloc().  (alc, 2002-12-20, 1 file, -3/+5)
* Hold the page queues lock when performing vm_page_flag_set().  (alc, 2002-12-17, 1 file, -0/+2)
* Perform vm_object_lock() and vm_object_unlock() on kmem_object  (alc, 2002-12-15, 1 file, -0/+4)
    around vm_page_lookup() and vm_page_free().
* o Retire vm_page_zero_fill() and vm_page_zero_fill_area().  Ever since  (alc, 2002-08-25, 1 file, -2/+2)
    pmap_zero_page() and pmap_zero_page_area() were modified to accept a
    struct vm_page * instead of a physical address, vm_page_zero_fill() and
    vm_page_zero_fill_area() have served no purpose.
* o Remove the setting and clearing of the PG_MAPPED flag.  (This flag is  (alc, 2002-08-10, 1 file, -1/+1)
    obsolete.)
* o Lock page queue accesses by vm_page_free().  (alc, 2002-07-28, 1 file, -0/+2)
* o Lock page queue accesses by vm_page_wire().  (alc, 2002-07-14, 1 file, -0/+2)
* o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),  (alc, 2002-07-12, 1 file, -9/+0)
      _vm_map_lock_read(), and _vm_map_trylock().
      Submitted by:  tegge
    o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
      (This clears the way for exec_map accesses to move outside of Giant.
      The exec_map is not a system map.)
    o Remove some premature MPSAFE comments.

    Reviewed by:  tegge
* o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()  (alc, 2002-07-11, 1 file, -4/+7)
      and kmem_free_wakeup().  Previously, kmem_free_wakeup() always called
      wakeup().  In general, no one was sleeping.
    o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c for
      use in vm_kern.c.
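The change above is the classic "only wake up if someone recorded themselves as sleeping" optimization; below is a hedged, generic sketch of that idea using an ordinary mutex and msleep(9), not the actual vm_map fields or the vm_map_unlock_and_wait()/vm_map_wakeup() helpers.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx space_mtx;
    MTX_SYSINIT(space_mtx, &space_mtx, "space", MTX_DEF);

    static int space_free;          /* illustrative resource count */
    static int needs_wakeup;        /* set only while a thread is asleep */

    static void
    space_wait(void)
    {
            mtx_lock(&space_mtx);
            while (space_free == 0) {
                    needs_wakeup = 1;       /* record the sleeper first */
                    msleep(&space_free, &space_mtx, 0, "spcwt", 0);
            }
            space_free--;
            mtx_unlock(&space_mtx);
    }

    static void
    space_release(void)
    {
            mtx_lock(&space_mtx);
            space_free++;
            if (needs_wakeup) {             /* skip wakeup() when nobody sleeps */
                    needs_wakeup = 0;
                    wakeup(&space_free);
            }
            mtx_unlock(&space_mtx);
    }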
* o Remove GIANT_REQUIRED from kmem_alloc_pageable(), kmem_alloc_nofault(),  (alc, 2002-06-23, 1 file, -7/+8)
      and kmem_free().  (Annotate as MPSAFE.)
    o Remove incorrect casts from kmem_alloc_pageable() and
      kmem_alloc_nofault().
* - Move the computation of pflags out of the page allocation loop in  (jeff, 2002-06-19, 1 file, -17/+21)
      kmem_malloc()
    - zero fill pages if PG_ZERO bit is not set after allocation in
      kmem_malloc()

    Suggested by:  alc, jake
* Teach kmem_malloc about M_ZERO.  (jeff, 2002-06-19, 1 file, -4/+10)
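From a consumer's point of view, the two entries above mean M_ZERO now works for malloc(9) requests large enough to be served by kmem_malloc(); a minimal, hypothetical usage sketch:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    MALLOC_DEFINE(M_EXAMPLE, "example", "hypothetical example allocations");

    static void *
    alloc_zeroed_table(size_t nbytes)
    {
            /*
             * For multi-page requests this bottoms out in kmem_malloc(),
             * which now honors M_ZERO (and can skip the zeroing when the
             * backing page already had PG_ZERO set).
             */
            return (malloc(nbytes, M_EXAMPLE, M_NOWAIT | M_ZERO));
    }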
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and  (alc, 2002-06-14, 1 file, -1/+1)
      vm_map_user_pageable().
    o Remove vm_map_pageable() and vm_map_user_pageable().
    o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
      only used by vm_map_pageable() and vm_map_user_pageable().)

    Reviewed by:  tegge
* Tidy up some loose ends.  (peter, 2002-04-29, 1 file, -1/+0)
    i386/ia64/alpha - catch up to sparc64/ppc:
    - replace pmap_kernel() with refs to kernel_pmap
    - change kernel_pmap pointer to (&kernel_pmap_store)
      (this is a speedup since ld can set these at compile/link time)

    all platforms (as suggested by jake):
    - gc unused pmap_reference
    - gc unused pmap_destroy
    - gc unused struct pmap.pm_count
      (we never used pm_count - we track address space sharing at the
      vmspace)
* - Remove a number of extra newlines that do not belong here according to  (eivind, 2002-03-10, 1 file, -6/+0)
      style(9)
    - Minor space adjustment in cases where we have "( ", " )", if(),
      return(), while(), for(), etc.
    - Add /* SYMBOL */ after a few #endifs.

    Reviewed by:  alc
* Revert change in revision 1.53 and add a small comment to protect  (tegge, 2002-03-09, 1 file, -0/+12)
    the revived code.

    vm pages newly allocated are marked busy (PG_BUSY), thus calling
    vm_page_delete before the pages have been freed or unbusied will cause a
    deadlock since vm_page_object_page_remove will wait for the busy flag to
    be cleared.

    This can be triggered by calling malloc with size > PAGE_SIZE and the
    M_NOWAIT flag on systems low on physical free memory.

    A kernel module that reproduces the problem, written by Logan Gabriel
    <logan@mail.2cactus.com>, can be found in the freebsd-hackers mail
    archive (12 Apr 2001).  The problem was recently noticed again by Archie
    Cobbs <archie@dellroad.org>.

    Reviewed by:  dillon
* vm/vm_kern.c: rate limit (to once per second) diagnostic printf when  (luigi, 2001-12-01, 1 file, -2/+8)
    you run out of mbuf address space.

    kern/subr_mbuf.c: print a warning message when mb_alloc fails, again
    rate-limited to at most once per second.  This covers other cases of
    mbuf allocation failures.  Probably it also overlaps the one handled in
    vm/vm_kern.c, so maybe the latter should go away.

    This warning will let us gradually remove the printfs that are scattered
    across most network drivers to report mbuf allocation failures.  Those
    are potentially dangerous, in that they are not rate-limited and can
    easily cause systems to panic.

    Unless there is disagreement (which does not seem to be the case judging
    from the discussion on -net so far), and because this is sort of a
    safety bugfix, I plan to commit a similar change to STABLE during the
    weekend (it affects kern/uipc_mbuf.c there).

    Discussed-with: jlemon, silby and -net
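A common FreeBSD idiom for this kind of once-per-second throttling is ppsratecheck(9); whether this commit used it or an open-coded timestamp check is not visible here, so the sketch below is an assumption about the shape of the change, not its actual diff.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/time.h>

    static struct timeval lastfail;
    static int curfail;

    static void
    report_kva_exhaustion(void)
    {
            /* Emit at most one diagnostic per second, however often we fail. */
            if (ppsratecheck(&lastfail, &curfail, 1))
                    printf("Out of mbuf address space!\n");
    }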
* - Remove asleep(), await(), and M_ASLEEP.  (jhb, 2001-08-10, 1 file, -5/+2)
    - Callers of asleep() and await() have been converted to calling
      tsleep().  The only caller outside of M_ASLEEP was the ata driver,
      which called both asleep() and await() with spl-raised, so there was
      no need for the asleep() and await() pair.  M_ASLEEP was unused.

    Reviewed by:  jasone, peter