path: root/sys/vm/vm_kern.c
Commit message  [author, date, files changed, lines -/+]
* Remove GIANT_REQUIRED from kmem_malloc().  [alc 2003-06-28, 1 file, -3/+0]
* Use __FBSDID().  [obrien 2003-06-11, 1 file, -2/+3]
* Lock the kernel object in kmem_alloc().  [alc 2003-06-07, 1 file, -0/+2]
* Update locking on the kmem_object to use the new macros.  [alc 2003-04-15, 1 file, -7/+7]
* Eliminate unnecessary gotos from kmem_malloc().  [alc 2003-04-13, 1 file, -6/+3]
* Allow kmem_malloc() without Giant if M_NOWAIT is specified.  [alc 2003-01-04, 1 file, -1/+2]
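The last entry above relaxes the entry requirement for callers that cannot sleep. A minimal sketch of the pattern in kernel C with era-appropriate macros (the helper name is hypothetical, not the committed code):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/malloc.h>

    /* Only callers that are allowed to sleep must still hold Giant. */
    static void
    kmem_malloc_giant_check(int flags)
    {

        if ((flags & M_NOWAIT) == 0)
            GIANT_REQUIRED;
    }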
* - Mark the kernel_map as a system map immediately after its creation.  [alc 2002-12-30, 1 file, -2/+2]
  - Correct a cast.
* Two changes to kmem_malloc():  [alc 2002-12-28, 1 file, -6/+4]
  - Use VM_ALLOC_WIRED.
  - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
* - Hold the page queues lock around calls to vm_page_flag_clear().  [alc 2002-12-24, 1 file, -0/+2]
* - Hold the page queues lock around vm_page_wakeup().  [alc 2002-12-24, 1 file, -0/+2]
* Increase the scope of the kmem_object locking in kmem_malloc().  [alc 2002-12-20, 1 file, -3/+5]
* Hold the page queues lock when performing vm_page_flag_set().  [alc 2002-12-17, 1 file, -0/+2]
* Perform vm_object_lock() and vm_object_unlock() on kmem_object  [alc 2002-12-15, 1 file, -0/+4]
  around vm_page_lookup() and vm_page_free().
* o Retire vm_page_zero_fill() and vm_page_zero_fill_area().  Ever since  [alc 2002-08-25, 1 file, -2/+2]
  pmap_zero_page() and pmap_zero_page_area() were modified to accept a
  struct vm_page * instead of a physical address, vm_page_zero_fill() and
  vm_page_zero_fill_area() have served no purpose.
* o Remove the setting and clearing of the PG_MAPPED flag.  (This flag is  [alc 2002-08-10, 1 file, -1/+1]
  obsolete.)
* o Lock page queue accesses by vm_page_free().  [alc 2002-07-28, 1 file, -0/+2]
* o Lock page queue accesses by vm_page_wire().  [alc 2002-07-14, 1 file, -0/+2]
* o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),  [alc 2002-07-12, 1 file, -9/+0]
  _vm_map_lock_read(), and _vm_map_trylock().
  Submitted by: tegge
  o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
    (This clears the way for exec_map accesses to move outside of Giant.
    The exec_map is not a system map.)
  o Remove some premature MPSAFE comments.
  Reviewed by: tegge
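A rough sketch of the assertion described in the entry above, assuming the vm_map system_map member of that era; the real checks live inside _vm_map_lock() and friends, so the helper below is only illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/vm.h>
    #include <vm/vm_map.h>

    /* System maps (kernel_map, kmem_map, ...) still require Giant. */
    static void
    map_lock_giant_assert(vm_map_t map)
    {

        if (map->system_map)
            GIANT_REQUIRED;
    }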
* o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()alc2002-07-111-4/+7
| | | | | | | and kmem_free_wakeup(). Previously, kmem_free_wakeup() always called wakeup(). In general, no one was sleeping. o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c for use in vm_kern.c.
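The point of the flag is to pay for wakeup() only when somebody is actually sleeping. A generic sketch of that idea (an illustrative structure and mutex, not vm_map or vm_map_unlock_and_wait(); the mutex is assumed to have been set up with mtx_init()):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct space_waiter {
        struct mtx  lock;
        int         needs_wakeup;   /* a sleeper is waiting for free space */
    };

    static void
    space_wait(struct space_waiter *sw)
    {

        mtx_lock(&sw->lock);
        sw->needs_wakeup = 1;
        msleep(sw, &sw->lock, PVM, "kmaw", 0);  /* drops and retakes the lock */
        mtx_unlock(&sw->lock);
    }

    static void
    space_free_wakeup(struct space_waiter *sw)
    {

        mtx_lock(&sw->lock);
        if (sw->needs_wakeup) {     /* previously: unconditional wakeup() */
            sw->needs_wakeup = 0;
            wakeup(sw);
        }
        mtx_unlock(&sw->lock);
    }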
* o Remove GIANT_REQUIRED from kmem_alloc_pageable(), kmem_alloc_nofault(),  [alc 2002-06-23, 1 file, -7/+8]
  and kmem_free().  (Annotate as MPSAFE.)
  o Remove incorrect casts from kmem_alloc_pageable() and kmem_alloc_nofault().
* - Move the computation of pflags out of the page allocation loop in  [jeff 2002-06-19, 1 file, -17/+21]
    kmem_malloc()
  - zero fill pages if PG_ZERO bit is not set after allocation in
    kmem_malloc()
  Suggested by: alc, jake
* Teach kmem_malloc about M_ZERO.  [jeff 2002-06-19, 1 file, -4/+10]
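Taken together, the two jeff entries above compute the vm_page_alloc() flags once and only zero a page by hand when the allocator did not hand back one that is already zeroed. A sketch of that shape with era-appropriate flag names (not the committed diff; retry and unwind paths are omitted):

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    static void
    alloc_and_zero(vm_object_t object, vm_offset_t offset, vm_size_t size,
        int flags)
    {
        vm_page_t m;
        vm_size_t i;
        int pflags;

        /* Translate malloc(9) flags once, outside the allocation loop. */
        pflags = (flags & M_NOWAIT) ? VM_ALLOC_INTERRUPT : VM_ALLOC_SYSTEM;
        if (flags & M_ZERO)
            pflags |= VM_ALLOC_ZERO;

        for (i = 0; i < size; i += PAGE_SIZE) {
            m = vm_page_alloc(object, OFF_TO_IDX(offset + i), pflags);
            if (m == NULL)
                break;      /* the real code retries or unwinds here */
            /* VM_ALLOC_ZERO is only a hint; finish the job if needed. */
            if ((flags & M_ZERO) && (m->flags & PG_ZERO) == 0)
                pmap_zero_page(m);
        }
    }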
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and  [alc 2002-06-14, 1 file, -1/+1]
  vm_map_user_pageable().
  o Remove vm_map_pageable() and vm_map_user_pageable().
  o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
    only used by vm_map_pageable() and vm_map_user_pageable().)
  Reviewed by: tegge
* Tidy up some loose ends.  [peter 2002-04-29, 1 file, -1/+0]
  i386/ia64/alpha - catch up to sparc64/ppc:
  - replace pmap_kernel() with refs to kernel_pmap
  - change kernel_pmap pointer to (&kernel_pmap_store)
    (this is a speedup since ld can set these at compile/link time)
  all platforms (as suggested by jake):
  - gc unused pmap_reference
  - gc unused pmap_destroy
  - gc unused struct pmap.pm_count
  (we never used pm_count - we track address space sharing at the vmspace)
* - Remove a number of extra newlines that do not belong here according to  [eivind 2002-03-10, 1 file, -6/+0]
    style(9)
  - Minor space adjustment in cases where we have "( ", " )", if(), return(),
    while(), for(), etc.
  - Add /* SYMBOL */ after a few #endifs.
  Reviewed by: alc
* Revert change in revision 1.53 and add a small comment to protect  [tegge 2002-03-09, 1 file, -0/+12]
  the revived code.
  vm pages newly allocated are marked busy (PG_BUSY), thus calling
  vm_page_delete before the pages have been freed or unbusied will cause a
  deadlock, since vm_page_object_page_remove will wait for the busy flag to
  be cleared.  This can be triggered by calling malloc with size > PAGE_SIZE
  and the M_NOWAIT flag on systems low on physical free memory.
  A kernel module that reproduces the problem, written by Logan Gabriel
  <logan@mail.2cactus.com>, can be found in the freebsd-hackers mail archive
  (12 Apr 2001).  The problem was recently noticed again by Archie Cobbs
  <archie@dellroad.org>.
  Reviewed by: dillon
* vm/vm_kern.c: rate limit (to once per second) diagnostic printf when  [luigi 2001-12-01, 1 file, -2/+8]
  you run out of mbuf address space.
  kern/subr_mbuf.c: print a warning message when mb_alloc fails, again
  rate-limited to at most once per second.  This covers other cases of mbuf
  allocation failures.  Probably it also overlaps the one handled in
  vm/vm_kern.c, so maybe the latter should go away.
  This warning will let us gradually remove the printfs that are scattered
  across most network drivers to report mbuf allocation failures.  Those are
  potentially dangerous, in that they are not rate-limited and can easily
  cause systems to panic.
  Unless there is disagreement (which does not seem to be the case judging
  from the discussion on -net so far), and because this is sort of a safety
  bugfix, I plan to commit a similar change to STABLE during the weekend (it
  affects kern/uipc_mbuf.c there).
  Discussed-with: jlemon, silby and -net
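One stock way to rate-limit a kernel diagnostic to once per second is ppsratecheck(9); the committed code may well have used a plain time_second comparison instead, so treat this only as a sketch of the technique:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/time.h>

    static void
    warn_map_full(const char *mapname)
    {
        static struct timeval lasttime;
        static int curfails;

        /* Print at most once per second, however often allocation fails. */
        if (ppsratecheck(&lasttime, &curfails, 1))
            printf("%s: out of address space, allocation failed\n", mapname);
    }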
* - Remove asleep(), await(), and M_ASLEEP.  [jhb 2001-08-10, 1 file, -5/+2]
  - Callers of asleep() and await() have been converted to calling tsleep().
    The only caller outside of M_ASLEEP was the ata driver, which called
    both asleep() and await() with spl-raised, so there was no need for the
    asleep() and await() pair.  M_ASLEEP was unused.
  Reviewed by: jasone, peter
* With Alfred's permission, remove vm_mtx in favor of a fine-grained approach  [dillon 2001-07-04, 1 file, -58/+12]
  (this commit is just the first stage).  Also add various GIANT_ macros to
  formalize the removal of Giant, making it easy to test in a more piecemeal
  fashion.  These macros will allow us to test fine-grained locks to a degree
  before removing Giant, and also after, and to remove Giant in a piecemeal
  fashion via sysctl's on those subsystems which the authors believe can
  operate without Giant.
* Introduce numerous SMP friendly changes to the mbuf allocator.  Namely,  [bmilekic 2001-06-22, 1 file, -8/+6]
  introduce a modified allocation mechanism for mbufs and mbuf clusters; one
  which can scale under SMP and which offers the possibility of resource
  reclamation to be implemented in the future.
  Notable advantages:
  o Reduce contention for SMP by offering per-CPU pools and locks.
  o Better use of data cache due to per-CPU pools.
  o Much less code cache pollution due to excessively large allocation macros.
  o Framework for `grouping' objects from same page together so as to be able
    to possibly free wired-down pages back to the system if they are no
    longer needed by the network stacks.
  Additional things changed with this addition:
  - Moved some mbuf specific declarations and initializations from
    sys/conf/param.c into mbuf-specific code where they belong.
  - m_getclr() has been renamed to m_get_clrd() because the old name is
    really confusing.  m_getclr() HAS been preserved though and is defined to
    the new name.  No tree sweep has been done "to change the interface," as
    the old name will continue to be supported and is not deprecated.  The
    change was merely done because m_getclr() sounds too much like "m_get a
    cluster."
  - TEMPORARILY disabled mbtypes statistics displaying in netstat(1) and
    systat(1) (see TODO below).
  - Fixed systat(1) to display number of "free mbufs" based on new per-CPU
    stat structures.
  - Fixed netstat(1) to display new per-CPU stats based on sysctl-exported
    per-CPU stat structures.  All infos are fetched via sysctl.
  TODO (in order of priority):
  - Re-enable mbtypes statistics in both netstat(1) and systat(1) after
    introducing an SMP friendly way to collect the mbtypes stats under the
    already introduced per-CPU locks (i.e. hopefully don't use atomic() - it
    seems too costly for a mere stat update, especially when other locks are
    already present).
  - Optionally have systat(1) display not only "total free mbufs" but also
    "total free mbufs per CPU pool."
  - Fix minor length-fetching issues in netstat(1) related to recently
    re-enabled option to read mbuf stats from a core file.
  - Move reference counters at least for mbuf clusters into an unused portion
    of the cluster itself, to save space and need to allocate a counter.
  - Look into introducing resource freeing possibly from a kproc.
  Reviewed by (in parts): jlemon, jake, silby, terry
  Tested by: jlemon (Intel & Alpha), mjacob (Intel & Alpha)
  Preliminary performance measurements: jlemon (and me, obviously)
  URL: http://people.freebsd.org/~bmilekic/mb_alloc/
* Introduce a global lock for the vm subsystem (vm_mtx).  [alfred 2001-05-19, 1 file, -6/+68]
  vm_mtx does not recurse and is required for most low level vm operations.
  faults can not be taken without holding Giant.
  Memory subsystems can now call the base page allocators safely.
  Almost all atomic ops were removed as they are covered under the vm mutex.
  Alpha and ia64 now need to catch up to i386's trap handlers.
  FFS and NFS have been tested, other filesystems will need minor changes
  (grabbing the vm lock when twiddling page properties).
  Reviewed (partially) by: jake, jhb
* Undo part of the tangle of having sys/lock.h and sys/mutex.h included in  [markm 2001-05-01, 1 file, -1/+2]
  other "system" header files.
  Also help the deprecation of lockmgr.h by making it a sub-include of
  sys/lock.h and removing sys/lockmgr.h from kernel .c files.
  Sort sys/*.h includes where possible in affected files.
  OK'ed by: bde (with reservations)
* Add mtx_assert()'s to verify that kmem_alloc() and kmem_free() are called  [jhb 2001-01-24, 1 file, -0/+3]
  with Giant held.
* fix comment which was outdated 3 years ago  [alfred 2000-12-29, 1 file, -14/+13]
  remove useless assignment
  purge entire file of 'register' keyword
* clean up kmem_suballoc():  [alfred 2000-12-29, 1 file, -4/+4]
  remove useless assignment
  remove 'register' variables
* - If swap metadata does not fit into the KVM, reduce the number of  [tanimura 2000-12-13, 1 file, -1/+0]
    struct swblock entries by dividing the number of the entries by 2 until
    the swap metadata fits.
  - Reject swapon(2) upon failure of swap_zone allocation.
  This is just a temporary fix.  Better solutions include:
  (suggested by: dillon)
  o reserving swap in SWAP_META_PAGES chunks, and
  o swapping the swblock structures themselves.
  Reviewed by: alfred, dillon
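The shrink-until-it-fits fallback in the tanimura entry above is easy to show in isolation. A generic userland illustration of the strategy, with plain malloc() standing in for the swap_zone allocation (names are illustrative):

    #include <stdlib.h>

    /* Halve the entry count until the metadata allocation succeeds. */
    static void *
    alloc_swap_metadata(size_t *nentries, size_t entry_size)
    {
        void *zone = NULL;

        while (*nentries > 0 &&
            (zone = malloc(*nentries * entry_size)) == NULL)
            *nentries /= 2;
        return (zone);      /* NULL here means: reject swapon(2) */
    }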
* Implement an optimization of the VM<->pmap API.  Pass vm_page_t's directly  [peter 2000-05-21, 1 file, -2/+1]
  to various pmap_*() functions instead of looking up the physical address
  and passing that.  In many cases, the first thing the pmap code was doing
  was going to a lot of trouble to get back the original vm_page_t, or its
  shadow pv_table entry.
  Inspired by: John Dyson's 1998 patches.
  Also:
  Eliminate pv_table as a separate thing and build it into a machine
  dependent part of vm_page_t.  This eliminates having a separate set of
  structures that shadow each other in a 1:1 fashion that we often went to a
  lot of trouble to translate from one to the other.  (see above)  This
  happens to save 4 bytes of physical memory for each page in the system.
  (8 bytes on the Alpha).
  Eliminate the use of the phys_avail[] array to determine if a page is
  managed (ie: it has pv_entries etc).  Store this information in a flag.
  Things like device_pager set it because they create vm_page_t's on the fly
  that do not have pv_entries.  This makes it easier to "unmanage" a page of
  physical memory (this will be taken advantage of in subsequent commits).
  Add a function to add a new page to the freelist.  This could be used for
  reclaiming the previously wasted pages left over from preloaded loader(8)
  files.
  Reviewed by: dillon
* Revert spelling mistake I made in the previous commit  [charnier 2000-03-27, 1 file, -1/+1]
  Requested by: Alan and Bruce
* Spelling  [charnier 2000-03-26, 1 file, -1/+1]
* useracc() the prequel:  [phk 1999-10-29, 1 file, -1/+0]
  Merge the contents (less some trivial bordering the silly comments) of
  <vm/vm_prot.h> and <vm/vm_inherit.h> into <vm/vm.h>.  This puts the
  #defines for the vm_inherit_t and vm_prot_t types next to their typedefs.
  This paves the road for the commit to follow shortly: change useracc() to
  use VM_PROT_{READ|WRITE} rather than B_{READ|WRITE} as argument.
* Remove the last vestiges of "vm_map_t phys_map".  It's been unused  [alc 1999-10-29, 1 file, -1/+0]
  since i386/i386/machdep.c rev 1.45 (or 1994 :-) ).
* $Id$ -> $FreeBSD$  [peter 1999-08-28, 1 file, -1/+1]
* Remove the declarations for "vm_map_t io_map".  It's been unused  [alc 1999-08-15, 1 file, -2/+1]
  since i386/i386/machdep rev 1.310, i.e., the demise of BOUNCE_BUFFERS.
* Remove the declarations for "vm_map_t u_map".  It's been unused  [alc 1999-08-15, 1 file, -2/+1]
  since i386/i386/pmap rev 1.190.  (The alpha never used it.)
* Fix some int/long printf problems for the Alpha  [peter 1999-07-01, 1 file, -3/+3]
* Add a function kmem_alloc_nofault() - same as kmem_alloc_pageable(), but  [dt 1999-06-08, 1 file, -1/+25]
  create a nofault entry.  It will be used to allocate kmem for upages.
  (I am not too happy with all this, but it's better than nothing).
* Correct a problem in kmem_malloc: A kmem_malloc allowing "wait" may  [alc 1999-03-16, 1 file, -3/+5]
  block (VM_WAIT) holding the map lock.  This is bad.  For example, a
  subsequent kmem_malloc by an interrupt handler on the same map may find
  the lock held and panic in the lockmgr.
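The shape of the fix is to drop the map lock before sleeping for free pages and retake it afterwards. A sketch with era-appropriate primitives (VM_WAIT, vm_map_lock()); the caller is assumed to hold the map lock, and the real code also has to revalidate its state after reacquiring it:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>
    #include <vm/vm_map.h>
    #include <vm/vm_pageout.h>      /* VM_WAIT */

    static vm_page_t
    alloc_page_can_wait(vm_map_t map, vm_object_t object, vm_pindex_t pindex)
    {
        vm_page_t m;

        while ((m = vm_page_alloc(object, pindex, VM_ALLOC_SYSTEM)) == NULL) {
            vm_map_unlock(map);     /* never sleep holding the map lock */
            VM_WAIT;                /* wait for the pagedaemon to free pages */
            vm_map_lock(map);       /* retake the lock and retry */
        }
        return (m);
    }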
* Remove vm_page_frees from kmem_malloc that are performed  [alc 1999-03-12, 1 file, -7/+1]
  by vm_map_delete/vm_object_page_remove anyway.
* Potential bug fix, do not just clear PG_BUSY... call vm_page_wakeup()  [dillon 1999-01-21, 1 file, -1/+1]
  instead to properly handle any waiters.  Added comments, added support
  for M_ASLEEP.  Generally treat M_ flags as flags instead of constants to
  compare against.
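The distinction matters because a waiter may have marked the busy page PG_WANTED. A tiny sketch using the page API of that era (illustrative helper, not the committed change):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static void
    finish_page(vm_page_t m)
    {

        /*
         * Clearing PG_BUSY by hand would leave a PG_WANTED sleeper
         * stranded; vm_page_wakeup() clears the bit and wakes waiters.
         */
        vm_page_wakeup(m);
    }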
* This is a rather large commit that encompasses the new swapper,  [dillon 1999-01-21, 1 file, -24/+54]
  changes to the VM system to support the new swapper, VM bug fixes, several
  VM optimizations, and some additional revamping of the VM code.  The
  specific bug fixes will be documented with additional forced commits.
  This commit is somewhat rough in regards to code cleanup issues.
  Reviewed by: "John S. Dyson" <root@dyson.iquest.net>,
               "David Greenman" <dg@root.com>