path: root/sys/vm/vm_page.c
Commit message  (Author, Date, Files, Lines changed)
* o Synchronize updates to struct vm_page::cow with the page queues lock.  (alc, 2002-09-02, 1 file, -6/+5)
* o Retire vm_page_zero_fill() and vm_page_zero_fill_area().  (alc, 2002-08-25, 1 file, -25/+0)
    Ever since pmap_zero_page() and pmap_zero_page_area() were modified to accept
    a struct vm_page * instead of a physical address, vm_page_zero_fill() and
    vm_page_zero_fill_area() have served no purpose.
* o Assert that the page queues lock is held in vm_page_activate().  (alc, 2002-08-11, 1 file, -1/+1)
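Several entries in this log add assertions that the page queues lock is held. As a rough sketch of that pattern (not code from the commit), assuming the queues are protected by a global mutex named vm_page_queue_mtx and that the kernel is built with INVARIANTS:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    void
    vm_page_activate(vm_page_t m)
    {
            /* Panics under INVARIANTS if the caller did not lock the queues. */
            mtx_assert(&vm_page_queue_mtx, MA_OWNED);
            /* ... move m onto the active queue ... */
    }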
* o Remove the setting and clearing of the PG_MAPPED flag.  (alc, 2002-08-10, 1 file, -1/+1)
    (This flag is obsolete.)
* o Use pmap_page_is_mapped() in vm_page_protect() rather than the PG_MAPPED flag.  (alc, 2002-08-08, 1 file, -1/+1)
    (This is the only place in the entire kernel where the PG_MAPPED flag is
    tested.  It will be removed soon.)
* o Acquire the page queues lock before checking the page's busy status in vm_page_grab().  (alc, 2002-08-04, 1 file, -2/+4)
    Also, replace the nearby tsleep() with an msleep() on the page queues lock.
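A minimal sketch of the tsleep()-to-msleep() conversion the entry above describes: msleep() atomically releases the given mutex while sleeping and reacquires it before returning, which closes the window between testing the busy state and going to sleep. The flag and wmesg names below are assumptions in the style of the vm_page code, not quotes from the commit.

    vm_page_lock_queues();
    while ((m->flags & PG_BUSY) || m->busy) {
            vm_page_flag_set(m, PG_WANTED | PG_REFERENCED);
            /* was: tsleep(m, PVM, "pgrbwt", 0), with the lock dropped by hand */
            msleep(m, &vm_page_queue_mtx, PVM, "pgrbwt", 0);
    }
    vm_page_unlock_queues();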
* - Replace v_flag with v_iflag and v_vflag.  (jeff, 2002-08-04, 1 file, -2/+6)
    - v_vflag is protected by the vnode lock and is used when synchronization
      with VOP calls is needed.
    - v_iflag is protected by interlock and is used for dealing with vnode
      management issues.  These flags include X/O LOCK, FREE, DOOMED, etc.
    - All accesses to v_iflag and v_vflag have either been locked or marked
      with mp_fixme's.
    - Many ASSERT_VOP_LOCKED calls have been added where the locking was not clear.
    - Many functions in vfs_subr.c were restructured to provide for stronger locking.
    Idea stolen from: BSD/OS
* o Remove the setting of PG_MAPPED from vm_page_wire() and vm_page_alloc(VM_ALLOC_WIRED).  (alc, 2002-08-03, 1 file, -2/+0)
* o Lock page queue accesses in nwfs and smbfs.  (alc, 2002-08-02, 1 file, -1/+1)
    o Assert that the page queues lock is held in vm_page_deactivate().
* o Acquire the page queues lock before calling vm_page_io_finish().  (alc, 2002-08-01, 1 file, -1/+2)
    o Assert that the page queues lock is held in vm_page_io_finish().
* o Lock page accesses by vm_page_io_start() with the page queues lock.  (alc, 2002-07-31, 1 file, -1/+2)
    o Assert that the page queues lock is held in vm_page_io_start().
* o Introduce vm_page_sleep_if_busy() as an eventual replacement for vm_page_sleep_busy().  (alc, 2002-07-29, 1 file, -0/+22)
    vm_page_sleep_if_busy() uses the page queues lock.
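Hedged usage sketch: vm_page_sleep_if_busy() is assumed here to take the page, a flag saying whether m->busy should also be honored, and a wait message, and to return nonzero when it actually slept, so the caller must re-check the page's state.

    /* Wait for any other holder to release the page, then busy it ourselves. */
    while (vm_page_sleep_if_busy(m, TRUE, "pgwait"))
            ;       /* slept; loop and re-test the busy state */
    vm_page_busy(m);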
* o Modify vm_page_grab() to accept VM_ALLOC_WIRED.  (alc, 2002-07-28, 1 file, -0/+4)
* o Lock page queue accesses by vm_page_dontneed().  (alc, 2002-07-23, 1 file, -1/+1)
    o Assert that the page queue lock is held in vm_page_dontneed().
* o Lock page queue accesses by vm_page_try_to_cache().  (alc, 2002-07-20, 1 file, -1/+1)
    (The accesses in kern/vfs_bio.c are already locked.)
    o Assert that the page queues lock is held in vm_page_try_to_cache().
* o Assert that the page queues lock is held in vm_page_try_to_free().  (alc, 2002-07-20, 1 file, -0/+2)
* o Lock page queue accesses by vm_page_cache() in vm_fault() and vm_pageout_scan().  (alc, 2002-07-20, 1 file, -1/+1)
    (The others are already locked.)
    o Assert that the page queues lock is held in vm_page_cache().
* o Duplicate an odd side-effect of vm_page_wire() in vm_page_allocate() when VM_ALLOC_WIRED is specified: set the PG_MAPPED bit in flags.  (alc, 2002-07-19, 1 file, -1/+2)
    o In both vm_page_wire() and vm_page_allocate() add a comment saying that
      setting PG_MAPPED does not belong there.
* o Introduce an argument, VM_ALLOC_WIRED, that requests vm_page_alloc() to return a wired page.  (alc, 2002-07-18, 1 file, -9/+10)
    o Use VM_ALLOC_WIRED within Alpha's pmap_growkernel().  Also, because
      Alpha's pmap_growkernel() calls vm_page_alloc() from within a critical
      section, specify VM_ALLOC_INTERRUPT instead of VM_ALLOC_SYSTEM.  (Only
      VM_ALLOC_INTERRUPT is implemented entirely with a spin mutex.)
    o Assert that the page queues mutex is held in vm_page_wire() on Alpha,
      just like the other platforms.
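A sketch of the new flag in use, in the spirit of the Alpha pmap_growkernel() case mentioned above; the object and index names are placeholders, not the committed code.

    vm_page_t m;

    /* Inside a critical section only VM_ALLOC_INTERRUPT is safe to request. */
    m = vm_page_alloc(kptobj, pindex, VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED);
    if (m == NULL)
            panic("pmap_growkernel: no memory to grow kernel");
    /* The returned page is already wired; no separate vm_page_wire() call. */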
* o Lock page queue accesses by vm_page_wire() that aren't within a critical section.  (alc, 2002-07-14, 1 file, -0/+3)
    o Assert that the page queues lock is held in vm_page_wire() except on Alpha.
* o Lock page queue accesses by vm_page_unmanage().  (alc, 2002-07-13, 1 file, -0/+1)
    o Assert that the page queues lock is held in vm_page_unmanage().
* o Complete the locking of page queue accesses by vm_page_unwire().  (alc, 2002-07-13, 1 file, -1/+1)
    o Assert that the page queues lock is held in vm_page_unwire().
    o Make vm_page_lock_queues() and vm_page_unlock_queues() visible to kernel
      loadable modules.
* Remove bogus vm_page_wakeup() in vm_page_cowfault() that will cause panics in the zero-copy send path if a process attempts to write to a page which is still in flight.  (gallatin, 2002-07-05, 1 file, -1/+0)
    Reviewed by: ken
* o Resurrect vm_page_lock_queues(), vm_page_unlock_queues(), and the free queue lock (revision 1.33 of vm/vm_page.c removed them).  (alc, 2002-07-04, 1 file, -5/+23)
    o Make the free queue lock a spin lock because it's sometimes acquired
      inside of a critical section.
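The discipline these locking commits converge on, sketched under the assumption that vm_page_lock_queues()/vm_page_unlock_queues() simply wrap one global page queues mutex:

    vm_page_lock_queues();
    vm_page_deactivate(m);          /* queue manipulation asserts the lock is held */
    vm_page_unlock_queues();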
* At long last, commit the zero copy sockets code.  (ken, 2002-06-26, 1 file, -0/+69)
    MAKEDEV: Add MAKEDEV glue for the ti(4) device nodes.
    ti.4: Update the ti(4) man page to include information on the TI_JUMBO_HDRSPLIT and TI_PRIVATE_JUMBOS kernel options, and also include information about the new character device interface and the associated ioctls.
    man9/Makefile: Add jumbo.9 and zero_copy.9 man pages and associated links.
    jumbo.9: New man page describing the jumbo buffer allocator interface and operation.
    zero_copy.9: New man page describing the general characteristics of the zero copy send and receive code, and what an application author should do to take advantage of the zero copy functionality.
    NOTES: Add entries for ZERO_COPY_SOCKETS, TI_PRIVATE_JUMBOS, TI_JUMBO_HDRSPLIT, MSIZE, and MCLSHIFT.
    conf/files: Add uipc_jumbo.c and uipc_cow.c.
    conf/options: Add the 5 options mentioned above.
    kern_subr.c: Receive side zero copy implementation.  This takes "disposable" pages attached to an mbuf, gives them to a user process, and then recycles the user's page.  This is only active when ZERO_COPY_SOCKETS is turned on and the kern.ipc.zero_copy.receive sysctl variable is set to 1.
    uipc_cow.c: Send side zero copy functions.  Takes a page written by the user and maps it copy on write and assigns it kernel virtual address space.  Removes copy on write mapping once the buffer has been freed by the network stack.
    uipc_jumbo.c: Jumbo disposable page allocator code.  This allocates (optionally) disposable pages for network drivers that want to give the user the option of doing zero copy receive.
    uipc_socket.c: Add kern.ipc.zero_copy.{send,receive} sysctls that are enabled if ZERO_COPY_SOCKETS is turned on.  Add zero copy send support to sosend() -- pages get mapped into the kernel instead of getting copied if they meet size and alignment restrictions.
    uipc_syscalls.c: Un-staticize some of the sf* functions so that they can be used elsewhere.  (uipc_cow.c)
    if_media.c: In the SIOCGIFMEDIA ioctl in ifmedia_ioctl(), avoid calling malloc() with M_WAITOK.  Return an error if the M_NOWAIT malloc fails.  The ti(4) driver and the wi(4) driver, at least, call this with a mutex held.  This causes witness warnings for 'ifconfig -a' with a wi(4) or ti(4) board in the system.  (I've only verified for ti(4)).
    ip_output.c: Fragment large datagrams so that each segment contains a multiple of PAGE_SIZE amount of data plus headers.  This allows the receiver to potentially do page flipping on receives.
    if_ti.c: Add zero copy receive support to the ti(4) driver.  If TI_PRIVATE_JUMBOS is not defined, it now uses the jumbo(9) buffer allocator for jumbo receive buffers.  Add a new character device interface for the ti(4) driver for the new debugging interface.  This allows (a patched version of) gdb to talk to the Tigon board and debug the firmware.  There are also a few additional debugging ioctls available through this interface.  Add header splitting support to the ti(4) driver.  Tweak some of the default interrupt coalescing parameters to more useful defaults.  Add hooks for supporting transmit flow control, but leave it turned off with a comment describing why it is turned off.
    if_tireg.h: Change the firmware rev to 12.4.11, since we're really at 12.4.11 plus fixes from 12.4.13.  Add defines needed for debugging.  Remove the ti_stats structure, it is now defined in sys/tiio.h.
    ti_fw.h: 12.4.11 firmware.
    ti_fw2.h: 12.4.11 firmware, plus selected fixes from 12.4.13, and my header splitting patches.  Revision 12.4.13 doesn't handle 10/100 negotiation properly.  (This firmware is the same as what was in the tree previously, with the addition of header splitting support.)
    sys/jumbo.h: Jumbo buffer allocator interface.
    sys/mbuf.h: Add a new external mbuf type, EXT_DISPOSABLE, to indicate that the payload buffer can be thrown away / flipped to a userland process.
    socketvar.h: Add prototype for socow_setup.
    tiio.h: ioctl interface to the character portion of the ti(4) driver, plus associated structure/type definitions.
    uio.h: Change prototype for uiomoveco() so that we'll know whether the source page is disposable.
    ufs_readwrite.c: Update for new prototype of uiomoveco().
    vm_fault.c: In vm_fault(), check to see whether we need to do a page based copy on write fault.
    vm_object.c: Add a new function, vm_object_allocate_wait().  This does the same thing that vm_object_allocate() does, except that it gives the caller the opportunity to specify whether it should wait on the uma_zalloc() of the object structure.  This allows vm objects to be allocated while holding a mutex.  (Without generating WITNESS warnings.)  vm_object_allocate() is implemented as a call to vm_object_allocate_wait() with the malloc flag set to M_WAITOK.
    vm_object.h: Add prototype for vm_object_allocate_wait().
    vm_page.c: Add page-based copy on write setup, clear and fault routines.
    vm_page.h: Add page based COW function prototypes and variable in the vm_page structure.
    Many thanks to Drew Gallatin, who wrote the zero copy send and receive code, and to all the other folks who have tested and reviewed this code over the years.
* Turn VM_ALLOC_ZERO into a flag.  (jeff, 2002-06-25, 1 file, -4/+6)
    Submitted by: tegge
    Reviewed by: dillon
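With VM_ALLOC_ZERO expressed as a flag, the caller asks for a pre-zeroed page but must still check PG_ZERO, because the request is only a preference. A minimal sketch of the conventional pattern (not quoted from the commit):

    m = vm_page_alloc(object, pindex, VM_ALLOC_SYSTEM | VM_ALLOC_ZERO);
    if (m != NULL && (m->flags & PG_ZERO) == 0)
            pmap_zero_page(m);      /* no pre-zeroed page was available */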
* o Convert the vm_page buckets mutex to a spin lock.  (alc, 2002-04-30, 1 file, -14/+11)
    (This resolves an issue on the Alpha platform found by jeff@.)
    o Simplify vm_page_lookup().
    Reviewed by: jhb
* We do not necessarily need to map/unmap pages to zero parts of them.  (peter, 2002-04-28, 1 file, -0/+12)
    On systems where physical memory is also direct mapped (alpha, sparc,
    ia64 etc) this is slightly harmful.
* o Control access to the vm_page_buckets with a mutex.  (alc, 2002-04-26, 1 file, -33/+17)
    o Fix some style(9) bugs.
* Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]() and pmap_copy_page().  (peter, 2002-04-15, 1 file, -23/+9)
    This gets rid of a couple more physical addresses in upper layers, with
    the eventual aim of supporting PAE and dealing with the physical
    addressing mostly within pmap.  (We will need either 64 bit physical
    addresses or page indexes, possibly both depending on the circumstances.
    Leaving this to pmap itself gives more flexibility.)
    Reviewed by: jake
    Tested on: i386, ia64 and (I believe) sparc64.  (my alpha was hosed)
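The interface change in rough before/after form: callers hand pmap the vm_page_t and let pmap decide how to reach the physical memory (direct map, temporary mapping, and so on).

    /* before: the caller computed the physical address itself */
    pmap_zero_page(VM_PAGE_TO_PHYS(m));

    /* after: pmap hides the physical addressing */
    pmap_zero_page(m);
    pmap_zero_page_area(m, off, size);
    pmap_copy_page(src_m, dst_m);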
* Change callers of mtx_init() to pass in an appropriate lock type name.  (jhb, 2002-04-04, 1 file, -1/+2)
    In most cases NULL is passed, but in some cases such as network driver
    locks (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name
    is used.
    Tested on: i386, alpha, sparc64
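After this change mtx_init() takes both a display name and a lock "type" string used by WITNESS to group locks of the same kind. A hedged example; the particular mutex and strings are illustrative only:

    struct mtx vm_page_buckets_mtx;

    /* arguments: mutex, name, type (NULL means "use the name"), options */
    mtx_init(&vm_page_buckets_mtx, "vm page buckets mutex", NULL, MTX_SPIN);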
* Fix a long standing 32bit-ism.  (jake, 2002-04-03, 1 file, -1/+1)
    Don't assume that the size of a chunk of memory in phys_avail will fit in
    'int', use vm_size_t.  This fixes booting on sparc64 machines with more
    than 2 gigs of ram.
    Thanks to Jan Chrillesen for providing me with access to a 4 gig machine.
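The fix is small in spirit: a phys_avail[] segment can exceed 2 GB, so its size must not be stored in an int. A sketch with assumed variable names:

    vm_size_t size;         /* was: int size;  truncates above 2 GB */

    size = phys_avail[i + 1] - phys_avail[i];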
* This is the first part of the new kernel memory allocator.  (jeff, 2002-03-19, 1 file, -0/+16)
    This replaces malloc(9) and vm_zone with a slab like allocator.
    Reviewed by: arch@
* - Remove a number of extra newlines that do not belong here according to style(9).  (eivind, 2002-03-10, 1 file, -72/+13)
    - Minor space adjustment in cases where we have "( ", " )", if(),
      return(), while(), for(), etc.
    - Add /* SYMBOL */ after a few #endifs.
    Reviewed by: alc
* o Create vm_pageq_enqueue() to encapsulate code that is duplicated time and again in vm_page.c and vm_pageq.c.  (alc, 2002-03-04, 1 file, -18/+6)
    o Delete unused prototypes.  (Mainly a result of the earlier renaming of
      various functions from vm_page_*() to vm_pageq_*().)
* Add a page queue, PQ_HOLD, that temporarily owns pages with nonzero hold count that would otherwise be on one of the free queues.  (tegge, 2002-02-19, 1 file, -3/+8)
    This eliminates a panic when broken programs unmap memory that still has
    pending IO from raw devices.
    Reviewed by: dillon, alc
* Add one more comment to the OOM changes so that future readers of the code may better understand the code.  (silby, 2002-02-19, 1 file, -0/+3)
    Suggested by: dillon
    MFC after: 1 week
* Changes to make the OOM killer much more effective:  (silby, 2002-02-19, 1 file, -0/+23)
    - Allow the OOM killer to target processes currently locked in memory.
      These very often are the ones doing the memory hogging.
    - Drop the wakeup priority of processes currently sleeping while waiting
      for their page fault to complete.  In order for the OOM killer to work
      well, the killed process and other system processes waiting on memory
      must be allowed to wakeup first.
    Reviewed by: dillon
    MFC after: 1 week
* This fixes a large number of bugs in our NFS client side code.  (dillon, 2001-12-14, 1 file, -0/+14)
    A recent commit by Kirk also fixed a softupdates bug that could easily be
    triggered by server side NFS.
    * An edge case with shared R+W mmap()'s and truncate whereby the system
      would inappropriately clear the dirty bits on still-dirty data.
      (applicable to all filesystems)  THIS FIX TEMPORARILY DISABLED PENDING
      FURTHER TESTING.  see vm/vm_page.c line 1641
    * The straddle case for VM pages and buffer cache buffers when truncating.
      (applicable to NFS client side)
    * Possible SMP database corruption due to vm_pager_unmap_page() not
      clearing the TLB for the other cpu's.  (applicable to NFS client side
      but could affect all filesystems).  Note: not considered serious since
      the corruption occurs beyond the file EOF.
    * When flushing a dirty buffer due to B_CACHE getting cleared, we were
      accidentally setting B_CACHE again (that is, bwrite() sets B_CACHE),
      when we really want it to stay clear after the write is complete.  This
      resulted in a corrupt buffer.  (applicable to all filesystems but
      probably only triggered by NFS)
    * We have to call vtruncbuf() when ftruncate()ing to remove any buffer
      cache buffers.  This is still tentative, I may be able to remove it due
      to the second bug fix.  (applicable to NFS client side)
    * vnode_pager_setsize() race against nfs_vinvalbuf()... we have to set
      n_size before calling nfs_vinvalbuf or the NFS code may recursively
      vnode_pager_setsize() to the original value before the truncate.  This
      is what was causing the user mmap bus faults in the nfs tester program.
      (applicable to NFS client side)
    * Fix to softupdates (see ufs/ffs/ffs_inode.c 1.73, commit made by Kirk).
    Testing program written by: Avadis Tevanian, Jr.
    Testing program supplied by: jkh / Apple (see Dec2001 posting to
      freebsd-hackers with Subject 'NFS: How to make FreeBS fall on its face
      in one easy step')
    MFC after: 1 week
* Implement kern.maxvnodes.  Adjusting kern.maxvnodes now actually has a real effect.  (dillon, 2001-10-26, 1 file, -1/+1)
    Optimize vfs_msync().  Avoid having to continually drop and re-obtain
    mutexes when scanning the vnode list.  Improves looping case by 500%.
    Optimize ffs_sync().  Avoid having to continually drop and re-obtain
    mutexes when scanning the vnode list.  This makes a couple of assumptions,
    which I believe are ok, in regards to vnode stability when the mount list
    mutex is held.  Improves looping case by 500%.
    (more optimization work is needed on top of these fixes)
    MFC after: 1 week
* Implement idle zeroing of pages.  (peter, 2001-08-25, 1 file, -0/+1)
    I've been tinkering with this on and off since John Dyson left his
    work-in-progress.  It is off by default for now.  sysctl
    vm.zeroidle_enable=1 to turn it on.  There are some hacks here to deal
    with the present lack of preemption - we yield after doing a small number
    of pages since we won't preempt otherwise.  This is basically Matt's
    algorithm [with hysteresis] with an idle process to call it in a similar
    way it used to be called from the idle loop.  I cleaned up the includes a
    fair bit here too.
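The entry above describes the shape of the idle-zeroing loop: zero a small batch of free pages, then yield explicitly, because without kernel preemption an idle-priority thread would otherwise monopolize the CPU. A schematic sketch; every helper name below is a hypothetical stand-in, not the committed code:

    while (idlezero_enable && free_pages_need_zeroing()) {  /* hypothetical check */
            zero_one_free_page();                           /* hypothetical helper */
            if (++done % ZIDLE_BATCH == 0)                  /* hypothetical batch size */
                    yield_the_cpu();        /* explicit yield; no preemption yet */
    }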
* KASSERT if vm_page_t->wire_count overflows.  (dillon, 2001-08-22, 1 file, -0/+1)
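wire_count is a narrow counter in struct vm_page, so an unchecked increment can silently wrap; the guard is the standard KASSERT pattern, sketched here rather than copied from the commit:

    m->wire_count++;
    KASSERT(m->wire_count != 0,
        ("vm_page_wire: wire_count overflow for page %p", m));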
* - Remove asleep(), await(), and M_ASLEEP.  (jhb, 2001-08-10, 1 file, -26/+0)
    - Callers of asleep() and await() have been converted to calling tsleep().
      The only caller outside of M_ASLEEP was the ata driver, which called
      both asleep() and await() with spl-raised, so there was no need for the
      asleep() and await() pair.  M_ASLEEP was unused.
    Reviewed by: jasone, peter
* Oops.  Last commit to vm_object.c should have got these files too.  (jake, 2001-07-31, 1 file, -6/+4)
    Remove the use of atomic ops to manipulate vm_object and vm_page flags.
    Giant is required here, so they are superfluous.
    Discussed with: dillon
* make vm_page_select_cache static  (assar, 2001-07-23, 1 file, -1/+1)
    Requested by: bde
* Reorg vm_page.c into vm_page.c, vm_pageq.c, and vm_contig.c (for contigmalloc).  (dillon, 2001-07-04, 1 file, -413/+47)
    Also removed some spl's and added some VM mutexes, but they are not
    actually used yet, so this commit does not really make any operational
    changes to the system.
    vm_page.c relates to vm_page_t manipulation, including high level
    deactivation, activation, etc...  vm_pageq.c relates to finding free
    pages and acquiring exclusive access to a page queue (exclusivity part
    not yet implemented).  And the world still builds... :-)
* Change inlines back into mainline code in preparation for mutexing.  (dillon, 2001-07-04, 1 file, -110/+314)
    Also, most of these inlines had been bloated in -current far beyond their
    original intent.  Normalize prototypes and function declarations to be
    ANSI only (half already were).  And do some general cleanup.  (kernel
    size also reduced by 50-100K, but that isn't the prime intent)
* whitespace / register cleanup  (dillon, 2001-07-04, 1 file, -28/+29)
* With Alfred's permission, remove vm_mtx in favor of a fine-grained approach.  (dillon, 2001-07-04, 1 file, -49/+28)
    (This commit is just the first stage.)  Also add various GIANT_ macros to
    formalize the removal of Giant, making it easy to test in a more piecemeal
    fashion.  These macros will allow us to test fine-grained locks to a
    degree before removing Giant, and also after, and to remove Giant in a
    piecemeal fashion via sysctl's on those subsystems which the authors
    believe can operate without Giant.
* This patch implements O_DIRECT about 80% of the way.  (dillon, 2001-05-24, 1 file, -0/+23)
    It takes a patchset Tor created a while ago, removes the raw I/O piece
    (that has cache coherency problems), and adds a buffer cache / VM freeing
    piece.  Essentially this patch causes O_DIRECT I/O to not be left in the
    cache, but does not prevent it from going through the cache, hence the
    80%.  For the last 20% we need a method by which the I/O can be issued
    directly to a buffer supplied by the user process and bypass the buffer
    cache entirely, but still maintain cache coherency.  I also have the code
    working under -stable but the changes made to sys/file.h may not be
    MFCable, so an MFC is not on the table yet.
    Submitted by: tegge, dillon