path: root/sys/vm
Commit log (most recent first): subject (author, date; files changed, lines +/-)
...
* o Lock page queue accesses by vm_page_free(). (alc, 2002-07-21; 1 file, +2/-0)
* Do not pass a thread with the state TDS_RUNQ to setrunqueue(); otherwise the assertion in setrunqueue() fails. (tanimura, 2002-07-21; 1 file, +4/-1)
* o Lock page queue accesses by vm_page_try_to_cache(). (alc, 2002-07-20; 3 files, +5/-1)
  (The accesses in kern/vfs_bio.c are already locked.)
  o Assert that the page queues lock is held in vm_page_try_to_cache().
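These page-queue commits all apply one pattern: the caller takes the page queues lock around the call, and the callee asserts that it is held. A minimal kernel-style sketch of that contract; the function names and the lock symbol (vm_page_queue_mtx) are illustrative assumptions, not the committed code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /* Callee: enforce the locking contract instead of locking internally. */
    static void
    example_queue_op(vm_page_t m)
    {
        /* Panics under INVARIANTS if the caller forgot the lock. */
        mtx_assert(&vm_page_queue_mtx, MA_OWNED);
        /* ... manipulate the page queues ... */
    }

    /* Caller: hold the page queues lock across the whole operation. */
    static void
    example_caller(vm_page_t m)
    {
        vm_page_lock_queues();
        example_queue_op(m);
        vm_page_unlock_queues();
    }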
* o Assert that the page queues lock is held in vm_page_try_to_free(). (alc, 2002-07-20; 1 file, +2/-0)
* o Lock page queue accesses by vm_page_cache() in vm_fault() and vm_pageout_scan(). (alc, 2002-07-20; 3 files, +5/-2)
  (The others are already locked.)
  o Assert that the page queues lock is held in vm_page_cache().
* o Lock accesses to the active page queue in vm_pageout_scan() and vm_pageout_page_stats(). (alc, 2002-07-20; 1 file, +4/-2)
* o Lock page queue accesses by vm_page_cache() in vm_contig_launder(). (alc, 2002-07-20; 1 file, +4/-2)
  o Micro-optimize the control flow in vm_contig_launder().
* o Remove dead and/or unused code. (alc, 2002-07-20; 2 files, +1/-17)
* Infrastructure tweaks to allow having both an Elf32 and an Elf64 executable handler in the kernel at the same time. (peter, 2002-07-20; 2 files, +3/-4)
  Also allow the exec_new_vmspace() code to build a different-sized vmspace depending on the executable environment. This is a big help for execing i386 binaries on ia64. The ELF exec code grows the ability to map partial pages when there is a page size difference, e.g. emulating 4K pages on 8K or 16K hardware pages.
  Flesh out the i386 emulation support for ia64. At this point, the only binary that I know of that fails is cvsup, because the cvsup runtime tries to execute code in pages not marked executable.
  Obtained from: dfr (mostly, many tweaks from me)
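The partial-page mapping the message mentions is arithmetic at its core: a segment boundary aligned to the binary's 4K pages need not land on an 8K/16K hardware page boundary, leaving a trailing fragment that must be handled specially. A standalone sketch of that calculation (sizes and macro names are hypothetical):

    #include <stdio.h>

    /* Hypothetical sizes: 4K binary pages emulated on 16K hardware pages. */
    #define BIN_PAGE_SIZE   4096UL
    #define HW_PAGE_SIZE    16384UL
    #define HW_TRUNC(x)     ((x) & ~(HW_PAGE_SIZE - 1))

    int
    main(void)
    {
        unsigned long seg_end = 0x23000;    /* a 4K, but not 16K, boundary */
        unsigned long full_end = HW_TRUNC(seg_end);

        /* Whole hardware pages cover [0, full_end); the rest is partial. */
        printf("full hw pages up to %#lx, partial fragment of %lu bytes\n",
            full_end, seg_end - full_end);
        return (0);
    }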
* Set P_NOLOAD on the pagezero kthread so that it doesn't artificially skew the load average. (peter, 2002-07-19; 1 file, +7/-1)
  This is not real load. If you have a nice process running in the background, pagezero may sit in the run queue for ages, add one to the load average, and thereby affect other scheduling decisions.
* o Duplicate an odd side-effect of vm_page_wire() in vm_page_allocate() when VM_ALLOC_WIRED is specified: set the PG_MAPPED bit in flags. (alc, 2002-07-19; 1 file, +2/-1)
  o In both vm_page_wire() and vm_page_allocate() add a comment saying that setting PG_MAPPED does not belong there.
* o Remove the acquisition and release of Giant from the idle priority thread that pre-zeroes free pages. (alc, 2002-07-18; 2 files, +1/-8)
  o Remove GIANT_REQUIRED from some low-level page queue functions. (Instead, assertions on the page queue lock are being added to the higher-level functions, like vm_page_wire(), etc.)
  In collaboration with: peter
* Void functions cannot return values. (markm, 2002-07-18; 1 file, +1/-1)
* (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer. (peter, 2002-07-18; 1 file, +1/-1)
  Use lmin(long, long), not min(u_int, u_int). This is a problem here on ia64, which has *way* more than 2^32 pages of KVA: 281474976710655 pages, to be precise.
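The failure mode is plain integer truncation, reproducible in a few lines; UMIN below mimics min(u_int, u_int) (the kernel's min() and lmin() live in libkern):

    #include <stdio.h>

    #define UMIN(a, b)  ((unsigned int)(a) < (unsigned int)(b) ? \
                            (unsigned int)(a) : (unsigned int)(b))

    int
    main(void)
    {
        long npages = 281474976710655L;     /* 2^48 - 1, from the commit */
        long round = 1L << 48;              /* one page more */

        printf("truncated to u_int: %u\n", (unsigned int)npages);
        /*
         * With the count rounded up to 2^48, the low 32 bits are zero,
         * so a 32-bit min() clamps everything to 0 instead of the bound.
         */
        printf("32-bit min(%ld, 1000000) = %u\n", round, UMIN(round, 1000000L));
        printf("64-bit min = %ld\n", round < 1000000L ? round : 1000000L);
        return (0);
    }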
* o Introduce an argument, VM_ALLOC_WIRED, that requests vm_page_alloc() to return a wired page. (alc, 2002-07-18; 2 files, +15/-10)
  o Use VM_ALLOC_WIRED within Alpha's pmap_growkernel(). Also, because Alpha's pmap_growkernel() calls vm_page_alloc() from within a critical section, specify VM_ALLOC_INTERRUPT instead of VM_ALLOC_SYSTEM. (Only VM_ALLOC_INTERRUPT is implemented entirely with a spin mutex.)
  o Assert that the page queues mutex is held in vm_page_wire() on Alpha, just like the other platforms.
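The shape of the pmap_growkernel() change, as a sketch (the function and object names are illustrative; vm_page_alloc()'s three-argument form is per this commit's description):

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static vm_page_t
    growkernel_page(vm_object_t kobj, vm_pindex_t pindex)
    {
        /*
         * Allocate and wire in one call.  VM_ALLOC_INTERRUPT (rather
         * than VM_ALLOC_SYSTEM) is required here because the caller
         * holds a critical section, and only the interrupt path is
         * implemented entirely with a spin mutex.
         */
        return (vm_page_alloc(kobj, pindex,
            VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED));
    }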
* o Use vm_pageq_remove_nowakeup() and vm_pageq_enqueue() in vm_page_zero_idle() instead of partially duplicated implementations. (alc, 2002-07-16; 1 file, +2/-7)
  In particular, this change guarantees that the number of free pages in the free queue(s) matches the global free page count when Giant is released.
  Submitted by: peter (via his p4 "pmap" branch)
* o Create vm_contig_launder() to replace code that appears twice in contigmalloc1(). (alc, 2002-07-15; 1 file, +38/-56)
* o Lock page queue accesses by vm_page_wire() that aren't within a critical section. (alc, 2002-07-14; 1 file, +3/-0)
  o Assert that the page queues lock is held in vm_page_wire(), except on Alpha.
* o Lock page queue accesses by vm_page_wire(). (alc, 2002-07-14; 2 files, +4/-0)
* o Lock page queue accesses by vm_page_unmanage(). (alc, 2002-07-13; 2 files, +3/-0)
  o Assert that the page queues lock is held in vm_page_unmanage().
* o Complete the locking of page queue accesses by vm_page_unwire(). (alc, 2002-07-13; 2 files, +4/-5)
  o Assert that the page queues lock is held in vm_page_unwire().
  o Make vm_page_lock_queues() and vm_page_unlock_queues() visible to kernel loadable modules.
* o Lock some page queue accesses, in particular, those by vm_page_unwire(). (alc, 2002-07-13; 2 files, +8/-1)
* o Assert GIANT_REQUIRED on system maps in _vm_map_lock(), _vm_map_lock_read(), and _vm_map_trylock(). (alc, 2002-07-12; 2 files, +6/-9)
  Submitted by: tegge
  o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup(). (This clears the way for exec_map accesses to move outside of Giant. The exec_map is not a system map.)
  o Remove some premature MPSAFE comments.
  Reviewed by: tegge
* Re-enable the idle page-zeroing code. (dillon, 2002-07-12; 1 file, +0/-4)
  Remove all IPIs from the idle page-zeroing code as well as from the general page-zeroing code and use a lazy TLB page-invalidation scheme based on a callback made at the end of mi_switch. A number of people came up with this idea at the same time, so credit belongs to Peter, John, and Jake as well.
  Two-way SMP buildworld -j 5 tests (second run, after stabilization):
    2282.76 real  2515.17 user  704.22 sys   before peter's IPI commit
    2266.69 real  2467.50 user  633.77 sys   after peter's commit
    2232.80 real  2468.99 user  615.89 sys   after this commit
  Reviewed by: peter, jhb
  Approved by: peter
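The lazy scheme trades an immediate cross-CPU IPI for a per-CPU "stale" mark that is checked at the end of mi_switch(), when an invalidation is nearly free. A sketch of the idea only; every name below is illustrative, not the committed interface:

    #include <sys/param.h>

    /* Illustrative per-CPU flag: set when a CPU may hold a stale TLB
     * entry for the page-zeroing window. */
    static volatile int tlb_stale[MAXCPU];

    static void
    zero_page_done(void)
    {
        int i;

        /* Instead of IPIing every CPU now, just mark them all stale. */
        for (i = 0; i < MAXCPU; i++)
            tlb_stale[i] = 1;
    }

    /* Callback at the end of mi_switch(): flush lazily, on the next
     * context switch, where the invalidation costs almost nothing. */
    static void
    mi_switch_tlb_callback(int cpu)
    {
        if (tlb_stale[cpu]) {
            tlb_stale[cpu] = 0;
            /* invalidate the mapping here, e.g. a single invlpg */
        }
    }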
* Avoid a vm_page_lookup() (it uses a spinlock-protected hash); we can just use the object's memq for our nefarious purposes. (peter, 2002-07-12; 1 file, +5/-2)
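The memq walk replaces one hash lookup (and spin-lock acquisition) per page with a single linear pass over the object's resident pages. A sketch of the idiom, assuming the vm_page listq linkage of that era:

    #include <sys/queue.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    static void
    visit_resident_pages(vm_object_t object)
    {
        vm_page_t m;

        /* Every resident page of the object is on its memq, so this
         * replaces a vm_page_lookup() per index with one list walk. */
        TAILQ_FOREACH(m, &object->memq, listq) {
            /* ... operate on m; m->pindex gives its page index ... */
        }
    }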
* o Lock some (unfortunately, not yet all) accesses to the page queues. (alc, 2002-07-12; 1 file, +2/-2)
* o Lock accesses to the page queues. (alc, 2002-07-12; 1 file, +3/-2)
* o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()alc2002-07-113-6/+13
| | | | | | | and kmem_free_wakeup(). Previously, kmem_free_wakeup() always called wakeup(). In general, no one was sleeping. o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c for use in vm_kern.c.
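This is the classic conditional-wakeup handshake: sleepers set a flag under the map lock, and the free path calls wakeup() only if the flag is set. A sketch with illustrative names (the real flag lives in the vm_map, and the sleep side goes through vm_map_unlock_and_wait()):

    #include <sys/param.h>
    #include <sys/systm.h>

    static int map_needs_wakeup;    /* stand-in for the vm_map flag */

    static void
    alloc_wait(void *chan)
    {
        map_needs_wakeup = 1;   /* record the sleeper, under the map lock */
        /* ... release the map lock and sleep on chan ... */
    }

    static void
    free_wakeup(void *chan)
    {
        /* Previously an unconditional wakeup(); now skipped in the
         * common case where nobody is sleeping. */
        if (map_needs_wakeup) {
            map_needs_wakeup = 0;
            wakeup(chan);
        }
    }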
* o Lock accesses to the page queues in vm_object_terminate(). (alc, 2002-07-09; 1 file, +3/-1)
  o Eliminate some unnecessary 64-bit arithmetic in vm_object_split().
* vm_page_queue_free_mtx is a spin mutex, not a normal sleep mutex. (peter, 2002-07-08; 1 file, +4/-4)
  I do not know why this didn't panic my box, but I have most certainly been using it:
    peter@overcee[3:14pm]~src/sys/i386/i386-110> sysctl -a | grep zero
    vm.stats.misc.zero_page_count: 2235
    vm.stats.misc.cnt_prezero: 638951
    vm.idlezero_enable: 1
    vm.idlezero_maxrun: 16
  Submitted by: Tor.Egge@cvsup.no.freebsd.org
  Approved by: Tor's patches are never wrong. :-)
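The fix reduces to how the mutex is initialized: a lock taken from contexts that may not sleep must be created as a spin mutex. A sketch assuming a four-argument mtx_init() of roughly that vintage (the exact signature is an assumption):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx vm_page_queue_free_mtx;

    static void
    free_queue_mtx_init(void)
    {
        /* MTX_DEF would create a sleep mutex; the free queue lock is
         * taken in contexts that may not sleep, so it must spin. */
        mtx_init(&vm_page_queue_free_mtx, "vm page free queue", NULL,
            MTX_SPIN);
    }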
* Turn the zeroidle process off for SMP systems; there is still a possible TLB problem when bouncing from one CPU to another (the original CPU will not have purged its TLB if it simply went idle). (peter, 2002-07-08; 1 file, +4/-0)
  Pointed out by: Tor.Egge@cvsup.no.freebsd.org
  Approved by: Tor is never wrong. :-)
* Add a special page zero entry point intended to be called via the single-threaded VM pagezero kthread outside of Giant. (peter, 2002-07-08; 2 files, +10/-7)
  For some platforms, this is really easy since it can just use the direct mapped region. For others, IPI sending is involved or there are other issues, so grab Giant when needed.
  We still have preemption issues to deal with, but Alan Cox has an interesting suggestion on how to minimize the problem on x86. Use Luigi's hack for preserving the (lack of) priority.
  Turn the idle zeroing back on since it can now actually do something useful outside of Giant in many cases.
* Avoid vm_page_lookup() [grabs a spinlock] and just process the upage object memq instead. (peter, 2002-07-08; 1 file, +9/-14)
  Suggested by: alc
* Collect all the (now equivalent) pmap_new_proc/pmap_dispose_proc/pmap_swapin_proc/pmap_swapout_proc functions from the MD pmap code and use a single equivalent MI version. (peter, 2002-07-07; 3 files, +154/-11)
  There are other cleanups needed still.
  While here, use the UMA zone hooks to keep a cache of preinitialized proc structures handy, just like the thread system does. This eliminates one dependency on 'struct proc' being persistent even after being freed. There are some comments about things that can be factored out into ctor/dtor functions if it is worth it. For now they are mostly just doing statistics to get a feel for how it is working.
* o Lock accesses to the free queue(s) in vm_page_zero_idle(). (alc, 2002-07-07; 1 file, +4/-0)
* o Traverse the object's memq rather than repeatedly calling vm_page_lookup() in vm_object_split(). (alc, 2002-07-07; 1 file, +2/-5)
* - Hold a lock on the vnode acquired from the file table across the call to vm_mmap() as well as the GETATTR etc. (jeff, 2002-07-06; 1 file, +14/-3)
  - If the handle is a vnode in vm_mmap(), assert that it is locked.
  - Wiggle Giant around a little to account for the extra vnode operation.
* Remove bogus vm_page_wakeup() in vm_page_cowfault() that will cause panics in the zero-copy send path if a process attempts to write to a page which is still in flight. (gallatin, 2002-07-05; 1 file, +0/-1)
  Reviewed by: ken
* Fix a lock order reversal in uma_zdestroy. The uma_mtx needs to be held across calls to zone_drain(). (jeff, 2002-07-05; 1 file, +4/-4)
  Noticed by: scottl
* o Lock accesses to the free page queues in contigmalloc1(). (alc, 2002-07-05; 1 file, +2/-0)
* Remove unnecessary includes. (jeff, 2002-07-05; 2 files, +0/-4)
* o Resurrect vm_page_lock_queues(), vm_page_unlock_queues(), and the free queue lock (revision 1.33 of vm/vm_page.c removed them). (alc, 2002-07-04; 2 files, +28/-5)
  o Make the free queue lock a spin lock because it's sometimes acquired inside of a critical section.
* A small cleanup. (julian, 2002-07-04; 1 file, +0/-1)
* Don't call the thread setup routines from here; they are already called when UMA calls thread_init(). (julian, 2002-07-04; 1 file, +0/-1)
* o Make the reservation of KVA space for kernel map entries a function of the KVA space's size in addition to the amount of physical memory, and reduce it by a factor of two. (alc, 2002-07-03; 1 file, +2/-1)
  Under the old formula, our reservation amounted to one kernel map entry per virtual page in the KVA space on a 4GB i386.
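To put a number on the old formula: one map entry per virtual page of KVA is a large reservation. A standalone back-of-the-envelope, assuming a 1GB i386 KVA and 4K pages (both figures are assumptions, not from the commit):

    #include <stdio.h>

    int
    main(void)
    {
        unsigned long kva = 1UL << 30;  /* assumed 1GB of KVA on i386 */
        unsigned long page = 4096;      /* 4K pages */

        /* One kernel map entry per virtual page of KVA: */
        printf("map entries reserved: %lu\n", kva / page);  /* 262144 */
        return (0);
    }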
* Actually use the fini callback. (jeff, 2002-07-03; 1 file, +1/-0)
  Pointy hat to: me :-(
  Noticed by: Julian
* - Use (OFF_TO_IDX(off) - pi) instead of (OFF_TO_IDX(off - IDX_TO_OFF(pi))). (robert, 2002-07-01; 1 file, +7/-5)
  - Reformat a comment.
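The two forms are algebraically equal whenever the subtraction does not underflow, since OFF_TO_IDX() and IDX_TO_OFF() are opposite shifts by PAGE_SHIFT; the rewritten form simply subtracts indices directly. A standalone check, with the macros reimplemented here as shifts:

    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define OFF_TO_IDX(x)   ((unsigned long)(x) >> PAGE_SHIFT)
    #define IDX_TO_OFF(i)   ((unsigned long)(i) << PAGE_SHIFT)

    int
    main(void)
    {
        unsigned long off = 0x123456789000UL;   /* a byte offset */
        unsigned long pi = 0x1000;              /* a page index */

        /* Same value either way when off >= IDX_TO_OFF(pi); the new
         * form skips the intermediate byte-offset subtraction. */
        printf("%lx %lx\n",
            OFF_TO_IDX(off) - pi,
            OFF_TO_IDX(off - IDX_TO_OFF(pi)));
        return (0);
    }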
* o Remove some long-dead code: dead since revision 1.41 of vm/vm_pager.c, 3+ years ago. (alc, 2002-07-01; 2 files, +0/-22)
  o Remove some unused prototypes.
* Change the type of `tscan' in vm_object_page_clean() to vm_pindex_t, as it stores an absolute page index that may not fit in a vm_offset_t. (iedowse, 2002-06-29; 1 file, +1/-1)
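The width problem is easy to see in isolation: on a machine whose vm_offset_t is 32 bits, an absolute page index into a very large object truncates. A standalone illustration, with local typedefs standing in for the kernel's:

    #include <stdio.h>
    #include <stdint.h>

    typedef uint32_t vm_offset_t_32;    /* i386-style vm_offset_t */
    typedef uint64_t vm_pindex_t_64;    /* vm_pindex_t */

    int
    main(void)
    {
        /* A page index within a very large object: fits a 64-bit
         * pindex, but truncates in a 32-bit offset type. */
        vm_pindex_t_64 idx = 1ULL << 33;

        printf("as vm_pindex_t: %llu\n", (unsigned long long)idx);
        printf("as 32-bit type: %u\n", (vm_offset_t_32)idx);   /* 0 */
        return (0);
    }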
* Part 1 of KSE-III: the ability to schedule multiple threads per process (on one CPU) by making ALL system calls optionally asynchronous. (julian, 2002-06-29; 5 files, +97/-55)
  To come: ia64 and PowerPC patches, patches for gdb, test program (in tools).
  Reviewed by: Almost everyone who counts (at various times: peter, jhb, matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still beta code and contains lots of debugging stuff. Expect slight instability in signals.