path: root/sys/i386/xen/pmap.c
Commit message  [author, date, files changed, lines -/+]
...
* Commit the support for removing cpumask_t and replacing it directly with  [attilio, 2011-05-05, 1 file, -26/+28]
  cpuset_t objects.
  That is going to offer the underlying support for a simple bump of MAXCPU
  and then support for a number of CPUs > 32 (the current limit).

  Right now, cpumask_t is an int, 32 bits on all our supported architectures.
  cpuset_t, on the other hand, is implemented as an array of longs and is
  easily extensible by definition.

  The architectures touched by this commit are the following:
  - amd64
  - i386
  - pc98
  - arm
  - ia64
  - XEN
  while the others are still missing.
  Userland is believed to be fully converted with the changes contained here.

  Some technical notes:
  - This commit may be considered an ABI nop for all the architectures
    different from amd64 and ia64 (and sparc64 in the future)
  - per-cpu members, which are now converted to cpuset_t, need to be accessed
    avoiding migration, because the size of cpuset_t should be considered
    unknown
  - the size of cpuset_t objects differs between kernel and userland (this is
    primarily done in order to leave some more space in userland to cope with
    KBI extensions). If you need to access a kernel cpuset_t from userland,
    please refer to the examples in this patch on how to do that correctly
    (kgdb may be a good source, for example).
  - Support for other architectures is going to be added soon
  - Only MAXCPU for amd64 is bumped now

  The patch has been tested by sbruno and Nicholas Esborn on Opteron
  4 x 12-core machines. More testing on big SMP is expected to come soon.
  pluknet tested the patch with his 8-way machines on both amd64 and i386.

  Tested by:  pluknet, sbruno, gianni, Nicholas Esborn
  Reviewed by:  jeff, jhb, sbruno
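  As a rough illustration of the type change described above: the old mask is
  a single integer (at most 32 CPUs), while a cpuset_t-style type is an array
  of longs whose capacity grows with MAXCPU. The EXAMPLE_* names below are
  made up for this sketch; the real definitions live in sys/_cpuset.h and
  sys/cpuset.h.

    #include <limits.h>

    #define EXAMPLE_MAXCPU  64
    #define NCPUBITS        (sizeof(unsigned long) * CHAR_BIT)
    #define NCPUWORDS       ((EXAMPLE_MAXCPU + NCPUBITS - 1) / NCPUBITS)

    typedef unsigned int old_cpumask_t;         /* hard 32-CPU ceiling */

    typedef struct {
            unsigned long bits[NCPUWORDS];      /* extensible by definition */
    } example_cpuset_t;

    #define EXAMPLE_CPU_SET(n, s) \
            ((s)->bits[(n) / NCPUBITS] |= 1UL << ((n) % NCPUBITS))
    #define EXAMPLE_CPU_ISSET(n, s) \
            (((s)->bits[(n) / NCPUBITS] & (1UL << ((n) % NCPUBITS))) != 0)

  Because the array length depends on MAXCPU, code outside the kernel cannot
  assume the kernel's sizeof(cpuset_t), which is why the commit warns about
  per-cpu members and the kernel/userland size difference.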
* - Merge a fixup for the last lazyfix removal  [attilio, 2011-05-02, 1 file, -1/+0]
  - Sync xen with i386 regarding ipi_send_cpu() usage
* Remove the support for lazy cr3 switching from i386.  [attilio, 2011-04-30, 1 file, -92/+0]
  amd64 has already had this micro-optimization removed.

  Submitted by:  kib
* Make MSGBUF_SIZE kernel option a loader tunable kern.msgbufsize.  [pluknet, 2011-01-21, 1 file, -2/+1]
  Submitted by:  perryh pluto.rain.com (previous version)
  Reviewed by:  jhb
  Approved by:  kib (mentor)
  Tested by:  universe
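  A minimal sketch of how such a loader tunable is typically consumed,
  assuming a hypothetical msgbufsize variable and SYSINIT hook; this is not
  the actual diff to this file.

    #include <sys/param.h>
    #include <sys/kernel.h>

    static int msgbufsize = 65536;              /* built-in default */

    static void
    example_msgbuf_tunable(void *arg __unused)
    {
            /* Overridden by kern.msgbufsize from loader.conf, replacing the
             * old compile-time MSGBUF_SIZE option. */
            TUNABLE_INT_FETCH("kern.msgbufsize", &msgbufsize);
    }
    SYSINIT(example_msgbuf, SI_SUB_TUNABLES, SI_ORDER_ANY,
        example_msgbuf_tunable, NULL);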
* Add hamfisted locking to the Xen/PV pmap code: Only allow one thread to  [cperciva, 2011-01-04, 1 file, -0/+35]
  be in {pmap_pinit, pmap_copy, pmap_release} at a time. This reduces the
  rate of panics when running 'make index' from ~0.6/hour to ~0.02/hour
  (p < 10^-30).

  At a later date this locking will be removed, and for this reason, it is
  wrapped in #ifdef HAMFISTED_LOCKING; this temporary hack is being put in
  place with the intention of shipping somewhat-stable Xen bits in FreeBSD
  8.2-RELEASE.

  PR:  kern/153672
  MFC after:  3 days
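  A sketch of the serialization described above, assuming a hypothetical
  global mutex name; the committed diff may place the lock/unlock calls
  differently.

    #ifdef HAMFISTED_LOCKING
    static struct mtx pmap_hamfisted_lock;
    MTX_SYSINIT(hamfisted, &pmap_hamfisted_lock, "hamfisted lock", MTX_DEF);
    #endif

    int
    pmap_pinit(pmap_t pmap)
    {
    #ifdef HAMFISTED_LOCKING
            /* Only one thread in pmap_pinit/pmap_copy/pmap_release. */
            mtx_lock(&pmap_hamfisted_lock);
    #endif
            /* ... existing pmap_pinit() body ... */
    #ifdef HAMFISTED_LOCKING
            mtx_unlock(&pmap_hamfisted_lock);
    #endif
            return (1);
    }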
* Make i386_set_ldt work on i386/XEN, step 1/5.  [cperciva, 2010-12-31, 1 file, -0/+4]
  Lock the vm page queue mutex around calls to pte_store. As with many other
  uses of the vm page queue mutex in i386/xen/pmap.c, this is bogus and needs
  to be replaced at some future date by a spin lock dedicated to protecting
  the queue of pending xen page mapping hypervisor calls. (But for now, bogus
  locking is better than a panic.)

  MFC after:  3 days
* Remove a "not strictly correct" (and panic-inducing) workaround for a bugcperciva2010-12-281-15/+2
| | | | | | | which doesn't seem to exist. PR: kern/141328 MFC after: 3 days
* Lock the vm page queue mutex in pmap_pte_release around the call  [cperciva, 2010-12-26, 1 file, -0/+2]
  to PMAP_SET_VA; this fixes a mutex-not-held panic when a process which
  called mlock(2) exits, and parallels a change made in pmap_pte 10 months
  ago (svn r204160).

  Note: The locking in this code is utterly broken. We should not be using
  the VM page queue mutex to protect the queue of pending Xen page mapping
  hypervisor calls. Even if it made sense to do so, this commit and r204160
  introduce LORs between the vm page queue mutex and PMAP2mutex. (However, a
  possible deadlock is better than a guaranteed panic, and this change will
  hopefully make life easier for whoever fixes the Xen pmap locking in the
  future.)

  PR:  kern/140313
  MFC after:  3 days
* Revert r215819 and fix the bug properly. In pmap_qremove, paging table  [cperciva, 2010-11-25, 1 file, -12/+1]
  updates were being queued by pmap_kremove, but the queue wasn't being
  flushed; as a result, the updates didn't happen until *after* the call to
  pmap_invalidate_range, and old entries could stick around in the TLB.
  Adding a PT_UPDATES_FLUSH() call immediately before pmap_invalidate_range
  ensures that after the invalidation the TLB will be repopulated with the
  correct new entries.

  Thanks to:  kib, avg, alc
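  A sketch of the corrected ordering, reconstructed from the description
  above (the surrounding pmap_qremove() body is approximate): under Xen, PTE
  writes are queued for the hypervisor, so the queue must be flushed before
  the TLB invalidation.

    void
    pmap_qremove(vm_offset_t sva, int count)
    {
            vm_offset_t va;

            for (va = sva; count-- > 0; va += PAGE_SIZE)
                    pmap_kremove(va);       /* queues PT updates only */
            PT_UPDATES_FLUSH();             /* push updates to the hypervisor */
            /* Only now is it safe to invalidate: the TLB will be refilled
             * from page tables that already contain the new entries. */
            pmap_invalidate_range(kernel_pmap, sva, va);
    }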
* Work around paging bug. Somehow we seem to be ending up with entries in  [cperciva, 2010-11-25, 1 file, -0/+12]
  the TLB which don't correspond to ptes with PG_V set; prior to this commit
  I'm sometimes getting the wrong data when pages are loaded into the buffer
  cache (they're being loaded, but the missing TLB invalidation is causing
  the wrong data to be visible).
* Unifdef XEN. This file is only compiled with the XEN kernel option set,  [cperciva, 2010-11-20, 1 file, -103/+2]
  and the !XEN bits get in the way of understanding the code.
* Add VTOM(va) macro as xpmap_ptom(VTOP(va)) to convert to machine addresses.  [cperciva, 2010-11-20, 1 file, -15/+15]
  Clean up the code by converting xpmap_ptom(VTOP(...)) to VTOM(...) and
  converting xpmap_ptom(VM_PAGE_TO_PHYS(...)) to VM_PAGE_TO_MACH(...).

  In a few places we take advantage of the fact that xpmap_ptom can commute
  with setting PG_* flags.

  This commit should have no net effect save to improve the readability of
  this code.
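  The conversions described above, shown as macro definitions; this is a
  sketch of the definitions rather than a verbatim copy of the header.

    #define VTOM(va)                xpmap_ptom(VTOP(va))
    #define VM_PAGE_TO_MACH(m)      xpmap_ptom(VM_PAGE_TO_PHYS(m))

    /* Before:  pte_store(pte, xpmap_ptom(VM_PAGE_TO_PHYS(m)) | PG_V);
     * After:   pte_store(pte, VM_PAGE_TO_MACH(m) | PG_V);             */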
* Make pmap_release consistent with pmap_pinit with respect to unpinning  [cperciva, 2010-11-19, 1 file, -0/+5]
  pages. The pinning of NPGPTD pages is #if 0ed out in pmap_pinit (I'm not
  quite sure why...) and this commit adds a corresponding #if 0 in
  pmap_release to avoid unpinning those pages.

  Some versions of Xen seem to silently ignore requests to unpin pages which
  were never pinned in the first place, but some return an error (causing
  FreeBSD to panic) prior to this commit.
* Make pmap_release match pmap_pinit by invoking pmap_qremove(pmap->pm_pdpt)  [cperciva, 2010-11-18, 1 file, -0/+3]
  to match pmap_pinit's pmap_qenter(pmap->pm_pdpt) call in the case of PAE.
* Don't KASSERT in pmap_release that  [cperciva, 2010-11-18, 1 file, -2/+3]
  xpmap_ptom(VM_PAGE_TO_PHYS(m)) == (pmap->pm_pdpt[i] & PG_FRAME)
  for i = NPGPTD, since pmap->pm_pdpt[i] is only initialized for
  0 <= i < NPGPTD.

  This fixes an inevitable panic with XEN && PAE && INVARIANTS when
  pmap_release is called (e.g., when /sbin/init is launched).
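  A sketch of the corrected check, assuming a hypothetical loop over the
  NPGPTD page-directory pages plus the PDPT page (the ptdpg array and loop
  shape are illustrative); only the guard on the assertion is the point.

    for (i = 0; i < NPGPTD + 1; i++) {      /* PTD pages plus the PDPT page */
            m = ptdpg[i];                   /* hypothetical local array */
            /* pm_pdpt[] is only initialized for 0 <= i < NPGPTD, so do not
             * assert anything about pm_pdpt[NPGPTD]. */
            if (i < NPGPTD)
                    KASSERT(VM_PAGE_TO_MACH(m) ==
                        (pmap->pm_pdpt[i] & PG_FRAME),
                        ("pmap_release: got wrong ptd page"));
            /* ... unpin and free the page ... */
    }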
* Update various places that store or manipulate CPU masks to use cpumask_t  [jhb, 2010-08-11, 1 file, -6/+3]
  instead of int or u_int. Since cpumask_t is currently u_int on all
  platforms this should just be a cosmetic change.
* Relax one of the new assertions in pmap_enter() a little. Specifically,  [alc, 2010-06-11, 1 file, -1/+2]
  allow pmap_enter() to be performed on an unmanaged page that doesn't have
  VPO_BUSY set. Having VPO_BUSY set really only matters for managed pages.
  (See, for example, pmap_remove_write().)
* Reduce the scope of the page queues lock and the number of  [alc, 2010-06-10, 1 file, -9/+13]
  PG_REFERENCED changes in vm_pageout_object_deactivate_pages(). Simplify
  this function's inner loop using TAILQ_FOREACH(), and shorten some of its
  overly long lines. Update a stale comment.

  Assert that PG_REFERENCED may be cleared only if the object containing the
  page is locked. Add a comment documenting this.

  Assert that a caller to vm_page_requeue() holds the page queues lock, and
  assert that the page is on a page queue.

  Push down the page queues lock into pmap_ts_referenced() and
  pmap_page_exists_quick(). (As of now, there are no longer any pmap
  functions that expect to be called with the page queues lock held.)

  Neither pmap_ts_referenced() nor pmap_page_exists_quick() should ever be
  passed an unmanaged page. Assert this rather than returning "0" and
  "FALSE" respectively.

  ARM:
  Simplify pmap_page_exists_quick() by switching to TAILQ_FOREACH().

  Push down the page queues lock inside of pmap_clearbit(), simplifying
  pmap_clear_modify(), pmap_clear_reference(), and pmap_remove_write().
  Additionally, this allows for avoiding the acquisition of the page queues
  lock in some cases.

  PowerPC/AIM:
  moea*_page_exists_quick() and moea*_page_wired_mappings() will never be
  called before pmap initialization is complete. Therefore, the check for
  moea_initialized can be eliminated.

  Push down the page queues lock inside of moea*_clear_bit(), simplifying
  moea*_clear_modify() and moea*_clear_reference().

  The last parameter to moea*_clear_bit() is never used. Eliminate it.

  PowerPC/BookE:
  Simplify mmu_booke_page_exists_quick()'s control flow.

  Reviewed by:  kib@
* Eliminate a stale comment.  [alc, 2010-05-31, 1 file, -4/+0]
* Simplify the inner loop of pmap_collect(): While iterating over the page's  [alc, 2010-05-30, 1 file, -2/+2]
  pv list, there is no point in checking whether or not the pv list is empty.
  Instead, wait until the loop completes.
* Merge various changes from i386/i386/pmap.c:  [alc, 2010-05-30, 1 file, -72/+57]
  The remaining, unmerged portions of r175404:

  Retire PMAP_DIAGNOSTIC. Any useful diagnostics that were conditionally
  compiled under PMAP_DIAGNOSTIC are now KASSERT()s. (Note: The kernel option
  DIAGNOSTIC still disables inlining of certain pmap functions.)

  Eliminate dead code from pmap_enter(). This code implemented an assertion.
  On i386, an equivalent check is already implemented. However, on amd64, a
  small change is required to implement an equivalent check.

  Eliminate \n from a nearby panic string.

  Use KASSERT() to reimplement pmap_copy()'s two assertions.

  Merge portions of r177659:

  To date, we have assumed that the TLB will only set the PG_M bit in a PTE
  if that PTE has the PG_RW bit set. However, this assumption does not hold
  on recent processors from Intel. For example, consider a PTE that has the
  PG_RW bit set but the PG_M bit clear. Suppose this PTE is cached in the TLB
  and later the PG_RW bit is cleared in the PTE, but the corresponding TLB
  entry is not (yet) invalidated. Historically, upon a write access using
  this (stale) TLB entry, the TLB would observe that the PG_RW bit had been
  cleared and initiate a page fault, aborting the setting of the PG_M bit in
  the PTE. Now, however, P4- and Core2-family processors will set the PG_M
  bit before observing that the PG_RW bit is clear and initiating a page
  fault. In other words, the write does not occur but the PG_M bit is still
  set.

  The real impact of this difference is not that great. Specifically, we
  should no longer assert that any PTE with the PG_M bit set must also have
  the PG_RW bit set, and we should ignore the state of the PG_M bit unless
  the PG_RW bit is set.

  r208609:

  Defer freeing any page table pages in pmap_remove_all() until after the
  page queues lock is released. This may reduce the amount of time that the
  page queues lock is held by pmap_remove_all().

  r208645:

  When I pushed down the page queues lock into pmap_is_modified(), I created
  an ordering dependence: A pmap operation that clears PG_WRITEABLE and calls
  vm_page_dirty() must perform the call first. Otherwise, pmap_is_modified()
  could return FALSE without acquiring the page queues lock because the page
  is not (currently) writeable, and the caller to pmap_is_modified() might
  believe that the page's dirty field is clear because it has not seen the
  effect of the vm_page_dirty() call.

  When I pushed down the page queues lock into pmap_is_modified(), I
  overlooked one place where this ordering dependence is violated:
  pmap_enter(). In a rare situation pmap_enter() can be called to replace a
  dirty mapping to one page with a mapping to another page. (I say rare
  because replacements generally occur as a result of a copy-on-write fault,
  and so the old page is not dirty.) This change delays clearing PG_WRITEABLE
  until after vm_page_dirty() has been called.

  Fixing the ordering dependency also makes it easy to introduce a small
  optimization: When pmap_enter() used to replace a mapping to one page with
  a mapping to another page, it freed the pv entry for the first mapping and
  later called the pv entry allocator for the new mapping. Now, pmap_enter()
  attempts to recycle the old pv entry, saving two calls to the pv entry
  allocator.

  There is no point in setting PG_WRITEABLE on unmanaged pages, so don't.
  Update a comment to reflect this.

  Tidy up the variable declarations at the start of pmap_enter().
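  A sketch of the resulting rule for interpreting the modified bit (the macro
  name is illustrative; a similar helper exists in the i386/amd64 pmaps): the
  dirty test must require PG_RW as well as PG_M.

    #define pmap_pte_dirty(pte) \
            (((pte) & (PG_M | PG_RW)) == (PG_M | PG_RW))

    /* Only transfer the dirty bit to the vm_page when the mapping was
     * actually writeable; ignore a stray PG_M on a read-only PTE. */
    if (pmap_pte_dirty(oldpte))
            vm_page_dirty(m);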
* Push down page queues lock acquisition in pmap_enter_object() and  [alc, 2010-05-26, 1 file, -4/+6]
  pmap_is_referenced(). Eliminate the corresponding page queues lock
  acquisitions from vm_map_pmap_enter() and mincore(), respectively. In
  mincore(), this allows some additional cases to complete without ever
  acquiring the page queues lock.

  Assert that the page is managed in pmap_is_referenced().

  On powerpc/aim, push down the page queues lock acquisition from
  moea*_is_modified() and moea*_is_referenced() into moea*_query_bit().
  Again, this will allow some additional cases to complete without ever
  acquiring the page queues lock.

  Reorder a few statements in vm_page_dontneed() so that a race can't lead to
  an old reference persisting. This scenario is described in detail by a
  comment.

  Correct a spelling error in vm_page_dontneed().

  Assert that the object is locked in vm_page_clear_dirty(), and restrict the
  page queues lock assertion to just those cases in which the page is
  currently writeable.

  Add object locking to vnode_pager_generic_putpages(). This was the one and
  only place where vm_page_clear_dirty() was being called without the object
  being locked.

  Eliminate an unnecessary vm_page_lock() around vnode_pager_setsize()'s call
  to vm_page_clear_dirty().

  Change vnode_pager_generic_putpages() to the modern-style of function
  definition. Also, change the name of one of the parameters to follow
  virtual memory system naming conventions.

  Reviewed by:  kib
* Roughly half of a typical pmap_mincore() implementation is machine-  [alc, 2010-05-24, 1 file, -54/+53]
  independent code. Move this code into mincore(), and eliminate the page
  queues lock from pmap_mincore().

  Push down the page queues lock into pmap_clear_modify(),
  pmap_clear_reference(), and pmap_is_modified(). Assert that these functions
  are never passed an unmanaged page.

  Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m: Contrary to
  what the comment says, pmap_mincore() is not simply an optimization.
  Without a complete pmap_mincore() implementation, mincore() cannot return
  either MINCORE_MODIFIED or MINCORE_REFERENCED because only the pmap can
  provide this information.

  Eliminate the page queues lock from vfs_setdirty_locked_object(),
  vm_pageout_clean(), vm_object_page_collect_flush(), and
  vm_object_page_clean(). Generally speaking, these are all accesses to the
  page's dirty field, which are synchronized by the containing vm object's
  lock.

  Reduce the scope of the page queues lock in vm_object_madvise() and
  vm_page_dontneed().

  Reviewed by:  kib (an earlier version)
* On entry to pmap_enter(), assert that the page is busy. While I'm  [alc, 2010-05-16, 1 file, -7/+16]
  here, make the style of assertion used by pmap_enter() consistent across
  all architectures.

  On entry to pmap_remove_write(), assert that the page is neither unmanaged
  nor fictitious, since we cannot remove write access to either kind of page.

  With the push down of the page queues lock, pmap_remove_write() cannot
  condition its behavior on the state of the PG_WRITEABLE flag if the page is
  busy. Assert that the object containing the page is locked. This allows us
  to know that the page will neither become busy nor will PG_WRITEABLE be set
  on it while pmap_remove_write() is running.

  Correct a long-standing bug in vm_page_cowsetup(). We cannot possibly do
  copy-on-write-based zero-copy transmit on unmanaged or fictitious pages, so
  don't even try. Previously, the call to pmap_remove_write() would have
  failed silently.
* Push down the page queues into vm_page_cache(), vm_page_try_to_cache(), and  [alc, 2010-05-08, 1 file, -20/+17]
  vm_page_try_to_free(). Consequently, push down the page queues lock into
  pmap_enter_quick(), pmap_page_wired_mappings(), pmap_remove_all(), and
  pmap_remove_write().

  Push down the page queues lock into Xen's pmap_page_is_mapped(). (I
  overlooked the Xen pmap in r207702.)

  Switch to a per-processor counter for the total number of pages cached.
* merge 194209 into the i386/xen pmap  [kmacy, 2010-04-30, 1 file, -46/+47]
  requested by: alc@
* On Alan's advice, rather than do a wholesale conversion on a single  [kmacy, 2010-04-30, 1 file, -2/+9]
  architecture from page queue lock to a hashed array of page locks (based on
  a patch by Jeff Roberson), I've implemented page lock support in the MI
  code and have only moved vm_page's hold_count out from under page queue
  mutex to page lock. This changes pmap_extract_and_hold on all pmaps.

  Supported by:  Bitgravity Inc.
  Discussed with:  alc, jeffr, and kib
* MFi386 r207205  [alc, 2010-04-27, 1 file, -13/+7]
  Clearing a page table entry's accessed bit (PG_A) and setting the page's
  PG_REFERENCED flag in pmap_protect() can't really be justified, so don't
  do it.
* Resurrect pmap_is_referenced() and use it in mincore(). Essentially,  [alc, 2010-04-24, 1 file, -3/+29]
  pmap_ts_referenced() is not always appropriate for checking whether or not
  pages have been referenced because it clears any reference bits that it
  encounters. For example, in mincore(), clearing the reference bits has two
  negative consequences.

  First, it throws off the activity count calculations performed by the page
  daemon. Specifically, a page on which mincore() has called
  pmap_ts_referenced() looks less active to the page daemon than it should.
  Consequently, the page could be deactivated prematurely by the page daemon.
  Arguably, this problem could be fixed by having mincore() duplicate the
  activity count calculation on the page. However, there is a second problem
  for which that is not a solution.

  In order to clear a reference on a 4KB page, it may be necessary to demote
  a 2/4MB page mapping. Thus, a mincore() by one process can have the side
  effect of demoting a superpage mapping within another process!
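  A sketch of the distinction being drawn above, using hypothetical helpers
  (any_mapping_has_bit, count_and_clear_bit) in place of the real pv-list
  walk; the function bodies are illustrative, not the committed code.

    boolean_t
    pmap_is_referenced(vm_page_t m)
    {
            /* Query only: report PG_A without clearing it, so the page
             * daemon's activity accounting is left undisturbed. */
            return (any_mapping_has_bit(m, PG_A));
    }

    int
    pmap_ts_referenced(vm_page_t m)
    {
            /* Test-and-clear: counts referenced mappings but clears PG_A
             * (and may demote a superpage to reach a 4KB PTE), which is why
             * mincore() should not use it. */
            return (count_and_clear_bit(m, PG_A));
    }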
* - fix bootstrap for variable KVA_PAGES  [kmacy, 2010-02-21, 1 file, -4/+5]
  - remove unused CADDR1
  - hold lock across page table update

  MFC after:  3 days
* Allow the pmap code to be built with GCC from FreeBSD 7 again.  [ed, 2010-02-18, 1 file, -0/+4]
  This patch basically gives us the best of both worlds. Instead of forcing
  the compiler to emulate GNU-style inline semantics even though we're using
  ISO C99, it will only use GNU-style inlining when the compiler is
  configured that way (__GNUC_GNU_INLINE__).

  Tested by:  jhb
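  A sketch of the conditional inlining choice, following the pmap's
  PMAP_INLINE convention; the exact committed definitions may differ.

    #ifdef __GNUC_GNU_INLINE__
    /* Compiler is configured for GNU-style inlining: request it per
     * function via the gnu_inline attribute. */
    #define PMAP_INLINE     __attribute__((__gnu_inline__)) inline
    #else
    /* Otherwise rely on ISO C99 inline semantics. */
    #define PMAP_INLINE     extern inline
    #endif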
* Recommit r193732:  [ed, 2010-01-19, 1 file, -1/+1]
  Remove __gnu89_inline.

  Now that we use C99 almost everywhere, just use C99-style in the pmap code.
  Since the pmap code is the only consumer of __gnu89_inline, remove it from
  cdefs.h as well. Because the flag was only introduced 17 months ago, I
  don't expect any problems.

  Reviewed by:  alc

  It was backed out, because it prevented us from building kernels using a
  7.x compiler. Now that most people use 8.x, there is nothing that holds us
  back. Even if people run 7.x, they should be able to build a kernel if they
  run `make kernel-toolchain' or `make buildworld' first.
* Unbreak the XEN build after r201751.  [bz, 2010-01-08, 1 file, -0/+4]
* Make pmap_set_pg() static.  [alc, 2010-01-07, 1 file, -1/+2]
* - revert pmap_kenter_temporary to taking a physical address  [kmacy, 2009-12-10, 1 file, -1/+2]
  - make minidump work
* fixup kernel core dumps on paravirtual guests  [kmacy, 2009-11-24, 1 file, -1/+1]
* reflect that pg_ps_enabled is a tunable, not just a read-only sysctl  [avg, 2009-11-11, 1 file, -1/+1]
  Nod from:  jhb
* o Introduce vm_sync_icache() for making the I-cache coherent with  [marcel, 2009-10-21, 1 file, -0/+5]
    the memory or D-cache, depending on the semantics of the platform.
    vm_sync_icache() is basically a wrapper around pmap_sync_icache(), that
    translates the vm_map_t argument to pmap_t.
  o Introduce pmap_sync_icache() to all PMAP implementations. For powerpc it
    replaces the pmap_page_executable() function, added to solve the I-cache
    problem in uiomove_fromphys().
  o In proc_rwmem() call vm_sync_icache() when writing to a page that has
    execute permissions. This assures that when breakpoints are written, the
    I-cache will be coherent and the process will actually hit the
    breakpoint.
  o This also fixes the Book-E PMAP implementation that was missing necessary
    locking while trying to deal with the I-cache coherency in pmap_enter()
    (read: mmu_booke_enter_locked).

  The key property of this change is that the I-cache is made coherent
  *after* writes have been done. Doing it in the PMAP layer when adding or
  changing a mapping means that the I-cache is made coherent *before* any
  writes happen. The difference is key when the I-cache prefetches.
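  A sketch of the wrapper described in the first bullet; it is believed to
  match the shape of the committed function, but treat it as an
  approximation.

    void
    vm_sync_icache(vm_map_t map, vm_offset_t va, vm_size_t sz)
    {
            /* Translate the map to its pmap and let the MD code make the
             * I-cache coherent for the range just written. */
            pmap_sync_icache(map->pmap, va, sz);
    }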
* Consolidate CPUID to CPU family/model macros for amd64 and i386 to reduce  [jkim, 2009-09-10, 1 file, -1/+1]
  unnecessary #ifdef's for shared code between them.
* As was done in r196643 for i386 and amd64, swap the start/end virtual  [kib, 2009-09-09, 1 file, -2/+2]
  addresses in pmap_invalidate_cache_range().

  Reported by:  Vincent Hoffman <vince unsane co uk>
  Reviewed by:  jhb
  MFC after:  3 days
* Delete whitespace not in i386/pmap.c  [adrian, 2009-09-01, 1 file, -1/+0]
* Migrate to use cpuset_t.  [adrian, 2009-09-01, 1 file, -5/+4]
* Merge in the pat_works work from sys/i386/i386/pmap.c - primarily to reduce  [adrian, 2009-09-01, 1 file, -65/+74]
  diff size.
* Fix broken build.  [adrian, 2009-09-01, 1 file, -0/+1]
* Shuffle pagezero() into the same location as in sys/i386/i386/pmap.c.  [adrian, 2009-08-31, 1 file, -16/+16]
* Fix XEN build breakage, by implementing pmap_invalidate_cache_range()  [kib, 2009-07-29, 1 file, -16/+88]
  and using it when appropriate. Merge analogue of the r195836 optimization
  to XEN.

  Approved by:  re (kensmith)
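  A sketch of what an x86 pmap_invalidate_cache_range() of that era typically
  does, with illustrative thresholds (not the committed code): CLFLUSH each
  line in [sva, eva), or fall back to WBINVD when CLFLUSH is unavailable or
  the range is large.

    void
    pmap_invalidate_cache_range(vm_offset_t sva, vm_offset_t eva)
    {
            vm_offset_t va;

            if ((cpu_feature & CPUID_CLFSH) == 0 ||
                eva - sva >= 2 * 1024 * 1024) {
                    wbinvd();               /* flush the whole cache */
                    return;
            }
            mfence();                       /* order against earlier stores */
            for (va = sva & ~(vm_offset_t)(cpu_clflush_line_size - 1);
                va < eva; va += cpu_clflush_line_size)
                    clflush(va);
            mfence();                       /* CLFLUSH is weakly ordered */
    }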
* Add a new type of VM object: OBJT_SG. An OBJT_SG object is very similar to  [jhb, 2009-07-24, 1 file, -1/+1]
  a device pager (OBJT_DEVICE) object in that it uses fictitious pages to
  provide aliases to other memory addresses. The primary difference is that
  it uses an sglist(9) to determine the physical addresses for a given offset
  into the object instead of invoking the d_mmap() method in a device driver.

  Reviewed by:  alc
  Approved by:  re (kensmith)
  MFC after:  2 weeks
* Change the handling of fictitious pages by pmap_page_set_memattr() on  [alc, 2009-07-19, 1 file, -0/+19]
  amd64 and i386. Essentially, fictitious pages provide a mechanism for
  creating aliases for either normal or device-backed pages. Therefore,
  pmap_page_set_memattr() on a fictitious page needn't update the direct map
  or flush the cache. Such actions are the responsibility of the "primary"
  instance of the page or the device driver that "owns" the physical address.
  For example, these actions are already performed by pmap_mapdev().

  The device pager needn't restore the memory attributes on a fictitious page
  before releasing it. It's now pointless.

  Add pmap_page_set_memattr() to the Xen pmap.

  Approved by:  re (kib)
* Add support to the virtual memory system for configuring machine-  [alc, 2009-07-12, 1 file, -1/+2]
  dependent memory attributes:

  Rename vm_cache_mode_t to vm_memattr_t. The new name reflects the fact that
  there are machine-dependent memory attributes that have nothing to do with
  controlling the cache's behavior.

  Introduce vm_object_set_memattr() for setting the default memory attributes
  that will be given to an object's pages.

  Introduce and use pmap_page_{get,set}_memattr() for getting and setting a
  page's machine-dependent memory attributes. Add full support for these
  functions on amd64 and i386 and stubs for them on the other architectures.
  The function pmap_page_set_memattr() is also responsible for any other
  machine-dependent aspects of changing a page's memory attributes, such as
  flushing the cache or updating the direct map. The uses include
  kmem_alloc_contig(), vm_page_alloc(), and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes for the
  real or fictitious page according to the object's default memory
  attributes.

  Update the various pmap functions on amd64 and i386 that map pages to
  incorporate each page's memory attributes in the mapping.

  Notes: (1) Inherent to this design are safety features that prevent the
  specification of inconsistent memory attributes by different mappings on
  amd64 and i386. In addition, the device pager provides a warning when a
  device driver creates a fictitious page with memory attributes that are
  inconsistent with the real page that the fictitious page is an alias for.
  (2) Storing the machine-dependent memory attributes for amd64 and i386 as a
  dedicated "int" in "struct md_page" represents a compromise between space
  efficiency and the ease of MFCing these changes to RELENG_7.

  In collaboration with:  jhb
  Approved by:  re (kib)
* PAE adds another level to the i386 page table. This level is a small  [alc, 2009-07-05, 1 file, -6/+5]
  4-entry table that must be located within the first 4GB of RAM. This
  requirement is met by defining an UMA zone with a custom back-end allocator
  function. This revision makes two changes to this back-end allocator
  function:

  (1) It replaces the use of contigmalloc() with the use of
  kmem_alloc_contig(). This eliminates "double accounting", i.e., accounting
  by both the UMA zone and malloc tags. (I made the same change for the same
  reason to the zones supporting jumbo frames a week ago.)

  (2) It passes through the "wait" parameter, i.e., M_WAITOK, M_ZERO, etc. to
  kmem_alloc_contig() rather than ignoring it. pmap_init() calls uma_zalloc()
  with both M_WAITOK and M_ZERO. At the moment, this is harmless only because
  the default behavior of contigmalloc()/kmem_alloc_contig() is to wait and
  because pmap_init() doesn't really depend on the memory being zeroed.

  The back-end allocator function in the Xen pmap is dead code. I am changing
  it nonetheless because I don't want to leave any "bad examples" in the
  source tree for someone to copy at a later date.

  Approved by:  re (kib)
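  A sketch of the two changes, using the old-style uma_alloc hook signature
  and an approximate kmem_alloc_contig() argument list for that era (the
  function name, flag value, and exact arguments are illustrative): the
  low/high bounds keep the PDPT below 4GB, and the caller's wait flags are
  now forwarded instead of being dropped.

    static void *
    pmap_pdpt_allocf(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
    {
            *flags = UMA_SLAB_KERNEL;       /* slab flag is illustrative */
            /* low = 0, high = 4GB - 1, natural alignment, no boundary;
             * "wait" (M_WAITOK, M_ZERO, ...) is passed through. */
            return ((void *)kmem_alloc_contig(kernel_map, bytes, wait,
                0, 0xffffffffULL, 1, 0));
    }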