path: root/sys/powerpc/aim/mmu_oea.c
Commit log, newest first; each entry ends with [author, date, files changed, lines -/+].
* Fix error in r233949. Synchronizing icaches on uncacheable pages turns out
  not to be a good idea, and of course the PV entry list for a page is never
  empty after the page has been mapped.
  [nwhitehorn, 2012-04-11, 1 file, -2/+4]
* Reduce the frequency that the PowerPC/AIM pmaps invalidate instruction
  caches, by invalidating kernel icaches only when needed and not flushing
  user caches for shared pages.
  Suggested by: kib
  MFC after: 2 weeks
  [nwhitehorn, 2012-04-06, 1 file, -35/+5]
* Use LIST_FOREACH_SAFE() instead of LIST_FOREACH() in pmap_remove(), since
  the point of this loop is to remove elements. This worked by accident
  before.
  MFC after: 2 days
  [nwhitehorn, 2012-03-14, 1 file, -2/+2]
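  A minimal userland sketch of why the _SAFE variant matters (the 'entry'
  type and drain() are hypothetical, not the pmap code): LIST_FOREACH()
  reads the next pointer from the current element after the loop body runs,
  so freeing the element in the body walks freed memory, while
  LIST_FOREACH_SAFE() snapshots the successor first.

      #include <sys/queue.h>
      #include <stdlib.h>

      struct entry {
          int val;
          LIST_ENTRY(entry) link;
      };
      static LIST_HEAD(, entry) head = LIST_HEAD_INITIALIZER(head);

      static void
      drain(void)
      {
          struct entry *e, *tmp;

          LIST_FOREACH_SAFE(e, &head, link, tmp) {
              LIST_REMOVE(e, link);
              free(e);    /* safe: successor already saved in tmp */
          }
      }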
* Allow this to work on embedded systems without Open Firmware by making
  lack of a /chosen non-fatal, and manually removing memory in use by the
  kernel from the physical memory map.
  Submitted by: rpaulo
  [nwhitehorn, 2011-12-16, 1 file, -35/+67]
* Keep track of PVO entries in each pmap, which allows much faster
  pmap_remove() for large sparse requests. This can prevent pmap_remove()
  operations on 64-bit process destruction or swapout that would take
  several hundred times the lifetime of the universe to complete. This
  behavior is largely indistinguishable from a hang.
  [nwhitehorn, 2011-12-11, 1 file, -5/+20]
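  A hypothetical sketch of the resulting shape of pmap_remove() (field,
  macro, and helper names are illustrative; locking is elided): with a
  per-pmap PVO list, a huge sparse range is handled by walking only the
  mappings that actually exist rather than probing the page table for every
  page in [sva, eva).

      void
      moea_remove(pmap_t pm, vm_offset_t sva, vm_offset_t eva)
      {
          struct pvo_entry *pvo, *tpvo;

          /* Visit each existing mapping once; removal-safe iteration. */
          LIST_FOREACH_SAFE(pvo, &pm->pmap_pvo, pvo_plink, tpvo) {
              if (PVO_VADDR(pvo) >= sva && PVO_VADDR(pvo) < eva)
                  moea_pvo_remove(pvo);
          }
      }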
* Add an extra invariant here which was useful on 64-bit CPUs.
  [nwhitehorn, 2011-11-17, 1 file, -0/+2]
* Split the vm_page flags PG_WRITEABLE and PG_REFERENCED into an atomic
  flags field. Updates to the atomic flags are performed using the atomic
  ops on the containing word, do not require any vm lock to be held, and
  are non-blocking. The vm_page_aflag_set(9) and vm_page_aflag_clear(9)
  functions are provided to modify aflags. Document that changes to the
  flags field only require the page lock.
  Introduce the vm_page_reference(9) function to provide a stable KPI and
  KBI for filesystems like tmpfs and zfs which need to mark a page as
  referenced.
  Reviewed by: alc, attilio
  Tested by: marius, flo (sparc64); andreast (powerpc, powerpc64)
  Approved by: re (bz)
  [kib, 2011-09-06, 1 file, -13/+13]
* - Move the PG_UNMANAGED flag from m->flags to m->oflags, renaming the
    flag to VPO_UNMANAGED (and also making the flag protected by the vm
    object lock, instead of the vm page queue lock).
  - Mark the fake pages with both PG_FICTITIOUS (as it is now) and
    VPO_UNMANAGED. As a consequence, pmap code now can use just
    VPO_UNMANAGED to decide whether the page is unmanaged.
  Reviewed by: alc
  Tested by: pho (x86, previous version), marius (sparc64),
             marcel (arm, ia64, powerpc), ray (mips)
  Sponsored by: The FreeBSD Foundation
  Approved by: re (bz)
  [kib, 2011-08-09, 1 file, -20/+15]
* With the retirement of cpumask_t and the use of cpuset_t for representing
  a mask of CPUs, pc_other_cpus and pc_cpumask become highly inefficient.
  Remove them and replace their usage with custom pc_cpuid magic (as, at
  the moment, pc_cpumask can be easily represented by (1 << pc_cpuid) and
  pc_other_cpus by (all_cpus & ~(1 << pc_cpuid))).
  This change is not targeted for MFC because of struct pcpu members
  removal and the dependency on cpumask_t retirement.
  MD review by: marcel, marius, alc
  Tested by: pluknet
  MD testing by: marcel, marius, gonzo, andreast
  [attilio, 2011-07-04, 1 file, -6/+2]
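  A small sketch of the replacement idiom named in parentheses above (the
  other_cpus() helper is hypothetical; cpuset_t and the CPU_*() macros are
  real <sys/cpuset.h> interfaces):

      #include <sys/param.h>
      #include <sys/cpuset.h>
      #include <sys/pcpu.h>
      #include <sys/smp.h>    /* all_cpus */

      /* Compute "every CPU but me", the old pc_other_cpus, on demand. */
      static cpuset_t
      other_cpus(void)
      {
          cpuset_t others;

          others = all_cpus;
          CPU_CLR(PCPU_GET(cpuid), &others);
          return (others);
      }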
* MFC
  [attilio, 2011-06-03, 1 file, -19/+1]
* Remove some dead code: unnecessary isyncs and memory sorting, which are
  handled in mtmsr() and mem_regions(), respectively.
  [nwhitehorn, 2011-06-02, 1 file, -19/+1]
* Add the powerpc support.
  Note that there is a dirty hack for calling openpic_write(), but
  nwhitehorn approved it.
  Discussed with: nwhitehorn
  [attilio, 2011-05-09, 1 file, -3/+10]
* The macro MOEA_PVO_CHECK is empty and not used. It is a leftover from the
  NetBSD import. Remove the definition and all its occurrences.
  Approved by: nwhitehorn (mentor)
  [andreast, 2011-04-14, 1 file, -15/+0]
* Make the MSGBUF_SIZE kernel option a loader tunable, kern.msgbufsize.
  Submitted by: perryh pluto.rain.com (previous version)
  Reviewed by: jhb
  Approved by: kib (mentor)
  Tested by: universe
  [pluknet, 2011-01-21, 1 file, -2/+2]
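  A sketch of the usual pattern for such a tunable (the hook and default
  value are illustrative, not the actual change): the compile-time option
  supplies the default, and TUNABLE_INT_FETCH() overrides it with the
  loader environment value when present.

      #include <sys/param.h>
      #include <sys/kernel.h>

      static int msgbufsize = 65536;    /* illustrative default */

      static void
      msgbufsize_init(void *arg __unused)
      {
          /* Picks up "kern.msgbufsize" if set at the loader prompt. */
          TUNABLE_INT_FETCH("kern.msgbufsize", &msgbufsize);
      }
      SYSINIT(msgbufsize, SI_SUB_TUNABLES, SI_ORDER_MIDDLE,
          msgbufsize_init, NULL);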
* Garbage-collect unused variable.
  [nwhitehorn, 2010-12-19, 1 file, -5/+2]
* Add an abstraction layer to the 64-bit AIM MMU's page table manipulation
  logic to support modifying the page table through a hypervisor. This uses
  KOBJ inheritance to provide subclasses of the base 64-bit AIM MMU class
  with additional methods for page table manipulation.
  Many thanks to Peter Grehan for suggesting this design and implementing
  the MMU KOBJ inheritance mechanism.
  [nwhitehorn, 2010-12-04, 1 file, -18/+0]
* Remove use of a separate ofw_pmap on 32-bit CPUs. Many Open Firmware
  mappings need to end up in the kernel anyway since the kernel begins
  executing in OF context. Separating them adds needless complexity,
  especially since the powerpc64 and mmu_oea64 code gave up on it a long
  time ago.
  As a side effect, the PPC ofw_machdep code is no longer AIM-specific, so
  move it to powerpc/ofw.
  [nwhitehorn, 2010-11-12, 1 file, -31/+12]
* Implement pmap_is_prefaultable().
  Reviewed by: nwhitehorn
  [alc, 2010-11-01, 1 file, -0/+15]
* Add some missing parentheses so that moea_bat_mapped() actually works.
  Submitted by: alc
  MFC after: 3 days
  [nwhitehorn, 2010-10-31, 1 file, -1/+1]
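  Not the actual diff, but the class of bug is worth a sketch (the
  BAT_BL_TO_SIZE macro is hypothetical): '+' binds tighter than '<<', so
  without the outer parentheses a caller's surrounding expression gets
  pulled into the shift.

      #define BAT_BL_TO_SIZE_BAD(bl)  ((bl) + 1) << 17
      /*
       * base + BAT_BL_TO_SIZE_BAD(bl) expands to
       * (base + (bl) + 1) << 17, shifting the base address too.
       */
      #define BAT_BL_TO_SIZE(bl)      (((bl) + 1) << 17)  /* fixed */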
* Follow exactly the steps in the architecture manual for correctly
  invalidating TLB entries instead of trying to cut corners.
  [nwhitehorn, 2010-10-04, 1 file, -2/+2]
* Fix pmap_page_set_memattr() behavior in the presence of fictitious pages
  by just caching the mode for later use by pmap_enter(), following amd64.
  While here, correct some mismerges from mmu_oea64 -> mmu_oea and clean up
  some dead code found while fixing the fictitious page behavior.
  [nwhitehorn, 2010-10-01, 1 file, -21/+11]
* Add support for memory attributes (pmap_mapdev_attr() and friends) on
  PowerPC/AIM. This is currently stubbed out on Book-E, since I have no
  idea how to implement it there.
  [nwhitehorn, 2010-09-30, 1 file, -24/+91]
* Introduce inheritance into the PowerPC MMU kobj interface.
  include/mmuvar.h
  - Change the MMU_DEF macro to also create the class definition as well
    as define the DATA_SET. Add a macro, MMU_DEF_INHERIT, which has an
    extra parameter specifying the MMU class to inherit methods from.
    Update the comments at the start of the header file to describe the
    new macros.
  booke/pmap.c
  aim/mmu_oea.c
  aim/mmu_oea64.c
  - Collapse the mmu_def_t declaration into the updated MMU_DEF macro.
  The MMU_DEF_INHERIT macro will be used in the PS3 MMU implementation to
  allow it to inherit the stock powerpc64 MMU methods.
  Reviewed by: nwhitehorn
  [grehan, 2010-09-15, 1 file, -6/+2]
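  A hedged sketch of how the two macros might be used; the argument lists
  and the class, ident, and method-table names are reconstructed from the
  description above, not copied from mmuvar.h:

      /* Register a stock MMU class: name, ident, method table. */
      MMU_DEF(oea64_mmu, MMU_TYPE_G5, moea64_methods, 0);

      /*
       * Register a child class that falls back to oea64_mmu for any
       * method mps3_methods does not override.
       */
      MMU_DEF_INHERIT(ps3_mmu, "mmu_ps3", mps3_methods, 0, oea64_mmu);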
* Fix a typo in the original import of this code from NetBSD that caused
  the wrong element of the VSID bitmap array to be examined after a
  collision, leading to reallocation of in-use VSIDs under some
  circumstances, with attendant memory corruption. Also add an assert to
  check for this kind of problem in the future.
  MFC after: 4 days
  [nwhitehorn, 2010-09-08, 1 file, -1/+1]
* Fix the same race condition on 32-bit AIM CPUs that was fixed for 64-bit
  ones in r211967 involving VSID allocation.
  [nwhitehorn, 2010-09-06, 1 file, -0/+7]
* pmap_mapdev() does not appear to actually need GIANT to be held here,
  and asserting that it is held breaks drm.
  MFC after: 2 weeks
  [nwhitehorn, 2010-08-27, 1 file, -2/+0]
* MFppc64:
  Kernel sources for 64-bit PowerPC, along with build-system changes to
  keep 32-bit kernels compiling (build system changes for 64-bit kernels
  are coming later). Existing 32-bit PowerPC kernel configurations must be
  updated after this change to specify their architecture.
  [nwhitehorn, 2010-07-13, 1 file, -2/+2]
* Temporarily disable instruction relocation while setting up the kernel's
  IBAT entry in early boot in order to prevent possible faults from races
  between the instruction cache and the MMU.
  PR: powerpc/148003
  MFC after: 3 days
  [nwhitehorn, 2010-06-20, 1 file, -1/+6]
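  A hypothetical sketch of the technique (the helper and the exact
  boot-time sequence are illustrative, not the committed code): clear
  MSR[IR]/MSR[DR] around the IBAT update so no instruction fetch can race
  against a half-written mapping. mfmsr()/mtmsr() are the
  <machine/cpufunc.h> accessors, and mtmsr() already performs the required
  isync (see the "unnecessary isyncs" entry above).

      #include <machine/cpufunc.h>
      #include <machine/psl.h>

      static void
      ibat0_update(register_t batu, register_t batl)
      {
          register_t msr;

          msr = mfmsr();
          mtmsr(msr & ~(PSL_IR | PSL_DR));    /* translation off */
          __asm __volatile("mtibatu 0, %0; mtibatl 0, %1; isync"
              :: "r"(batu), "r"(batl));
          mtmsr(msr);                         /* translation back on */
      }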
* Relax one of the new assertions in pmap_enter() a little. Specifically,
  allow pmap_enter() to be performed on an unmanaged page that doesn't have
  VPO_BUSY set. Having VPO_BUSY set really only matters for managed pages.
  (See, for example, pmap_remove_write().)
  [alc, 2010-06-11, 1 file, -1/+2]
* Reduce the scope of the page queues lock and the number of PG_REFERENCED
  changes in vm_pageout_object_deactivate_pages(). Simplify this function's
  inner loop using TAILQ_FOREACH(), and shorten some of its overly long
  lines. Update a stale comment.
  Assert that PG_REFERENCED may be cleared only if the object containing
  the page is locked. Add a comment documenting this.
  Assert that a caller to vm_page_requeue() holds the page queues lock, and
  assert that the page is on a page queue.
  Push down the page queues lock into pmap_ts_referenced() and
  pmap_page_exists_quick(). (As of now, there are no longer any pmap
  functions that expect to be called with the page queues lock held.)
  Neither pmap_ts_referenced() nor pmap_page_exists_quick() should ever be
  passed an unmanaged page. Assert this rather than returning "0" and
  "FALSE" respectively.
  ARM:
  Simplify pmap_page_exists_quick() by switching to TAILQ_FOREACH().
  Push down the page queues lock inside of pmap_clearbit(), simplifying
  pmap_clear_modify(), pmap_clear_reference(), and pmap_remove_write().
  Additionally, this allows for avoiding the acquisition of the page queues
  lock in some cases.
  PowerPC/AIM:
  moea*_page_exists_quick() and moea*_page_wired_mappings() will never be
  called before pmap initialization is complete. Therefore, the check for
  moea_initialized can be eliminated.
  Push down the page queues lock inside of moea*_clear_bit(), simplifying
  moea*_clear_modify() and moea*_clear_reference().
  The last parameter to moea*_clear_bit() is never used. Eliminate it.
  PowerPC/BookE:
  Simplify mmu_booke_page_exists_quick()'s control flow.
  Reviewed by: kib@
  [alc, 2010-06-10, 1 file, -30/+22]
* Correct a harmless typo introduced when copying code from mmu_oea64.
  Submitted by: alc
  MFC after: 8.1-RELEASE
  [nwhitehorn, 2010-06-05, 1 file, -1/+1]
* Don't set PG_WRITEABLE in pmap_enter() unless the page is managed.
  [alc, 2010-06-05, 1 file, -1/+2]
* Push down page queues lock acquisition in pmap_enter_object() and
  pmap_is_referenced(). Eliminate the corresponding page queues lock
  acquisitions from vm_map_pmap_enter() and mincore(), respectively. In
  mincore(), this allows some additional cases to complete without ever
  acquiring the page queues lock.
  Assert that the page is managed in pmap_is_referenced().
  On powerpc/aim, push down the page queues lock acquisition from
  moea*_is_modified() and moea*_is_referenced() into moea*_query_bit().
  Again, this will allow some additional cases to complete without ever
  acquiring the page queues lock.
  Reorder a few statements in vm_page_dontneed() so that a race can't lead
  to an old reference persisting. This scenario is described in detail by a
  comment.
  Correct a spelling error in vm_page_dontneed().
  Assert that the object is locked in vm_page_clear_dirty(), and restrict
  the page queues lock assertion to just those cases in which the page is
  currently writeable.
  Add object locking to vnode_pager_generic_putpages(). This was the one
  and only place where vm_page_clear_dirty() was being called without the
  object being locked.
  Eliminate an unnecessary vm_page_lock() around vnode_pager_setsize()'s
  call to vm_page_clear_dirty().
  Change vnode_pager_generic_putpages() to the modern style of function
  definition. Also, change the name of one of the parameters to follow
  virtual memory system naming conventions.
  Reviewed by: kib
  [alc, 2010-05-26, 1 file, -7/+9]
* Roughly half of a typical pmap_mincore() implementation is
  machine-independent code. Move this code into mincore(), and eliminate
  the page queues lock from pmap_mincore().
  Push down the page queues lock into pmap_clear_modify(),
  pmap_clear_reference(), and pmap_is_modified(). Assert that these
  functions are never passed an unmanaged page.
  Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m: Contrary
  to what the comment says, pmap_mincore() is not simply an optimization.
  Without a complete pmap_mincore() implementation, mincore() cannot return
  either MINCORE_MODIFIED or MINCORE_REFERENCED because only the pmap can
  provide this information.
  Eliminate the page queues lock from vfs_setdirty_locked_object(),
  vm_pageout_clean(), vm_object_page_collect_flush(), and
  vm_object_page_clean(). Generally speaking, these are all accesses to the
  page's dirty field, which are synchronized by the containing vm object's
  lock.
  Reduce the scope of the page queues lock in vm_object_madvise() and
  vm_page_dontneed().
  Reviewed by: kib (an earlier version)
  [alc, 2010-05-24, 1 file, -6/+34]
* On entry to pmap_enter(), assert that the page is busy. While I'm here,
  make the style of assertion used by pmap_enter() consistent across all
  architectures.
  On entry to pmap_remove_write(), assert that the page is neither
  unmanaged nor fictitious, since we cannot remove write access to either
  kind of page.
  With the push down of the page queues lock, pmap_remove_write() cannot
  condition its behavior on the state of the PG_WRITEABLE flag if the page
  is busy. Assert that the object containing the page is locked. This
  allows us to know that the page will neither become busy nor will
  PG_WRITEABLE be set on it while pmap_remove_write() is running.
  Correct a long-standing bug in vm_page_cowsetup(). We cannot possibly do
  copy-on-write-based zero-copy transmit on unmanaged or fictitious pages,
  so don't even try. Previously, the call to pmap_remove_write() would have
  failed silently.
  [alc, 2010-05-16, 1 file, -1/+13]
* Push down the page queues lock into vm_page_cache(),
  vm_page_try_to_cache(), and vm_page_try_to_free(). Consequently, push
  down the page queues lock into pmap_enter_quick(),
  pmap_page_wired_mappings(), pmap_remove_all(), and pmap_remove_write().
  Push down the page queues lock into Xen's pmap_page_is_mapped(). (I
  overlooked the Xen pmap in r207702.)
  Switch to a per-processor counter for the total number of pages cached.
  [alc, 2010-05-08, 1 file, -5/+8]
* On Alan's advice, rather than do a wholesale conversion on a single
  architecture from page queue lock to a hashed array of page locks (based
  on a patch by Jeff Roberson), I've implemented page lock support in the
  MI code and have only moved vm_page's hold_count out from under the page
  queue mutex to the page lock. This changes pmap_extract_and_hold on all
  pmaps.
  Supported by: Bitgravity Inc.
  Discussed with: alc, jeffr, and kib
  [kmacy, 2010-04-30, 1 file, -3/+7]
* Resurrect pmap_is_referenced() and use it in mincore(). Essentially,
  pmap_ts_referenced() is not always appropriate for checking whether or
  not pages have been referenced because it clears any reference bits that
  it encounters. For example, in mincore(), clearing the reference bits has
  two negative consequences.
  First, it throws off the activity count calculations performed by the
  page daemon. Specifically, a page on which mincore() has called
  pmap_ts_referenced() looks less active to the page daemon than it should.
  Consequently, the page could be deactivated prematurely by the page
  daemon. Arguably, this problem could be fixed by having mincore()
  duplicate the activity count calculation on the page. However, there is a
  second problem for which that is not a solution.
  In order to clear a reference on a 4KB page, it may be necessary to
  demote a 2/4MB page mapping. Thus, a mincore() by one process can have
  the side effect of demoting a superpage mapping within another process!
  [alc, 2010-04-24, 1 file, -0/+11]
* Reduce KVA pressure on OEA64 systems running in bridge mode by mapping
  UMA segments at their physical addresses instead of into KVA. This
  emulates the direct mapping behavior of OEA32 in an ad-hoc way. To make
  this work properly required sharing the entire kernel PMAP with Open
  Firmware, so ofw_pmap is transformed into a stub on 64-bit CPUs.
  Also implement some more tweaks to get more mileage out of our limited
  amount of KVA, principally by extending KVA into segment 16 until the
  beginning of the first OFW mapping.
  Reported by: linimon
  [nwhitehorn, 2010-02-20, 1 file, -2/+2]
* Fix a bug where pages being removed from memory entirely no longer have
  PVOs, and so the modified state of the page can no longer be communicated
  to the VM layer, causing pages not to be flushed to swap when needed, in
  turn causing memory corruption. Also make several correctness adjustments
  to I-cache synchronization and TLB invalidation for 64-bit Book-S CPUs.
  Obtained from: projects/ppc64
  Discussed with: grehan
  MFC after: 2 weeks
  [nwhitehorn, 2010-02-18, 1 file, -2/+4]
* Remove extraneous semicolons, no functional changes.
  Submitted by: Marc Balmer <marc@msys.ch>
  MFC after: 1 week
  [mbr, 2010-01-07, 1 file, -3/+3]
* o Introduce vm_sync_icache() for making the I-cache coherent with the
    memory or D-cache, depending on the semantics of the platform.
    vm_sync_icache() is basically a wrapper around pmap_sync_icache() that
    translates the vm_map_t argument to a pmap_t.
  o Introduce pmap_sync_icache() to all PMAP implementations. For powerpc
    it replaces the pmap_page_executable() function, added to solve the
    I-cache problem in uiomove_fromphys().
  o In proc_rwmem() call vm_sync_icache() when writing to a page that has
    execute permissions. This assures that when breakpoints are written,
    the I-cache will be coherent and the process will actually hit the
    breakpoint.
  o This also fixes the Book-E PMAP implementation that was missing
    necessary locking while trying to deal with the I-cache coherency in
    pmap_enter() (read: mmu_booke_enter_locked).
  The key property of this change is that the I-cache is made coherent
  *after* writes have been done. Doing it in the PMAP layer when adding or
  changing a mapping means that the I-cache is made coherent *before* any
  writes happen. The difference is key when the I-cache prefetches.
  [marcel, 2009-10-21, 1 file, -8/+26]
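  The wrapper as described reduces to a few lines; a sketch reconstructed
  from the message above (close to, but not necessarily identical to, the
  committed code):

      void
      vm_sync_icache(vm_map_t map, vm_offset_t va, vm_size_t sz)
      {
          /* Translate the map to its pmap and defer to the MD layer. */
          pmap_sync_icache(vm_map_pmap(map), va, sz);
      }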
* Implement a facility for dynamic per-cpu variables.
  - Modules and kernel code alike may use DPCPU_DEFINE(), DPCPU_GET(),
    DPCPU_SET(), etc. akin to the statically defined PCPU_*. Requires only
    one extra instruction more than PCPU_* and is virtually the same as
    __thread for builtin and much faster for shared objects. DPCPU
    variables can be initialized when defined.
  - Modules are supported by relocating the module's per-cpu linker set
    over space reserved in the kernel. Modules may fail to load if there is
    insufficient space available.
  - Track space available for modules with a one-off extent allocator. Free
    may block for memory to allocate space for an extent.
  Reviewed by: jhb, rwatson, kan, sam, grehan, marius, marcel, stas
  [jeff, 2009-06-23, 1 file, -0/+15]
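  A minimal usage sketch of the API named above ('pkts_seen' and
  count_packet() are hypothetical):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/pcpu.h>

      /* One instance of the counter per CPU. */
      DPCPU_DEFINE(uint64_t, pkts_seen);

      static void
      count_packet(void)
      {
          critical_enter();    /* don't migrate between read and write */
          DPCPU_SET(pkts_seen, DPCPU_GET(pkts_seen) + 1);
          critical_exit();
      }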
* Factor out platform dependent things unrelated to device drivers into a
  new platform module. These are probed in early boot, and have the
  responsibility of determining the layout of physical memory, determining
  the CPU timebase frequency, and handling the zoo of SMP mechanisms found
  on PowerPC.
  Reviewed by: marcel, raj
  Book-E parts by: raj
  [nwhitehorn, 2009-05-14, 1 file, -1/+1]
* Add support for 64-bit PowerPC CPUs operating in the 64-bit bridge mode
  provided, for example, on the PowerPC 970 (G5), as well as on related
  CPUs like the POWER3 and POWER4. This also adds support for various
  built-in hardware found on Apple G5 hardware (e.g. the IBM CPC925
  northbridge).
  Reviewed by: grehan
  [nwhitehorn, 2009-04-04, 1 file, -2/+7]
* Change the PVO zone for fictitious pages to the unmanaged PVO zone, to
  match the unmanaged flag set in the PVO attributes. Without doing this,
  pmap_remove() could try to remove fictitious pages (like those created by
  mmap of physical memory) from the wrong UMA zone, causing a panic.
  Reported by: Justin Hibbits
  MFC after: 1 week
  [nwhitehorn, 2009-03-11, 1 file, -1/+4]
* In preparation for PowerPC G5 support, allow PVO objects to contain page
  table entries for both the 32-bit and 64-bit AIM MMUs.
  [nwhitehorn, 2008-09-23, 1 file, -52/+54]
* o When not making a translation cache-inhibited and guarded
    (PTE_I|PTE_G), make it memory-coherency enforced (PTE_M). This is
    required for SMP to work.
  o Serialize tlbie operations and implement the tlbie operation in a
    function called tlbie(). Hardware can end up in a live-lock if, between
    the tlbsync and the subsequent sync on one processor, another processor
    executes a tlbie or tlbsync.
  o Eliminate the following defines: TLBIE, TLBSYNC, SYNC and EIEIO. Use
    either inline assembly statements or inline functions defined in
    <machine/cpufunc.h>.
  [marcel, 2008-09-16, 1 file, -40/+42]
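  A hedged sketch of what a serialized tlbie() can look like (the sync
  sequence follows the classic OEA manuals; the committed code may differ
  in detail, and initialization of the mutex is elided):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      /* Allow only one tlbie/tlbsync sequence on the bus at a time. */
      static struct mtx tlbie_mtx;

      static __inline void
      tlbie(vm_offset_t va)
      {
          mtx_lock_spin(&tlbie_mtx);
          __asm __volatile("sync");           /* order prior PTE stores */
          __asm __volatile("tlbie %0" :: "r"(va));
          __asm __volatile("eieio; tlbsync; sync");
          mtx_unlock_spin(&tlbie_mtx);
      }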
* Remove the tracing from the AP startup. The AP is known to start and the
  tracing can interfere with AP startup. Instead, use the available space
  in the reset vector for the initial stack.
  [marcel, 2008-09-16, 1 file, -21/+2]
* Remove redundant KTR statements.
  [marcel, 2008-08-31, 1 file, -6/+0]