path: root/sys/powerpc/aim
Commit message (author, date, files changed, lines removed/added)
* - Create kern.ipc.sendfile namespace, and put the new "readahead" OID (glebius, 2013-09-22, 1 file, -0/+11)
|   there as "kern.ipc.sendfile.readahead".
|   - Push all nsfbuf related tunables into MD code.  Don't move them to
|     the new namespace in favor of POLA.
|
|   Reviewed by:  scottl
|   Approved by:  re (gjb)
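For illustration, a minimal sketch of how such a sysctl node and knob are declared; the backing variable name (sfreadahead), its default value, and the description strings are assumptions, not taken from the diff:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /* Hypothetical backing variable for the readahead knob. */
    static int sfreadahead = 1;

    /* The kern.ipc.sendfile node, then the readahead OID beneath it. */
    SYSCTL_NODE(_kern_ipc, OID_AUTO, sendfile, CTLFLAG_RW, 0,
        "sendfile(2) tunables");
    SYSCTL_INT(_kern_ipc_sendfile, OID_AUTO, readahead, CTLFLAG_RW,
        &sfreadahead, 0, "Number of sendfile(2) read-ahead blocks");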
* The pmap function pmap_clear_reference() is no longer used. Remove it. (alc, 2013-09-20, 2 files, -24/+0)
|   pmap_clear_reference() has had exactly one caller in the kernel for
|   several years, more precisely, since FreeBSD 8.  Now, that call no
|   longer exists.
|
|   Approved by:  re (kib)
|   Sponsored by: EMC / Isilon Storage Division
* Change VM object lock assertion to match locking higher in the call (nwhitehorn, 2013-09-13, 1 file, -1/+1)
|   chain.  This repairs a panic observed during pageout on some 64-bit
|   PowerPC systems.
|
|   Submitted by:  grehan
|   Approved by:   re (kib)
|   MFC after:     2 weeks
|   Revisit after: 10.0
* Raise artificial limits on number of CPUs and number of interrupts. (nwhitehorn, 2013-09-09, 1 file, -2/+3)
|   Approved by:  re (kib)
* Add POWER CPUs to the kernel's knowledge. This does not imply we currently (nwhitehorn, 2013-09-09, 2 files, -5/+3)
|   actually run on any machines with POWER CPUs but avoids closing that
|   door unnecessarily.
|
|   Approved by:  re (kib)
* Align stacks of kernel threads correctly at 16-byte boundaries rather than (nwhitehorn, 2013-09-05, 1 file, -0/+1)
|   making sure they are all misaligned at +8 bytes.  This fixes clang
|   builds of powerpc64 kernels (aside from a required increase in
|   KSTACK_PAGES, which will come later).  This commit is from
|   FreeBSD/powerpc64 with a clang-built kernel.
|
|   MFC after:  2 weeks
* Enable PMC interrupt handling, and fix a DTrace trap handling bug. (jhibbits, 2013-09-03, 1 file, -5/+5)
|
* Revert r254501. Instead, reuse the type stability of the struct pmap (kib, 2013-08-22, 2 files, -6/+4)
|   which is part of struct vmspace, allocated from the UMA_ZONE_NOFREE
|   zone.  Initialize the pmap lock in the vmspace zone init function,
|   and remove pmap lock initialization and destruction from
|   pmap_pinit() and pmap_release().
|
|   Suggested and reviewed by: alc (previous version)
|   Tested by:    pho
|   Sponsored by: The FreeBSD Foundation
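A sketch of the type-stability pattern this commit relies on: because a UMA_ZONE_NOFREE zone never returns memory to the system, the embedded lock can be initialized once in the zone's init function instead of on every pmap_pinit()/pmap_release().  Structure and function names below are illustrative, not the actual vm_map.c code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/uma.h>

    struct pmap_sketch { struct mtx pm_mtx; };
    struct vmspace_sketch { struct pmap_sketch vm_pmap; };

    /* Runs once per item, when its backing slab is first allocated. */
    static int
    vmspace_zinit(void *mem, int size, int flags)
    {
            struct vmspace_sketch *vm = mem;

            mtx_init(&vm->vm_pmap.pm_mtx, "pmap", NULL, MTX_DEF);
            return (0);
    }

    static uma_zone_t vmspace_zone;

    static void
    vmspace_zone_setup(void)
    {
            /* NOFREE keeps items type-stable, so the mutex survives reuse. */
            vmspace_zone = uma_zcreate("VMSPACE",
                sizeof(struct vmspace_sketch), NULL, NULL,
                vmspace_zinit, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);
    }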
* The soft and hard busy mechanisms rely on the vm object lock to work. (attilio, 2013-08-09, 2 files, -24/+20)
|   Unify the 2 concepts into a real, minimal, sxlock where the shared
|   acquisition represents the soft busy and the exclusive acquisition
|   represents the hard busy.
|   The old VPO_WANTED mechanism becomes the hard-path for this new lock
|   and it becomes per-page rather than per-object.
|   The vm_object lock becomes an interlock for this functionality: it
|   can be held in both read or write mode.  However, if the vm_object
|   lock is held in read mode while acquiring or releasing the busy
|   state, the thread owner cannot make any assumption on the busy state
|   unless it is also busying it.
|
|   Also:
|   - Add a new flag to directly share busy pages while vm_page_alloc
|     and vm_page_grab are being executed.  This will be very helpful
|     once these functions happen under a read object lock.
|   - Move the swapping sleep into its own per-object flag.
|
|   The KPI is heavily changed, which is why the version is bumped.  It
|   is very likely that some VM ports users will need to change their
|   own code.
|
|   Sponsored by:   EMC / Isilon storage division
|   Discussed with: alc
|   Reviewed by:    jeff, kib
|   Tested by:      gavin, bapt (older version)
|   Tested by:      pho, scottl
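A conceptual sketch of the shared/exclusive model described above, using sx(9) to stand in for the new per-page busy lock; the function names are illustrative and this is not the actual vm_page KPI:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/sx.h>

    static struct sx page_busy_lock;	/* sx_init()ed at startup */

    static void
    example_soft_busy(void)
    {
            sx_slock(&page_busy_lock);	/* soft busy: shared, many holders */
            /* ... read or copy page contents ... */
            sx_sunlock(&page_busy_lock);
    }

    static void
    example_hard_busy(void)
    {
            sx_xlock(&page_busy_lock);	/* hard busy: exclusive access */
            /* ... change page identity or contents ... */
            sx_xunlock(&page_busy_lock);
    }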
* Replace kernel virtual address space allocation with vmem. This provides (jeff, 2013-08-07, 3 files, -5/+5)
|   transparent layering and better fragmentation.
|
|   - Normalize functions that allocate memory to use kmem_*
|   - Those that allocate address space are named kva_*
|   - Those that operate on maps are named kmap_*
|   - Implement recursive allocation handling for kmem_arena in vmem.
|
|   Reviewed by:  alc
|   Tested by:    pho
|   Sponsored by: EMC / Isilon Storage Division
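Illustrative prototypes for the naming convention described above, as it appears in the post-vmem vm/vm_kern.h era; treat the exact signatures as a sketch to be checked against the headers:

    vm_offset_t kva_alloc(vm_size_t size);	/* address space only */
    void        kva_free(vm_offset_t addr, vm_size_t size);
    /* Allocates both KVA and backing pages from an arena: */
    vm_offset_t kmem_malloc(struct vmem *arena, vm_size_t size, int flags);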
* Remove an unnecessary panic. The PVO's PTE entry and the PTEG's PTE entry may (jhibbits, 2013-08-06, 1 file, -3/+0)
|   not match, if the PVO's PTE is invalid.
* Evict pages from the PTEG when it's full and trying to insert a new PTE, (jhibbits, 2013-08-06, 1 file, -7/+77)
|   rather than panicking.
|
|   Reviewed by:  nwhitehorn
|   MFC after:    3 weeks
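A conceptual sketch of the eviction idea: a PTEG is a fixed-size hash bucket of eight PTEs, and when every slot is valid a victim is invalidated and reused rather than panicking.  All names, the victim policy, and the elided TLB-invalidation step are illustrative, not the actual mmu_oea code:

    #include <stdint.h>

    #define PTEG_SIZE 8
    #define PTE_VALID 0x1ULL

    struct pte { uint64_t pte_hi, pte_lo; };

    static unsigned pteg_victim;	/* simple round-robin victim choice */

    static int
    pteg_insert(struct pte pteg[PTEG_SIZE], const struct pte *new_pte)
    {
            int i;

            for (i = 0; i < PTEG_SIZE; i++)
                    if ((pteg[i].pte_hi & PTE_VALID) == 0) {
                            pteg[i] = *new_pte;	/* free slot found */
                            return (i);
                    }
            /* Bucket full: evict a victim instead of panicking. */
            i = pteg_victim++ % PTEG_SIZE;
            /* ... invalidate the old mapping and flush its TLB entry ... */
            pteg[i] = *new_pte;
            return (i);
    }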
* Introduce new structure sfstat for collecting sendfile's statistics (ae, 2013-07-15, 1 file, -1/+1)
|   and remove corresponding fields from struct mbstat.  Use PCPU
|   counters and the SFSTAT_INC() macro to update these statistics.
|
|   Discussed with: glebius
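A hedged sketch of the per-CPU statistics pattern described above, using counter(9); the real struct sfstat layout and macro definition live in the sendfile code and may differ:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/counter.h>

    struct sfstat {
            counter_u64_t sf_syscalls;	/* sendfile(2) invocations */
            counter_u64_t sf_iocnt;		/* I/O operations performed */
    };

    /* Counters must be counter_u64_alloc()ed during boot. */
    static struct sfstat sfstat;

    /* Lock-free increment on the current CPU's counter. */
    #define SFSTAT_INC(field)	counter_u64_add(sfstat.field, 1)

    /* usage: SFSTAT_INC(sf_syscalls); */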
* Fix check: bitwise and has only one &. (nwhitehorn, 2013-07-12, 1 file, -1/+1)
|   MFC after:  1 week
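The class of bug being fixed, in miniature (illustrative code, not the actual line from the diff): a logical AND where a bitwise AND was intended tests "both operands nonzero" instead of testing the flag bit:

    #include <stdint.h>

    #define PTE_REF 0x100

    int
    referenced_wrong(uint32_t pte_lo)
    {
            return (pte_lo && PTE_REF);	/* true for ANY nonzero pte_lo */
    }

    int
    referenced_right(uint32_t pte_lo)
    {
            return ((pte_lo & PTE_REF) != 0);	/* tests the REF bit */
    }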
* o Relax locking assertions for vm_page_find_least() (attilio, 2013-05-21, 2 files, -0/+4)
|   o Relax locking assertions for pmap_enter_object() and add them also
|     to architectures that currently don't have any
|   o Introduce VM_OBJECT_LOCK_DOWNGRADE() which is basically a downgrade
|     operation on the per-object rwlock
|   o Use all the mechanisms above to make vm_map_pmap_enter() work most
|     of the time with only readlocks.
|
|   Sponsored by: EMC / Isilon storage division
|   Reviewed by:  alc
* Relax the object locking assertion in pmap_enter_locked(). (alc, 2013-05-17, 2 files, -2/+2)
|   Reviewed by:  attilio
|   Sponsored by: EMC / Isilon Storage Division
* Remove a comment that shouldn't have gone in. (jhibbits, 2013-04-26, 1 file, -3/+0)
|   X-MFC-with:  r249864
* Introduce kernel coredumps to ppc32 AIM. Leeched from the booke code. (jhibbits, 2013-04-25, 1 file, -0/+111)
|   MFC after:  2 weeks
* Print out DSISR in a fatal DSI trap. (jhibbits, 2013-04-05, 1 file, -0/+2)
|   Sponsored by:
* Implement the concept of the unmapped VMIO buffers, i.e. buffers which (kib, 2013-03-19, 1 file, -0/+8)
|   do not map the b_pages pages into buffer_map KVA.  The use of the
|   unmapped buffers eliminates the need to perform TLB shootdown for
|   mapping on the buffer creation and reuse, greatly reducing the
|   amount of IPIs for shootdown on big-SMP machines and eliminating up
|   to 25-30% of the system time on i/o intensive workloads.
|
|   The unmapped buffer should be explicitly requested by the GB_UNMAPPED
|   flag by the consumer.  For an unmapped buffer, no KVA reservation is
|   performed at all.  The consumer might request an unmapped buffer
|   which does have a KVA reserve, to manually map it without recursing
|   into the buffer cache and blocking, with the GB_KVAALLOC flag.  When
|   a mapped buffer is requested and an unmapped buffer already exists,
|   the cache performs an upgrade, possibly reusing the KVA reservation.
|
|   An unmapped buffer is translated into an unmapped bio in
|   g_vfs_strategy().  An unmapped bio carries a pointer to the
|   vm_page_t array, offset and length instead of the data pointer.  The
|   provider which processes the bio should explicitly specify a
|   readiness to accept unmapped bio, otherwise the g_down geom thread
|   performs the transient upgrade of the bio request by mapping the
|   pages into the new bio_transient_map KVA submap.
|
|   The bio_transient_map submap claims up to 10% of the buffer map, and
|   the total buffer_map + bio_transient_map KVA usage stays the same.
|   Still, it could be manually tuned by the kern.bio_transient_maxcnt
|   tunable, in the units of the transient mappings.  Eventually, the
|   bio_transient_map could be removed after all geom classes and
|   drivers can accept unmapped i/o requests.
|
|   Unmapped support can be turned off by the vfs.unmapped_buf_allowed
|   tunable, disabling which makes the buffer (or cluster) creation
|   requests ignore the GB_UNMAPPED and GB_KVAALLOC flags.  Unmapped
|   buffers are only enabled by default on the architectures where
|   pmap_copy_page() was implemented and tested.
|
|   In the rework, filesystem metadata is not subject to the maxbufspace
|   limit anymore.  Since the metadata buffers are always mapped, the
|   buffers still have to fit into the buffer map, which provides a
|   reasonable (but practically unreachable) upper bound on it.  The
|   non-metadata buffer allocations, both mapped and unmapped, are
|   accounted against maxbufspace, as before.  Effectively, this means
|   that the maxbufspace is forced on mapped and unmapped buffers
|   separately.  The pre-patch bufspace limiting code did not work,
|   because buffer_map fragmentation does not allow the limit to be
|   reached.
|
|   By Jeff Roberson's request, the getnewbuf() function was split into
|   smaller single-purpose functions.
|
|   Sponsored by:   The FreeBSD Foundation
|   Discussed with: jeff (previous version)
|   Tested by:      pho, scottl (previous version), jhb, bf
|   MFC after:      2 weeks
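A hedged sketch of a consumer asking for an unmapped buffer; getblk() and GB_UNMAPPED are named in the commit, while the surrounding function is illustrative.  The key contract is that b_data is unusable and the pages are reached through b_pages[]:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/buf.h>
    #include <sys/vnode.h>

    static struct buf *
    example_get_unmapped(struct vnode *vp, daddr_t lbn, int size)
    {
            struct buf *bp;

            bp = getblk(vp, lbn, size, 0, 0, GB_UNMAPPED);
            if (bp == NULL)
                    return (NULL);
            /*
             * No KVA was reserved: bp->b_data must not be touched.
             * I/O is done straight from bp->b_pages[]; providers that
             * cannot handle unmapped bios get a transient mapping from
             * the g_down thread, as described above.
             */
            return (bp);
    }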
* Add FBT for PowerPC DTrace. Also, clean up the DTrace assembly code, (jhibbits, 2013-03-18, 3 files, -19/+14)
|   much of which is not necessary for PowerPC.
|
|   The FBT module can likely be factored into 3 separate files: common,
|   intel, and powerpc, rather than duplicating most of the code between
|   the x86 and PowerPC flavors.
|
|   All DTrace modules for PowerPC will be MFC'd together once Fasttrap
|   is completed.
* Add pmap function pmap_copy_pages(), which copies the content of the (kib, 2013-03-14, 2 files, -0/+96)
|   pages around, taking an array of vm_page_t both for source and
|   destination.  Starting offsets and total transfer size are
|   specified.
|
|   The function implements the optimal algorithm for copying using the
|   platform-specific optimizations.  For instance, on the architectures
|   where the direct map is available, no transient mappings are
|   created; for i386 the per-cpu ephemeral page frame is used.  The
|   code was typically borrowed from the pmap_copy_page() for the same
|   architecture.
|
|   Only i386/amd64, powerpc aim and arm/arm-v6 implementations were
|   tested at the time of commit.  High-level code, not committed yet to
|   the tree, ensures that the use of the function is only allowed after
|   explicit enablement.
|
|   For sparc64, the existing code has known issues and a stub is added
|   instead, to allow the kernel linking.
|
|   Sponsored by: The FreeBSD Foundation
|   Tested by:    pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
|   MFC after:    2 weeks
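The interface as described above, reconstructed as a prototype (check vm/pmap.h for the authoritative declaration): copy xfersize bytes from the source page array starting at a_offset into the destination array at b_offset, crossing page boundaries as needed:

    void pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
        vm_page_t mb[], vm_offset_t b_offset, int xfersize);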
* MFC (attilio, 2013-03-02, 1 file, -17/+9)
|\
| * MFcalloutng: (mav, 2013-02-28, 1 file, -17/+9)
| |   Switch eventtimers(9) from using struct bintime to sbintime_t.
| |   Even before this, not a single driver really supported the full
| |   dynamic range of struct bintime even in theory, not speaking about
| |   practical inexpediency.  This change legitimates the status quo
| |   and cleans up the code.
| * Merge from vmobj-rwlock: (attilio, 2013-02-27, 2 files, -6/+4)
| |   VM_OBJECT_LOCKED() macro is only used to implement a custom
| |   version of lock assertions right now (which likely spread out
| |   thanks to copy and paste).  Remove it and implement actual
| |   assertions.
| |
| |   Sponsored by: EMC / Isilon storage division
| |   Reviewed by:  alc
| |   Tested by:    pho
| * Merge from vmobj-rwlock branch: (attilio, 2013-02-26, 3 files, -3/+0)
| |   Remove unused inclusion of vm/vm_pager.h and vm/vnode_pager.h.
| |
| |   Sponsored by: EMC / Isilon storage division
| |   Tested by:    pho
| |   Reviewed by:  alc
* | Hide the details for the assertion for VM_OBJECT_LOCK operations. (attilio, 2013-02-21, 2 files, -8/+8)
| |   Rename current VM_OBJECT_LOCK_ASSERT(foo, RA_WLOCKED) into
| |   VM_OBJECT_ASSERT_WLOCKED(foo).
| |
| |   Sponsored by: EMC / Isilon storage division
| |   Requested by: alc
* | Fix other architectures and ZFS. (attilio, 2013-02-21, 4 files, -3/+1)
| |   Sponsored by: EMC / Isilon storage division
* | There is no need to use VM_OBJECT_LOCKED() as the assertion won't (attilio, 2013-02-20, 2 files, -6/+4)
| |   make the check available in any case if INVARIANTS is switched
| |   off.  Remove VM_OBJECT_LOCKED().
* | Switch vm_object lock to be an rwlock. (attilio, 2013-02-20, 2 files, -6/+6)
|/    * VM_OBJECT_LOCK and VM_OBJECT_UNLOCK are mapped to write operations
|     * VM_OBJECT_SLEEP() is introduced as a general purpose primitive to
|       get a sleep operation using a VM_OBJECT_LOCK() as protection
|     * The approach must bear with vm_pager.h namespace pollution, so
|       many files require including rwlock.h directly
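A hedged sketch of the macro mapping this commit describes, assuming the object embeds a struct rwlock named lock (the real definitions are in vm/vm_object.h and may differ in detail):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    struct vm_object_sketch {
            struct rwlock lock;
            /* ... */
    };

    /* Plain lock/unlock map to write (exclusive) operations. */
    #define VM_OBJECT_LOCK(object)    rw_wlock(&(object)->lock)
    #define VM_OBJECT_UNLOCK(object)  rw_wunlock(&(object)->lock)

    /* Sleep using the object lock as the interlock. */
    #define VM_OBJECT_SLEEP(object, wchan, pri, wmesg, timo) \
            rw_sleep((wchan), &(object)->lock, (pri), (wmesg), (timo))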
* Setup BAT0 and BAT1 on the Wii. (adrian, 2012-11-21, 2 files, -22/+40)
|   This is the missing piece for FreeBSD/Wii, but there's still a lot
|   of work ahead.  We have to reset the MMU in locore before continuing
|   the boot process because we don't know how the boot loaders might
|   have set up the BATs.  We also disable the PCI BAT because there's
|   no PCI bus on the Wii.
|
|   Thanks to Nathan Whitehorn and Peter Grehan for their help.
|
|   Submitted by: Margarida Gouveia
* Flip the semantic of M_NOWAIT to only require the allocation to not (kib, 2012-11-14, 3 files, -19/+6)
|   sleep, and perform the page allocations with the VM_ALLOC_SYSTEM
|   class.  Previously, the allocation was also allowed to completely
|   drain the reserve of the free pages, being translated to the
|   VM_ALLOC_INTERRUPT request class for vm_page_alloc() and similar
|   functions.
|
|   Allow the caller of malloc* to request the 'deep drain' semantic by
|   providing the M_USE_RESERVE flag, now translated to the
|   VM_ALLOC_INTERRUPT class.  Previously, it resulted in the less
|   aggressive VM_ALLOC_SYSTEM allocation class.
|
|   Centralize the translation of the M_* malloc(9) flags in the single
|   inline function malloc2vm_flags().
|
|   Discussion started by: "Sears, Steven" <Steven.Sears@netapp.com>
|   Reviewed by:  alc, mdf (previous version)
|   Tested by:    pho (previous version)
|   MFC after:    2 weeks
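A hedged reconstruction of the centralized translation described above; the authoritative version is the inline malloc2vm_flags() in sys/malloc.h:

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static inline int
    example_malloc2vm_flags(int malloc_flags)
    {
            int pflags;

            /* M_USE_RESERVE requests the deep-drain semantic. */
            if ((malloc_flags & M_USE_RESERVE) != 0)
                    pflags = VM_ALLOC_INTERRUPT;
            else
                    pflags = VM_ALLOC_SYSTEM;  /* M_NOWAIT: only "no sleep" */
            if ((malloc_flags & M_ZERO) != 0)
                    pflags |= VM_ALLOC_ZERO;
            return (pflags);
    }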
* Implement DTrace for PowerPC. This includes both 32-bit and 64-bit. (jhibbits, 2012-11-07, 5 files, -0/+97)
|   There is one known issue: some probes will display an error message
|   along the lines of: "Invalid address (0)"
|
|   I tested this with both a simple dtrace probe and dtruss on a few
|   different binaries on 32-bit.  I only compiled 64-bit, did not run
|   it, but I don't expect problems without the modules loaded.
|   Volunteers are welcome.
|
|   MFC after:  1 month
* Rework the known rwlocks to benefit from staying on their own (attilio, 2012-11-03, 1 file, -10/+1)
|   cache line in order to avoid manual frobbing, by using struct
|   rwlock_padalign.
|
|   Reviewed by:  alc, jimharris
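Illustrative use of the padded type: rwlock_padalign pads the lock out to a full cache line so a hot global lock does not share its line with neighboring data.  The declaration below is a sketch of a converted pmap lock, not the committed code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    /* Occupies (at least) a whole cache line by itself. */
    static struct rwlock_padalign pvh_global_lock;

    static void
    example_init(void)
    {
            rw_init(&pvh_global_lock, "pmap pv global");
    }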
* Eliminate a stale comment. It describes another use case for the pmap in (alc, 2012-09-28, 2 files, -10/+0)
|   Mach that doesn't exist in FreeBSD.
* userret() already checks for td_locks when INVARIANTS is enabled, so (attilio, 2012-09-08, 1 file, -1/+0)
|   there is no need to check if Giant is acquired after it.
|
|   Reviewed by:  kib
|   MFC after:    1 week
* Unbreak tinderbox. (rpaulo, 2012-08-25, 1 file, -1/+4)
|
* Set mdp only under #ifdef WII. (rpaulo, 2012-08-25, 1 file, -0/+3)
|
* On Nintendo Wii CPUs, the mdp value will be garbage. Set it to NULL (adrian, 2012-08-21, 1 file, -1/+9)
|   so as to not confuse things.
|
|   Submitted by: Margarida Gouveia
* Avoid recursion on the pvh global lock in the aim oea pmap. (alc, 2012-07-10, 2 files, -14/+26)
|   Correct the return type of the pmap_ts_referenced() implementations.
|
|   Reported by: jhibbits [1]
|   Tested by:   andreast
* Replace all uses of the vm page queues lock by a r/w lock that is private (alc, 2012-07-06, 1 file, -29/+46)
|   to this pmap.
|
|   Tested by:  andreast, jhibbits
* The `end' symbol doesn't match the end of the kernel image because it's (rpaulo, 2012-06-29, 2 files, -6/+8)
|   relative to the start address (unless the start address is 0, which
|   is not the case).  This is currently not a problem because all
|   powerpc architectures are using loader(8), which passes metadata to
|   the kernel, including the correct `endkernel' address.  If we don't
|   use loader(8), registers 4 and 5 will have the size of the kernel
|   ELF file, not its end address.  We fix that simply by adding
|   `kernel_text' to `end' to compute `endkernel'.
|
|   Discussed with: nathanw
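A sketch of the fix as described: when loader(8) metadata is absent, `end' is relative to `kernel_text', so the true end address is their sum.  Symbol names follow the commit message; the real code lives in the powerpc locore/machdep startup path:

    #include <stdint.h>

    extern unsigned char kernel_text[], end[];

    static uintptr_t
    compute_endkernel(void)
    {
            /* `end' is relative to the load address, not absolute. */
            return ((uintptr_t)kernel_text + (uintptr_t)end);
    }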
* Fix physical address type to vm_paddr_t also for powerpc64. (raj, 2012-05-25, 1 file, -11/+11)
|
* Fix physical address type to vm_paddr_t. (raj, 2012-05-24, 1 file, -11/+11)
|
* Replace the list of PVOs owned by each PMAP with an RB tree. This simplifies (nwhitehorn, 2012-05-20, 2 files, -176/+57)
|   range operations like pmap_remove() and pmap_protect() as well as
|   allowing simple operations like pmap_extract() not to involve any
|   global state.  This substantially reduces lock coverage for the
|   global table lock and improves concurrency.
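A conceptual sketch of keying PVOs by virtual address in a per-pmap RB tree with sys/tree.h, as the commit describes; type and field names are illustrative, not the actual mmu_oea definitions:

    #include <sys/types.h>
    #include <sys/tree.h>

    struct pvo_entry {
            RB_ENTRY(pvo_entry) pvo_plink;	/* per-pmap tree linkage */
            vm_offset_t         pvo_vaddr;	/* mapped virtual address */
    };

    static int
    pvo_vaddr_cmp(struct pvo_entry *a, struct pvo_entry *b)
    {
            return ((a->pvo_vaddr < b->pvo_vaddr) ? -1 :
                (a->pvo_vaddr > b->pvo_vaddr));
    }

    RB_HEAD(pvo_tree, pvo_entry);
    RB_GENERATE_STATIC(pvo_tree, pvo_entry, pvo_plink, pvo_vaddr_cmp);

    /*
     * A ranged operation such as pmap_remove(sva, eva) can now RB_NFIND
     * the first PVO at or after sva and walk RB_NEXT until eva, instead
     * of scanning an unordered list of every mapping in the pmap.
     */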
* Fix final bugs in memory barriers on PowerPC: (nwhitehorn, 2012-05-04, 2 files, -2/+4)
|   - Use isync/lwsync unconditionally for acquire/release.  Use of
|     isync guarantees a complete memory barrier, which is important for
|     serialization of bus space accesses with mutexes on
|     multi-processor systems.
|   - Go back to using sync as the I/O memory barrier, which solves the
|     same problem as above with respect to mutex release using lwsync,
|     while not penalizing non-I/O operations like a return to sync on
|     the atomic release operations would.
|   - Place an acquisition barrier around thread lock acquisition in
|     cpu_switchin().
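The barrier choices above, sketched as inline assembly; these mirror the PowerPC instructions named in the commit, while the real definitions live in the powerpc atomic and bus-space headers:

    static __inline void
    ppc_lwsync(void)	/* acquire/release ordering for ordinary memory */
    {
            __asm __volatile("lwsync" : : : "memory");
    }

    static __inline void
    ppc_isync(void)		/* completes an acquire sequence */
    {
            __asm __volatile("isync" : : : "memory");
    }

    static __inline void
    ppc_sync(void)		/* full barrier; orders I/O accesses too */
    {
            __asm __volatile("sync" : : : "memory");
    }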
* Fix build on 32-bit systems. (nwhitehorn, 2012-04-28, 1 file, -1/+1)
|
* After switching mutexes to use lwsync, they no longer provide sufficient (nwhitehorn, 2012-04-28, 2 files, -30/+19)
|   guarantees on acquire for the tlbie mutex.  Conversely, the TLB
|   invalidation sequence provides guarantees that do not need to be
|   redundantly applied on release.  Roll a small custom lock that is
|   just right.
|
|   Simultaneously, convert the SLB tree changes back to lwsync, as
|   changing them to sync was a misdiagnosis of the tlbie barrier
|   problem this commit actually fixes.
* Revert r234581 for this file. The lockless SLB tree code does in fact need (nwhitehorn, 2012-04-24, 1 file, -2/+2)
|   a heavyweight sync instead of a lightweight sync to function
|   properly.  Thanks to mdf for the clarification.
* Use lwsync to provide memory barriers on systems that support it instead (nwhitehorn, 2012-04-22, 1 file, -2/+2)
|   of sync (lwsync is an alternate encoding of sync on systems that do
|   not support it, providing graceful fallback).  This provides more
|   than an order of magnitude reduction in the time required to acquire
|   or release a mutex.
|
|   MFC after:  2 months