path: root/sys/sparc64/include/tte.h
Commit message | Author | Age | Files | Lines
* marius, 2010-03-17 (1 file changed, -17/+46):
  - Add TTE and context register bits for the additional page sizes supported
    by UltraSparc-IV and -IV+ as well as SPARC64 V, VI, VII and VIIIfx CPUs.
  - Replace TLB_PCXR_PGSZ_MASK and TLB_SCXR_PGSZ_MASK with TLB_CXR_PGSZ_MASK,
    which is simply the complement of TLB_CXR_CTX_MASK, instead of trying to
    assemble it from the page-size bits, which vary across CPUs.
  - Add macros for the remainder of the SFSR bits, which are useful for at
    least debugging purposes.
* marius, 2008-09-04 (1 file changed, -4/+7):
  The physical address space of cheetah-class CPUs has been extended to
  43 bits, so update TD_PA_BITS accordingly. For the most part this
  increase is transparent to the existing code, except when reading the
  physical address from ASI_{D,I}TLB_DATA_ACCESS_REG, which we only do in
  the loader and which was already adjusted in r182478, or from the OFW
  translations node. While at it, ensure we are only taking valid OFW
  mapping entries into account.
* marius, 2008-08-07 (1 file changed, -1/+5):
  - Reimplement {d,i}tlb_enter() and {d,i}tlb_va_to_pa() in C. There's no
    particular reason for them to be implemented in assembler, and having
    them in C allows easier extension as well as using more C macros and
    {d,i}tlb_slot_max rather than hard-coding magic (and actually
    spitfire-only) values.
  - Fix the compilation of pmap_print_tte().
  - Change pmap_print_tlb() to use ldxa() rather than re-rolling it inline,
    as well as TLB_DAR_SLOT and {d,i}tlb_slot_max rather than hard-coding
    magic (and actually spitfire-only) values.
  - While at it, suffix the above-mentioned functions with "_sun4u" to
    underline that they're architecture-specific.
  - Use __FBSDID and macros instead of magic values in locore.S.
  - Remove unused includes and smp_stack in locore.S.
* jake, 2003-03-27 (1 file changed, -0/+2):
  Handle the fictitious pages created by the device pager. For fictitious
  pages which represent actual physical memory, we must strip off the fake
  page in order to allow illegal aliases to be detected. Otherwise we map
  uncacheable in the virtual and physical caches and set the side-effect
  bit, as is required for mapping device memory. This fixes gstat on
  sparc64, which wants to mmap kernel memory through a character device.
* jake, 2002-12-29 (1 file changed, -1/+1):
  Use memset instead of __builtin_memset. Apparently there's an inline
  memset in libkern which causes problems; why that's there is beyond me.
* jake, 2002-12-21 (1 file changed, -4/+5):
  - Add a pmap pointer to struct md_page, and use this to find the pmap
    that a mapping belongs to by setting it in the vm_page_t structure that
    backs the tsb page that the tte for a mapping is in. This allows the
    pmap that a mapping belongs to to be found without keeping a pointer to
    it in the tte itself.
  - Remove the pmap pointer from struct tte and use the space to make the
    tte pv lists doubly linked (TAILQs), like on other architectures. This
    makes entering or removing a mapping O(1) instead of O(n), where n is
    the number of pmaps a page is mapped by (including kernel_pmap).
  - Use atomic ops for setting and clearing bits in the ttes, now that they
    return the old value and can be easily used for this purpose.
  - Use __builtin_memset for zeroing ttes instead of bzero, so that gcc
    will inline it (4 inline stores using %g0 instead of a function call).
  - Initially set the virtual colour for all the vm_page_ts to be equal to
    their physical colour. This will be more useful once uma_small_alloc is
    implemented, but basically pages with virtual colour equal to physical
    colour are easier to handle at the pmap level, because they can be
    safely accessed through cacheable direct virtual-to-physical mappings
    with that colour, without fear of causing illegal dcache aliases.
  In total these changes give a minor performance improvement, about a 1%
  reduction in system time during buildworld.
* jake, 2002-08-18 (1 file changed, -25/+29):
  Add pmap support for user mappings of multiple page sizes (superpages).
  This supports all hardware page sizes (8K, 64K, 512K, 4MB), but only 8K
  pages are actually used as of yet.
* jake, 2002-07-26 (1 file changed, -2/+0):
  Remove the tlb argument to tlb_page_demap (itlb or dtlb), in order to
  better match the pmap_invalidate API.
* jake, 2002-05-29 (1 file changed, -1/+7):
  Add pv list linkage and a pmap pointer to struct tte. Remove separately
  allocated pv entries and use the linkage in the tte for pv operations.
* jake, 2002-05-21 (1 file changed, -19/+32):
  Redefine the tte accessor macros to take a pointer to a tte, instead of
  the value of the tag or data field. Add macros for getting the page
  shift, size and mask for the physical page that a tte maps (which may be
  one of several sizes). Use the new cache functions for invalidating
  single pages.
* jake, 2002-02-25 (1 file changed, -34/+8):
  Modify the tte format to not include the tlb context number and to store
  the virtual page number in a much more convenient way: all in one piece.
  This greatly simplifies the comparison for a matching tte, and allows the
  fault handlers to be much simpler due to not having to load weird masks.
  Rewrite the tlb fault handlers to account for the new format. These are
  also written to allow faults on the user tsb inside of the fault
  handlers; the kernel fault handler must be aware of this and not clobber
  the other's registers. The faults do not yet occur due to other support
  that is needed (and still under my desk).

  Bug fixes from: tmm
* jake, 2002-01-08 (1 file changed, -0/+1):
  Add a macro for getting the tlbs (itlb and/or dtlb) which the given tte
  may be mapped by.
* jake, 2001-12-29 (1 file changed, -46/+33):
  Make tte bit constants explicitly unsigned and long. Add a weird soft
  bit. Remove struct stte.
* jake, 2001-09-30 (1 file changed, -6/+2):
  Add a macro to get the context from a tte tag, not necessarily a whole
  tte. Remove the old inline.
* obrien, 2001-09-05 (1 file changed, -2/+2):
  style(9) the structure definitions.
* jake, 2001-09-03 (1 file changed, -7/+5):
  Implement pv_bit_count, which is used by pmap_ts_referenced.

  Remove the modified tte bit and add a softwrite bit. Mappings are only
  writeable if they have been written to; thus, in general, modify just
  duplicates the write bit. The softwrite bit makes it easier to
  distinguish mappings which should be writeable but are not yet modified.

  Move the exec bit down one; it was being sign extended when used as an
  immediate operand.

  Use the lock bit to mean tsb page and remove the tsb bit. These are the
  only form of locked (tsb) entries we support, and we need to conserve
  bits where possible.

  Implement pmap_copy_page and pmap_is_modified and friends. Detect
  mappings that are being upgraded from read-only to read-write due to
  copy-on-write and update the write bit appropriately. Make
  trap_mmu_fault do the right thing for protection faults, which is
  necessary to implement copy-on-write correctly. Also handle a bunch more
  userland trap types and add ktr traces.
* jake, 2001-08-06 (1 file changed, -4/+8):
  Fix macros for dealing with tte contexts. Add tte bits for initializing
  tsbs and for specifying managed mappings.
* jake, 2001-07-31 (1 file changed, -0/+146):
  Flesh out the sparc64 port considerably. This contains:
  - mostly complete kernel pmap support, and tested but currently turned
    off userland pmap support
  - low-level assembly language trap, context switching and support code
  - fully implemented atomic.h and supporting cpufunc.h
  - some support for kernel debugging with ddb
  - various header tweaks and filling out of machine dependent structures