| author | David S. Miller <davem@davemloft.net> | 2006-01-31 18:31:20 -0800 |
|---|---|---|
| committer | David S. Miller <davem@sunset.davemloft.net> | 2006-03-20 01:11:17 -0800 |
| commit | 98c5584cfc47932c4f3ccf5eee2e0bae1447b85e (patch) | |
| tree | c067ac8bfc081bbe0b3073374cb15708458e04ab /include/asm-sparc64/mmu.h | |
| parent | 09f94287f7260e03bbeab497e743691fafcc22c3 (diff) | |
| download | op-kernel-dev-98c5584cfc47932c4f3ccf5eee2e0bae1447b85e.zip, op-kernel-dev-98c5584cfc47932c4f3ccf5eee2e0bae1447b85e.tar.gz | |
[SPARC64]: Add infrastructure for dynamic TSB sizing.
This also cleans up tsb_context_switch(). The assembler
routine is now __tsb_context_switch(), and the former is
an inline function that picks out the relevant bits from the
mm_struct and passes them into the assembler code as arguments.
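As a rough sketch of that split, the C side might look something like the following. The exact argument list of __tsb_context_switch() is not visible in this hunk, so the pgd argument and the argument order are assumptions; the mm_context_t field names are taken from the diff further down.

```c
/* Hedged sketch only: the argument order and the pgd parameter are
 * assumed, not taken from this commit.  The context field names match
 * the mm_context_t changes in the diff below.
 */
extern void __tsb_context_switch(unsigned long pgd_pa,
				 unsigned long tsb_reg,
				 unsigned long tsb_vaddr,
				 unsigned long tsb_pte);

static inline void tsb_context_switch(struct mm_struct *mm)
{
	__tsb_context_switch(__pa(mm->pgd),
			     mm->context.tsb_reg_val,
			     mm->context.tsb_map_vaddr,
			     mm->context.tsb_map_pte);
}
```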
setup_tsb_parms() computes the locked TLB entry to map the
TSB. Later when we support using the physical address quad
load instructions of Cheetah+ and later, we'll simply use
the physical address for the TSB register value and set
the map virtual and PTE both to zero.
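A minimal sketch of the two strategies that paragraph describes, assuming the mm_context_t fields from the diff below; setup_tsb_parms_sketch() and tsb_kernel_pte() are hypothetical names for illustration, not the commit's actual code:

```c
/* Illustrative sketch, not the commit's setup_tsb_parms(). */
extern unsigned long tsb_kernel_pte(unsigned long paddr);	/* hypothetical helper */

static void setup_tsb_parms_sketch(struct mm_struct *mm, int has_phys_quad_loads)
{
	unsigned long base = (unsigned long) mm->context.tsb;

	if (has_phys_quad_loads) {
		/* Cheetah+ and later: the TSB register can hold a physical
		 * address, so no locked virtual mapping is needed.
		 */
		mm->context.tsb_reg_val   = __pa(base);
		mm->context.tsb_map_vaddr = 0;
		mm->context.tsb_map_pte   = 0;
	} else {
		/* Older chips: the TSB is accessed virtually, so record the
		 * virtual address and a PTE for the locked TLB entry that
		 * maps it, and point the TSB register at that virtual base.
		 */
		mm->context.tsb_map_vaddr = base;
		mm->context.tsb_map_pte   = tsb_kernel_pte(__pa(base));
		mm->context.tsb_reg_val   = base;
	}
}
```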
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/asm-sparc64/mmu.h')
| -rw-r--r-- | include/asm-sparc64/mmu.h | 13 |
|---|---|---|

1 file changed, 12 insertions(+), 1 deletion(-)
```diff
diff --git a/include/asm-sparc64/mmu.h b/include/asm-sparc64/mmu.h
index 36384cf..2effeba 100644
--- a/include/asm-sparc64/mmu.h
+++ b/include/asm-sparc64/mmu.h
@@ -90,9 +90,20 @@
 
 #ifndef __ASSEMBLY__
 
+#define TSB_ENTRY_ALIGNMENT	16
+
+struct tsb {
+	unsigned long tag;
+	unsigned long pte;
+} __attribute__((aligned(TSB_ENTRY_ALIGNMENT)));
+
 typedef struct {
 	unsigned long	sparc64_ctx_val;
-	unsigned long	*sparc64_tsb;
+	struct tsb	*tsb;
+	unsigned long	tsb_nentries;
+	unsigned long	tsb_reg_val;
+	unsigned long	tsb_map_vaddr;
+	unsigned long	tsb_map_pte;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
```
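For context, here is a hedged sketch of how a software probe could consume the new fields; the index and tag computation (virtual page number masked by the table size) is an illustrative assumption, not the hardware-defined TSB tag format:

```c
/* Hedged sketch: probe a TSB laid out as an array of struct tsb.
 * The tag is assumed to be the virtual page number purely for
 * illustration; the real format is defined by the TSB hardware spec.
 */
static struct tsb *tsb_lookup_sketch(mm_context_t *ctx, unsigned long vaddr)
{
	unsigned long vpn = vaddr >> PAGE_SHIFT;
	unsigned long idx = vpn & (ctx->tsb_nentries - 1);	/* nentries is a power of two */
	struct tsb *ent = &ctx->tsb[idx];

	if (ent->tag == vpn)		/* hit: ent->pte holds the translation */
		return ent;
	return NULL;			/* miss: fall back to the page tables */
}
```

Keeping the entry count in the context, rather than assuming a fixed table size, is what lets the index mask change when a per-address-space TSB is grown, which is the point of the dynamic sizing infrastructure this commit introduces.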