path: root/arch/x86/include/asm/pgtable_64.h
* gcc-4.6: mm: fix unused but set warnings (Andi Kleen, 2010-08-09, 1 file, -2/+2)

  No real bugs, just some dead code and some fixups.

  Signed-off-by: Andi Kleen <ak@linux.intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
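  A minimal sketch of the warning class involved (hypothetical code,
  not the commit's actual hunks): gcc 4.6's new
  -Wunused-but-set-variable flag triggers on stores whose value is
  never read, and the usual fix is to delete the dead store or
  actually consume the value.

      /* Hypothetical example, not from the commit. */
      static int before(pte_t *ptep)
      {
              pte_t pte = *ptep;  /* warning: variable 'pte' set but not used */

              return 0;
      }

      static int after(pte_t *ptep)
      {
              return pte_present(*ptep);      /* value actually consumed */
      }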
* MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself (Russell King, 2010-02-20, 1 file, -1/+1)

  On VIVT ARM, when we have multiple shared mappings of the same file
  in the same MM, we need to ensure that we have coherency across all
  copies. We do this via make_coherent() by making the pages
  uncacheable.

  This used to work fine, until we allowed highmem with highpte - we
  now have a page table which is mapped as required, and is not
  available for modification via update_mmu_cache().

  Ralf Baechle suggested getting rid of the PTE value passed to
  update_mmu_cache():

    On MIPS update_mmu_cache() calls __update_tlb() which walks
    pagetables to construct a pointer to the pte again. Passing a
    pte_t * is much more elegant. Maybe we might even replace the pte
    argument with the pte_t?

  Ben Herrenschmidt would also like the pte pointer for PowerPC:

    Passing the ptep in there is exactly what I want. I want that
    -instead- of the PTE value, because I have issue on some ppc
    cases, for I$/D$ coherency, where set_pte_at() may decide to mask
    out the _PAGE_EXEC.

  So, pass in the mapped page table pointer into update_mmu_cache(),
  and remove the PTE value, updating all implementations and call
  sites to suit.

  Includes a fix from Stephen Rothwell:

    sparc: fix fallout from update_mmu_cache API change

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
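  A sketch of the interface change this describes (prototype only;
  the per-architecture bodies differ):

      /* Before: implementations received the PTE by value. */
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long address, pte_t pte);

      /* After: they receive a pointer to the mapped page-table entry,
       * so e.g. MIPS no longer re-walks the page tables and highpte
       * mappings stay accessible for modification. */
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long address, pte_t *ptep);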
* x86, 64-bit: Clean up user address masking (Linus Torvalds, 2009-06-20, 1 file, -4/+1)

  The discussion about using "access_ok()" in get_user_pages_fast()
  (see commit 7f8189068726492950bf1a2dcfd9b51314560abf: "x86: don't
  use 'access_ok()' as a range check in get_user_pages_fast()" for
  details and end result), made us notice that x86-64 was really
  being very sloppy about virtual address checking.

  So be way more careful and straightforward about masking x86-64
  virtual addresses:

  - All the VIRTUAL_MASK* variants now cover half of the address
    space, it's not like we can use the full mask on a signed
    integer, and the larger mask just invites mistakes when applying
    it to either half of the 48-bit address space.

  - /proc/kcore's kc_offset_to_vaddr() becomes a lot more obvious
    when it transforms a file offset into a (kernel-half) virtual
    address.

  - Unify/simplify the 32-bit and 64-bit USER_DS definition to be
    based on TASK_SIZE_MAX.

  This cleanup and more careful/obvious user virtual address checking
  also uncovered a buglet in the x86-64 implementation of
  strnlen_user(): it would do an "access_ok()" check on the whole
  potential area, even if the string itself was much shorter, and
  thus return an error even for valid strings. Our sloppy checking
  had hidden this.

  So this fixes 'strnlen_user()' to do this properly, the same way we
  already handled user strings in 'strncpy_from_user()'. Namely by
  just checking the first byte, and then relying on fault handling
  for the rest. That always works, since we impose a guard page that
  cannot be mapped at the end of the user space address space (and
  even if we didn't, we'd have the address space hole).

  Acked-by: Ingo Molnar <mingo@elte.hu>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
  Cc: H. Peter Anvin <hpa@zytor.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
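  A hedged sketch of the strnlen_user() strategy described above:
  range-check only the first byte, then let fault handling (backed by
  the guard page at the top of user space) terminate the walk. Names
  mirror the x86-64 code of the era but are illustrative:

      long strnlen_user(const char __user *s, long n)
      {
              /* Explicitly validate only the first byte... */
              if (!access_ok(VERIFY_READ, s, 1))
                      return 0;
              /* ...and rely on the fault handler for the rest, the
               * same way strncpy_from_user() already does. */
              return __strnlen_user(s, n);
      }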
* x86: clean up declarations and variables (Jaswinder Singh Rajput, 2009-04-12, 1 file, -6/+0)

  Impact: cleanup, no code changed

  - syscalls.h: update declarations due to unifications
  - irq.c: declare smp_generic_interrupt() before it gets used
  - process.c: declare sys_fork() and sys_vfork() before they get used
  - tsc.c: rename the shadowed tsc_khz variable
  - apic/probe_32.c: declare apic_default before it gets used
  - apic/nmi.c: prev_nmi_count should be unsigned
  - apic/io_apic.c: declare smp_irq_move_cleanup_interrupt() before it gets used
  - mm/init.c: declare direct_gbpages and free_initrd_mem before they get used

  Signed-off-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: Split pgtable_64.h into pgtable_64_types.h and pgtable_64.h (Jeremy Fitzhardinge, 2009-02-11, 1 file, -46/+2)

  Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
* Merge commit 'remotes/tip/x86/paravirt' into x86/untangle2 (Jeremy Fitzhardinge, 2009-02-11, 1 file, -1/+0)

  * commit 'remotes/tip/x86/paravirt': (175 commits)
      xen: use direct ops on 64-bit
      xen: make direct versions of irq_enable/disable/save/restore to common code
      xen: setup percpu data pointers
      xen: fix 32-bit build resulting from mmu move
      x86/paravirt: return full 64-bit result
      x86, percpu: fix kexec with vmlinux
      x86/vmi: fix interrupt enable/disable/save/restore calling convention.
      x86/paravirt: don't restore second return reg
      xen: setup percpu data pointers
      x86: split loading percpu segments from loading gdt
      x86: pass in cpu number to switch_to_new_gdt()
      x86: UV fix uv_flush_send_and_wait()
      x86/paravirt: fix missing callee-save call on pud_val
      x86/paravirt: use callee-saved convention for pte_val/make_pte/etc
      x86/paravirt: implement PVOP_CALL macros for callee-save functions
      x86/paravirt: add register-saving thunks to reduce caller register pressure
      x86/paravirt: selectively save/restore regs around pvops calls
      x86: fix paravirt clobber in entry_64.S
      x86/pvops: add a paravirt_ident functions to allow special patching
      xen: move remaining mmu-related stuff into mmu.c
      ...

  Conflicts:
      arch/x86/mach-voyager/voyager_smp.c
      arch/x86/mm/fault.c
* x86: remove pda.h (Brian Gerst, 2009-01-20, 1 file, -1/+0)

  Impact: cleanup

  Signed-off-by: Brian Gerst <brgerst@gmail.com>
* x86: unify io_remap_pfn_range (Jeremy Fitzhardinge, 2009-02-06, 1 file, -3/+0)

  Impact: cleanup

  Unify io_remap_pfn_range. Don't demacro yet.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pgd_none (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pgd_none.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
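  Every "unify and demacro" entry in this run follows the same
  pattern, sketched here for pgd_none; the unified bodies live in
  pgtable.h, and this is an approximation rather than the exact
  committed code. The same shape repeats for the pud/pmd/pte helpers
  in the entries below:

      /* Before: a per-width macro in pgtable_64.h */
      #define pgd_none(x)     (!pgd_val(x))

      /* After: one shared inline in pgtable.h */
      static inline int pgd_none(pgd_t pgd)
      {
              return !native_pgd_val(pgd);
      }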
* x86: unify pud_none (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pud_none.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pages_to_mb (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pages_to_mb.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_bad (Jeremy Fitzhardinge, 2009-02-06, 1 file, -5/+0)

  Impact: cleanup

  Unify and demacro pmd_bad.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pgd_bad (Jeremy Fitzhardinge, 2009-02-06, 1 file, -5/+0)

  Impact: cleanup

  Unify and demacro pgd_bad.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pgd_bad (Jeremy Fitzhardinge, 2009-02-06, 1 file, -5/+0)

  Impact: cleanup

  Unify and demacro pgd_bad.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pud_large (Jeremy Fitzhardinge, 2009-02-06, 1 file, -6/+0)

  Impact: cleanup

  Unify and demacro pud_large.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* x86: unify pte_offset_kernel (Jeremy Fitzhardinge, 2009-02-06, 1 file, -3/+0)

  Impact: cleanup

  Unify and demacro pte_offset_kernel.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pte_index (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pte_index.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_pfn (Jeremy Fitzhardinge, 2009-02-06, 1 file, -3/+0)

  Impact: cleanup

  Unify pmd_pfn. Unfortunately it can't be demacroed because it has a
  cyclic dependency on linux/mm.h:page_to_nid().

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_pfn (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pmd_pfn.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: remove redundant pfn_pmd definition (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  It's already defined in pgtable.h

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_offset (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pmd_offset.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* x86: unify pmd_index (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pmd_index.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_page (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pmd_page.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_page_vaddr (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pmd_page_vaddr.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pud_offset (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pud_offset.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pud_index (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pud_index.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pgd_page (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pgd_page.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* x86: unify pud_page (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pud_page.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pud_page_vaddr (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pud_page_vaddr.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pgd_page_vaddr (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pgd_page_vaddr.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_none (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pmd_none.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pmd_present (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pmd_present.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pgd_present (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pgd_present.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* x86: unify pud_present (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pud_present.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pte_present (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pte_present.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pte_same (Jeremy Fitzhardinge, 2009-02-06, 1 file, -2/+0)

  Impact: cleanup

  Unify and demacro pte_same.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

* x86: unify pte_none (Jeremy Fitzhardinge, 2009-02-06, 1 file, -1/+0)

  Impact: cleanup

  Unify and demacro pte_none.

  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* Merge branches 'x86/apic', 'x86/cleanups', 'x86/cpufeature', 'x86/crashdump', 'x86/debug', 'x86/defconfig', 'x86/detect-hyper', 'x86/doc', 'x86/dumpstack', 'x86/early-printk', 'x86/fpu', 'x86/idle', 'x86/io', 'x86/memory-corruption-check', 'x86/microcode', 'x86/mm', 'x86/mtrr', 'x86/nmi-watchdog', 'x86/pat2', 'x86/pci-ioapic-boot-irq-quirks', 'x86/ptrace', 'x86/quirks', 'x86/reboot', 'x86/setup-memory', 'x86/signal', 'x86/sparse-fixes', 'x86/time', 'x86/uv' and 'x86/xen' into x86/core (Ingo Molnar, 2008-12-23, 1 file, -11/+17)
* x86: PAT: change pgprot_noncached to uc_minus instead of strong uc - v3 (venkatesh.pallipadi@intel.com, 2008-12-18, 1 file, -6/+0)

  Impact: mm behavior change.

  Make pgprot_noncached uc_minus instead of strong UC. This will make
  pgprot_noncached be in line with ioremap_nocache() and all the
  other APIs that map page uc_minus on uc request.

  Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
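  Under the default PAT layout this amounts to dropping _PAGE_PWT
  from the returned protection bits, selecting UC_MINUS (PAT entry 2,
  PCD=1/PWT=0) instead of strong UC (entry 3, PCD=1/PWT=1). A hedged
  sketch; the real definition is shared in pgtable.h and also handles
  CPUs without these bits:

      /* Old behaviour: strong UC (PCD=1, PWT=1) */
      #define pgprot_noncached_strong(prot) \
              __pgprot(pgprot_val(prot) | _PAGE_PCD | _PAGE_PWT)

      /* New behaviour: UC_MINUS (PCD=1, PWT=0), matching
       * ioremap_nocache() */
      #define pgprot_noncached(prot) \
              __pgprot(pgprot_val(prot) | _PAGE_PCD)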
* x86, mm: limit MAXMEM on 64-bit (Ingo Molnar, 2008-12-16, 1 file, -1/+1)

  On 64-bit x86 the physical memory limit is controlled by the
  sparsemem bits - which are 44 bits right now. But MAXMEM (the max
  pfn number e820 parsing will allow to enter our sizing routines)
  is set to 0x00003fffffffffff, i.e. 46 bits - that's too large
  because it overlaps into the vmalloc range.

  So couple MAXMEM to MAX_PHYSMEM_BITS, and add a comment that the
  maximum of MAX_PHYSMEM_BITS is 45 bits.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
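  A hedged sketch of the coupling described above (the exact spelling
  of the committed macro may differ):

      /* Before: a literal 46-bit limit that overlapped vmalloc space */
      #define MAXMEM  _AC(0x00003fffffffffff, UL)

      /* After: derived from the sparsemem configuration;
       * MAX_PHYSMEM_BITS is 44 today and must not exceed 45. */
      #define MAXMEM  (_AC(1, UL) << MAX_PHYSMEM_BITS)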
* x86: consolidate __swp_XXX() macros (Jan Beulich, 2008-12-16, 1 file, -4/+16)

  Impact: cleanup, code robustization

  The __swp_...() macros silently relied upon which bits are used for
  _PAGE_FILE and _PAGE_PROTNONE. After having changed _PAGE_PROTNONE
  in our Xen kernel to no longer overlap _PAGE_PAT, live locks and
  crashes were reported that could have been avoided if these macros
  properly used the symbolic constants. Since, as pointed out
  earlier, for Xen Dom0 support mainline likewise will need to
  eliminate the conflict between _PAGE_PAT and _PAGE_PROTNONE, this
  patch does all the necessary adjustments, plus it introduces a
  mechanism to check consistency between MAX_SWAPFILES_SHIFT and the
  actual encoding macros.

  This also fixes a latent bug in that x86-64 used a 6-bit mask in
  __swp_type(), and if MAX_SWAPFILES_SHIFT was increased beyond 5 in
  (the seemingly unrelated) linux/swap.h, this would have resulted in
  a collision with _PAGE_FILE.

  Non-PAE 32-bit code gets similarly adjusted for its pte_to_pgoff()
  and pgoff_to_pte() calculations.

  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
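  A hedged sketch of the mechanism: the swap-entry fields are derived
  from the symbolic _PAGE_BIT_* constants, and a build-time assertion
  ties them to MAX_SWAPFILES_SHIFT (close to, but not necessarily
  identical with, the committed macros):

      #define SWP_TYPE_BITS    (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
      #define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)

      /* Break the build if the swap core assumes more type bits than
       * the PTE encoding can actually hold. */
      #define MAX_SWAPFILES_CHECK() \
              BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)

      #define __swp_type(x)   (((x).val >> (_PAGE_BIT_PRESENT + 1)) \
                               & ((1U << SWP_TYPE_BITS) - 1))
      #define __swp_offset(x) ((x).val >> SWP_OFFSET_SHIFT)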
* x86: Fix ASM_X86__ header guards (H. Peter Anvin, 2008-10-22, 1 file, -3/+3)

  Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

  a. the double underscore is ugly and pointless.
  b. no leading underscore violates namespace constraints.

  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
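  For this particular header the rename comes down to the following
  (guard name inferred from the stated pattern, not copied from the
  diff):

      -#ifndef ASM_X86__PGTABLE_64_H
      -#define ASM_X86__PGTABLE_64_H
      +#ifndef _ASM_X86_PGTABLE_64_H
      +#define _ASM_X86_PGTABLE_64_H
       ...
      -#endif /* ASM_X86__PGTABLE_64_H */
      +#endif /* _ASM_X86_PGTABLE_64_H */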
* x86, um: ... and asm-x86 move (Al Viro, 2008-10-22, 1 file, -0/+285)

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>