path: root/arch/sparc64/mm/init.c
Commit message (Author, Age; Files, Lines)
* [SPARC64]: Fix section mismatch from kernel_map_range (Sam Ravnborg, 2008-02-24; 1 file, -1/+2)
    Fix the following warnings:

    WARNING: vmlinux.o(.text+0x4f980): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
    WARNING: vmlinux.o(.text+0x4f9cc): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()

    alloc_bootmem() is only used during early init, and for any subsequent call to kernel_map_range() the program logic avoids the call. So annotate kernel_map_range() with __ref to tell modpost to ignore the reference to an __init function.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
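    A minimal sketch of the annotation pattern described above; the function body is illustrative, not the real kernel_map_range() implementation. Marking a non-init function with __ref tells modpost that its reference to an __init symbol is intentional.

        #include <linux/init.h>
        #include <linux/bootmem.h>
        #include <linux/types.h>

        /* The bootmem allocator is only reached on the early-init path; later
         * callers take the other branch.  __ref suppresses the section-mismatch
         * warning for this deliberate reference to an __init function. */
        static void __ref map_range_sketch(unsigned long start, unsigned long end, bool early)
        {
                if (early) {
                        void *pte_page = __alloc_bootmem(PAGE_SIZE, PAGE_SIZE, 0);
                        /* ... populate page tables for [start, end) using pte_page ... */
                }
                /* ... install the mapping for [start, end) ... */
        }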
* [SPARC64]: Always register a PROM based early console. (David S. Miller, 2008-02-17; 1 file, -3/+3)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Remove DEBUG_BOOTMEM. (David S. Miller, 2008-02-13; 1 file, -42/+1)
    We'll replace it in the future with better logging facilities that can be enabled at run time.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* Introduce flags for reserve_bootmem() (Bernhard Walle, 2008-02-07; 1 file, -4/+4)
    This patchset adds a flags argument to reserve_bootmem() and uses the BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect collisions between the crashkernel area and already used memory.

    This patch: change the reserve_bootmem() function to accept a new flag, BOOTMEM_EXCLUSIVE. If that flag is set, the function returns -EBUSY if the memory has already been reserved in the past. This is to avoid conflicts. Because that code runs before SMP initialisation, there's no race condition inside reserve_bootmem_core().

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: fix powerpc build]
    Signed-off-by: Bernhard Walle <bwalle@suse.de>
    Cc: <linux-arch@vger.kernel.org>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Cc: Vivek Goyal <vgoyal@in.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
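    A minimal sketch of the calling convention this change introduces; the function name, base address, and size are made-up example values, not the actual crashkernel code:

        #include <linux/bootmem.h>
        #include <linux/errno.h>
        #include <linux/kernel.h>

        static int __init reserve_example_region(void)
        {
                unsigned long base = 0x2000000;         /* illustrative values only */
                unsigned long size = 0x1000000;
                int ret;

                /* BOOTMEM_EXCLUSIVE makes an overlap with an earlier reservation
                 * fail with -EBUSY instead of silently succeeding. */
                ret = reserve_bootmem(base, size, BOOTMEM_EXCLUSIVE);
                if (ret == -EBUSY)
                        printk(KERN_WARNING "region overlaps already reserved memory\n");
                return ret;
        }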
* SPARC64: use generic percpu (travis@sgi.com, 2008-01-30; 1 file, -0/+5)
    Sparc64 has a way of providing the base address for the per-cpu area of the currently executing processor in a global register. Sparc64 also provides a way to calculate the address of a per-cpu area from a base address instead of performing an array lookup.

    Cc: David Miller <davem@davemloft.net>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* [SPARC64]: Fix two kernel linear mapping setup bugs. (David S. Miller, 2007-12-13; 1 file, -9/+20)
    This was caught and identified by Greg Onufer.

    Since we set up the 256M/4M bitmap table after taking over the trap table, it's possible for a 4MB mapping to get loaded into the TLB beforehand for a region that will later use 256MB mappings. This can cause illegal TLB multiple-match conditions. Fix this by setting up the bitmap before we take over the trap table.

    Next, __flush_tlb_all() was not doing anything on hypervisor platforms. Fix by adding sun4v_mmu_demap_all() and calling it.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: __inline__ --> inline (David S. Miller, 2007-10-27; 1 file, -2/+2)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* fix memory hot remove not configured case. (KAMEZAWA Hiroyuki, 2007-10-16; 1 file, -5/+0)
    Now, arch-dependent code around CONFIG_MEMORY_HOTREMOVE is a mess. This patch cleans it up. This is against 2.6.23-rc6-mm1.

    - Fix a compile failure on ia64 for the CONFIG_MEMORY_HOTPLUG && !CONFIG_MEMORY_HOTREMOVE case.
    - For !CONFIG_MEMORY_HOTREMOVE, add a generic no-op remove_memory() which returns -EINVAL (see the sketch after this entry).
    - Remove remove_pages(), which was only used in powerpc.
    - Remove the no-op remove_memory() in i386, sh, sparc64, x86_64.
    - Only powerpc returned -ENOSYS at memory hot remove (no-op); change it to return -EINVAL.

    Note: currently, only ia64 supports CONFIG_MEMORY_HOTREMOVE. I welcome other archs if there are requirements and testers.

    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
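    A minimal sketch of the generic fallback described above, assuming the stub lives in a shared memory-hotplug header (the exact header and placement are illustrative):

        #include <linux/types.h>
        #include <linux/errno.h>

        #ifndef CONFIG_MEMORY_HOTREMOVE
        /* Hot remove is not configured: every request is simply refused. */
        static inline int remove_memory(u64 start, u64 size)
        {
                return -EINVAL;
        }
        #endif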
* SPARC64: SPARSEMEM_VMEMMAP support (David Miller, 2007-10-16; 1 file, -0/+52)
    [apw@shadowen.org: style fixups]
    [apw@shadowen.org: vmemmap sparc64: convert to new config options]
    Signed-off-by: Andy Whitcroft <apw@shadowen.org>
    Acked-by: Mel Gorman <mel@csn.ul.ie>
    Acked-by: Christoph Lameter <clameter@sgi.com>
    Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [SPARC64]: Only use bypass accesses to INO buckets. (David S. Miller, 2007-10-13; 1 file, -2/+0)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fill holes in hypervisor APIs and fix KTSB registry. (David S. Miller, 2007-05-29; 1 file, -30/+11)
    Several interfaces were missing and others misnumbered or improperly documented. Also, make sure to check the return value when registering the kernel TSBs with the hypervisor. This helped to find the 4MB kernel TSB alignment bug fixed in a previous changeset.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix two bugs wrt. kernel 4MB TSB. (David S. Miller, 2007-05-29; 1 file, -2/+5)
    1) The TSB lookup was not using the correct hash mask.

    2) It was not aligned on a boundary equal to its size, which is required by the sun4v Hypervisor.

    The hypervisor call that registers the kernel TSB also wasn't having its return value checked, and that bug will be fixed up as well in a subsequent changeset.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Eliminate NR_CPUS limitations. (David S. Miller, 2007-05-29; 1 file, -6/+16)
    Cheetah systems can have cpuids as large as 1023, although physical systems don't have that many cpus.

    Only three limitations existed in the kernel preventing arbitrary NR_CPUS values:

    1) dcache dirty cpu state stored in page->flags on D-cache aliasing platforms. With some build time calculations and some build-time BUG checks on page->flags layout, this one was easily solved.

    2) The cheetah XCALL delivery code could only handle a cpumask with up to 32 cpus set. Some simple looping logic clears that up too.

    3) thread_info->cpu was a u8, easily changed to a u16.

    There are a few spots in the kernel that still put NR_CPUS sized arrays on the kernel stack, but that's not a sparc64 specific problem.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use machine description and OBP properly for cpu probing. (David S. Miller, 2007-05-29; 1 file, -3/+14)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Report proper system soft state to the hypervisor. (David S. Miller, 2007-05-29; 1 file, -0/+3)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Spelling fixes. (Simon Arlott, 2007-05-11; 1 file, -1/+1)
    Spelling fixes in arch/sparc64/.

    Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* Quicklist support for sparc64 (David Miller, 2007-05-07; 1 file, -24/+0)
    I ported this to sparc64 as per the patch below, tested on UP SunBlade1500 and 24 cpu Niagara T1000.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Andi Kleen <ak@suse.de>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [SPARC64]: Document and fix calculation of pages_avail. (David S. Miller, 2007-04-26; 1 file, -2/+23)
    It should be set to the total number of pages that the system will really have available after things like initmem, the bootmem map, and initrd are freed up.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use bootmem_bootmap_pages() in choose_bootmap_pfn(). (David S. Miller, 2007-04-26; 1 file, -2/+2)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add proper header file extern for cmdline_memory_size. (David S. Miller, 2007-04-26; 1 file, -2/+0)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill sparc_ultra_dump_{i,d}tlb() (David S. Miller, 2007-04-26; 1 file, -87/+0)
    While useful in odd circumstances to debug something, they are normally totally unused and anyone can fetch this code out of the history if they really need it. And in any event, the person who needs this kind of code is usually me :-)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use DECLARE_BITMAP and BITS_TO_LONGS in mm/init.c (David S. Miller, 2007-04-26; 1 file, -6/+7)
    Signed-off-by: David S. Miller <davem@davemloft.net>
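    A minimal sketch of the macros named in the subject line; the bitmap name and size are illustrative, not the actual mm/init.c declaration:

        #include <linux/bitmap.h>

        /* DECLARE_BITMAP(name, bits) expands to
         *     unsigned long name[BITS_TO_LONGS(bits)];
         * so the backing array is sized in longs by the macro instead of by hand. */
        #define EXAMPLE_NR_BITS 1024                    /* illustrative bit count */
        static DECLARE_BITMAP(example_bitmap, EXAMPLE_NR_BITS);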
* [SPARC64]: Give more verbose show_mem() output just like i386. (David S. Miller, 2007-04-26; 1 file, -2/+37)
    We now report everything i386 does except for highmem, which doesn't apply.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Mark show_mem() printk's with KERN_INFO. (David S. Miller, 2007-04-26; 1 file, -4/+4)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill kvaddr_to_phys() and friends. (David S. Miller, 2007-04-26; 1 file, -63/+28)
    Just inline it into flush_icache_range(), which is the only user.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Privatize sun4u_get_pte() and fix name. (David S. Miller, 2007-04-26; 1 file, -67/+62)
    __get_phys is only called from init.c, as is prom_virt_to_phys(); __get_iospace() is not called at all; and sun4u_get_pte() is largely misnamed. Privatize the implementation and helper functions of sun4u_get_phys() to mm/init.c, and rename to kvaddr_to_paddr().

    The only user of this thing is flush_icache_range(), and thus things can be considerably further simplified. For example, we should only see module or PAGE_OFFSET kernel addresses here, so we don't need the OBP firmware range handling at all.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill _start[]/_end[] declarations in mm/init.c (David S. Miller, 2007-04-26; 1 file, -3/+0)
    We already get those from asm/sections.h.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Simplify read_obp_memory(). (David S. Miller, 2007-04-26; 1 file, -16/+11)
    Kick out empty entries as soon as we spot them, and use memmove() instead of a silly loop to make the operation more clear.

    Signed-off-by: David S. Miller <davem@davemloft.net>
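    A minimal sketch of the memmove()-based compaction described above; the entry structure and names are illustrative stand-ins for the OBP memory-list data:

        #include <string.h>

        struct mem_entry {
                unsigned long base;
                unsigned long size;
        };

        /* Drop entry i by sliding the tail of the array down over it. */
        static void kick_out_entry(struct mem_entry *regs, int *ents, int i)
        {
                memmove(&regs[i], &regs[i + 1],
                        (*ents - i - 1) * sizeof(regs[0]));
                (*ents)--;
        }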
* [SPARC64]: Get DEBUG_PAGEALLOC working again. (David S. Miller, 2007-03-16; 1 file, -2/+28)
    We have to make sure to use base-pagesize TLB entries even during the early transition period where we need TLB miss handling but don't yet have the kernel page tables set up for the linear region.

    It is therefore also necessary to not use the 4MB TSB for these translations, and instead use the normal kernel TSB. This allows us to get rid of the 4MB TSB for debug builds, which shrinks the kernel a little bit.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: We do not need ZONE_DMA. (David S. Miller, 2007-02-12; 1 file, -2/+2)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [PATCH] Drop free_pages() (Christoph Lameter, 2007-02-11; 1 file, -2/+2)
    nr_free_pages is now a simple access to a global variable. Make it a macro instead of a function.

    nr_free_pages now requires vmstat.h to be included. There is one occurrence in power management where we need to add the include. Directly refer to global_page_state() there to clarify why the #include was added.

    [akpm@osdl.org: arm build fix]
    [akpm@osdl.org: sparc64 build fix]
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
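    A minimal sketch of the shape of the change, assuming the free-page count is exposed through the vmstat global state as the commit text suggests; treat the exact definition as illustrative rather than the verbatim kernel code:

        #include <linux/vmstat.h>

        /* Instead of a function that sums per-zone counters, nr_free_pages
         * becomes a macro reading the global vmstat counter directly. */
        #define nr_free_pages() global_page_state(NR_FREE_PAGES)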
* [SPARC64]: Fix "mem=xxx" handling.David S. Miller2006-12-311-23/+124
| | | | | | | | We were not being careful enough. When we trim the physical memory areas, we have to make sure we don't remove the kernel image or initial ramdisk image ranges. Signed-off-by: David S. Miller <davem@davemloft.net>
* [PATCH] slab: remove kmem_cache_t (Christoph Lameter, 2006-12-07; 1 file, -2/+2)
    Replace all uses of kmem_cache_t with struct kmem_cache. The patch was generated using the following script:

        #!/bin/sh
        #
        # Replace one string by another in all the kernel sources.
        #
        set -e
        for file in `find * -name "*.c" -o -name "*.h" | xargs grep -l $1`; do
                quilt add $file
                sed -e "1,\$s/$1/$2/g" $file > /tmp/$$
                mv /tmp/$$ $file
                quilt refresh
        done

    The script was run like this:

        sh replace kmem_cache_t "struct kmem_cache"

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [SPARC64]: Kill bogus check from bootmem_init(). (David S. Miller, 2006-09-26; 1 file, -2/+1)
    There is an ancient and totally incorrect sanity check being done on the ramdisk location. The check assumes that the kernel is always loaded to physical address zero, which is wrong. It was trying to validate the ramdisk value by saying that if it fell within the kernel image address range it must be wrong.

    Anyways, kill this because it actually creates problems. The 'ramdisk_image' should always be adjusted down by KERNBASE. SILO can easily put the ramdisk in a location which causes this test to trigger, breaking things.

    [ Based almost entirely upon a patch from Ben Collins. ]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30; 1 file, -1/+0)
    Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [PATCH] add poison.h and patch primary users (Randy Dunlap, 2006-06-27; 1 file, -1/+2)
    Localize poison values into one header file for better documentation and easier/quicker debugging, and so that the same values won't be used for multiple purposes. Use these constants in core arch., mm, driver, and fs code.

    Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
    Acked-by: Matt Mackall <mpm@selenic.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Andi Kleen <ak@muc.de>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
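    A brief usage sketch, assuming one of the constants that linux/poison.h centralizes; the helper function and buffer are illustrative:

        #include <linux/poison.h>
        #include <linux/string.h>
        #include <linux/types.h>

        /* Fill freed memory with a recognizable byte pattern so stale uses
         * stand out in crash dumps; the value comes from the shared header
         * instead of being redefined locally. */
        static void poison_buffer_sketch(void *buf, size_t size)
        {
                memset(buf, POISON_FREE, size);
        }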
* [SPARC64]: Export _PAGE_IE to modules. (David S. Miller, 2006-06-25; 1 file, -0/+1)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix for Niagara memory corruption. (David S. Miller, 2006-06-23; 1 file, -1/+27)
    On some sun4v systems, after netboot the ethernet controller and its DMA mappings can be left active. The net result is that the kernel can end up using memory the ethernet controller will continue to DMA into, resulting in corruption.

    To deal with this, we are more careful about importing IOMMU translations which OBP has left in the IO-TLB. If the mapping maps into an area the firmware claimed was free and available memory for the kernel to use, we demap instead of import that IOMMU entry.

    This is going to cause the network chip to take a PCI master abort on the next DMA it attempts, if it has been left going like this. All tests show that this is handled properly by the PCI layer and the e1000 drivers.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Minor bug fix to obp_read_memory(). (David S. Miller, 2006-06-23; 1 file, -2/+19)
    If we end up zero'ing out the size of one of the entries, pop it out of the array completely, because some code that examines these things cannot handle a zero length element properly.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Import OBP device tree into kernel data structures. (David S. Miller, 2006-06-23; 1 file, -0/+3)
    The basic framework is based on the PowerPC OF code. This code even tries to get the device addressing components correct in the full path names.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [PATCH] sparc64: fix set_page_count merge clash (Nick Piggin, 2006-03-23; 1 file, -2/+2)
    A merge clash will have broken sparc64. Synch up its online_page implementation with powerpc, which was identical until the set_page_count removal.

    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6 (Linus Torvalds, 2006-03-22; 1 file, -2/+19)
    * master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
        [SPARC64]: Add a secondary TSB for hugepage mappings.
        [SPARC]: Respect vm_page_prot in io_remap_page_range().
| * [SPARC64]: Add a secondary TSB for hugepage mappings. (David S. Miller, 2006-03-22; 1 file, -2/+19)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | [PATCH] remove set_page_count() outside mm/ (Nick Piggin, 2006-03-22; 1 file, -2/+2)
    set_page_count usage outside mm/ is limited to setting the refcount to 1. Remove set_page_count from outside mm/, and replace those users with init_page_count() and set_page_refcounted(). This allows more debug checking, and tighter control on how code is allowed to play around with page->_count.

    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
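    A minimal sketch of the replacement pattern described above; the online_page()-style helper is illustrative, not the exact kernel code:

        #include <linux/mm.h>

        /* Before this change the helper would call set_page_count(page, 1);
         * now the dedicated initializer is used, keeping direct page->_count
         * manipulation inside mm/. */
        static void bring_page_online_sketch(struct page *page)
        {
                ClearPageReserved(page);
                init_page_count(page);
                __free_page(page);
                totalram_pages++;
        }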
* [SPARC64]: Allow CONFIG_MEMORY_HOTPLUG to build. (David S. Miller, 2006-03-20; 1 file, -0/+18)
    online_page() is straightforward, and then add a dummy remove_memory() that returns -EINVAL just like i386. There is no point in implementing remove_memory() since __remove_pages() has no implementation either.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use SLAB caches for TSB tables. (David S. Miller, 2006-03-20; 1 file, -1/+4)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix and re-enable dynamic TSB sizing. (David S. Miller, 2006-03-20; 1 file, -1/+6)
    This is good for up to 50% performance improvement of some test cases. The problem has been the race conditions, and hopefully I've plugged them all up here.

    1) There was a serious race in switch_mm() wrt. lazy TLB switching to and from kernel threads. We could erroneously skip a tsb_context_switch() and thus use a stale TSB across a TSB grow event. There is a big comment now in that function describing exactly how it can happen.

    2) All code paths that do something with the TSB need to be guarded with the mm->context.lock spinlock. This makes page table flushing paths properly synchronize with both TSB growing and TLB context changes.

    3) TSB growing events are moved to the end of successful fault processing. Previously it was in update_mmu_cache(), but that is deadlock prone. At the end of do_sparc64_fault() we hold no spinlocks that could deadlock the TSB grow sequence. We also have dropped the address space semaphore.

    While we're here, add prefetching to the copy_tsb() routine and put it in assembler into the tsb.S file. This piece of code is quite time critical.

    There are some small negative side effects to this code which can be improved upon. In particular, we grab the mm->context.lock even for the tsb insert done by update_mmu_cache() now, and that's a bit excessive. We can get rid of that locking, and the same lock taking in flush_tsb_user(), by disabling PSTATE_IE around the whole operation, including the capturing of the tsb pointer and tsb_nentries value. That would work because anyone growing the TSB won't free up the old TSB until all cpus respond to the TSB change cross call. I'm not quite so confident in that optimization to put it in right now, but eventually we might be able to, and the description is here for reference.

    This code seems very solid now. It passes several parallel GCC bootstrap builds, and our favorite "nut cruncher" stress test, which is a full "make -j8192" build of a "make allmodconfig" kernel. That puts about 256 processes on each cpu's run queue, makes lots of process cpu migrations occur, causes lots of page table and TLB flushing activity, incurs many context version number changes, and it swaps the machine real far out to disk even though there is 16GB of ram on this test system. :-)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix 32-bit truncation which broke sparsemem. (David S. Miller, 2006-03-20; 1 file, -10/+8)
    The page->flags manipulations done by the D-cache dirty state tracking were broken because the constants were not marked with "UL" to make them 64-bit, which means we were clobbering the upper 32 bits of page->flags all the time (see the sketch after this entry). This doesn't jive well with sparsemem, which stores the section and indexing information in the top 32 bits of page->flags. This is yet another sparc64 bug which has been with us forever.

    While we're here, tidy up some things in bootmem_init() and paging_init():

    1) Pass min_low_pfn to init_bootmem_node(); it's identical to (phys_base >> PAGE_SHIFT), but we should be consistent with the variable names we print in CONFIG_BOOTMEM_DEBUG.

    2) max_mapnr, although no longer used, was being set inaccurately; we shouldn't subtract pfn_base any more.

    3) All the games with phys_base in the zones_*[] arrays we pass to free_area_init_node() are no longer necessary.

    Thanks to Josh Grebe and Fabbione for the bug reports and testing. Fix also verified locally on an SB2500 which had a memory layout that triggered the same problem.

    Signed-off-by: David S. Miller <davem@davemloft.net>
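    A small, self-contained illustration of the truncation bug class described above, assuming a 64-bit unsigned long; the shift amount and mask are made-up, not the actual sparc64 dcache-dirty layout:

        #include <stdio.h>

        int main(void)
        {
                /* The upper 32 bits stand in for sparsemem's section information. */
                unsigned long flags = 0xabcd123400000000UL;

                /* Buggy form: 0xff << 24 is a 32-bit int, so ~(0xff << 24) widens to
                 * only 0x00ffffff and the AND wipes the whole upper half:
                 *     flags &= ~(0xff << 24);
                 * Fixed form: the UL suffix keeps the expression 64 bits wide. */
                flags &= ~(0xffUL << 24);

                printf("%016lx\n", flags);      /* upper 32 bits preserved */
                return 0;
        }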
* [SPARC64]: Move over to sparsemem. (David S. Miller, 2006-03-20; 1 file, -36/+98)
    This has been pending for a long time, and the fact that we waste a ton of ram on some configurations kind of pushed things over the edge.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Bulletproof MMU context locking. (David S. Miller, 2006-03-20; 1 file, -2/+3)
    1) Always spin_lock_init() in init_context(). The caller essentially clears it out, or copies the mm info from the parent. In both cases we need to explicitly initialize the spinlock.

    2) Always do explicit IRQ disabling while taking mm->context.lock and ctx_alloc_lock (see the sketch after this entry).

    Signed-off-by: David S. Miller <davem@davemloft.net>
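    A minimal sketch of the two rules above, assuming a context structure with a spinlock field as referenced in the commit text; the function names and bodies are illustrative:

        #include <linux/spinlock.h>
        #include <linux/sched.h>

        /* Rule 1: the context may have been memset or copied from the parent,
         * so the spinlock is initialized explicitly every time. */
        static int init_context_sketch(struct mm_struct *mm)
        {
                spin_lock_init(&mm->context.lock);
                /* ... allocate the TSB, assign a context, etc. ... */
                return 0;
        }

        /* Rule 2: these locks can also be taken from flush paths, so IRQs are
         * disabled explicitly for the whole critical section. */
        static void touch_context_sketch(struct mm_struct *mm)
        {
                unsigned long flags;

                spin_lock_irqsave(&mm->context.lock, flags);
                /* ... operate on TSB / context state ... */
                spin_unlock_irqrestore(&mm->context.lock, flags);
        }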