path: root/arch/sparc
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6 [Linus Torvalds, 2009-08-25, 12 files, -140/+167]

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
        sparc64: Validate linear D-TLB misses.
        sparc64: Update defconfig.
        sparc32: Update defconfig.
        sparc32: Kill trap table freeing code.
        sparc: sys32.S incorrect compat-layer splice() system call
        sparc: Use page_fault_out_of_memory() for VM_FAULT_OOM.
        sparc64: Sign extend length arg to truncate syscalls when compat.
        sparc: Fix cleanup crash in bbc_envctrl_cleanup()
| * sparc64: Validate linear D-TLB misses. [David S. Miller, 2009-08-25, 4 files, -28/+76]

    When page alloc debugging is not enabled, we essentially accept any
    virtual address for linear kernel TLB misses. But with kgdb, kernel
    address probing, and other facilities we can try to access arbitrary
    crap.

    So, make sure the address we miss on will translate to physical memory
    that actually exists.

    In order to make this work we have to embed the valid address bitmap
    into the kernel image. And in order to make that less expensive we
    make an adjustment, in that the max physical memory address is
    decreased to "1 << 41", even on the chips that support a 42-bit
    physical address space. We can do this because bit 41 indicates "I/O
    space" and thus covers non-memory ranges.

    The result of this is that:

    1) kpte_linear_bitmap shrinks from 2K to 1K in size
    2) we need 64K more for the valid address bitmap

    We can't let the valid address bitmap be dynamically allocated once we
    start using it to validate TLB misses, otherwise we have crazy issues
    to deal with wrt. recursive TLB misses and such. If we're in a TLB
    miss it could be the deepest trap level that's legal inside of the
    cpu. So if we TLB miss referencing the bitmap, the cpu will be out of
    trap levels and enter RED state.

    To guard against out-of-range accesses to the bitmap, we have to check
    to make sure no bits in the physical address above bit 40 are set. We
    could export and use last_valid_pfn for this check, but that's just an
    unnecessary extra memory reference.

    On the plus side of all this, since we load all of these translations
    into the special 4MB mapping TSB, and we check the TSB first for TLB
    misses, there should be absolutely no real cost for these new checks
    in the TLB miss path.

    Reported-by: heyongli@gmail.com
    Signed-off-by: David S. Miller <davem@davemloft.net>
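    The guard can be pictured in C roughly as follows. This is a sketch
    only (the real check lives in the sparc64 TLB-miss assembly), and the
    helper name and shift constants are assumptions derived from the
    "1 << 41" limit and the 4MB granularity quoted above:

        #include <linux/bitops.h>

        static inline int kern_linear_paddr_valid(unsigned long paddr,
                                                  const unsigned long *valid_bitmap)
        {
                /* Any bit above bit 40 set means the address lies outside
                 * the 1 << 41 range the bitmap covers, so reject it before
                 * indexing the bitmap. */
                if (paddr >> 41)
                        return 0;

                /* One bitmap bit per 4MB chunk: 2^41 / 2^22 = 512K bits,
                 * i.e. the extra 64K mentioned above. */
                return test_bit(paddr >> 22, valid_bitmap);
        }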
| * sparc64: Update defconfig. [David S. Miller, 2009-08-18, 1 file, -25/+34]

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * sparc32: Update defconfig. [David S. Miller, 2009-08-18, 1 file, -30/+44]

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * sparc32: Kill trap table freeing code. [David S. Miller, 2009-08-18, 2 files, -48/+0]

    Normally, srmmu uses different trap table register values to allow
    determination of the cpu we're on. All of the trap tables have
    identical content, they just sit at different offsets from the first
    trap table, and the offset shifted down and masked out determines the
    cpu we are on.

    The code tries to free them up when they aren't actually used (don't
    have all 4 cpus, we're on sun4d, etc.) but that causes problems.

    For one thing it triggers false positives in the DMA debugging code.
    And fixing that up while preserving this relative offset thing isn't
    trivial.

    So just kill the freeing code, it costs us at most 3 pages, big deal...

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * sparc: sys32.S incorrect compat-layer splice() system call [Mathieu Desnoyers, 2009-08-18, 1 file, -1/+1]

    I think arch/sparc/kernel/sys32.S has an incorrect splice definition:

        SIGN2(sys32_splice, sys_splice, %o0, %o1)

    The splice() prototype looks like:

        long splice(int fd_in, loff_t *off_in, int fd_out, loff_t *off_out,
                    size_t len, unsigned int flags);

    So I think we should have:

        SIGN2(sys32_splice, sys_splice, %o0, %o2)

    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * sparc: Use page_fault_out_of_memory() for VM_FAULT_OOM. [David S. Miller, 2009-08-02, 2 files, -6/+8]

    As noted by Nick Piggin.

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * sparc64: Sign extend length arg to truncate syscalls when compat. [David S. Miller, 2009-07-27, 2 files, -2/+4]

    The first thing sys_truncate() and sys_ftruncate() do is sign extend
    the unsigned length arg to a signed type.

    Thanks to Benjamin Herrenschmidt for the tip.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | percpu, sparc64: fix sparse possible cpu map handling [Tejun Heo, 2009-08-14, 1 file, -2/+2]

    percpu code has been assuming num_possible_cpus() == nr_cpu_ids which
    is incorrect if cpu_possible_map contains holes. This causes percpu
    code to access beyond allocated memories and vmalloc areas. On a
    sparc64 machine with cpus 0 and 2 (u60), this triggers the following
    warning or fails boot.

      WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
      Modules linked in:
      Call Trace:
       [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
       [00000000004b1840] map_vm_area+0x20/0x60
       [00000000004b1950] __vmalloc_area_node+0xd0/0x160
       [0000000000593434] deflate_init+0x14/0xe0
       [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
       [00000000005844f0] crypto_alloc_base+0x50/0xa0
       [000000000058b898] alg_test_comp+0x18/0x80
       [000000000058dad4] alg_test+0x54/0x180
       [000000000058af00] cryptomgr_test+0x40/0x60
       [0000000000473098] kthread+0x58/0x80
       [000000000042b590] kernel_thread+0x30/0x60
       [0000000000472fd0] kthreadd+0xf0/0x160
      ---[ end trace 429b268a213317ba ]---

    This patch fixes generic percpu functions and sparc64
    setup_per_cpu_areas() so that they handle sparse cpu_possible_map
    properly.

    Please note that on x86, cpu_possible_map() doesn't contain holes and
    thus num_possible_cpus() == nr_cpu_ids and this patch doesn't cause
    any behavior difference.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: David S. Miller <davem@davemloft.net>
    Cc: Ingo Molnar <mingo@elte.hu>
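    The sizing mistake reduces to a minimal sketch like the one below
    (illustrative only, not the actual percpu code; the function and
    variable names are made up for the example):

        #include <linux/cpumask.h>

        /* With only cpus 0 and 2 possible: num_possible_cpus() == 2 but
         * nr_cpu_ids == 3. Sizing by the former while indexing by cpu id
         * walks off the end of the allocation, which is the kind of
         * overrun described above. */
        static void sketch_sizing(void)
        {
                size_t too_small = num_possible_cpus() * sizeof(long); /* 2 slots */
                size_t correct   = nr_cpu_ids * sizeof(long);          /* 3 slots */

                (void)too_small;
                (void)correct;
        }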
* | mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() [Benjamin Herrenschmidt, 2009-07-27, 2 files, -7/+7]

    Upcoming patches to support the new 64-bit "BookE" powerpc architecture
    will need to have the virtual address corresponding to PTE page when
    freeing it, due to the way the HW table walker works.

    Basically, the TLB can be loaded with "large" pages that cover the
    whole virtual space (well, sort-of, half of it actually) represented
    by a PTE page, and which contain an "indirect" bit indicating that
    this TLB entry RPN points to an array of PTEs from which the TLB can
    then create direct entries. Thus, in order to invalidate those when
    PTE pages are deleted, we need the virtual address to pass to tlbilx
    or tlbivax instructions.

    The old trick of sticking it somewhere in the PTE page struct page
    sucks too much, the address is almost readily available in all call
    sites and almost everybody implements these as macros, so we may as
    well add the argument everywhere. I added it to the pmd and pud
    variants for consistency.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
    Acked-by: Nick Piggin <npiggin@suse.de>
    Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
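    The resulting calling convention, sketched in its generic form (the
    snippet is illustrative rather than a quote from the patch; each
    architecture supplies its own __pte_free_tlb):

        /* Before: the freeing hook only saw the page-table page.
         *     pte_free_tlb(tlb, ptepage);
         * After: callers also pass the virtual address the PTE page
         * covers, so an architecture can invalidate any indirect TLB
         * entry for that range before the page goes away. */
        #define pte_free_tlb(tlb, ptepage, address) \
                __pte_free_tlb(tlb, ptepage, address)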
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6 [Linus Torvalds, 2009-07-13, 1 file, -1/+6]

    * git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6:
        wm97xx_batery: replace driver_data with dev_get_drvdata()
        omap: video: remove direct access of driver_data
        Sound: remove direct access of driver_data
        driver model: fix show/store prototypes in doc.
        Firmware: firmware_class, fix lock imbalance
        Driver Core: remove BUS_ID_SIZE
        sparc: remove driver-core BUS_ID_SIZE
        partitions: fix broken uevent_suppress conversion
        devres: WARN() and return, don't crash on device_del() of uninitialized device
| * | sparc: remove driver-core BUS_ID_SIZE [Kay Sievers, 2009-07-12, 1 file, -1/+6]

    The name size limit is gone from the driver-core, the BUS_ID_SIZE
    value will be removed.

    Cc: David S. Miller <davem@davemloft.net>
    Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* | | headers: smp_lock.h redux [Alexey Dobriyan, 2009-07-12, 4 files, -4/+0]

    * Remove smp_lock.h from files which don't need it (including some headers!)
    * Add smp_lock.h to files which do need it
    * Make the smp_lock.h include conditional in hardirq.h; it's needed
      only for one kernel_locked() usage which is under CONFIG_PREEMPT

    This will make hardirq.h inclusion cheaper for every PREEMPT=n config
    (which includes allmodconfig/allyesconfig, BTW).

    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
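    The hardirq.h piece can be sketched as follows (a simplified sketch:
    the conditional include is what the commit describes, while the exact
    macro name used to consume kernel_locked() is an assumption here):

        /* Only CONFIG_PREEMPT kernels need kernel_locked() here, so only
         * they pay for pulling in smp_lock.h. */
        #ifdef CONFIG_PREEMPT
        # include <linux/smp_lock.h>
        # define PREEMPT_INATOMIC_BASE  kernel_locked()
        #else
        # define PREEMPT_INATOMIC_BASE  0
        #endif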
* | sched: INIT_PREEMPT_COUNT [Peter Zijlstra, 2009-07-10, 2 files, -6/+2]

    Pull the initial preempt_count value into a single definition site.

    Maintainers for: alpha, ia64 and m68k, please have a look, your arch
    code is funny.

    The header magic is a bit odd, but similar to the KERNEL_DS one: CPP
    waits with expanding these macros until the INIT_THREAD_INFO macro
    itself is expanded, which is in arch/*/kernel/init_task.c where we've
    already included sched.h so we're good.

    Cc: tony.luck@intel.com
    Cc: rth@twiddle.net
    Cc: geert@linux-m68k.org
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Matt Mackall <mpm@selenic.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sparc32: Fix makefile not generating required files [Julian Calaby, 2009-06-25, 1 file, -2/+2]

    The tftpboot build was failing with missing file errors. It turns out
    that $(obj)/image wasn't being generated, which was causing the a.out
    conversion to be skipped and hence piggyback to be called with
    nonexistent files.

    Signed-off-by: Julian Calaby <julian.calaby@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc32: Fix tftpboot.img Makefile [Julian Calaby, 2009-06-25, 1 file, -2/+2]

    Signed-off-by: Julian Calaby <julian.calaby@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: fix tftpboot.img build [Sam Ravnborg, 2009-06-25, 1 file, -1/+1]

    Kjetil Oftedal mentioned that piggyback_32 was failing when building a
    sparc image. I tracked this down to the fact that the kernel no longer
    provided an absolute symbol named "end". Commit
    86ed40bd6fe511d26bb8f3fa65a84cb65c235366 ("sparc: unify sections.h")
    renamed end to _end but failed to update piggyback_32.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Kjetil Oftedal <oftedal@gmail.com>
    Cc: Robert Reif <reif@earthlink.net>
    Signed-off-by: Julian Calaby <julian.calaby@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc32: Fix obvious build issues for tftpboot.img build. [Robert Reif, 2009-06-25, 2 files, -2/+2]

    Signed-off-by: Robert Reif <reif@earthlink.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Fix build warnings in piggyback_64.c [Julian Calaby, 2009-06-25, 1 file, -0/+1]

    This patch fixes the following build warnings:

      arch/sparc/boot/piggyback_64.c: In function 'main':
      arch/sparc/boot/piggyback_64.c:44: warning: 'end' may be used uninitialized in this function
      arch/sparc/boot/piggyback_64.c:44: warning: 'start' may be used uninitialized in this function

    Signed-off-by: Julian Calaby <julian.calaby@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Don't use alloc_bootmem() in init_IRQ() code paths. [David S. Miller, 2009-06-25, 1 file, -26/+19]

    The page allocator and SLAB are available at this point now, and if we
    still try to use bootmem allocations here the kernel spits out
    warnings.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* Move FAULT_FLAG_xyz into handle_mm_fault() callers [Linus Torvalds, 2009-06-21, 2 files, -3/+3]

    This allows the callers to now pass down the full set of
    FAULT_FLAG_xyz flags to handle_mm_fault(). All callers have been
    (mechanically) converted to the new calling convention, there's almost
    certainly room for architectures to clean up their code and then add
    FAULT_FLAG_RETRY when that support is added.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next-2.6 [Linus Torvalds, 2009-06-19, 1 file, -0/+6]

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next-2.6:
        sparc64: Fix UP bootup regression.
| * sparc64: Fix UP bootup regression. [David S. Miller, 2009-06-18, 1 file, -0/+6]

    Commit b696fdc259f0d94348a9327bed352fac44d4883d ("sparc64: Defer
    cpu_data() setup until end of per-cpu data initialization.") broke
    bootup for UP builds because the cpu_data() initialization only occurs
    in setup_per_cpu_areas(), which is never compiled in nor called in UP
    builds.

    Fix this up by calling the setups directly from init_64.c when
    non-SMP.

    Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
    Tested-by: Alexander Beregalov <a.beregalov@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Delete pcibios_select_root [Matthew Wilcox, 2009-06-17, 2 files, -15/+0]

    This function was only used by pci_claim_resource(), and the last
    commit deleted that use.

    Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge branch 'akpm' [Linus Torvalds, 2009-06-16, 2 files, -19/+1]

    * akpm: (182 commits)
        fbdev: bf54x-lq043fb: use kzalloc over kmalloc/memset
        fbdev: *bfin*: fix __dev{init,exit} markings
        fbdev: *bfin*: drop unnecessary calls to memset
        fbdev: bfin-t350mcqb-fb: drop unused local variables
        fbdev: blackfin has __raw I/O accessors, so use them in fb.h
        fbdev: s1d13xxxfb: add accelerated bitblt functions
        tcx: use standard fields for framebuffer physical address and length
        fbdev: add support for handoff from firmware to hw framebuffers
        intelfb: fix a bug when changing video timing
        fbdev: use framebuffer_release() for freeing fb_info structures
        radeon: P2G2CLK_ALWAYS_ONb tested twice, should 2nd be P2G2CLK_DAC_ALWAYS_ONb?
        s3c-fb: CPUFREQ frequency scaling support
        s3c-fb: fix resource releasing on error during probing
        carminefb: fix possible access beyond end of carmine_modedb[]
        acornfb: remove fb_mmap function
        mb862xxfb: use CONFIG_OF instead of CONFIG_PPC_OF
        mb862xxfb: restrict compliation of platform driver to PPC
        Samsung SoC Framebuffer driver: add Alpha Channel support
        atmel-lcdc: fix pixclock upper bound detection
        offb: use framebuffer_alloc() to allocate fb_info struct
        ...

    Manually fix up conflicts due to kmemcheck in mm/slab.c
| * kmap_types: make most arches use generic header file [Randy Dunlap, 2009-06-16, 1 file, -16/+1]

    Convert most arches to use asm-generic/kmap_types.h.

    Move the KM_FENCE_ macro additions into asm-generic/kmap_types.h,
    controlled by __WITH_KM_FENCE from each arch's kmap_types.h file.

    Would be nice to be able to add custom KM_types per arch, but I don't
    yet see a nice, clean way to do that.

    Built on x86_64, i386, mips, sparc, alpha(tonyb), powerpc(tonyb), and
    68k(tonyb).

    Note: avr32 should be able to remove KM_PTE2 (since it's not used) and
    then just use the generic kmap_types.h file. Get avr32 maintainer
    approval.

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: <linux-arch@vger.kernel.org>
    Acked-by: Mike Frysinger <vapier@gentoo.org>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Bryan Wu <cooloney@kernel.org>
    Cc: Mikael Starvik <starvik@axis.com>
    Cc: Hirokazu Takata <takata@linux-m32r.org>
    Cc: "Luck Tony" <tony.luck@intel.com>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Kyle McMartin <kyle@mcmartin.ca>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
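    After the conversion, an arch's kmap_types.h reduces to roughly the
    following sketch (illustrative; the debug config symbol guarding the
    fence differs per architecture):

        #ifndef _ASM_KMAP_TYPES_H
        #define _ASM_KMAP_TYPES_H

        #ifdef CONFIG_DEBUG_HIGHMEM
        #define __WITH_KM_FENCE         /* interleave the KM_FENCE_ debug slots */
        #endif

        #include <asm-generic/kmap_types.h>

        #undef __WITH_KM_FENCE

        #endif /* _ASM_KMAP_TYPES_H */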
| * mm: consolidate init_mm definition [Alexey Dobriyan, 2009-06-16, 1 file, -3/+0]

    * create mm/init-mm.c, move init_mm there
    * remove INIT_MM, initialize init_mm with C99 initializer
    * unexport init_mm on all arches: init_mm is already unexported on x86.

    One strange place is some OMAP driver (drivers/video/omap/) which
    won't build modular, but it already wants the get_vm_area() export.
    Somebody should look there.

    [akpm@linux-foundation.org: add missing #includes]
    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Cc: Mike Frysinger <vapier.adi@gmail.com>
    Cc: Americo Wang <xiyou.wangcong@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
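    The consolidated definition ends up looking roughly like this in
    mm/init-mm.c (field list abridged and slightly paraphrased, so treat
    it as a sketch rather than the exact file contents):

        #include <linux/mm_types.h>
        #include <linux/rbtree.h>
        #include <linux/rwsem.h>
        #include <linux/spinlock.h>
        #include <linux/list.h>
        #include <linux/cpumask.h>

        #include <asm/atomic.h>
        #include <asm/pgtable.h>

        struct mm_struct init_mm = {
                .mm_rb           = RB_ROOT,
                .pgd             = swapper_pg_dir,
                .mm_users        = ATOMIC_INIT(2),
                .mm_count        = ATOMIC_INIT(1),
                .mmap_sem        = __RWSEM_INITIALIZER(init_mm.mmap_sem),
                .page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
                .mmlist          = LIST_HEAD_INIT(init_mm.mmlist),
                .cpu_vm_mask     = CPU_MASK_ALL,
        };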
* | sparc64: Update defconfig. [David S. Miller, 2009-06-16, 1 file, -18/+45]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc: Wire up sys_rt_tgsigqueueinfo(). [David S. Miller, 2009-06-16, 3 files, -4/+9]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc: replace uses of CPU_MASK_ALL_PTR [Stephen Rothwell, 2009-06-16, 2 files, -2/+2]

    CPU_MASK_ALL is the (deprecated) "all bits set" cpumask, defined as so:

        #define CPU_MASK_ALL (cpumask_t) { { ... } }

    Taking the address of such a temporary is questionable at best,
    unfortunately 321a8e9d (cpumask: add CPU_MASK_ALL_PTR macro) added
    CPU_MASK_ALL_PTR:

        #define CPU_MASK_ALL_PTR (&CPU_MASK_ALL)

    Which formalizes this practice. One day gcc could bite us over this
    usage (though we seem to have gotten away with it so far).

    [Description by Rusty Russell]

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
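    A minimal sketch of the difference: the replacement object shown here,
    cpu_all_mask, is how mainline exposes an addressable all-set mask;
    whether this particular patch uses it or another existing mask is not
    stated above, so treat that identifier as an assumption.

        #include <linux/cpumask.h>

        /* Questionable: passes the address of the temporary produced by
         * the CPU_MASK_ALL expansion quoted above. */
        static void sketch_before(void (*set)(const cpumask_t *))
        {
                set(CPU_MASK_ALL_PTR);
        }

        /* Preferred: point at a real, addressable object instead. */
        static void sketch_after(void (*set)(const struct cpumask *))
        {
                set(cpu_all_mask);
        }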
* | sparc64: Add proper dynamic ftrace support. [David S. Miller, 2009-06-16, 3 files, -15/+45]

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Acked-by: Steven Rostedt <rostedt@goodmis.org>
    Acked-by: Ingo Molnar <mingo@elte.hu>
* | sparc: Simplify code using is_power_of_2() routine. [Robert P. J. Day, 2009-06-16, 1 file, -1/+2]

    Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
    Signed-off-by: David S. Miller <davem@davemloft.net>
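    The pattern behind this kind of cleanup, as a sketch (not the actual
    sparc hunk): is_power_of_2() from <linux/log2.h> replaces the
    open-coded power-of-two test.

        #include <linux/types.h>
        #include <linux/log2.h>

        /* was: ((size & (size - 1)) == 0), the open-coded idiom */
        static bool sketch_size_ok(unsigned long size)
        {
                return is_power_of_2(size);
        }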
* | sparc: move of_device common code to of_device_common [Robert Reif, 2009-06-16, 5 files, -378/+216]

    This patch moves code common to of_device_32.c and of_device_64.c into
    of_device_common.h and of_device_common.c.

    The only functional difference is in sparc32, where of_bus_default_map
    is used in place of of_bus_sbus_map because they are equivalent.

    There is still room for further code consolidation with some minor
    refactoring.

    Boot tested on sparc32 and compile tested on sparc64.

    Signed-off-by: Robert Reif <reif@earthlink.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc: remove dma-mapping_{32|64}.h [FUJITA Tomonori, 2009-06-16, 4 files, -238/+172]

    This modifies SPARC32 to use struct dma_map ops. It means that we can
    remove dma-mapping_{32|64}.h.

    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Tested-by: Robert Reif <reif@earthlink.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc: use dma_map_page instead of dma_map_single [FUJITA Tomonori, 2009-06-16, 3 files, -25/+28]

    This patch converts dma_map_single and dma_unmap_single to use
    map_page and unmap_page respectively and removes unnecessary
    map_single and unmap_single.

    map_page can be used to implement map_single but the opposite is
    impossible. Having only dma_map_page in struct dma_ops is enough.

    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Tested-by: Robert Reif <reif@earthlink.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
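    The point about map_page subsuming map_single can be sketched like
    this (types and names simplified; this is not the exact sparc struct
    dma_ops or its implementation):

        #include <linux/dma-mapping.h>
        #include <linux/mm.h>

        struct sketch_dma_ops {
                dma_addr_t (*map_page)(struct device *dev, struct page *page,
                                       unsigned long offset, size_t size,
                                       enum dma_data_direction dir);
        };

        /* map_single falls out of map_page for free: every kernel virtual
         * address is some offset into some page. The reverse reduction
         * does not exist, e.g. a highmem page may have no kernel virtual
         * address at all, so map_page has to be the primitive. */
        static dma_addr_t sketch_map_single(const struct sketch_dma_ops *ops,
                                            struct device *dev, void *cpu_addr,
                                            size_t size,
                                            enum dma_data_direction dir)
        {
                return ops->map_page(dev, virt_to_page(cpu_addr),
                                     offset_in_page(cpu_addr), size, dir);
        }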
* | sparc: add sync_single_for_device and sync_sg_for_device to struct dma_ops [FUJITA Tomonori, 2009-06-16, 1 file, -0/+6]

    This adds sync_single_for_device() and sync_sg_for_device() to struct
    dma_ops in order to unify dma-mapping_{32|64}.h. dma-mapping_32.h
    needs them though dma-mapping_64.h doesn't.

    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Tested-by: Robert Reif <reif@earthlink.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc: move the duplication in dma-mapping_{32|64}.h to dma-mapping.h [FUJITA Tomonori, 2009-06-16, 4 files, -88/+42]

    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Tested-by: Robert Reif <reif@earthlink.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: fix and optimize irq distribution [Hong H. Pham, 2009-06-16, 5 files, -25/+456]

    irq_choose_cpu() should compare the affinity mask against
    cpu_online_map rather than CPU_MASK_ALL, since irq_select_affinity()
    sets the interrupt's affinity mask to cpu_online_map "and"
    CPU_MASK_ALL (which ends up being just cpu_online_map). The mask
    comparison in irq_choose_cpu() will always fail since the two masks
    are not the same. So the CPU chosen is the first CPU in the
    intersection of cpu_online_map and CPU_MASK_ALL, which is always CPU0.
    That means all interrupts are reassigned to CPU0...

    Distributing interrupts to CPUs in a linearly increasing round robin
    fashion is not optimal for the UltraSPARC T1/T2. Also, the irq_rover
    in irq_choose_cpu() causes an interrupt to be assigned to a different
    processor each time the interrupt is allocated and released. This may
    lead to an unbalanced distribution over time.

    A static mapping of interrupts to processors is done to optimize and
    balance interrupt distribution. For the T1/T2, interrupts are spread
    to different cores first, and then to strands within a core.

    The following benchmarks show the effects of interrupt distribution on
    a T2. The test was done with iperf using a pair of T5220 boxes, each
    with a 10GBe NIU (XAUI) connected back to back.

      TCP     |  Stock       Linear RR IRQ  Optimized IRQ
      Streams |  2.6.30-rc5  Distribution   Distribution
              |  GBits/sec   GBits/sec      GBits/sec
      --------+------------------------------------------
           1  |  0.839       0.862          0.868
           8  |  1.16        4.96           5.88
          16  |  1.15        6.40           8.04
         100  |  1.09        7.28           8.68

    Signed-off-by: Hong H. Pham <hong.pham@windriver.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
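    The "cores first, then strands" spreading policy can be sketched as
    below (purely illustrative; the real sparc64 code builds its static
    map from the machine description rather than from a dense cpu
    numbering like this):

        /* Assume C cores with S hardware strands each and cpu ids laid
         * out as core * S + strand. The n-th interrupt then visits every
         * core once before any core takes a second strand. */
        static unsigned int sketch_pick_cpu(unsigned int n,
                                            unsigned int cores,
                                            unsigned int strands_per_core)
        {
                unsigned int core   = n % cores;
                unsigned int strand = (n / cores) % strands_per_core;

                return core * strands_per_core + strand;
        }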
* | sparc64: Use new dynamic per-cpu allocator. [David S. Miller, 2009-06-16, 2 files, -9/+159]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Only allocate per-cpu areas for possible cpus. [David S. Miller, 2009-06-16, 1 file, -16/+6]

    This gets us real close to the generic implementation of
    setup_per_cpu_areas() except:

    1) We store the per-cpu offset into the trap_block[], whereas the
       generic code has its own static array.

    2) We have to initialize the %g5 register to hold the boot cpu's
       per-cpu area offset.

    3) The OBP/MDESC cpu info scan is performed at the end.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Get rid of real_setup_per_cpu_areas(). [David S. Miller, 2009-06-16, 3 files, -17/+5]

    Now that we defer the cpu_data() initializations to the end of per-cpu
    setup, we can get rid of this local hack we had to set up the per-cpu
    areas early.

    This is a necessary step in order to support HAVE_DYNAMIC_PER_CPU_AREA,
    since the per-cpu setup must run when page structs are available.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Defer cpu_data() setup until end of per-cpu data initialization. [David S. Miller, 2009-06-16, 5 files, -10/+9]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Make mdesc_fill_in_cpu_data take a cpumask_t pointer. [David S. Miller, 2009-06-16, 4 files, -6/+6]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc: Call OF and MD cpu scanning explicitly from paging_init() [David S. Miller, 2009-06-16, 6 files, -8/+6]

    We need to split up the cpu present mask setup from the cpu_data
    initialization, and this is a first step towards that.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Refactor MDESC cpu scanning code using an iterator. [David S. Miller, 2009-06-16, 2 files, -57/+90]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Refactor OBP cpu scanning code using an iterator. [David S. Miller, 2009-06-16, 2 files, -109/+125]

    With feedback from Sam Ravnborg.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Use BUILD_BUG_ON() in trap_init(). [David S. Miller, 2009-06-16, 1 file, -80/+91]

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Store per-cpu offset in trap_block[] [David S. Miller, 2009-06-16, 5 files, -44/+21]

    Surprisingly, this actually makes LOAD_PER_CPU_BASE() a little more
    efficient.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Move trap_block[] definitions into a new header file. [David S. Miller, 2009-06-16, 2 files, -196/+208]

    Later we're going to want to get at these definitions from
    asm/percpu.h, and that's not possible via cpudata.h because of the set
    of dependencies the non-trap_block[] stuff has.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | sparc64: Reclaim trap_block[]->hdesc [David S. Miller, 2009-06-16, 2 files, -9/+7]

    This really isn't necessary at all, a local variable suits the job
    just fine.

    This frees up 8 bytes in the trap_block[] that we can use later to
    store the per-cpu base addresses.

    Signed-off-by: David S. Miller <davem@davemloft.net>