path: root/include/asm-ppc64
Commit message (author, date, files changed, lines changed)
* [PATCH] compat: be more consistent about [ug]id_t (Stephen Rothwell, 2005-09-07, 1 file, -8/+10)
  When I first wrote the compat layer patches, I was somewhat cavalier about
  the definition of compat_uid_t and compat_gid_t (or maybe I just
  misunderstood :-)). This patch makes the compat types much more consistent
  with the types we are being compatible with and hopefully will fix a few
  bugs along the way.

      compat type            type in compat arch
      __compat_[ug]id_t      __kernel_[ug]id_t
      __compat_[ug]id32_t    __kernel_[ug]id32_t
      compat_[ug]id_t        [ug]id_t

  The difference is that compat_uid_t is always 32 bits (for the archs we
  care about) but __compat_uid_t may be 16 bits on some.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
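
  As a concrete illustration, a compat header on a 64-bit arch whose 32-bit
  counterpart still has a 16-bit __kernel_uid_t would end up with typedefs
  shaped like the sketch below (the widths here are illustrative, not taken
  from the patch; the exact underlying types are per-architecture):

      #include <linux/types.h>

      /* mirrors the possibly 16-bit __kernel_uid_t of the compat arch */
      typedef u16 __compat_uid_t;
      /* mirrors the compat arch's __kernel_uid32_t */
      typedef u32 __compat_uid32_t;
      /* mirrors the compat arch's uid_t: always 32 bits for these arches */
      typedef u32 compat_uid_t;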
* [PATCH] FUTEX_WAKE_OP: pthread_cond_signal() speedup (Jakub Jelinek, 2005-09-07, 2 files, -0/+85)
  ATM pthread_cond_signal is unnecessarily slow, because it wakes one waiter
  (which at least on UP usually means an immediate context switch to one of
  the waiter threads). This waiter wakes up and after a few instructions it
  attempts to acquire the cv internal lock, but that lock is still held by
  the thread calling pthread_cond_signal. So it goes to sleep and eventually
  the signalling thread is scheduled in, unlocks the internal lock and wakes
  the waiter again.

  Now, before 2003-09-21 NPTL was using FUTEX_REQUEUE in pthread_cond_signal
  to avoid this performance issue, but it was removed when locks were
  redesigned to the 3 state scheme (unlocked, locked uncontended, locked
  contended).

  The following scenario shows why simply using FUTEX_REQUEUE in
  pthread_cond_signal together with using lll_mutex_unlock_force in place of
  lll_mutex_unlock is not enough and probably why it was disabled at that
  time. The number in the first column is the value in cv->__data.__lock.

      lock  thread  operation
       0    thr1    pthread_cond_wait
       1    thr1    lll_mutex_lock (cv->__data.__lock)
       0    thr1    lll_mutex_unlock (cv->__data.__lock)
       0    thr1    lll_futex_wait (&cv->__data.__futex, futexval)
       0    thr2    pthread_cond_signal
       1    thr2    lll_mutex_lock (cv->__data.__lock)
       1    thr3    pthread_cond_signal
       2    thr3    lll_mutex_lock (cv->__data.__lock)
       2    thr3    lll_futex_wait (&cv->__data.__lock, 2)
       2    thr2    lll_futex_requeue (&cv->__data.__futex, 0, 1, &cv->__data.__lock)
                    # FUTEX_REQUEUE, not FUTEX_CMP_REQUEUE
       2    thr2    lll_mutex_unlock_force (cv->__data.__lock)
       0    thr2      cv->__data.__lock = 0
       0    thr2      lll_futex_wake (&cv->__data.__lock, 1)
       1    thr3    lll_mutex_lock (cv->__data.__lock)
       0    thr3    lll_mutex_unlock (cv->__data.__lock)
                    # Here, lll_mutex_unlock doesn't know there are threads
                    # waiting on the internal cv's lock

  Now, I believe it is possible to use FUTEX_REQUEUE in pthread_cond_signal,
  but it will cost us not one, but 2 extra syscalls and, what's worse, one of
  these extra syscalls will be done for every single waiting loop in
  pthread_cond_*wait. We would need to use lll_mutex_unlock_force in
  pthread_cond_signal after requeue and lll_mutex_cond_lock in
  pthread_cond_*wait after lll_futex_wait.

  Another alternative is to do the unlocking pthread_cond_signal needs to do
  (the lock can't be unlocked before lll_futex_wake, as that is racy) in the
  kernel.

  I have implemented both variants; futex-requeue-glibc.patch is the first
  one and futex-wake_op{,-glibc}.patch is the unlocking inside of the kernel.
  The kernel interface allows userland to specify what exactly an unlocking
  operation should look like (some atomic arithmetic operation with optional
  constant argument and comparison of the previous futex value with another
  constant).

  It has been implemented just for ppc*, x86_64 and i?86; for other
  architectures I'm including just a stub header which can be used as a
  starting point by maintainers to write support for their arches and ATM
  will just return -ENOSYS for FUTEX_WAKE_OP. The requeue patch has been
  (lightly) tested just on x86_64, the wake_op patch on a ppc64 kernel
  running 32-bit and 64-bit NPTL and an x86_64 kernel running 32-bit and
  64-bit NPTL.

  With the following benchmark on UP x86-64 I get:

      for i in nptl-orig nptl-requeue nptl-wake_op; do
        echo time elf/ld.so --library-path .:$i /tmp/bench; \
        for j in 1 2; do echo ( time elf/ld.so --library-path .:$i /tmp/bench ) 2>&1; done;
      done

      time elf/ld.so --library-path .:nptl-orig /tmp/bench
      real 0m0.655s   user 0m0.253s   sys 0m0.403s
      real 0m0.657s   user 0m0.269s   sys 0m0.388s
      time elf/ld.so --library-path .:nptl-requeue /tmp/bench
      real 0m0.496s   user 0m0.225s   sys 0m0.271s
      real 0m0.531s   user 0m0.242s   sys 0m0.288s
      time elf/ld.so --library-path .:nptl-wake_op /tmp/bench
      real 0m0.380s   user 0m0.176s   sys 0m0.204s
      real 0m0.382s   user 0m0.175s   sys 0m0.207s

  The benchmark is at:
  http://sourceware.org/ml/libc-alpha/2005-03/txt00001.txt
  Older futex-requeue-glibc.patch version is at:
  http://sourceware.org/ml/libc-alpha/2005-03/txt00002.txt
  Older futex-wake_op-glibc.patch version is at:
  http://sourceware.org/ml/libc-alpha/2005-03/txt00003.txt
  Will post a new version (just x86-64 fixes so that the patch applies
  against pthread_cond_signal.S) to the libc-hacker ml soon.

  Attached is the kernel FUTEX_WAKE_OP patch as well as a simple-minded
  testcase that will not test the atomicity of the operation, but will at
  least check whether the threads that should have been woken up were woken
  up and whether the arithmetic operation in the kernel gave the expected
  results.

  Acked-by: Ingo Molnar <mingo@redhat.com>
  Cc: Ulrich Drepper <drepper@redhat.com>
  Cc: Jamie Lokier <jamie@shareable.org>
  Cc: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
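
  For reference, the shape of the new interface from userland looks roughly
  like the sketch below. The constants match what futex(2) later documents
  for FUTEX_WAKE_OP; the wrapper and the cv_futex/cv_lock names are
  illustrative stand-ins for the NPTL cv->__data.__futex and
  cv->__data.__lock words, not code from this patch.

      #include <linux/futex.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      /* futex(uaddr1, FUTEX_WAKE_OP, nr_wake1, nr_wake2, uaddr2, op):
       * atomically apply `op` to *uaddr2, wake up to nr_wake1 waiters on
       * uaddr1, and if the old *uaddr2 satisfies the comparison encoded in
       * `op`, also wake up to nr_wake2 waiters on uaddr2. */
      static long futex_wake_op(int *uaddr1, int *uaddr2, int nr_wake1,
                                int nr_wake2, unsigned int op)
      {
              return syscall(SYS_futex, uaddr1, FUTEX_WAKE_OP, nr_wake1,
                             (unsigned long)nr_wake2, uaddr2, op);
      }

      /* pthread_cond_signal-style fast path: set *cv_lock to 0 (unlock),
       * wake one waiter on cv_futex, and if the old *cv_lock was > 1
       * (contended) also wake one waiter on cv_lock. */
      static void signal_one(int *cv_futex, int *cv_lock)
      {
              futex_wake_op(cv_futex, cv_lock, 1, 1,
                            FUTEX_OP(FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1));
      }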
* [PATCH] Invert sense of SLB class bit (David Gibson, 2005-09-06, 1 file, -2/+4)
  Currently, we set the class bit in kernel SLB entries, and clear it on user
  SLB entries. On POWER5, ERAT entries created in real mode have the class
  bit clear. So to avoid flushing kernel ERAT entries on each context switch,
  this patch inverts our usage of the class bit, setting it on user SLB
  entries and clearing it on kernel SLB entries.

  Booted on POWER5 and G5.

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: Move oprofile_model into cpu feature struct (Anton Blanchard, 2005-09-06, 1 file, -0/+4)
  Move oprofile_model into the cpu feature struct.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: Move oprofile_impl.h into include/asm-ppc64 (Anton Blanchard, 2005-09-06, 1 file, -0/+111)
  Move oprofile_impl.h into include/asm-ppc64 in preparation for moving
  oprofile_model into the cpu feature struct.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: Add oprofile cpu_type to cpu feature struct (Anton Blanchard, 2005-09-06, 1 file, -0/+3)
  Add the oprofile cpu_type to the cpu feature struct.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: remove CPU_FTR_PMC8 (Anton Blanchard, 2005-09-06, 1 file, -1/+1)
  Remove the CPU_FTR_PMC8 feature now that we encode the number of PMCs
  directly.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: add number of PMCs to cputable (Anton Blanchard, 2005-09-06, 1 file, -0/+3)
  Add a field to the cputable struct to store the number of PMCs.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc/ppc64: Merge more include files (Jon Loeliger, 2005-09-06, 8 files, -244/+0)
  This patch merges several include files from asm-ppc and asm-ppc64 into the
  new asm-powerpc.

  Signed-off-by: Jon Loeliger <jdl@freescale.com>
  Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] Move 3 more headers to asm-powerpc (Becky Bruce, 2005-09-06, 3 files, -480/+0)
  Merged several nearly-identical header files from asm-ppc and asm-ppc64
  into asm-powerpc.

  Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
  Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: speedup cmpxchg (Anton Blanchard, 2005-09-06, 1 file, -11/+8)
  cmpxchg has the following code:

      __typeof__(*(ptr)) _o_ = (o);
      __typeof__(*(ptr)) _n_ = (n);

  Unfortunately it makes gcc 4.0 store and load the variables to the stack.
  Eg in atomic_dec_and_test we get:

      stw r10,112(r1)
      stw r9,116(r1)
      lwz r9,112(r1)
      lwz r0,116(r1)

  x86 just casts the values, so do that instead. Also change __xchg* and
  __cmpxchg* to take unsigned values, removing a few sign extensions.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
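
  The change is easiest to see as a before/after sketch of the macro
  (simplified: the real ppc64 macros dispatch on sizeof(*(ptr)) to
  size-specific helpers, for which __cmpxchg stands in here):

      /* Before (simplified): typed temporaries, which gcc 4.0 spills to the stack. */
      #define cmpxchg_before(ptr, o, n)                                       \
      ({                                                                      \
              __typeof__(*(ptr)) _o_ = (o);                                   \
              __typeof__(*(ptr)) _n_ = (n);                                   \
              (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,       \
                                             (unsigned long)_n_,              \
                                             sizeof(*(ptr)));                 \
      })

      /* After (simplified): cast the values directly, as x86 does. */
      #define cmpxchg_after(ptr, o, n)                                        \
              ((__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)(o),      \
                                              (unsigned long)(n),             \
                                              sizeof(*(ptr))))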
* [PATCH] ppc64: Consolidate early console and PPCDBG code (Milton Miller, 2005-09-06, 1 file, -3/+3)
  Consolidate the early console and PPCDBG code in udbg.c.

  Signed-off-by: Milton Miller <miltonm@bga.com>
  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: Take udbg out of ppc_md (Milton Miller, 2005-09-06, 2 files, -16/+14)
  Take udbg out of ppc_md. This allows us to not overwrite early udbg inits
  when assigning ppc_md.

  Signed-off-by: Milton Miller <miltonm@bga.com>
  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: Add VMX save flag to VPA (Olof Johansson, 2005-09-05, 1 file, -1/+1)
  We need to indicate to the hypervisor that it needs to save our VMX
  registers when switching partitions on a shared-processor system, just as
  it needs to for FP and PMC registers. This could be made on-demand when VMX
  is used, but we don't do that for FP or PMC right now either, so let's not
  overcomplicate things.

  Signed-off-by: Olof Johansson <olof@lixom.net>
  Acked-by: Paul Mackerras <paulus@samba.org>
  Cc: Anton Blanchard <anton@samba.org>
  Cc: <engebret@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] sab: consolidate kmem_bufctl_t (Kyle Moffett, 2005-09-05, 1 file, -1/+0)
  This is used only in slab.c and each architecture gets to define which
  underlying type is to be used. Seems a bit silly - move it to slab.c and
  use the same type for all architectures: unsigned int.

  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
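
  Per the description above, the consolidation amounts to a single typedef
  living in mm/slab.c instead of one per arch header:

      /* One definition for all architectures, in mm/slab.c. */
      typedef unsigned int kmem_bufctl_t;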
* [PATCH] mm: consolidate get_order (Stephen Rothwell, 2005-09-05, 1 file, -14/+3)
  Someone mentioned that almost all the architectures used basically the same
  implementation of get_order. This patch consolidates them into
  asm-generic/page.h and includes that in the appropriate places. The
  exceptions are ia64 and ppc which have their own (presumably optimised)
  versions.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
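
  The shared helper is tiny; a sketch along the lines of the asm-generic
  version (it relies on PAGE_SHIFT from the page headers and returns the
  smallest order such that PAGE_SIZE << order covers size):

      static inline int get_order(unsigned long size)
      {
              int order;

              /* Round up to whole pages, then count how many doublings of
               * PAGE_SIZE are needed: get_order(1..PAGE_SIZE) == 0, etc. */
              size = (size - 1) >> (PAGE_SHIFT - 1);
              order = -1;
              do {
                      size >>= 1;
                      order++;
              } while (size);
              return order;
      }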
* [PATCH] SPARSEMEM EXTREME (Bob Picco, 2005-09-05, 1 file, -0/+22)
  A new option for SPARSEMEM is ARCH_SPARSEMEM_EXTREME. Architecture
  platforms with a very sparse physical address space would likely want to
  select this option. For those architecture platforms that don't select the
  option, the code generated is equivalent to SPARSEMEM currently in -mm.
  I'll be posting a patch on the ia64 ml which uses this new SPARSEMEM
  feature.

  ARCH_SPARSEMEM_EXTREME makes mem_section a one dimensional array of
  pointers to mem_sections. This two level layout scheme is able to achieve
  smaller memory requirements for SPARSEMEM with the tradeoff of an
  additional shift and load when fetching the memory section. The current
  SPARSEMEM implementation in -mm is a one dimensional array of mem_sections,
  which is the default SPARSEMEM configuration. The patch isolates the
  implementation details of the physical layout of the sparsemem section
  array.

  ARCH_SPARSEMEM_EXTREME depends on 64BIT and is boolean false by default.

  I've boot tested ia64 under aim load, configured for
  ARCH_SPARSEMEM_EXTREME. I've also boot tested a 4 way Opteron machine with
  !ARCH_SPARSEMEM_EXTREME and tested with aim.

  Signed-off-by: Andy Whitcroft <apw@shadowen.org>
  Signed-off-by: Bob Picco <bob.picco@hp.com>
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
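
  A rough sketch of the two layouts being contrasted (the stand-in struct,
  constants and macro names are simplified for illustration, not the exact
  -mm definitions):

      /* Minimal stand-in for the real structure. */
      struct mem_section {
              unsigned long section_mem_map;
      };

      #ifdef CONFIG_ARCH_SPARSEMEM_EXTREME
      /* Two-level: a one dimensional array of pointers to blocks of
       * mem_sections. Costs an extra shift and load per lookup, but only
       * populated roots take real memory, which is what very sparse
       * physical address spaces want. */
      extern struct mem_section *mem_section[NR_SECTION_ROOTS];
      #define __nr_to_section(nr) \
              (&mem_section[(nr) / SECTIONS_PER_ROOT][(nr) % SECTIONS_PER_ROOT])
      #else
      /* Default SPARSEMEM: one flat, statically sized array of mem_sections. */
      extern struct mem_section mem_section[NR_MEM_SECTIONS];
      #define __nr_to_section(nr) (&mem_section[(nr)])
      #endif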
* Merge HEAD from master.kernel.org:/pub/scm/linux/kernel/git/paulus/ppc64-2.6 (Linus Torvalds, 2005-08-29, 24 files, -453/+53)
|\
| * [PATCH] ppc64: Add CONFIG_HZ (Anton Blanchard, 2005-08-30, 1 file, -1/+3)
  While ppc64 has the CONFIG_HZ Kconfig option, it wasn't actually being
  used. Connect it up and set all platforms to 250Hz.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] oprofile PVR 970MP (Jake Moilanen, 2005-08-30, 1 file, -0/+1)
  Here's the 970MP's PVR (processor version register) entry for oprofile.

  Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] Move all the very similar files to asm-powerpc (Stephen Rothwell, 2005-08-30, 11 files, -343/+0)
  They differed in either simple comments or in the protecting ifdefs.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] Move the identical files from include/asm-ppc{,64} (Stephen Rothwell, 2005-08-30, 8 files, -49/+0)
  Move the identical files from include/asm-ppc{,64}/ to
  include/asm-powerpc/. Remove hdreg.h completely as it is unused in the
  tree.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] Create include/asm-powerpc (Stephen Rothwell, 2005-08-30, 1 file, -6/+0)
  The ppc and ppc64 trees are hopefully going to merge over time, so this
  patch begins the process by creating a place for the merging of the header
  files. Create include/asm-powerpc (and move linkage.h into it from
  asm-{ppc,ppc64} since we don't like empty directories). Modify the ppc and
  ppc64 Makefiles to cope.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] Make MODULE_DEVICE_TABLE work for vio devices (Stephen Rothwell, 2005-08-30, 1 file, -5/+1)
  Make MODULE_DEVICE_TABLE work for vio devices.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
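
  With this in place, a vio driver can export its ID table for module
  autoloading in the usual way; a minimal sketch (the type/compat strings
  and the table name below are illustrative, not from the patch):

      #include <linux/module.h>
      #include <asm/vio.h>

      /* Illustrative IDs: { type, compat } strings from the device tree. */
      static struct vio_device_id example_vio_ids[] = {
              { "vscsi", "IBM,v-scsi" },
              { "", "" }              /* terminator */
      };
      MODULE_DEVICE_TABLE(vio, example_vio_ids);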
| * [PATCH] Create vio_bus_ops (Stephen Rothwell, 2005-08-30, 1 file, -49/+48)
  Create vio_bus_ops so that we just pass a structure to vio_bus_init instead
  of three separate function pointers. Rearrange vio.h to avoid forward
  references. vio.h only needs struct device_node from prom.h so remove the
  include and just declare it.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] Create vio_register_device (Stephen Rothwell, 2005-08-30, 1 file, -3/+1)
  Take some assignments out of vio_register_device_common and rename it to
  vio_register_device.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [PATCH] ppc64: four level pagetables fix (Andrew Morton, 2005-08-30, 1 file, -0/+2)
  With CONFIG_HUGETLB_PAGE=n:

      In file included from kernel/sysctl.c:37:
      include/linux/hugetlb.h:104:1: warning: "hugetlb_free_pgd_range" redefined
      In file included from include/linux/mm.h:36,
                       from kernel/sysctl.c:23:
      include/asm/pgtable.h:492:1: warning: this is the location of the previous definition

  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | Merge HEAD from master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6.git (Linus Torvalds, 2005-08-29, 1 file, -0/+2)
|\ \
| |/
|/|
| * [NET]: Introduce SO_{SND,RCV}BUFFORCE socket options (Patrick McHardy, 2005-08-29, 1 file, -0/+2)
  Allows overriding of sysctl_{wmem,rmem}_max.

  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
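
  For context, a userspace sketch of how the new options are used: they take
  the same int argument as SO_SNDBUF/SO_RCVBUF but let a CAP_NET_ADMIN
  process exceed the sysctl limits (the helper below is illustrative, not
  from the patch):

      #include <stdio.h>
      #include <sys/socket.h>

      /* Request a send buffer larger than wmem_max allows for ordinary
       * SO_SNDBUF; requires CAP_NET_ADMIN. */
      static int force_sndbuf(int fd, int bytes)
      {
              if (setsockopt(fd, SOL_SOCKET, SO_SNDBUFFORCE,
                             &bytes, sizeof(bytes)) < 0) {
                      perror("setsockopt(SO_SNDBUFFORCE)");
                      return -1;
              }
              return 0;
      }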
* | [PATCH] Dynamic hugepage addresses for ppc64 (David Gibson, 2005-08-29, 2 files, -13/+18)
  Paulus, I think this is now a reasonable candidate for the post-2.6.13
  queue.

  Relax address restrictions for hugepages on ppc64.

  Presently, 64-bit applications on ppc64 may only use hugepages in the
  address region from 1-1.5T. Furthermore, if hugepages are enabled in the
  kernel config, they may only use hugepages and never normal pages in this
  area.

  This patch relaxes this restriction, allowing any address to be used with
  hugepages, but with a 1TB granularity. That is, if you map a hugepage
  anywhere in the region 1TB-2TB, that entire area will be reserved
  exclusively for hugepages for the remainder of the process's lifetime.
  This works analogously to hugepages in 32-bit applications, where
  hugepages can be mapped anywhere, but with 256MB (mmu segment)
  granularity.

  This patch applies on top of the four level pagetable patch
  (http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Move ppc64_enable_pmcs() logic into a ppc_md function (Michael Ellerman, 2005-08-29, 2 files, -0/+5)
  This patch moves power4_enable_pmcs() to arch/ppc64/kernel/pmc.c. I've
  tested it on P5 LPAR and P4. It does what it used to.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: allow xmon=off (Olaf Hering, 2005-08-29, 1 file, -1/+1)
  If both CONFIG_XMON and CONFIG_XMON_DEFAULT are enabled in the .config,
  there is no way to disable xmon again. setup_system calls xmon_init first
  and parse_early_param later, so a new 'xmon=off' cmdline option will do
  the right thing.

  Signed-off-by: Olaf Hering <olh@suse.de>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Remove CONFIG_MSCHUNKS (Michael Ellerman, 2005-08-29, 1 file, -14/+5)
  We can now remove CONFIG_MSCHUNKS as it doesn't do anything interesting
  anymore. The only macro in abs_addr.h which is called by non-iSeries code
  is phys_to_abs(), so remove the other dummy implementations, and add a
  firmware feature check to phys_to_abs().

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Remove physbase from the lmb_property struct (Michael Ellerman, 2005-08-29, 1 file, -1/+0)
  We no longer need the lmb code to know about abs and phys addresses, so
  remove the physbase variable from the lmb_property struct.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Remove redundant abs_to_phys() macro (Michael Ellerman, 2005-08-29, 1 file, -5/+1)
  abs_to_phys() is a macro that turns out to do nothing, and also has the
  unfortunate property that it's not the inverse of phys_to_abs() on
  iSeries. The following is for my benefit as much as everyone else's.

  With CONFIG_MSCHUNKS enabled, the lmb code is changed such that it keeps a
  physbase variable for each lmb region. This is used to take the possibly
  discontiguous lmb regions and present them as a contiguous address space
  beginning from zero. In this context each lmb region's base address is its
  "absolute" base address, and its physbase is its "physical" address (from
  Linux's point of view). The abs_to_phys() macro does the mapping from
  "absolute" to "physical".

  Note: this is not related to the iSeries mapping of physical to absolute
  (ie. Hypervisor) addresses, which is maintained with the msChunks
  structure. And the msChunks structure is not controlled via
  CONFIG_MSCHUNKS.

  Once upon a time you could compile for non-iSeries with CONFIG_MSCHUNKS
  enabled. But these days CONFIG_MSCHUNKS depends on CONFIG_PPC_ISERIES, so
  for non-iSeries code abs_to_phys() is a no-op.

  On iSeries we always have one lmb region which spans from 0 to
  systemcfg->physicalMemorySize (arch/ppc64/kernel/iSeries_setup.c line
  383). This region has a base (ie. absolute) address of 0, and a physbase
  address of 0 (as calculated in lmb_analyze(), arch/ppc64/kernel/lmb.c line
  144).

  On iSeries, abs_to_phys(aa) is defined as lmb_abs_to_phys(aa), which finds
  the lmb region containing aa (and there's only one, ie. 0), and then does:

      return lmb.memory.region[0].physbase + (aa - lmb.memory.region[0].base)

  physbase == base == 0, so you're left with "return aa".

  So remove abs_to_phys(), and lmb_abs_to_phys(), which is the
  implementation of abs_to_phys() for iSeries.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Remove redundant uses of physRpn_to_absRpn (Michael Ellerman, 2005-08-29, 1 file, -8/+0)
  physRpn_to_absRpn is a no-op on non-iSeries platforms, so remove the two
  redundant calls. There's only one caller on iSeries, so fold the logic in
  there so we can get rid of it completely.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Consolidate some macros (Michael Ellerman, 2005-08-29, 1 file, -14/+7)
  The only caller of chunk_offset() and abs_chunk() is phys_to_abs(), so
  fold the former two into the latter.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Rename msChunks structure (Michael Ellerman, 2005-08-29, 1 file, -8/+7)
  Rename the msChunks struct to get rid of the StUdlY caps and make it a bit
  clearer what it's for.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: msChunks cleanups (Michael Ellerman, 2005-08-29, 1 file, -6/+9)
  Chunks are 256KB, so use constants for the size/shift/mask rather than
  getting them from the msChunks struct. The iSeries debugger (??) might
  still need access to the values in the msChunks struct, so we keep them
  around for now, but set them from the constant values.

  Replace the msChunks_entry typedef with regular u32. Simplify
  msChunks_alloc() to manipulate klimit directly, rather than via a
  parameter. Move msChunks_alloc() and msChunks into iSeries_setup.c, as
  that's where they're used.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: Remove PTRRELOC() from msChunks code (Michael Ellerman, 2005-08-29, 2 files, -26/+12)
  The msChunks code was written to work on pSeries, but now it's only used
  on iSeries. This means there's no need to do PTRRELOC anymore, so remove
  it all. A few places were getting "extern reloc_offset()" from abs_addr.h;
  move it into system.h instead.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: introduce FW_FEATURE_ISERIES (Stephen Rothwell, 2005-08-29, 1 file, -3/+19)
  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: make firmware_has_feature() stronger (Stephen Rothwell, 2005-08-29, 1 file, -1/+19)
  Make firmware_has_feature() evaluate at compile time for the non-pSeries
  case and tidy up code where possible.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: create firmware_has_feature() (Stephen Rothwell, 2005-08-29, 2 files, -44/+70)
  Create the firmware_has_feature() inline and move the firmware feature
  stuff into its own header file.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
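
  Taken together with the "stronger" patch above, the helper ends up shaped
  roughly like the sketch below. The global bitmask name and the example
  feature bits are assumptions used for illustration, not the exact header
  contents; the point is that when no feature-reporting platform is
  configured, FW_FEATURE_POSSIBLE is 0 and the whole test folds away at
  compile time.

      /* Assumed name of the global bitmask filled in at boot. */
      extern unsigned long ppc64_firmware_features;

      #define FW_FEATURE_LPAR     (1UL << 0)      /* illustrative feature bits */
      #define FW_FEATURE_ISERIES  (1UL << 1)

      #ifdef CONFIG_PPC_PSERIES
      /* Bits this kernel could possibly see at runtime. */
      #define FW_FEATURE_POSSIBLE (FW_FEATURE_LPAR)
      #else
      /* Lets the compiler fold every firmware_has_feature() call to 0. */
      #define FW_FEATURE_POSSIBLE 0UL
      #endif

      static inline unsigned long firmware_has_feature(unsigned long feature)
      {
              return FW_FEATURE_POSSIBLE & ppc64_firmware_features & feature;
      }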
* | [PATCH] ppc64: remove firmware features from cpu_spec (Stephen Rothwell, 2005-08-29, 1 file, -5/+5)
  The firmware_features field of struct cpu_spec should really be a separate
  variable as the firmware features do not depend on the chip and the
  bitmask is constructed independently. By removing it, we save 112 bytes
  from the cpu_specs array and we access the bitmask directly instead of via
  the cur_cpu_spec pointer.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] Change address of ppc64 initial segment table (David Gibson, 2005-08-29, 1 file, -2/+5)
  On ppc64 machines with segment tables, CPU0's segment table is at a fixed
  address, currently 0x9000. This patch moves it to the free space at
  0x6000, just below the fwnmi data area. This saves 8k of space in vmlinux
  and the runtime kernel image.

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] Remove NACA fixed address constraint (David Gibson, 2005-08-29, 1 file, -7/+0)
  Comments in head.S suggest that the iSeries naca has a fixed address,
  because tools expect to find it there. The only tool which appears to
  access the naca is addRamDisk, but both the in-kernel version and the
  version used in RHEL and SuSE in fact locate the NACA the same way as the
  hypervisor does, by following the pointer in the hvReleaseData structure.

  Since the requirement for a fixed address seems to be obsolete, this patch
  removes the naca from head.S and replaces it with a normal C initializer.
  For good measure, it removes an old version of addRamDisk.c which was
  sitting, unused, in the ppc32 tree.

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: split pSeries specific parts out of vio.c (Stephen Rothwell, 2005-08-29, 1 file, -1/+3)
  This patch just splits out the pSeries specific parts of vio.c.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: make the bus matching function platform specific (Stephen Rothwell, 2005-08-29, 1 file, -1/+2)
  This patch allows us to have a different bus matching function for each
  platform.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: move iSeries vio iommu init (Stephen Rothwell, 2005-08-29, 1 file, -3/+0)
  Since the iSeries vio iommu tables cannot be used until after the vio bus
  has been initialised, move the initialisation of the tables to there.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [PATCH] ppc64: split iSeries specific parts out of vio.c (Stephen Rothwell, 2005-08-29, 1 file, -0/+7)
  This patch splits the iSeries specific parts out of vio.c.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>