* sparc: unify ptrace.h (Sam Ravnborg, 2009-01-02; 4 files, -548/+444)
    The two ptrace.h implementations are very alike, but the small differences required two sets of ifdef/else/endif pairs.
    The definition of reg_window32 could have been shared, but that would have required several updates in sparc32 code, as all the printk formatting, for example, assumes it is longs.
    sparc_stackf looked like another candidate to share if the 32-bit one were renamed to sparc_stackf32. But it contains two pointers in the sparc32 version which would have been 64 bit in the sparc64 version, so it was non-trivial. Using a set of accessor macros could do the trick if pursued later.
    The sparc64-specific definitions are not protected by ifdef, as that should not be required.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
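    To illustrate the pointer-width problem described above, a minimal C sketch (not the actual sparc header; names are illustrative):

        /* A struct containing pointers changes size and layout between a
         * 32-bit and a 64-bit build, so it cannot be shared verbatim the
         * way a pointer-free struct such as reg_window32 could be. */
        struct stackf_like {
                unsigned long locals[8];   /* 4-byte longs on sparc32, 8-byte on sparc64 */
                struct stackf_like *fp;    /* 4 bytes on sparc32, 8 bytes on sparc64 */
        };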
* sparc: unify sigcontext.h (Sam Ravnborg, 2009-01-02; 4 files, -155/+95)
    With the renamed types in place the unification was straightforward.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: add '32' suffix to reg_window, sigcontext, __siginfo_t (Sam Ravnborg, 2009-01-02; 11 files, -41/+38)
    Renaming a few types to carry a 32 suffix makes the type names compatible with sparc64 and thus makes sharing between the two a lot easier.
    Note: none of these definitions are expected to be part of the stable ABI towards userspace.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: unify signal.h (Sam Ravnborg, 2009-01-02; 4 files, -424/+207)
    They were almost identical, and with the preparation patch nothing needed to be added. The unified version contains a few sparc64-only definitions, but they are kept as is and not protected by ifdef/endif.
    The unified version exports a bit more to userspace than the 32-bit version did. This is not considered fatal.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: prepare signal_64 for unification (Sam Ravnborg, 2009-01-02; 1 file, -12/+28)
    o add a sparc32-only definition
    o fix a few style issues (whitespace errors etc.)
    o include compiler.h (for __user)
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: Kill bogus comment about IRQF_SHARED in pci_psycho.c (David S. Miller, 2009-01-02; 1 file, -4/+1)
    Noticed by Geert Uytterhoeven.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: unify stat.h (Sam Ravnborg, 2009-01-02; 4 files, -117/+105)
    To my surprise struct stat64 was not equal on sparc32 and sparc64, so there was really nothing to share here. Unify the files by adding their respective content to stat.h.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc32: use proper types in struct stat (Sam Ravnborg, 2009-01-02; 1 file, -10/+10)
    Like sparc64, use proper types in struct stat.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc32: drop __old_kernel_stat (Sam Ravnborg, 2009-01-02; 1 file, -14/+0)
    sparc32 does not define __ARCH_WANT_OLD_STAT, so we neither use this structure nor support it.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
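    A small sketch of why the definition is dead code (illustrative only, not the kernel's actual stat code): anything touching the old stat layout is compiled only when the architecture opts in.

        #ifdef __ARCH_WANT_OLD_STAT            /* sparc32 never defines this */
        struct old_stat_example { unsigned short st_dev, st_ino; };
        static int old_stat_available = 1;
        #else
        static int old_stat_available;          /* the only branch sparc32 ever builds */
        #endif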
* sparc: unify posix_types.h (Sam Ravnborg, 2009-01-02; 4 files, -247/+152)
    The posix types differed so much in their definition that they are kept in separate blocks.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: delete unused config symbols (Sam Ravnborg, 2009-01-02; 1 file, -28/+0)
    There is no need to define a config symbol if it is never set to any value: undefined symbols evaluate to 'n'.
    GENERIC_GPIO looks similar, but it is set using select in some other file, so it must be kept.
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
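    A C-side illustration of that rule (the symbol name here is hypothetical): a Kconfig symbol that is never set produces no CONFIG_* define at all, which is indistinguishable from an explicit 'n'.

        #ifdef CONFIG_EXAMPLE_FEATURE          /* hypothetical, never defined anywhere */
        static const int example_feature = 1;
        #else
        static const int example_feature = 0;  /* this branch is what always gets built */
        #endif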
* Disallow gcc versions 3.{0,1} (Ingo Molnar, 2009-01-02; 1 file, -0/+4)
    GCC 3.0 and 3.1 are too old to build a working kernel.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    [ This check got dropped as obsolete when I simplified the gcc header inclusion mess in f153b82121b0366fe0e5f9553545cce237335175, but Willy Tarreau reports actually having those old versions still. - Linus ]
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
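    The kind of compile-time guard involved looks roughly like this (a sketch; the real test lives in the compiler-gcc headers):

        #if defined(__GNUC__) && __GNUC__ == 3 && __GNUC_MINOR__ < 2
        # error Sorry, your compiler is too old - please upgrade it.
        #endif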
* Merge branch 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2009-01-02; 198 files, -1423/+2018)
    * 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (66 commits)
      x86: export vector_used_by_percpu_irq
      x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and()
      sched: nominate preferred wakeup cpu, fix
      x86: fix lguest used_vectors breakage, -v2
      x86: fix warning in arch/x86/kernel/io_apic.c
      sched: fix warning in kernel/sched.c
      sched: move test_sd_parent() to an SMP section of sched.h
      sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0
      sched: activate active load balancing in new idle cpus
      sched: bias task wakeups to preferred semi-idle packages
      sched: nominate preferred wakeup cpu
      sched: favour lower logical cpu number for sched_mc balance
      sched: framework for sched_mc/smt_power_savings=N
      sched: convert BALANCE_FOR_xx_POWER to inline functions
      x86: use possible_cpus=NUM to extend the possible cpus allowed
      x86: fix cpu_mask_to_apicid_and to include cpu_online_mask
      x86: update io_apic.c to the new cpumask code
      x86: Introduce topology_core_cpumask()/topology_thread_cpumask()
      x86: xen: use smp_call_function_many()
      x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c
      ...
    Fixed up trivial conflict in kernel/time/tick-sched.c manually.
| * x86: export vector_used_by_percpu_irq (Ingo Molnar, 2008-12-23; 1 file, -0/+3)
    Impact: build fix
    lguest can be built as a module and makes use of this new symbol:
      ERROR: "vector_used_by_percpu_irq" [drivers/lguest/lg.ko] undefined!
    Export it.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and() (Suresh Siddha, 2008-12-23; 1 file, -3/+3)
    These commits:
      commit 95d313cf1c1ecedc8bec5727b09bdacbf67dfc45
      Author: Mike Travis <travis@sgi.com>
      Date:   Tue Dec 16 17:33:54 2008 -0800
          x86: Add cpu_mask_to_apicid_and
    and
      commit 6eeb7c5a99434596c5953a95baa17d2f085664e3
      Author: Mike Travis <travis@sgi.com>
      Date:   Tue Dec 16 17:33:55 2008 -0800
          x86: update add-cpu_mask_to_apicid_and to use struct cpumask*
    broke interrupt delivery on x2apic platforms. As x2apic cluster mode uses logical delivery mode, we need to use the logical apicid instead of the physical apicid in x2apic_cpu_mask_to_apicid_and().
    Impact: fixes the broken interrupt delivery issue on generic x2apic platforms.
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Acked-by: Mike Travis <travis@sgi.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: nominate preferred wakeup cpu, fix (Vaidyanathan Srinivasan, 2008-12-23; 1 file, -1/+1)
    Andrew Morton reported:
      > kernel/sched.c: In function 'schedule':
      > kernel/sched.c:3679: warning: 'active_balance' may be used uninitialized in this function
      > This warning is correct - the code is buggy.
    In sched.c's load_balance_newidle() there is a real potential use of an uninitialised variable - fix it.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: fix lguest used_vectors breakage, -v2 (Yinghai Lu, 2008-12-23; 8 files, -24/+52)
    Impact: fix lguest, clean up
    32-bit lguest used used_vectors to record vectors, but that model of allocating vectors changed and got broken after we changed vector allocation to a per_cpu array.
    Try to enable that for 64-bit; the array is used for all vectors that are not managed by the vector_irq per_cpu array. Also kill system_vectors[], which is now a duplication of the used_vectors bitmap.
    [ merged in cpus4096 due to io_apic.c cpumask changes. ]
    [ -v2, fix build failure ]
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: fix warning in arch/x86/kernel/io_apic.c (Ingo Molnar, 2008-12-19; 1 file, -1/+1)
    This warning:
      arch/x86/kernel/io_apic.c: In function ‘ir_set_msi_irq_affinity’:
      arch/x86/kernel/io_apic.c:3373: warning: ‘cfg’ may be used uninitialized in this function
    triggers because the variable was truly uninitialized. We'd crash on entering this code.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: fix warning in kernel/sched.c (Ingo Molnar, 2008-12-19; 1 file, -1/+1)
    Impact: fix cpumask conversion bug
    This warning:
      kernel/sched.c: In function ‘find_busiest_group’:
      kernel/sched.c:3429: warning: passing argument 1 of ‘__first_cpu’ from incompatible pointer type
    shows that we forgot to convert a new patch to the new cpumask APIs.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: move test_sd_parent() to an SMP section of sched.h (Ingo Molnar, 2008-12-19; 1 file, -9/+9)
    Impact: build fix
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0 (Vaidyanathan Srinivasan, 2008-12-19; 2 files, -2/+17)
    Impact: change task balancing to save power more aggressively
    Add the SD_BALANCE_NEWIDLE flag at MC level and CPU level if sched_mc is set. This helps power savings and will not affect performance when sched_mc=0.
    Ingo and Mike Galbraith have optimised the SD flags by removing SD_BALANCE_NEWIDLE at MC and CPU level. This helps performance but hurts power savings, since it slows down task consolidation by reducing the number of times load_balance is run:
      sched: fine-tune SD_MC_INIT
        commit 14800984706bf6936bbec5187f736e928be5c218
        Author: Mike Galbraith <efault@gmx.de>
        Date:   Fri Nov 7 15:26:50 2008 +0100
      sched: re-tune balancing -- revert
        commit 9fcd18c9e63e325dbd2b4c726623f760788d5aa8
        Author: Ingo Molnar <mingo@elte.hu>
        Date:   Wed Nov 5 16:52:08 2008 +0100
    This patch selectively enables the SD_BALANCE_NEWIDLE flag only when sched_mc is set to 1 or 2. This helps power savings through task consolidation and also does not hurt performance at sched_mc=0, where all power-saving optimisations are turned off.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
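    A toy C sketch of the selective-flag idea (values and names illustrative, not the kernel's actual code):

        #include <stdio.h>

        #define SD_BALANCE_NEWIDLE 0x02           /* illustrative flag value */

        static int sched_mc_power_savings;        /* the 0/1/2 tunable */

        /* The extra new-idle balancing flag is handed to the MC/CPU domains
         * only when power savings are requested, so sched_mc=0 keeps the
         * performance-tuned flag set unchanged. */
        static int mc_level_extra_flags(void)
        {
                return sched_mc_power_savings ? SD_BALANCE_NEWIDLE : 0;
        }

        int main(void)
        {
                for (sched_mc_power_savings = 0; sched_mc_power_savings <= 2; sched_mc_power_savings++)
                        printf("sched_mc=%d -> extra flags %#x\n",
                               sched_mc_power_savings, mc_level_extra_flags());
                return 0;
        }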
| * sched: activate active load balancing in new idle cpus (Vaidyanathan Srinivasan, 2008-12-19; 1 file, -0/+54)
    Impact: tweak task balancing to save power more aggressively
    Active load balancing is a process by which the migration thread is woken up on the target CPU in order to pull a currently running task on another package into this newly idle package.
    This method is already in use with normal load_balance(). This patch introduces the method for new idle cpus when sched_mc is set to POWERSAVINGS_BALANCE_WAKEUP. The logic provides effective consolidation of short-running daemon jobs in an almost idle system.
    The side effect of this patch may be ping-ponging of tasks if the system is moderately utilised. The number of iterations before triggering may need to be adjusted.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: bias task wakeups to preferred semi-idle packages (Vaidyanathan Srinivasan, 2008-12-19; 1 file, -0/+18)
    Impact: tweak task wakeup to save power more aggressively
    A preferred wakeup cpu (from a semi-idle package) has been nominated in find_busiest_group() in the previous patch. Use this information (sched_mc_preferred_wakeup_cpu) in wake_idle() to bias task wakeups if the following conditions are satisfied:
      - The present cpu that is trying to wake up the process is idle, and waking the target process on this cpu would potentially wake up a completely idle package.
      - The previous cpu on which the target process ran is also idle, so selecting the previous cpu may wake up a semi-idle cpu package.
      - The task being woken up is allowed to run on the nominated cpu (cpu affinity and restrictions).
    Basically, if both the current cpu and the previous cpu on which the task ran are idle, select the nominated cpu from a semi-idle cpu package for running the task that is waking up (sketched in code after this entry).
    Cache hotness is considered, since the actual biasing happens in wake_idle() only if the application is cache cold.
    This technique will effectively move short-running bursty jobs in a mostly idle system. Wakeup biasing for power savings gets automatically disabled as system utilisation increases, because the probability of finding both this_cpu and prev_cpu idle decreases.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
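    A compact sketch of that decision (illustrative names only, not the kernel's wake_idle() code):

        /* Returns the cpu the waking task should be placed on. */
        int pick_wakeup_cpu(int this_cpu_idle, int prev_cpu_idle,
                            int allowed_on_preferred,
                            int preferred_cpu, int prev_cpu)
        {
                if (this_cpu_idle && prev_cpu_idle && allowed_on_preferred)
                        return preferred_cpu;   /* consolidate on the semi-idle package */
                return prev_cpu;                /* otherwise keep the usual choice */
        }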
| * sched: nominate preferred wakeup cpu (Vaidyanathan Srinivasan, 2008-12-19; 1 file, -0/+12)
    Impact: extend load-balancing code (no change in behavior yet)
    When system utilisation is low and more cpus are idle, a process waking up from sleep should prefer to wake up on an idle cpu from a semi-idle cpu package (multi-core package) rather than on a completely idle cpu package, which would waste power.
    Use the sched_mc balance logic in find_busiest_group() to nominate a preferred wakeup cpu.
    This info could be stored in the appropriate sched_domain, but updating it in all copies of sched_domain is not practical. Hence this information is stored in the root_domain struct, of which there is one copy per partitioned sched domain; the root_domain can be accessed from each cpu's runqueue.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: favour lower logical cpu number for sched_mc balance (Vaidyanathan Srinivasan, 2008-12-19; 1 file, -2/+2)
    Impact: change load-balancing direction to match that of irqbalanced
    In case two groups have identical load, prefer to move load to the lower logical cpu number rather than, as the present logic does, to the higher logical number.
    find_busiest_group() tries to look for a group_leader that has spare capacity to take more tasks and free up an appropriate least-loaded group. In case there is a tie and the load is equal, the group with the higher logical number is favoured. This conflicts with the user space irqbalance daemon, which will move interrupts to a lower logical number when system utilisation is very low.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: framework for sched_mc/smt_power_savings=N (Gautham R Shenoy, 2008-12-19; 2 files, -3/+25)
    Impact: extend the range of /sys/devices/system/cpu/sched_mc_power_savings
    Currently the sched_mc/smt_power_savings variable is a boolean, which either enables or disables topology-based power savings. This patch extends the behaviour of the variable from boolean to multivalued, such that based on the value we decide how aggressively to perform power-savings balancing at the appropriate sched domain, based on topology.
    Variable levels of the power-saving tunable would benefit the end user by matching the required power savings vs performance trade-off, depending on the system configuration and workloads.
    This version makes the sched_mc_power_savings global variable take more values (0, 1, 2). Later versions can have a single tunable called sched_power_savings instead of sched_{mc,smt}_power_savings.
    Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
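    The level names this series introduces for those values look roughly like this (a sketch; the exact spelling lives in the scheduler headers):

        enum powersavings_balance_level {
                POWERSAVINGS_BALANCE_NONE = 0,   /* no extra power-savings balancing      */
                POWERSAVINGS_BALANCE_BASIC,      /* consolidate via load_balance()        */
                POWERSAVINGS_BALANCE_WAKEUP,     /* also bias wakeups and new-idle cpus   */
                MAX_POWERSAVINGS_BALANCE_LEVELS
        };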
| * sched: convert BALANCE_FOR_xx_POWER to inline functions (Vaidyanathan Srinivasan, 2008-12-19; 2 files, -11/+26)
    Impact: cleanup
    BALANCE_FOR_MC_POWER and similar macros defined in sched.h are not constants; they have various condition checks and a significant amount of code that is not suitable to be contained in a macro. There could also be side effects on the expressions passed to some of them, like test_sd_parent().
    This patch converts all complex macros related to power-savings balance to inline functions.
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
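    The general shape of such a conversion (simplified; not the exact sched.h hunk):

        #define SD_POWERSAVE_FLAG 0x01            /* illustrative flag value */

        static int sched_mc_power_savings;        /* the multivalued tunable */

        /* before:
         *   #define BALANCE_FOR_MC_POWER \
         *           (sched_mc_power_savings ? SD_POWERSAVE_FLAG : 0)
         * after: a plain inline function, so arguments and globals are
         * evaluated exactly once and the result is type-checked.
         */
        static inline int balance_for_mc_power(void)
        {
                return sched_mc_power_savings ? SD_POWERSAVE_FLAG : 0;
        }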
| * x86: use possible_cpus=NUM to extend the possible cpus allowed (Mike Travis, 2008-12-18; 3 files, -20/+42)
    Impact: add new boot parameter
    Use the possible_cpus=NUM kernel parameter to extend the number of possible cpus. The ability to hotplug on cpus that are "possible" but not "present" is dealt with in a later patch.
    Signed-off-by: Mike Travis <travis@sgi.com>
| * x86: fix cpu_mask_to_apicid_and to include cpu_online_mask (Mike Travis, 2008-12-18; 8 files, -41/+52)
    Impact: fix potential APIC crash
    In determining the destination apicid, there are usually three cpumasks that are considered: the incoming cpumask arg, cfg->domain and the cpu_online_mask. Since we are just introducing the cpu_mask_to_apicid_and function, make sure it includes the cpu_online_mask in its evaluation. [Added with this patch.]
    There are two io_apic.c functions that did not previously use the cpu_online_mask: setup_IO_APIC_irq and msi_compose_msg. Both of these simply used cpu_mask_to_apicid(cfg->domain & TARGET_CPUS), and all but one arch (NUMAQ[*]) returns only online cpus in the TARGET_CPUS mask, so the behavior is identical for all cases. [*: NUMAQ bug?]
    Note that alloc_cpumask_var is only used for the 32-bit cases, where it is highly likely that the cpumask set size will be small and therefore CPUMASK_OFFSTACK=n. But if that is not the case, a failed allocation will cause the same return value as the default.
    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * Merge branch 'x86/apic' into cpus4096 (Ingo Molnar, 2008-12-18; 2 files, -74/+90)
    This is done for conflict prevention: we merge it into the cpus4096 tree because upcoming cpumask changes will touch apic.c and would otherwise collide with x86/apic.
| * Merge branch 'linus' into cpus4096 (Ingo Molnar, 2008-12-18; 28 files, -95/+99)
| * x86: update io_apic.c to the new cpumask code (Ingo Molnar, 2008-12-17; 1 file, -2/+3)
    Impact: build fix
    The sparseirq tree crossed with the cpumask changes, fix the fallout.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * Merge branch 'x86/crashdump' into cpus4096 (Ingo Molnar, 2008-12-17; 7 files, -211/+207)
    Conflicts:
      arch/x86/kernel/crash.c
    Merged for semantic conflict:
      arch/x86/kernel/reboot.c
| * Merge branch 'irq/sparseirq' into cpus4096 (Ingo Molnar, 2008-12-17; 10 files, -14/+317)
    Conflicts:
      arch/x86/kernel/io_apic.c
    Merge irq/sparseirq here, to resolve conflicts.
| * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/travis/linux-2.6-cpus4096-for-ingo into cpus4096 (Ingo Molnar, 2008-12-17; 46 files, -508/+824)
| | * x86: Introduce topology_core_cpumask()/topology_thread_cpumask() (Mike Travis, 2008-12-16; 1 file, -0/+2)
    Impact: new API
    The old topology_core_siblings() and topology_thread_siblings() return a cpumask_t; these new ones return a (const) struct cpumask *.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * x86: xen: use smp_call_function_many() (Mike Travis, 2008-12-16; 1 file, -5/+15)
    Impact: use new API, remove cpumask from stack
    Change smp_call_function_mask() callers to smp_call_function_many(). This removes a cpumask from the stack, and falls back should allocating the cpumask var fail (only possible with CONFIG_CPUMASKS_OFFSTACK).
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Cc: jeremy@xensource.com
| | * x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c (Mike Travis, 2008-12-16; 1 file, -53/+55)
    Impact: remove cpumask_t's from stack
    Simple transition to work_on_cpu(), rather than cpumask games.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Robert Richter <robert.richter@amd.com>
    Cc: jacob.shin@amd.com
| | * x86: Remove cpumask games in x86/kernel/cpu/intel_cacheinfo.c (Mike Travis, 2008-12-16; 1 file, -22/+19)
    Impact: remove cpumask_t from stack
    We should not try to save and restore cpus_allowed on current.
    We can't use work_on_cpu() here, since it's in the hotplug cpu path (if anyone else tries to get the hotplug lock from a workqueue we could deadlock against them).
    Fortunately, we can just use smp_call_function_single(), since the function can run from an interrupt.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Oleg Nesterov <oleg@tv-sign.ru>
| | * x86: Use cpumask accessors code for possible/present maps (Mike Travis, 2008-12-16; 1 file, -2/+2)
    Impact: use new API
    Use the accessors rather than frobbing bits directly. Most of this is in arch code I haven't even compiled, but it is straightforward.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * x86: prepare for cpumask iterators to only go to nr_cpu_ids (Mike Travis, 2008-12-16; 4 files, -5/+5)
    Impact: cleanup, futureproof
    In fact, all cpumask ops will only be valid (in general) for bit numbers < nr_cpu_ids, so use that instead of NR_CPUS in various places.
    This is always safe: no cpu number can be >= nr_cpu_ids, and nr_cpu_ids is initialized to NR_CPUS at boot.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
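    A toy illustration of the loop-bound change (standalone C; values illustrative):

        #include <stdio.h>

        #define NR_CPUS 4096          /* compile-time maximum             */
        static int nr_cpu_ids = 8;    /* what the booted machine can hold */

        int main(void)
        {
                int cpu, visited = 0;

                /* was: for (cpu = 0; cpu < NR_CPUS; cpu++)  -- 4096 iterations */
                for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                        visited++;

                printf("visited %d slots, skipped %d of %d\n",
                       visited, NR_CPUS - visited, NR_CPUS);
                return 0;
        }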
| | * x86: Set CONFIG_NR_CPUS even on UP (Mike Travis, 2008-12-16; 1 file, -3/+3)
    Impact: cleanup
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * x86: cosmetic changes to apic-related files (Mike Travis, 2008-12-16; 16 files, -129/+127)
    This patch simply changes cpumask_t to struct cpumask and makes similar trivial modernizations.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * x86: fixup_irqs() doesn't need an argument (Mike Travis, 2008-12-16; 4 files, -15/+17)
    Impact: cleanup, remove on-stack cpumask
    The "map" arg is always cpu_online_mask. Importantly, set_affinity always ands the argument with cpu_online_mask anyway, so we don't need to do it in fixup_irqs(), avoiding a temporary.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * xen: convert to cpumask_var_t and new cpumask primitives (Mike Travis, 2008-12-16; 3 files, -5/+9)
    Simple change, and eventual space saving when NR_CPUS >> nr_cpu_ids.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
| | * x86: Update io_apic.c to use new cpumask API (Mike Travis, 2008-12-16; 1 file, -157/+145)
    Impact: cleanup, consolidate patches, use new API
    Consolidate the following into a single patch to adapt to the new sparseirq code in arch/x86/kernel/io_apic.c, add allocation of cpumask_var_t's in domain and old_domain, and reduce further merge conflicts. Only one file (arch/x86/kernel/io_apic.c) is changed in all of these patches.
      0006-x86-io_apic-change-irq_cfg-domain-old_domain-to.patch
      0007-x86-io_apic-set_desc_affinity.patch
      0008-x86-io_apic-send_cleanup_vector.patch
      0009-x86-io_apic-eliminate-remaining-cpumask_ts-from-st.patch
      0021-x86-final-cleanups-in-io_apic-to-use-new-cpumask-AP.patch
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * x86: update add-cpu_mask_to_apicid_and to use struct cpumask* (Mike Travis, 2008-12-16; 11 files, -58/+60)
    Impact: use updated APIs
    Various API updates for x86:add-cpu_mask_to_apicid_and.
    (Note: separate because the previous patch has been "backported" to 2.6.27.)
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
| | * x86: Add cpu_mask_to_apicid_and (Mike Travis, 2008-12-16; 12 files, -0/+198)
    Impact: new API
    Add a helper function that takes two cpumasks, ands them, and then returns the apicid of the result. This removes the need in io_apic.c for a temporary cpumask to hold (mask & cfg->domain).
    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
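    A toy model of what such a helper does (standalone C with plain bitmasks; the real helper operates on struct cpumask and genapic callbacks):

        #include <stdio.h>

        /* Return the apicid for the first cpu present in both masks, or -1
         * if the intersection is empty. In this toy model the apicid is
         * simply the cpu number. */
        static int mask_and_to_apicid(unsigned long a, unsigned long b)
        {
                unsigned long both = a & b;
                int cpu;

                for (cpu = 0; cpu < (int)(8 * sizeof(both)); cpu++)
                        if (both & (1UL << cpu))
                                return cpu;
                return -1;
        }

        int main(void)
        {
                /* e.g. incoming affinity mask vs. cfg->domain */
                printf("apicid: %d\n", mask_and_to_apicid(0xf0, 0x30));  /* prints 4 */
                return 0;
        }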
| | * x86: move and enhance debug printk for nr_cpu_ids etc. (Mike Travis, 2008-12-16; 1 file, -6/+11)
    Impact: cleanup, better debugging
    This has proven useful in debugging, *before* we try to use for_each_possible_cpu(). It also now shows nr_cpumask_bits.
    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| | * x86 smp: modify send_IPI_mask interface to accept cpumask_t pointers (Mike Travis, 2008-12-16; 30 files, -273/+380)
    Impact: cleanup, change parameter passing
    * Change genapic interfaces to accept cpumask_t pointers where possible.
    * Modify external callers to use cpumask_t pointers in function calls.
    * Create a new send_IPI_mask_allbutself which is the same as the send_IPI_mask functions but removes smp_processor_id() from the list. This removes another common need for a temporary cpumask_t variable.
    * Functions that used a temp cpumask_t variable for:
        cpumask_t allbutme = cpu_online_map;
        cpu_clear(smp_processor_id(), allbutme);
        if (!cpus_empty(allbutme))
            ...
      become:
        if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu)))
            ...
    * Other minor code optimizations (like using cpus_clear instead of CPU_MASK_NONE, etc.)
    Applies to linux-2.6.tip/master.
    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Acked-by: Ingo Molnar <mingo@elte.hu>