path: root/arch/s390/kernel/smp.c
Commit log, newest first. Each entry shows the commit subject (author, date; files changed, lines removed/added), followed by the commit message.
* [S390] Remove smp_cpu_not_running. (Heiko Carstens, 2009-09-11; 1 file, -15/+19)
  smp_cpu_not_running() and cpu_stopped() do the same thing. Remove one of them and
  also get rid of the last hard_smp_processor_id() leftover.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Limit cpu detection to 256 physical cpus. (Heiko Carstens, 2009-09-11; 1 file, -2/+3)
  Saves us more than 65k pointless IPIs.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] vdso: fix per cpu area allocation (Heiko Carstens, 2009-07-24; 1 file, -3/+4)
  The vdso per cpu area allocation in smp_prepare_cpus() happens with GFP_KERNEL but
  with irqs disabled, which triggers:
    Badness at kernel/lockdep.c:2280
    Modules linked in:
    CPU: 0 Not tainted 2.6.30 #2
    Process swapper (pid: 1, task: 000000003fe88000, ksp: 000000003fe87eb8)
    Krnl PSW : 0400c00180000000 0000000000083360 (lockdep_trace_alloc+0xec/0xf8)
    [...]
    Call Trace:
    ([<00000000000832b6>] lockdep_trace_alloc+0x42/0xf8)
     [<00000000000b1880>] __alloc_pages_internal+0x3e8/0x5c4
     [<00000000000b1b4a>] __get_free_pages+0x3a/0xb0
     [<0000000000026546>] vdso_alloc_per_cpu+0x6a/0x18c
     [<00000000005eff82>] smp_prepare_cpus+0x322/0x594
     [<00000000005e8232>] kernel_init+0x76/0x398
     [<000000000001bb1e>] kernel_thread_starter+0x6/0xc
     [<000000000001bb18>] kernel_thread_starter+0x0/0xc
  Fix this by moving the allocation out of the irqs disabled section.
  Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] lockless idle time accounting (Martin Schwidefsky, 2009-06-22; 1 file, -9/+19)
  Replace the spinlock used in the idle time accounting with a sequence counter
  mechanism analogous to seqlock.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] s390: hibernation support for s390 (Hans-Joachim Picht, 2009-06-16; 1 file, -1/+37)
  This patch introduces the hibernation backend support for the s390 architecture.
  It is now possible to suspend a mainframe Linux guest using the following command:
    echo disk > /sys/power/state
  Signed-off-by: Hans-Joachim Picht <hans@linux.vnet.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] ftrace: add dynamic ftrace support (Heiko Carstens, 2009-06-12; 1 file, -0/+1)
  Dynamic ftrace support for s390.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] merge cpu.h into cputime.h (Martin Schwidefsky, 2009-06-12; 1 file, -1/+1)
  All definitions in cpu.h have to do with cputime accounting. Move them to
  cputime.h and remove the header file.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] smp: fix cpu_possible_map initialization (Heiko Carstens, 2009-04-14; 1 file, -1/+2)
  The cpu_possible_map is by default initialized with all ones on s390. If the kernel
  parameter possible_cpus=<x> is passed, the cpu_possible_map is supposed to have x bits
  set. However the current code just sets the x bits without clearing the NR_CPUS bits
  that were already set, so we end up with an unchanged map that has all bits set.
  To fix this, clear the map before setting any new bits.
  This broke with def6cfb70bab83c0094bc0cedd27c4eda563043e "[S390] cpumask: Use accessors code."
  Cc: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] s390: move machine flags to lowcore (Christian Ehrhardt, 2009-04-14; 1 file, -0/+1)
  Currently the machine flags are stored in a globally exported unsigned long long
  variable. By moving the storage location into the lowcore struct we allow assembler
  code to check machine_flags directly, without even needing a register. Additionally
  the lowcore, and therefore the machine flags too, will be in cache most of the time.
  Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cpumask: Use accessors code. (Rusty Russell, 2009-03-26; 1 file, -3/+2)
  Impact: use new API
  Use the accessors rather than frobbing bits directly. Most of this is in arch code
  I haven't even compiled, but is straightforward.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Mike Travis <travis@sgi.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cpumask: prepare for iterators to only go to nr_cpu_ids/nr_cpumask_bits. (Rusty Russell, 2009-03-26; 1 file, -9/+9)
  Impact: cleanup, futureproof
  In fact, all cpumask ops will only be valid (in general) for bit numbers < nr_cpu_ids.
  So use that instead of NR_CPUS in various places (I also updated the immediate sites
  to use the new cpumask_ operators). This is always safe: no cpu number can be
  >= nr_cpu_ids, and nr_cpu_ids is initialized to NR_CPUS at boot.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Mike Travis <travis@sgi.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
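  Illustration only, not code from this patch (do_something() and the cpumask variables
  are hypothetical): an open-coded loop scans all NR_CPUS bits, while the cpumask
  iterators stop at nr_cpu_ids.

      #include <linux/cpumask.h>

      /* Placeholder work function for the sketch. */
      static void do_something(int cpu)
      {
          (void)cpu;
      }

      /* Old style: scans all NR_CPUS bits, even those that can never be a cpu. */
      static void walk_old(cpumask_t map)
      {
          int cpu;

          for (cpu = 0; cpu < NR_CPUS; cpu++)
              if (cpu_isset(cpu, map))
                  do_something(cpu);
      }

      /* New style: the iterator never goes beyond nr_cpu_ids. */
      static void walk_new(const struct cpumask *map)
      {
          int cpu;

          for_each_cpu(cpu, map)
              do_something(cpu);
      }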
* [S390] smp: perform initial cpu reset before starting a cpu (Heiko Carstens, 2009-03-26; 1 file, -7/+16)
  Performing an initial cpu reset makes sure all registers and TLBs of the targeted
  cpu are initialized and flushed.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] smp: fix memory leak on __cpu_up (Heiko Carstens, 2009-03-26; 1 file, -3/+3)
  If sigp_set_prefix fails in __cpu_up we leak the lowcore structures and the
  async and panic stacks for the failed cpu.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] zfcpdump: Prevent zcore from being built as a kernel module. (Michael Holzheu, 2009-03-26; 1 file, -2/+2)
  The zcore code switches to real addressing mode when creating a kernel dump. This is
  not possible if zcore is built as a kernel module. With this patch zcore (zfcpdump)
  can no longer be built as a kernel module.
  Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] eliminate ipl_device from lowcore (Martin Schwidefsky, 2009-03-26; 1 file, -1/+0)
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] eliminate cpuinfo_S390 structure (Martin Schwidefsky, 2009-03-26; 1 file, -4/+4)
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] lockdep: trace hardirq off in smp_send_stop (Christian Borntraeger, 2009-03-26; 1 file, -0/+2)
  With lockdep we got the following trace after a panic:
    Badness at /home/autobuild/BUILD/linux-2.6.28-20090204/kernel/lockdep.c:2878
    [...]
    Call Trace:
    [<0000000000176334>] lock_acquire+0x54/0xbc
    [<000000000050b4fe>] __atomic_notifier_call_chain+0x6e/0xdc
    [<000000000050b59c>] atomic_notifier_call_chain+0x30/0x44
    [<0000000000504274>] panic+0xd0/0x1e8
    [...]
    INFO: lockdep is turned off.
    Last Breaking-Event-Address:
    [<0000000000170e62>] check_flags+0xae/0x15c
    possible reason: unannotated irqs-off.
  lockdep is right: we missed a trace_hardirq_off in our smp_send_stop function, and
  smp_send_stop is called before the panic call chain.
  Reported-by: Mijo Safradin <mijo@linux.vnet.ibm.com>
  Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Automatic IPL after dump (Frank Munzert, 2009-03-26; 1 file, -9/+0)
  Provide the new shutdown action "dump_reipl" for automatic ipl after dump.
  Signed-off-by: Frank Munzert <munzert@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] vdso: compile fix (Heiko Carstens, 2009-01-09; 1 file, -1/+2)
  With !CONFIG_SMP:
    arch/s390/kernel/vdso.c: In function 'vdso_init':
    arch/s390/kernel/vdso.c:325: error: incompatible type for argument 2 of 'vdso_alloc_per_cpu'
  Also move the code out of the BUG_ON statement since it won't be executed on
  !CONFIG_BUG. And that would be a bug.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* Merge branch 'cputime' of git://git390.osdl.marist.edu/pub/scm/linux-2.6 (Linus Torvalds, 2009-01-03; 1 file, -12/+22)
  * 'cputime' of git://git390.osdl.marist.edu/pub/scm/linux-2.6:
    [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID
    [PATCH] improve idle cputime accounting
    [PATCH] improve precision of idle time detection.
    [PATCH] improve precision of process accounting.
    [PATCH] idle cputime accounting
    [PATCH] fix scaled & unscaled cputime accounting
  * [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID (Martin Schwidefsky, 2008-12-31; 1 file, -0/+9)
    The extract cpu time (ectg) instruction allows a user process to get the current
    thread cputime without calling into the kernel. The code that uses the instruction
    needs to switch to access registers mode to get access to the per-cpu info page
    that contains the two base values needed to calculate the current cputime from the
    CPU timer with the ectg instruction.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  * [PATCH] improve precision of idle time detection. (Martin Schwidefsky, 2008-12-31; 1 file, -12/+13)
    Increase the precision of the idle time calculation that is exported to
    user space via /sys/devices/system/cpu/cpu<x>/idle_time_us
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* Merge branch 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2009-01-02; 1 file, -6/+0)
  * 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (66 commits)
    x86: export vector_used_by_percpu_irq
    x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and()
    sched: nominate preferred wakeup cpu, fix
    x86: fix lguest used_vectors breakage, -v2
    x86: fix warning in arch/x86/kernel/io_apic.c
    sched: fix warning in kernel/sched.c
    sched: move test_sd_parent() to an SMP section of sched.h
    sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0
    sched: activate active load balancing in new idle cpus
    sched: bias task wakeups to preferred semi-idle packages
    sched: nominate preferred wakeup cpu
    sched: favour lower logical cpu number for sched_mc balance
    sched: framework for sched_mc/smt_power_savings=N
    sched: convert BALANCE_FOR_xx_POWER to inline functions
    x86: use possible_cpus=NUM to extend the possible cpus allowed
    x86: fix cpu_mask_to_apicid_and to include cpu_online_mask
    x86: update io_apic.c to the new cpumask code
    x86: Introduce topology_core_cpumask()/topology_thread_cpumask()
    x86: xen: use smp_call_function_many()
    x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c
    ...
  Fixed up trivial conflict in kernel/time/tick-sched.c manually
  * cpumask: centralize cpu_online_map and cpu_possible_map (Rusty Russell, 2008-12-13; 1 file, -6/+0)
    Impact: cleanup
    Each SMP arch defines these themselves. Move them to a central location. Twists:
    1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a
       CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
    2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'. Those archs
       simply have phys_cpu_present_map replaced everywhere.
    3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky so I just
       manipulate them both in sync.
    4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map'
       declarations.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
    Tested-by: Tony Luck <tony.luck@intel.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: Mike Travis <travis@sgi.com>
    Cc: ink@jurassic.park.msu.ru, rmk@arm.linux.org.uk, starvik@axis.com,
        tony.luck@intel.com, takata@linux-m32r.org, ralf@linux-mips.org,
        grundler@parisc-linux.org, paulus@samba.org, schwidefsky@de.ibm.com,
        lethal@linux-sh.org, wli@holomorphy.com, davem@davemloft.net,
        jdike@addtoit.com, mingo@redhat.com
* [S390] convert cpu related printks to pr_xxx macros. (Martin Schwidefsky, 2008-12-25; 1 file, -9/+8)
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] panic_stack leak in smp_alloc_lowcore (Martin Schwidefsky, 2008-12-25; 1 file, -5/+2)
  Fix freeing of the panic_stack if the allocation of async_stack failed.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Remove config options. (Martin Schwidefsky, 2008-12-25; 1 file, -2/+0)
  On s390 we always want to run with precise cputime accounting. Remove the config
  options VIRT_TIMER and VIRT_CPU_ACCOUNTING.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] convert s390 to generic IPI infrastructure (Heiko Carstens, 2008-12-25; 1 file, -156/+19)
  Since etr/stp don't need the old smp_call_function semantics anymore we can convert
  s390 to the generic IPI infrastructure.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Fix sysdev class file creation. (Heiko Carstens, 2008-10-28; 1 file, -15/+9)
  Use sysdev_class_create_file() to create sysdev class attributes instead of
  sysfs_create_file(). Using sysfs_create_file() wasn't a very good idea, since the
  show and store functions take a different number of parameters for sysfs files and
  sysdev class files. In particular the pointer to the buffer is the last argument,
  so accesses to random memory regions happened. It still worked surprisingly well,
  until we got a kernel panic.
  Cc: stable@kernel.org
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* kernel/cpu.c: create a CPU_STARTING cpu_chain notifier (Manfred Spraul, 2008-09-08; 1 file, -0/+2)
  Right now there is no notifier that is called on a new cpu before the new cpu begins
  processing interrupts/softirqs. Various kernel functions need that notification;
  e.g. kvm works around it by calling smp_call_function_single() and rcu polls
  cpu_online_map.
  The patch adds a CPU_STARTING notification. It also adds a helper function that sends
  the message to all cpu_chain handlers.
  Tested on x86-64. All other archs are untested. Especially on sparc, I'm not sure if
  I got it right.
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
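  The helper in question is notify_cpu_starting(). A minimal sketch of where it sits in
  a secondary-cpu bringup path; example_start_secondary() is hypothetical and not the
  actual s390 start_secondary():

      #include <linux/cpu.h>          /* notify_cpu_starting() */
      #include <linux/cpumask.h>
      #include <linux/irqflags.h>
      #include <linux/smp.h>

      /* Hypothetical bringup path of a freshly started cpu. */
      static void example_start_secondary(void)
      {
          unsigned int cpu = smp_processor_id();

          /* ... low-level per-cpu setup, no interrupts/softirqs yet ... */

          notify_cpu_starting(cpu);       /* fire the new CPU_STARTING chain */

          cpu_set(cpu, cpu_online_map);   /* only now is the cpu marked online */
          local_irq_enable();
      }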
* [S390] Remove unneeded spinlock initialization. (Heiko Carstens, 2008-08-21; 1 file, -2/+0)
  Remove the now unneeded s390_idle.lock spinlock initialization, after Josef Sipek
  did it the right way in arch/s390/kernel/process.c.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* sysdev: Pass the attribute to the low level sysdev show/store function (Andi Kleen, 2008-07-21; 1 file, -12/+24)
  This allows attributes to be generated dynamically and show/store functions to be
  shared between attributes. Right now most attributes are generated by special macros
  and lots of duplicated code. With the attribute passed, it is instead possible to
  attach some data to the attribute and then use that in shared low level functions
  to do different things.
  I need this for the dynamically generated bank attributes in the x86 machine check
  code, but it will allow some further cleanups.
  I converted all users in the tree to the new show/store prototype. It is a single
  huge patch to avoid unbisectable sections.
  Runtime tested: x86-32, x86-64
  Compiled only: ia64, powerpc
  Not compile tested/only grep converted: sh, arm, avr32
  Signed-off-by: Andi Kleen <ak@linux.intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
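  A sketch of the new-style prototype (example_attr and example_show are made-up names,
  not part of the patch): with the attribute passed in, one show routine can serve
  several attributes by recovering per-attribute data via container_of().

      #include <linux/kernel.h>
      #include <linux/sysdev.h>

      /* Hypothetical attribute that carries its own payload. */
      struct example_attr {
          struct sysdev_attribute attr;
          int bank;                       /* per-attribute data */
      };

      /* New-style prototype: the attribute is now the second argument. */
      static ssize_t example_show(struct sys_device *dev,
                                  struct sysdev_attribute *attr, char *buf)
      {
          struct example_attr *ea = container_of(attr, struct example_attr, attr);

          return sprintf(buf, "%d\n", ea->bank);
      }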
* on_each_cpu(): kill unused 'retry' parameter (Jens Axboe, 2008-06-26; 1 file, -3/+3)
  It's not even passed on to smp_call_function() anymore, since that was removed.
  So kill it.
  Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* smp_call_function: get rid of the unused nonatomic/retry argument (Jens Axboe, 2008-06-26; 1 file, -10/+6)
  It's never used and the comments refer to nonatomic and retry interchangeably.
  So get rid of it.
  Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
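  After this change and the on_each_cpu() change above, the generic helpers take only a
  function, its argument and a wait flag. A minimal sketch of the resulting calling
  convention; do_flush() and flush_everywhere() are made-up names:

      #include <linux/smp.h>

      /*
       * Post-conversion prototypes, without the nonatomic/retry argument:
       *   int smp_call_function(void (*func)(void *), void *info, int wait);
       *   int on_each_cpu(void (*func)(void *), void *info, int wait);
       */

      /* Made-up callback; runs on every cpu, in interrupt context. */
      static void do_flush(void *unused)
      {
      }

      static void flush_everywhere(void)
      {
          on_each_cpu(do_flush, NULL, 1);   /* 1: wait until all cpus are done */
      }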
* [S390] Fix build failure in __cpu_up() (Segher Boessenkool, 2008-06-10; 1 file, -1/+1)
  The first argument to __ctl_store() should be the array to store stuff in, not just
  the first element of that array. With the current code in __cpu_up(), mainline GCC
  dies with an internal compiler error. I didn't diagnose that further, but just fixed
  the kernel bug.
  Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] Fix section mismatch warnings. (Heiko Carstens, 2008-05-30; 1 file, -1/+1)
  This fixes the last remaining section mismatch warnings in s390 architecture code.
  It also reveals a real bug introduced by... me, with git commit
  2069e978d5a6e7b45d58027e3de7f879b8c5e488 ("[S390] sparsemem vmemmap: initialize memmap.").
  Calling the generic vmemmap_alloc_block() function to get initialized memory is a
  nice idea, however that function is __meminit annotated and therefore might be gone
  if we try to call it later. This can happen if a DCSS segment gets added. So basically
  revert the patch and clear the memmap explicitly to fix the original bug.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] smp: __smp_call_function_map vs cpu_online_map fix. (Heiko Carstens, 2008-05-15; 1 file, -8/+8)
  Both smp_call_function() and __smp_call_function_map() access cpu_online_map. Both
  functions run with preemption disabled, which protects against cpus going offline.
  However new cpus can be added, and therefore the cpu_online_map can change
  unexpectedly. So use the call_lock to protect against changes to the cpu_online_map
  in start_secondary() and all smp_call_* functions.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Automatically detect added cpus. (Heiko Carstens, 2008-04-30; 1 file, -5/+14)
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] smp: Fix locking order. (Heiko Carstens, 2008-04-30; 1 file, -6/+6)
  In some smp sysfs store attributes get_online_cpus() may block on cpu_hotplug.lock
  while we already hold smp_cpu_state_mutex. Since the locking order on cpu hotplug via
  arch_update_cpu_topology is the inverse, this might lead to deadlocks. So make sure
  the locking order is always the same.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Fix a lot of sparse warnings. (Heiko Carstens, 2008-04-17; 1 file, -1/+2)
  The most notable part of this commit is the new local header file entry.h, which
  contains the declarations of all functions that are only called from asm code or are
  arch internal. That way we can avoid extern declarations in C files. This is more or
  less the same as was done for sparc64.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] Convert monitor calls to function calls. (Heiko Carstens, 2008-04-17; 1 file, -1/+0)
  Remove the program check generating monitor calls and use function calls instead.
  There is no real advantage in using monitor calls, but they do make debugging harder
  because of all the program checks they generate.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] Vertical cpu management. (Heiko Carstens, 2008-04-17; 1 file, -2/+81)
  If vertical cpu polarization is active, the hypervisor will dispatch certain cpus for
  a longer time than other cpus for maximum performance. For example, if a guest has
  three virtual cpus, each of them with a share of 33 percent, then in case of vertical
  cpu polarization all of the processing time would be combined to a single cpu which
  would run all the time, while the other two cpus would get nearly no cpu time.
  There are three different types of vertical cpus: high, medium and low. Low cpus
  hardly get any real cpu time, while high cpus get a full real cpu. Medium cpus get
  something in between.
  In order to switch between the two possible modes (default is horizontal), a 0 for
  horizontal polarization or a 1 for vertical polarization must be written to the
  dispatching sysfs attribute:
    /sys/devices/system/cpu/dispatching
  The polarization of each single cpu can be read from the polarization sysfs attribute
  of each cpu:
    /sys/devices/system/cpu/cpuX/polarization
  which reports horizontal, vertical:high, vertical:medium, vertical:low or unknown.
  When switching polarization the attribute may contain the value unknown until the
  configuration change is done and the kernel has figured out the new polarization of
  each cpu.
  Note that running a system with different types of vertical cpus may result in
  significant performance regressions. If possible, only one type of vertical cpu
  should be used; all other cpus should be offlined.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
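  A minimal userspace sketch of the interface described above; the sysfs paths are taken
  from the commit text, and error handling is kept to a bare minimum:

      #include <stdio.h>

      int main(void)
      {
          char buf[32];
          FILE *f;

          /* Request vertical polarization (1 = vertical, 0 = horizontal). */
          f = fopen("/sys/devices/system/cpu/dispatching", "w");
          if (!f)
              return 1;
          fputs("1\n", f);
          fclose(f);

          /* Read back the polarization of cpu0; it may report "unknown"
           * while the reconfiguration is still in progress. */
          f = fopen("/sys/devices/system/cpu/cpu0/polarization", "r");
          if (f && fgets(buf, sizeof(buf), f))
              printf("cpu0 polarization: %s", buf);
          if (f)
              fclose(f);
          return 0;
      }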
* [S390] cpu topology support for s390. (Heiko Carstens, 2008-04-17; 1 file, -3/+1)
  Add s390 backend so we can give the scheduler some hints about the cpu topology.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] Get rid of memcpy gcc warning workaround. (Heiko Carstens, 2008-03-05; 1 file, -8/+2)
  Compile smp.o with -Wno-nonnull so gcc stops warning about memcpy being used with a
  null parameter. Also remove the workaround code and use a char * cast instead of a
  void * cast to do computations.
  Cc: Bastian Blank <bastian@waldi.eu.org>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Initialize per cpu lowcores on cpu hotplug. (Heiko Carstens, 2008-02-19; 1 file, -15/+38)
  Just copy the first 512 read-only bytes of the current cpu's lowcore when a new cpu
  gets onlined. The rest is zeroed out and must be explicitly initialized. The current
  code just copies the entire lowcore and initializes the needed fields; the new scheme
  should reveal bugs in future enhancements quite early. Also, when the lowcore of the
  first cpu is replaced, this is now done atomically (no interrupts, no machine checks).
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Fix couple of section mismatches. (Heiko Carstens, 2008-02-05; 1 file, -3/+3)
  Fix a couple of section mismatches. And since we touch the code anyway, change the
  IPL code to use C99 initializers.
  Cc: Michael Holzheu <holzheu@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Fix smp_call_function_mask semantics. (Heiko Carstens, 2008-02-05; 1 file, -4/+3)
  Make sure func isn't called on the local cpu, just like on all other architectures
  that implement this function.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] replace lock_cpu_hotplug with get_online_cpus (Martin Schwidefsky, 2008-01-26; 1 file, -6/+6)
  Git commit 86ef5c9a8edd78e6bf92879f32329d89b2d55b5a forgot a few
  lock_cpu_hotplug/unlock_cpu_hotplug pairs in arch/s390/kernel/smp.c
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] add smp_call_function_mask (Carsten Otte, 2008-01-26; 1 file, -0/+27)
  This patch adds the s390 variant of smp_call_function_mask(). The implementation is
  pretty straightforward, using the wrapper __smp_call_function_map() which already
  takes a cpumask_t argument.
  Signed-off-by: Carsten Otte <cotte@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Allocate and free cpu lowcores and stacks when needed/possible. (Heiko Carstens, 2008-01-26; 1 file, -34/+72)
  No need to preallocate the per cpu lowcores and stacks. Savings are 28-32k per
  offline cpu.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>