During synchronization the count register goes backwards for the master.
If synchronise_count_master() runs before synchronise_count_slave(), the
skew becomes even larger. This skew is very harmful for CPU hotplug: CPU0
synchronizes with CPU1, then CPU0 synchronizes with CPU2 and CPU0's count
goes backwards, so it ends up out of sync with CPU1.

After commit cf9bfe55f24973a8f40e2 ("MIPS: Synchronize MIPS count one
CPU at a time") we no longer need to evaluate count_reference at the
beginning of synchronise_count_master(), so we now evaluate the initcount
in the 2nd loop instead (count_reference appears to be redundant). Since
we write the count register in the last loop, no additional barriers are
needed (the existing memory barriers are enough).

Moreover, looping 3 times is enough to get a primed instruction cache,
and it also results in less skew than looping 5 times.

Comments are also updated in this patch.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/12163/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
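The loop structure described in this commit, as a minimal sketch: the
master/slave handshake and locking are elided, and count_sync_master_sketch()
is an illustrative name, not the actual arch/mips/kernel/sync-r4k.c function.
read_c0_count()/write_c0_count() are the usual CP0 accessors from
<asm/mipsregs.h>.

    #include <asm/mipsregs.h>   /* read_c0_count() / write_c0_count() */

    #define NR_LOOPS 3          /* 3 passes are enough to prime the I-cache */

    static void count_sync_master_sketch(void)
    {
            unsigned int initcount = 0;
            int i;

            for (i = 0; i < NR_LOOPS; i++) {
                    /* ... handshake: wait for the slave to arrive ... */

                    if (i == 1)                     /* 2nd pass: caches are warm */
                            initcount = read_c0_count();

                    if (i == NR_LOOPS - 1)          /* last pass: program the counter */
                            write_c0_count(initcount);

                    /* ... handshake: release the slave for this pass ... */
            }
    }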
|
Nobody is maintaining SMTC anymore and there also seems to be no user base.
That is a pity - the SMTC technology, primarily developed by Kevin D.
Kissell <kevink@paralogos.com>, is an ingenious demonstration of the MT
ASE's power and elegance.
Based on Markos Chandras' <Markos.Chandras@imgtec.com> patch
https://patchwork.linux-mips.org/patch/6719/, which, while very similar,
no longer applied cleanly when I tried to merge it, plus some additional
post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to
merge once upon a time.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
None of these files actually use any __init type directives and hence
don't need to include <linux/init.h>. Most of the includes are just left
over from the __devinit and __cpuinit removal, or are simply due to code
being copied from one driver to the next.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: John Crispin <blogic@openwrt.org>
Patchwork: http://patchwork.linux-mips.org/patch/6320/
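The pattern being removed looks roughly like this (a hypothetical file;
the point is simply that nothing in it carries an __init-style annotation,
so the include is dead weight):

    #include <linux/kernel.h>
    #include <linux/init.h>     /* unused: nothing below is __init/__exit, drop it */

    static int foo_setup(void)  /* plain function, not marked __init */
    {
            return 0;
    }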
|
commit 3747069b25e419f6b51395f48127e9812abc3596 upstream.
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. The fix in commit
5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good
example of the nasty type of bugs that can be created with improper
use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
Here, we remove all the MIPS __cpuinit from C code and __CPUINIT
from asm files. MIPS is interesting in this respect, because there
are also uasm users hiding behind their own renamed versions of the
__cpuinit macros.
[1] https://lkml.org/lkml/2013/5/20/589
[ralf@linux-mips.org: Folded in Paul's followup fix.]
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5494/
Patchwork: https://patchwork.linux-mips.org/patch/5495/
Patchwork: https://patchwork.linux-mips.org/patch/5509/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
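The conversion itself is mechanical; a hypothetical before/after is shown
below (start_secondary_cache() is an illustrative name, not a specific
kernel symbol):

    /*
     * Before the cleanup the function would have been placed in the
     * throwaway .cpuinit section:
     *
     *     static void __cpuinit start_secondary_cache(void) { ... }
     *
     * After the cleanup it is a plain function living in normal .text:
     */
    static void start_secondary_cache(void)
    {
            /* per-CPU setup that used to be marked __cpuinit */
    }

Assembly files get the equivalent treatment: the __CPUINIT section marker
is dropped (or replaced with .text), matching the C-side change.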
|
Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with this kind of patch trickling
in forever.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
The current implementation of synchronise_count_{master,slave} blocks
slave CPUs in early boot until all of them come up. This no longer
works because blocking a CPU with interrupts off after notifying the
CPU to be online causes problems with the current kernel.
Specifically, after the workqueue changes
(commit a08489c569dc1 "Pull workqueue changes from Tejun Heo")
the CPU_ONLINE notification callback workqueue_cpu_up_callback()
will hang on wait_for_completion(&idle_rebind.done), if the slave
CPUs are blocked in synchronise_count_slave().
The changes update synchronise_count_{master,slave}() to handle one CPU
at a time and call synchronise_count_master() in __cpu_up(), so that the
CPU_ONLINE notification goes out only after the COP0 COUNT register is
synchronized.
[ralf@linux-mips.org: This matters only to those few platforms which use
the cp0 counter as their clocksource, namely XLP, XLR and MIPS' CMP
solution.]
Signed-off-by: Jayachandran C <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/4216/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
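A simplified sketch of the resulting master-side bring-up flow (the actual
arch/mips/kernel/smp.c code has more steps; boot_secondary_cpu() is a
hypothetical helper standing in for the real platform hooks):

    #include <linux/cpumask.h>  /* cpu_online() */
    #include <linux/smp.h>

    extern void boot_secondary_cpu(unsigned int cpu);  /* hypothetical helper */
    extern void synchronise_count_master(int cpu);     /* sync-r4k.c, per-CPU variant */

    static int cpu_up_sketch(unsigned int cpu)
    {
            boot_secondary_cpu(cpu);            /* kick the new CPU */

            while (!cpu_online(cpu))            /* wait until it has checked in */
                    cpu_relax();

            synchronise_count_master(cpu);      /* sync COP0 COUNT with this CPU only */

            return 0;                           /* CPU_ONLINE notifiers run after this */
    }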
|
Since we have delayed irq enabling to ->smp_finish()
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Sergei Shtylyov <sshtylyov@mvista.com>
Cc: David Daney <david.daney@cavium.com>
Acked-by: David Daney <david.daney@cavium.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
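For context, the ->smp_finish() hook referred to here is the late point in
secondary-CPU bring-up where local interrupts get enabled. A hypothetical
platform callback (not a specific in-tree implementation) looks roughly
like this:

    static void example_smp_finish(void)
    {
            /* per-CPU clockevent / IPI setup would go here */
            local_irq_enable();     /* IRQs enabled only at this late stage */
    }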
|
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.
Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
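The helper being centralised can be expressed once, generically, in terms
of atomic_add_unless(); a sketch of the shared form (the guard follows the
usual pattern for overridable atomic helpers):

    /*
     * Roughly what ends up shared in <linux/atomic.h>: increment *v
     * unless it is 0, returning non-zero if the increment happened.
     */
    #ifndef atomic_inc_not_zero
    #define atomic_inc_not_zero(v)  atomic_add_unless((v), 1, 0)
    #endif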
|
This revises the sync-4k code so that it boots and operates after the
removal of expirelo from the timer code.
Signed-off-by: Tim Anderson <tanderson@mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>