author | Chris Metcalf <cmetcalf@tilera.com> | 2013-09-06 08:56:45 -0400
---|---|---
committer | Chris Metcalf <cmetcalf@tilera.com> | 2013-09-06 13:06:25 -0400
commit | 6dc9658fa1af9f58d358692b68135f464c167e10 | (patch)
tree | a112036cab626e664489d87e40012d59331e1404 | /arch/tile/lib
parent | b40f451d56de69477a2244a0b4082f644699673f | (diff)
tile: rework <asm/cmpxchg.h>
The macrology in cmpxchg.h was designed to allow arbitrary pointer
and integer values to be passed through the routines. To support
cmpxchg() on 64-bit values on the 32-bit tilepro architecture, we
used the idiom "(typeof(val))(typeof(val-val))". This way, in the
"size 8" branch of the switch, when the underlying cmpxchg routine
returns a 64-bit quantity, we cast it first to a typeof(val-val)
quantity (i.e. size_t if "val" is a pointer) with no warnings about
casting between pointers and integers of different sizes, then cast
onwards to typeof(val), again with no warnings. If val is not a
pointer type, the additional cast is a no-op. We can't replace the
typeof(val-val) cast with (for example) unsigned long, since then if
"val" is really a 64-bit type, we cast away the high bits.
HOWEVER, this fails with current gcc (through 4.7 at least) if "val"
is a pointer to an incomplete type. Unfortunately gcc isn't smart
enough to realize that "val - val" will always be a size_t type
even if it's an incomplete type pointer.
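The failing case reduces to something like this (illustration only, not kernel code; it deliberately does not compile):

```c
struct opaque;                  /* incomplete type: declared, never defined */

void demo(struct opaque *p)
{
        /*
         * Rejected by gcc (through at least 4.7): evaluating
         * typeof(p - p) requires sizeof(struct opaque), even though
         * the result type would always be the same pointer-sized
         * integer regardless of what the pointer points to.
         */
        __typeof__(p - p) delta = 0;
        (void)delta;
}
```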
Accordingly, I've reworked the way we handle the casting. We have
given up the ability to use cmpxchg() on 64-bit values on tilepro,
which is OK in the kernel since we should use cmpxchg64() explicitly
on such values anyway. As a result, I can just use simple "unsigned
long" casts internally.
As I reworked it, I realized it would be cleaner to move the
architecture-specific conditionals for cmpxchg and xchg out of the
atomic.h headers and into cmpxchg.h, and then use the cmpxchg() and
xchg() primitives directly in atomic.h and elsewhere. This allowed
the cmpxchg.h header to stand on its own without relying on the
implicit include of it that is performed by <asm/atomic.h>.
It also allowed collapsing the atomic_xchg/atomic_cmpxchg routines
from atomic_{32,64}.h into atomic.h.
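After the move, the collapsed wrappers reduce to something like the following sketch (assuming the usual atomic_t with an int counter member; memory-ordering details omitted):

```c
static inline int atomic_xchg(atomic_t *v, int n)
{
        /* The generic xchg() now works directly on the counter word. */
        return xchg(&v->counter, n);
}

static inline int atomic_cmpxchg(atomic_t *v, int o, int n)
{
        /* Likewise for cmpxchg(); tilepro and tilegx share this. */
        return cmpxchg(&v->counter, o, n);
}
```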
I improved the checks that guard the allowed argument sizes for
these routines to use a __compiletime_error() test. (By avoiding
the use of BUILD_BUG, I could include cmpxchg.h into bitops.h as
well and use the macros there, which is otherwise impossible due
to include order dependency issues.)
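The guard itself is gcc's error attribute wrapped by __compiletime_error(); a minimal sketch (the helper's name and the accepted sizes here are illustrative, not necessarily the ones the patch uses):

```c
#include <linux/compiler.h>     /* provides __compiletime_error() */

/* Never defined: any call that survives constant folding fails the build. */
extern void __xchg_called_with_bad_pointer(void)
        __compiletime_error("Bad argument size for xchg");

#define check_xchg_size(ptr)                                            \
        do {                                                            \
                /* sizeof() is constant, so this folds away entirely */ \
                /* for the allowed sizes.                            */ \
                if (sizeof(*(ptr)) != 4 && sizeof(*(ptr)) != 8)         \
                        __xchg_called_with_bad_pointer();               \
        } while (0)
```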
The tilepro _atomic_xxx internal methods were previously set up to
take atomic_t and atomic64_t arguments, which isn't as convenient
with the new model, so I modified them to take int or u64 arguments.
This is consistent with how they used the arguments internally
anyway, and provided some nice simplification there too.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Diffstat (limited to 'arch/tile/lib')
-rw-r--r-- | arch/tile/lib/atomic_32.c | 34 |
1 file changed, 16 insertions, 18 deletions
```diff
diff --git a/arch/tile/lib/atomic_32.c b/arch/tile/lib/atomic_32.c
index 5d91d18..759efa3 100644
--- a/arch/tile/lib/atomic_32.c
+++ b/arch/tile/lib/atomic_32.c
@@ -59,33 +59,32 @@ static inline int *__atomic_setup(volatile void *v)
         return __atomic_hashed_lock(v);
 }
 
-int _atomic_xchg(atomic_t *v, int n)
+int _atomic_xchg(int *v, int n)
 {
-        return __atomic_xchg(&v->counter, __atomic_setup(v), n).val;
+        return __atomic_xchg(v, __atomic_setup(v), n).val;
 }
 EXPORT_SYMBOL(_atomic_xchg);
 
-int _atomic_xchg_add(atomic_t *v, int i)
+int _atomic_xchg_add(int *v, int i)
 {
-        return __atomic_xchg_add(&v->counter, __atomic_setup(v), i).val;
+        return __atomic_xchg_add(v, __atomic_setup(v), i).val;
 }
 EXPORT_SYMBOL(_atomic_xchg_add);
 
-int _atomic_xchg_add_unless(atomic_t *v, int a, int u)
+int _atomic_xchg_add_unless(int *v, int a, int u)
 {
         /*
          * Note: argument order is switched here since it is easier
          * to use the first argument consistently as the "old value"
          * in the assembly, as is done for _atomic_cmpxchg().
          */
-        return __atomic_xchg_add_unless(&v->counter, __atomic_setup(v), u, a)
-                .val;
+        return __atomic_xchg_add_unless(v, __atomic_setup(v), u, a).val;
 }
 EXPORT_SYMBOL(_atomic_xchg_add_unless);
 
-int _atomic_cmpxchg(atomic_t *v, int o, int n)
+int _atomic_cmpxchg(int *v, int o, int n)
 {
-        return __atomic_cmpxchg(&v->counter, __atomic_setup(v), o, n).val;
+        return __atomic_cmpxchg(v, __atomic_setup(v), o, n).val;
 }
 EXPORT_SYMBOL(_atomic_cmpxchg);
 
@@ -108,33 +107,32 @@ unsigned long _atomic_xor(volatile unsigned long *p, unsigned long mask)
 EXPORT_SYMBOL(_atomic_xor);
 
-u64 _atomic64_xchg(atomic64_t *v, u64 n)
+u64 _atomic64_xchg(u64 *v, u64 n)
 {
-        return __atomic64_xchg(&v->counter, __atomic_setup(v), n);
+        return __atomic64_xchg(v, __atomic_setup(v), n);
 }
 EXPORT_SYMBOL(_atomic64_xchg);
 
-u64 _atomic64_xchg_add(atomic64_t *v, u64 i)
+u64 _atomic64_xchg_add(u64 *v, u64 i)
 {
-        return __atomic64_xchg_add(&v->counter, __atomic_setup(v), i);
+        return __atomic64_xchg_add(v, __atomic_setup(v), i);
 }
 EXPORT_SYMBOL(_atomic64_xchg_add);
 
-u64 _atomic64_xchg_add_unless(atomic64_t *v, u64 a, u64 u)
+u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u)
 {
         /*
          * Note: argument order is switched here since it is easier
          * to use the first argument consistently as the "old value"
          * in the assembly, as is done for _atomic_cmpxchg().
          */
-        return __atomic64_xchg_add_unless(&v->counter, __atomic_setup(v),
-                                          u, a);
+        return __atomic64_xchg_add_unless(v, __atomic_setup(v), u, a);
 }
 EXPORT_SYMBOL(_atomic64_xchg_add_unless);
 
-u64 _atomic64_cmpxchg(atomic64_t *v, u64 o, u64 n)
+u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n)
 {
-        return __atomic64_cmpxchg(&v->counter, __atomic_setup(v), o, n);
+        return __atomic64_cmpxchg(v, __atomic_setup(v), o, n);
 }
 EXPORT_SYMBOL(_atomic64_cmpxchg);
```