path: root/fs/dcache.c
author    Nikolay Borisov <nborisov@suse.com>  2018-02-14 14:37:26 +0200
committer David Sterba <dsterba@suse.com>      2018-03-31 01:26:51 +0200
commit    2e32ef87b074cb8098436634b649b4b2b523acbe (patch)
tree      fdd5d24ad40034addd3bf1eb19c8a21e2d291e1e /fs/dcache.c
parent    7c829b722dffb22aaf9e3ea1b1d88dac49bd0768 (diff)
download  op-kernel-dev-2e32ef87b074cb8098436634b649b4b2b523acbe.zip
          op-kernel-dev-2e32ef87b074cb8098436634b649b4b2b523acbe.tar.gz
btrfs: Relax memory barrier in btrfs_tree_unlock
When performing an unlock on an extent buffer we'd like to order the decrement of extent_buffer::blocking_writers with waking up any waiters. In such situations it's sufficient to use smp_mb__after_atomic rather than the heavy smp_mb. On architectures where atomic operations are fully ordered (such as x86 or s390), unconditionally executing a heavyweight smp_mb instruction causes a severe performance hit while bringing no improvement in correctness. The better approach is to use the appropriate smp_mb__after_atomic routine, which will do the correct thing (invoke a full smp_mb or, in the case of fully ordered atomics, insert a compiler barrier). Put another way, an RMW atomic op + smp_mb__after_atomic is semantically equivalent to a full smp_mb. This ensures that none of the problems described in the accompanying comment of waitqueue_active occur.

No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'fs/dcache.c')
0 files changed, 0 insertions, 0 deletions