author    Eric Biggers <ebiggers@google.com>    2016-12-03 15:13:15 -0500
committer Theodore Ts'o <tytso@mit.edu>         2016-12-03 15:13:15 -0500
commit    918b7306edacbcc8a9ca318a5a34d73954e1705d (patch)
tree      ea77374a7f3da47a8327d75ef6f4f46f2089c2aa /fs/mbcache.c
parent    4db0d88e2ebc4f47092adc01f9885a43ad748995 (diff)
mbcache: correctly handle 'e_referenced' bit
mbcache entries have an 'e_referenced' bit which users can set with
mb_cache_entry_touch() to indicate that an entry should be given another pass
through the LRU list before the shrinker can delete it.  However, when
mb_cache_shrink() saw an e_referenced entry at the front of the list (the
least-recently-used end), it placed the entry right back at the front of the
list, so the next iteration removed it from the list and deleted it.
Consequently, e_referenced had essentially no effect, and ext2/ext4 xattr
blocks were sometimes not reused as often as expected.

Fix this by making the shrinker move e_referenced entries to the back of the
list rather than the front.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Diffstat (limited to 'fs/mbcache.c')
-rw-r--r--  fs/mbcache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index c5bd19f..31e54c2 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -286,7 +286,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 					 struct mb_cache_entry, e_list);
 		if (entry->e_referenced) {
 			entry->e_referenced = 0;
-			list_move_tail(&cache->c_list, &entry->e_list);
+			list_move_tail(&entry->e_list, &cache->c_list);
 			continue;
 		}
 		list_del_init(&entry->e_list);
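
The one-line change swaps the arguments to list_move_tail(): in the kernel's
list API, list_move_tail(list, head) detaches 'list' and re-inserts it just
before 'head', i.e. at the tail of the list headed by 'head'.  The old call
passed the list head (&cache->c_list) as the first argument, which merely
spliced the head back in front of the referenced entry and left that entry at
the front.  The following standalone sketch is not kernel code: it uses
simplified userspace helpers that mimic <linux/list.h> semantics, and the
three-entry LRU with names A/B/C is made up purely for illustration of the
corrected argument order.

/*
 * Standalone userspace illustration (not kernel code) of why the argument
 * order of list_move_tail() matters.  The helpers below mimic the semantics
 * of the kernel's <linux/list.h>: list_move_tail(entry, head) detaches
 * 'entry' and re-inserts it just before 'head', i.e. at the tail of the
 * list headed by 'head'.
 */
#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *prev, *next; };

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }

/* unlink 'e' from whatever list it is currently on */
static void __list_del_entry(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* insert 'e' just before 'head', i.e. at the tail of the list */
static void list_add_tail(struct list_head *e, struct list_head *head)
{
	e->prev = head->prev;
	e->next = head;
	head->prev->next = e;
	head->prev = e;
}

/* the call the patch fixes: move 'e' to the tail of the list at 'head' */
static void list_move_tail(struct list_head *e, struct list_head *head)
{
	__list_del_entry(e);
	list_add_tail(e, head);
}

struct entry {
	const char *name;
	struct list_head e_list;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void dump(const char *tag, struct list_head *head)
{
	struct list_head *p;

	printf("%s:", tag);
	for (p = head->next; p != head; p = p->next)
		printf(" %s", container_of(p, struct entry, e_list)->name);
	printf("\n");
}

int main(void)
{
	struct list_head lru;
	struct entry a = { "A", { NULL, NULL } };
	struct entry b = { "B", { NULL, NULL } };
	struct entry c = { "C", { NULL, NULL } };

	INIT_LIST_HEAD(&lru);
	list_add_tail(&a.e_list, &lru);		/* A is least recently used */
	list_add_tail(&b.e_list, &lru);
	list_add_tail(&c.e_list, &lru);
	dump("before", &lru);			/* before: A B C */

	/* the fixed argument order: referenced entry A goes to the back */
	list_move_tail(&a.e_list, &lru);
	dump("after", &lru);			/* after: B C A */

	return 0;
}

The demo prints the LRU order before and after the move (A B C becomes
B C A), which is the second pass through the list that e_referenced is meant
to grant.  With the old argument order, list_move_tail(&lru, &a.e_list) would
unlink the list head and splice it right back in front of A, leaving A at the
front to be freed on the next iteration, exactly as the commit message
describes.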