| author | Dave Chinner <dchinner@redhat.com> | 2010-09-24 19:59:04 +1000 |
|---|---|---|
| committer | Alex Elder <aelder@sgi.com> | 2010-10-18 15:07:56 -0500 |
| commit | 74f75a0cb7033918eb0fa4a50df25091ac75c16e (patch) | |
| tree | 3885c0b357c760152d14df03ef88839fdbf5f964 /fs/xfs/xfs_mount.c | |
| parent | 69b491c214d7fd4d4df972ae5377be99ca3753db (diff) | |
xfs: convert buffer cache hash to rbtree
The buffer cache hash is showing typical hash scalability problems.
In large scale testing, the number of cached items grows far larger
than the hash can efficiently handle. Hence we need to move to a
self-scaling cache indexing mechanism.
I have selected rbtrees for indexing because they have O(log n)
search scalability, and insert and remove cost is not excessive,
even on large trees. Hence we should be able to cache large numbers
of buffers without incurring the excessive cache miss search
penalties that the hash is imposing on us.
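As a rough illustration of the O(log n) claim (a sketch, not code from the patch), a lookup in an rbtree keyed by disk block number is a standard Linux rbtree walk. The buffer field names used below (b_rbnode, b_bn) and the helper name are assumptions for illustration; the hunk at the bottom of this page only covers the per-AG initialisation.

```c
#include <linux/rbtree.h>

/*
 * Illustrative only: walk an rbtree of buffers keyed by disk block
 * number.  Field names (b_rbnode, b_bn) are assumed; the real patch
 * spans more files than the xfs_mount.c hunk shown on this page.
 */
static struct xfs_buf *
xfs_buf_rbtree_find(
	struct rb_root		*root,
	xfs_daddr_t		blkno)
{
	struct rb_node		*node = root->rb_node;

	while (node) {
		struct xfs_buf	*bp;

		bp = rb_entry(node, struct xfs_buf, b_rbnode);
		if (blkno < bp->b_bn)
			node = node->rb_left;	/* key is smaller: go left */
		else if (blkno > bp->b_bn)
			node = node->rb_right;	/* key is larger: go right */
		else
			return bp;		/* exact match */
	}
	return NULL;				/* cache miss */
}
```

Each comparison halves the remaining search space, which is what keeps cache misses cheap even when the cache holds very large numbers of buffers.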
To ensure we still have parallel access to the cache, we need
multiple trees. Rather than hashing the buffers by disk address to
select a tree, it seems more sensible to separate trees by typical
access patterns. Most operations use buffers from within a single AG
at a time, so rather than searching lots of different lists,
separate the buffer indexes out into per-AG rbtrees. This means that
searches during metadata operations have a much higher chance of
hitting cache resident nodes, and that updates of the tree are less
likely to disturb trees being accessed on other CPUs doing
independent operations.
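Concretely, the per-AG indexing amounts to giving each AG its own lock and tree root. The hunk at the end of this page initialises two such fields; a sketch of how they might sit in struct xfs_perag (all other members elided, layout assumed) looks like this:

```c
#include <linux/rbtree.h>
#include <linux/spinlock.h>

/*
 * Sketch only: per-AG buffer cache index.  The two fields below are
 * the ones initialised in the xfs_mount.c hunk; every other xfs_perag
 * member is elided here.
 */
struct xfs_perag {
	/* ... existing per-AG state ... */
	spinlock_t	pag_buf_lock;	/* protects pag_buf_tree */
	struct rb_root	pag_buf_tree;	/* cached buffers in this AG */
};
```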
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alex Elder <aelder@sgi.com>
Diffstat (limited to 'fs/xfs/xfs_mount.c')
-rw-r--r-- | fs/xfs/xfs_mount.c | 2 |
1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 59859c3..cfa2fb4 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -479,6 +479,8 @@ xfs_initialize_perag(
 		rwlock_init(&pag->pag_ici_lock);
 		mutex_init(&pag->pag_ici_reclaim_lock);
 		INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC);
+		spin_lock_init(&pag->pag_buf_lock);
+		pag->pag_buf_tree = RB_ROOT;
 		if (radix_tree_preload(GFP_NOFS))
 			goto out_unwind;
```
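For completeness, here is a hedged sketch of what an insertion into the new per-AG tree could look like once these fields are initialised. The helper name and the buffer fields (b_rbnode, b_bn) are illustrative assumptions; they are not part of the hunk above.

```c
/*
 * Illustrative only: insert a buffer into the per-AG rbtree under
 * pag_buf_lock.  Helper and buffer field names are assumed.
 */
static void
xfs_buf_rbtree_insert(
	struct xfs_perag	*pag,
	struct xfs_buf		*new)
{
	struct rb_node		**link, *parent = NULL;

	spin_lock(&pag->pag_buf_lock);
	link = &pag->pag_buf_tree.rb_node;
	while (*link) {
		struct xfs_buf	*bp;

		bp = rb_entry(*link, struct xfs_buf, b_rbnode);
		parent = *link;
		if (new->b_bn < bp->b_bn)
			link = &(*link)->rb_left;
		else
			link = &(*link)->rb_right;
	}
	rb_link_node(&new->b_rbnode, parent, link);		/* attach at leaf */
	rb_insert_color(&new->b_rbnode, &pag->pag_buf_tree);	/* rebalance */
	spin_unlock(&pag->pag_buf_lock);
}
```

Keeping the lock per AG means updates in one AG never contend with lookups or inserts in another, which is the parallelism the commit message is after.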