path: root/fs/xfs/xfs_trans_priv.h
author    Dave Chinner <dchinner@redhat.com>    2011-09-30 04:45:03 +0000
committer Alex Elder <aelder@sgi.com>           2011-10-11 21:15:09 -0500
commit    670ce93fef93bba8c8a422a79747385bec8e846a (patch)
tree      2f358f3c38f847cd12caf5f5f1eb3c36d586c546 /fs/xfs/xfs_trans_priv.h
parent    3815832a2aa4df9815d15dac05227e0c8551833f (diff)
xfs: reduce the number of log forces from tail pushing
The AIL push code will issue a log force on every single push loop that it exits having encountered pinned items. It doesn't rescan these pinned items until it revisits the AIL from the start. Hence we only need to force the log once per walk from the start of the AIL to the target LSN. This results in numbers like this:

	xs_push_ail_flush.....	1456
	xs_log_force.........	1485

for an 8-way 50M inode create workload - almost all the log forces are coming from the AIL pushing code.

Reduce the number of log forces by only forcing the log if the previous walk found pinned buffers. This reduces the numbers to:

	xs_push_ail_flush.....	665
	xs_log_force.........	682

for the same test.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
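To make the mechanism behind the new xa_log_flush field concrete, here is a minimal, compilable sketch of the idea in plain C. It is not the actual fs/xfs/xfs_trans_ail.c code: the struct and helper names ail_sketch, ail_push_walk, log_force and item_is_pinned are hypothetical stand-ins; only xa_log_flush matches the field added in the diff below. The walk remembers whether it saw pinned items, and the next walk issues at most one log force, up front, only if the previous one did.

#include <stdbool.h>
#include <stdio.h>

struct ail_sketch {
	int	xa_log_flush;	/* pinned items seen on the previous walk */
};

/* stand-in for the real log force; here it just counts forces */
static int log_forces;

static void log_force(void)
{
	log_forces++;
}

/* stand-in for the "is this item pinned?" check in the push loop */
static bool item_is_pinned(int item)
{
	return item < 0;	/* arbitrary rule, just for the sketch */
}

static void ail_push_walk(struct ail_sketch *ailp, const int *items, int nr)
{
	int i;

	/*
	 * Force the log once, up front, and only if the previous walk
	 * left pinned items behind.  A walk that saw nothing pinned
	 * issues no force at all.
	 */
	if (ailp->xa_log_flush) {
		ailp->xa_log_flush = 0;
		log_force();
	}

	for (i = 0; i < nr; i++) {
		if (item_is_pinned(items[i])) {
			/* remember for the next walk instead of forcing now */
			ailp->xa_log_flush++;
			continue;
		}
		/* ... push the item to disk ... */
	}
}

int main(void)
{
	struct ail_sketch ail = { 0 };
	int items[] = { 1, -2, 3 };	/* one "pinned" item */

	ail_push_walk(&ail, items, 3);	/* no force: no prior walk saw pins */
	ail_push_walk(&ail, items, 3);	/* one force: last walk saw a pin */
	printf("log forces issued: %d\n", log_forces);
	return 0;
}

Running the sketch prints one log force for two walks over the same items, whereas forcing on every walk that meets a pinned item would have produced two - the same ratio of savings the commit message reports for the 8-way inode create workload.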
Diffstat (limited to 'fs/xfs/xfs_trans_priv.h')
-rw-r--r--   fs/xfs/xfs_trans_priv.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/fs/xfs/xfs_trans_priv.h b/fs/xfs/xfs_trans_priv.h
index 212946b..0a6eec6 100644
--- a/fs/xfs/xfs_trans_priv.h
+++ b/fs/xfs/xfs_trans_priv.h
@@ -71,6 +71,7 @@ struct xfs_ail {
 	struct delayed_work	xa_work;
 	xfs_lsn_t		xa_last_pushed_lsn;
 	unsigned long		xa_flags;
+	int			xa_log_flush;
 };
 
 #define XFS_AIL_PUSHING_BIT	0