author     Jan Kara <jack@suse.cz>                 2011-07-01 13:31:25 -0600
committer  Wu Fengguang <fengguang.wu@intel.com>   2011-07-24 10:51:52 +0800
commit     bcff25fc8aa47a13faff8b4b992589813f7b450a
tree       ae93e2b8ba1417bf6327f79154c69b9afc8328bb /mm/rmap.c
parent     fcc5c22218a18509a7412bf074fc9a7a5d874a8a
mm: properly reflect task dirty limits in dirty_exceeded logic
We set bdi->dirty_exceeded (and thus the ratelimiting code starts to
call balance_dirty_pages() every 8 pages) when the per-bdi limit is
exceeded or the global limit is exceeded. But the per-bdi limit also
depends on the task. Thus different tasks reach the limit on that bdi at
different levels of dirty pages. The result is that with the current code
bdi->dirty_exceeded ping-pongs between 1 and 0 depending on which task
just got into balance_dirty_pages().
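To illustrate the ping-pong, here is a minimal userspace model of the old
set/clear logic; the struct and function names below are simplified
stand-ins, not the actual code in mm/page-writeback.c. Two tasks evaluate
the same bdi dirty count against their own task-adjusted thresholds, so
consecutive calls flip the flag:

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of a backing_dev_info, not the real kernel struct. */
struct bdi_model {
	unsigned long dirty;       /* dirty + writeback pages on this bdi */
	unsigned long bdi_thresh;  /* unadjusted per-bdi dirty limit      */
	bool dirty_exceeded;
};

/*
 * Old behaviour: dirty_exceeded is both set and cleared against the
 * caller's task-adjusted threshold.
 */
static void old_balance_dirty_pages(struct bdi_model *bdi,
				    unsigned long task_thresh)
{
	bdi->dirty_exceeded = bdi->dirty > task_thresh;
}

int main(void)
{
	struct bdi_model bdi = { .dirty = 900, .bdi_thresh = 1000 };

	/* Task A's adjusted threshold is below the current dirty count... */
	old_balance_dirty_pages(&bdi, 850);
	printf("after task A: dirty_exceeded=%d\n", bdi.dirty_exceeded);  /* 1 */

	/* ...task B's is above it, so the flag flips right back to 0. */
	old_balance_dirty_pages(&bdi, 950);
	printf("after task B: dirty_exceeded=%d\n", bdi.dirty_exceeded);  /* 0 */
	return 0;
}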
We fix the issue by clearing bdi->dirty_exceeded only when the per-bdi amount
of dirty pages drops below a threshold (7/8 * bdi_dirty_limit) at which task
limits no longer have any influence.
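A sketch of the changed clearing condition, again as a simplified model
with assumed names rather than the actual patch: the flag is still set
from the task-adjusted (or global) check, but it is cleared only once the
bdi-wide dirty count drops below 7/8 of the unadjusted per-bdi limit,
where task adjustments no longer matter.

#include <stdbool.h>

/* Same simplified bdi model as above, repeated so this compiles alone. */
struct bdi_model {
	unsigned long dirty;
	unsigned long bdi_thresh;
	bool dirty_exceeded;
};

static void new_balance_dirty_pages(struct bdi_model *bdi,
				    unsigned long task_thresh,
				    bool global_exceeded)
{
	if (bdi->dirty > task_thresh || global_exceeded) {
		bdi->dirty_exceeded = true;
	} else if (bdi->dirty < bdi->bdi_thresh * 7 / 8) {
		/* Clear only below 7/8 * bdi_dirty_limit: hysteresis that
		 * keeps differing task thresholds from toggling the flag. */
		bdi->dirty_exceeded = false;
	}
}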
Impact: The end result is that dirty pages are kept more tightly under
control, with the average number slightly lower than before. This
reduces the risk of throttling light dirtiers and hence makes them more
responsive. However, it may add overhead by forcing balance_dirty_pages()
to be called every 8 pages when there are two or more heavy dirtiers.
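For reference, a sketch of how the flag tightens the ratelimiting
interval; the name ratelimit_pages and the value 1024 are illustrative
assumptions, only the figure of 8 pages comes from the text above:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative batch size for tasks on bdis under their dirty limit. */
static unsigned long ratelimit_pages = 1024;

/* When the bdi is over its limit, re-enter balance_dirty_pages()
 * every 8 dirtied pages instead of every ratelimit_pages pages. */
static unsigned long dirty_ratelimit(bool dirty_exceeded)
{
	return dirty_exceeded ? 8 : ratelimit_pages;
}

int main(void)
{
	printf("normal:   every %lu pages\n", dirty_ratelimit(false));
	printf("exceeded: every %lu pages\n", dirty_ratelimit(true));
	return 0;
}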
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: Dave Chinner <david@fromorbit.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Diffstat (limited to 'mm/rmap.c')
0 files changed, 0 insertions, 0 deletions