| author | Tejun Heo <tj@kernel.org> | 2012-01-10 15:08:28 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-01-10 16:30:45 -0800 |
| commit | 1ebb7044c9142c67d1d2b04d84010b4810a43fd8 | |
| tree | bfacd21f2aabf0b727d678797e2764d010918f29 /mm/mempool.c | |
| parent | 0565d317768cc66b13e37184f29d9f270c2886dc | |
mempool: fix first round failure behavior
mempool modifies gfp_mask so that the backing allocator doesn't try too
hard or trigger warning messages when there's a pool to fall back on. In
addition, for the first try, it removes __GFP_WAIT and IO, so that it
doesn't trigger reclaim or wait when the allocation can be fulfilled from
the pool; however, when that allocation fails and the pool is empty too, it
waits for the pool to be replenished before retrying.
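To make the mask handling concrete, the first-pass adjustment described above can be sketched as a small helper (illustrative only, distilled from this description; the real logic lives inline in mempool_alloc() in mm/mempool.c, and the helper name is invented here):

```c
#include <linux/gfp.h>

/*
 * Illustrative helper, not from the kernel tree: build the gfp mask that
 * mempool_alloc() uses for its first allocation attempt, per the
 * description above.
 */
static gfp_t mempool_first_try_gfp(gfp_t gfp_mask)
{
	gfp_mask |= __GFP_NOMEMALLOC;	/* don't dip into emergency reserves */
	gfp_mask |= __GFP_NORETRY;	/* don't loop in the page allocator */
	gfp_mask |= __GFP_NOWARN;	/* allocation failures here are expected */

	/* first pass: no reclaim, no IO -- the pool can cover the request */
	return gfp_mask & ~(__GFP_WAIT | __GFP_IO);
}
```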
An allocation which could have succeeded after a bit of reclaim therefore
has to wait on the reserved items, and it's not as if mempool never retries
with __GFP_WAIT and IO. It just does that *after* someone returns an
element, pointlessly delaying things.
Fix it by retrying immediately if the first round of allocation attempts
w/o __GFP_WAIT and IO fails.
[akpm@linux-foundation.org: shorten the lock hold time]
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/mempool.c')
-rw-r--r-- | mm/mempool.c | 13 |
1 file changed, 11 insertions, 2 deletions
```diff
diff --git a/mm/mempool.c b/mm/mempool.c
index e3a802a..d904981 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -221,14 +221,23 @@ repeat_alloc:
 		return element;
 	}
 
-	/* We must not sleep in the GFP_ATOMIC case */
+	/*
+	 * We use gfp mask w/o __GFP_WAIT or IO for the first round.  If
+	 * alloc failed with that and @pool was empty, retry immediately.
+	 */
+	if (gfp_temp != gfp_mask) {
+		spin_unlock_irqrestore(&pool->lock, flags);
+		gfp_temp = gfp_mask;
+		goto repeat_alloc;
+	}
+
+	/* We must not sleep if !__GFP_WAIT */
 	if (!(gfp_mask & __GFP_WAIT)) {
 		spin_unlock_irqrestore(&pool->lock, flags);
 		return NULL;
 	}
 
 	/* Let's wait for someone else to return an element to @pool */
-	gfp_temp = gfp_mask;
 	init_wait(&wait);
 	prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
 
```
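For context, a typical mempool consumer looks roughly like the sketch below (hypothetical driver-style code; the structure, cache and pool names are invented, but mempool_create(), mempool_alloc_slab/mempool_free_slab, mempool_alloc() and mempool_free() are the standard kernel APIs). With this patch, a mempool_alloc() call whose first, reclaim-free attempt fails and finds the pool empty retries immediately with the caller's full gfp mask instead of sleeping until another user returns an element.

```c
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

/* Hypothetical request structure and pool, for illustration only. */
struct my_request {
	int data;
};

static struct kmem_cache *my_req_cache;
static mempool_t *my_req_pool;

static int __init my_pool_init(void)
{
	my_req_cache = kmem_cache_create("my_request",
					 sizeof(struct my_request),
					 0, 0, NULL);
	if (!my_req_cache)
		return -ENOMEM;

	/* Reserve enough objects so that 4 requests can always be served. */
	my_req_pool = mempool_create(4, mempool_alloc_slab,
				     mempool_free_slab, my_req_cache);
	if (!my_req_pool) {
		kmem_cache_destroy(my_req_cache);
		return -ENOMEM;
	}
	return 0;
}

static void my_submit_request(void)
{
	/*
	 * GFP_NOIO includes __GFP_WAIT, so this never returns NULL: the
	 * first attempt is made without __GFP_WAIT/IO, and with this patch
	 * a failure there retries at once with the full mask before
	 * falling back to waiting on the pool's reserved elements.
	 */
	struct my_request *req = mempool_alloc(my_req_pool, GFP_NOIO);

	/* ... fill in and submit req ... */

	mempool_free(req, my_req_pool);
}
```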