author | Paul Jackson <pj@sgi.com> | 2006-12-06 20:31:38 -0800 |
---|---|---|
committer | Linus Torvalds <torvalds@woody.osdl.org> | 2006-12-07 08:39:20 -0800 |
commit | 0798e5193cd70f6c867ec176d7730589f944c627 (patch) | |
tree | abe3ada0b04080729418a0c301d8b55b4363b56e /mm | |
parent | a2ce774096110ccc5c02cbdc05897d005fcd3db8 (diff) | |
download | op-kernel-dev-0798e5193cd70f6c867ec176d7730589f944c627.zip op-kernel-dev-0798e5193cd70f6c867ec176d7730589f944c627.tar.gz |
[PATCH] memory page alloc minor cleanups
- s/freeliest/freelist/ spelling fix
- The check for a NULL *z zone seems useless - even if it could happen, so
what? Perhaps we should check later on, when we are faced with an
allocation request that is not allowed to fail - shouldn't it be a
serious kernel error to pass an empty zonelist with a mandate not to
fail?
- Initializing 'z' to zonelist->zones can wait until after the first
get_page_from_freelist() fails; we only use 'z' in the wakeup_kswapd()
loop, so let's initialize 'z' there, in a 'for' loop. Seems clearer (see
the sketch after this list).
- Remove superfluous braces around a break
- Fix a couple of errant spaces
- Adjust indentation on the cpuset_zone_allowed() check, to match the
lines just before it -- seems easier to read in this case.
- Add another set of braces to the zone_watermark_ok logic
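The 'z' initialization item above boils down to replacing a do/while walk with a for loop. Below is a minimal userspace sketch of the two loop shapes; the struct zone/zonelist stubs and wakeup_kswapd_stub() are illustrative stand-ins, not the kernel definitions.

```c
#include <stdio.h>

/* Illustrative stand-ins for the kernel types; only the NULL-terminated
 * zones[] array that __alloc_pages() walks is modeled here. */
struct zone { const char *name; };
struct zonelist { struct zone *zones[4]; /* NULL-terminated */ };

/* Stand-in for wakeup_kswapd(); just records which zone would be kicked. */
static void wakeup_kswapd_stub(struct zone *z)
{
        printf("wake kswapd for zone %s\n", z->name);
}

int main(void)
{
        struct zone normal = { "Normal" }, dma = { "DMA" };
        struct zonelist zl = { { &normal, &dma, NULL } };
        struct zone **z;

        /* Old shape: 'z' has to be primed somewhere earlier, and the body
         * runs once before any termination test is made. */
        z = zl.zones;
        do {
                wakeup_kswapd_stub(*z);
        } while (*(++z));

        /* New shape from the patch: initialization, test and increment sit
         * together, and the loop reads as a plain walk over the zonelist. */
        for (z = zl.zones; *z; z++)
                wakeup_kswapd_stub(*z);

        return 0;
}
```

The for form also degrades gracefully if the zonelist turns out to be empty, which is what the backout below is concerned with.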
From: Paul Jackson <pj@sgi.com>
Back out one item from a previous "memory page_alloc minor cleanups" patch.
Until and unless we are certain that no one can ever pass an empty zonelist
to __alloc_pages(), this check for an empty zonelist (or some BUG
equivalent) is essential. The code in get_page_from_freelist() blows up if
passed an empty zonelist.
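A minimal userspace sketch of why that is; the struct stubs, get_page_shape() and the freepage field are assumptions for illustration, not the kernel code. The do/while body dereferences *z before any termination test, so a zonelist whose zones[0] is NULL is dereferenced on the very first pass, which is what the retained check in __alloc_pages() prevents.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel types; only the shape of the
 * zonelist walk in get_page_from_freelist() is modeled here. */
struct page { int id; };
struct zone { struct page *freepage; };
struct zonelist { struct zone *zones[3]; /* NULL-terminated */ };

/* Mirrors the loop shape of get_page_from_freelist(): *z is dereferenced
 * at the top of the do/while body, before any NULL check. */
static struct page *get_page_shape(struct zonelist *zonelist)
{
        struct zone **z = zonelist->zones;
        struct page *page = NULL;

        do {
                page = (*z)->freepage; /* crashes here if zones[0] == NULL */
                if (page)
                        break;
        } while (*(++z) != NULL);

        return page;
}

int main(void)
{
        struct zonelist empty = { { NULL } };

        /* The guard this backout keeps in __alloc_pages(): without it, the
         * call below would dereference a NULL zone on an empty zonelist. */
        if (*empty.zones == NULL) {
                fprintf(stderr, "empty zonelist, nothing to allocate from\n");
                return 1;
        }
        get_page_shape(&empty);
        return 0;
}
```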
Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/page_alloc.c | 20 |
1 file changed, 10 insertions, 10 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aa6fcc7..08360aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -486,7 +486,7 @@ static void free_one_page(struct zone *zone, struct page *page, int order)
         spin_lock(&zone->lock);
         zone->all_unreclaimable = 0;
         zone->pages_scanned = 0;
-        __free_one_page(page, zone ,order);
+        __free_one_page(page, zone, order);
         spin_unlock(&zone->lock);
 }

@@ -926,7 +926,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 }

 /*
- * get_page_from_freeliest goes through the zonelist trying to allocate
+ * get_page_from_freelist goes through the zonelist trying to allocate
  * a page.
  */
 static struct page *
@@ -948,8 +948,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order,
                         zone->zone_pgdat != zonelist->zones[0]->zone_pgdat))
                                 break;
                 if ((alloc_flags & ALLOC_CPUSET) &&
-                    !cpuset_zone_allowed(zone, gfp_mask))
-                        continue;
+                        !cpuset_zone_allowed(zone, gfp_mask))
+                                continue;

                 if (!(alloc_flags & ALLOC_NO_WATERMARKS)) {
                         unsigned long mark;
@@ -959,17 +959,18 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order,
                                 mark = zone->pages_low;
                         else
                                 mark = zone->pages_high;
-                        if (!zone_watermark_ok(zone , order, mark,
-                                    classzone_idx, alloc_flags))
+                        if (!zone_watermark_ok(zone, order, mark,
+                                    classzone_idx, alloc_flags)) {
                                 if (!zone_reclaim_mode ||
                                     !zone_reclaim(zone, gfp_mask, order))
                                         continue;
+                        }
                 }

                 page = buffered_rmqueue(zonelist, zone, order, gfp_mask);
-                if (page) {
+                if (page)
                         break;
-                }
+
         } while (*(++z) != NULL);
         return page;
 }
@@ -1005,9 +1006,8 @@ restart:
         if (page)
                 goto got_pg;

-        do {
+        for (z = zonelist->zones; *z; z++)
                 wakeup_kswapd(*z, order);
-        } while (*(++z));

         /*
          * OK, we're below the kswapd watermark and have kicked background