author    | Paul Jackson <pj@sgi.com>             | 2005-09-06 15:18:12 -0700
committer | Linus Torvalds <torvalds@g5.osdl.org> | 2005-09-07 16:57:40 -0700
commit    | 9bf2229f8817677127a60c177aefce1badd22d7b (patch)
tree      | 06e95863a26b197233081db1dafd869dfd231950 /mm/page_alloc.c
parent    | f90b1d2f1aaaa40c6519a32e69615edc25bb97d5 (diff)
[PATCH] cpusets: formalize intermediate GFP_KERNEL containment
This patch makes use of the previously underutilized cpuset flag
'mem_exclusive' to provide what amounts to another layer of memory placement
resolution. With this patch, there are now the following four layers of
memory placement available:
1) The whole system (interrupt and GFP_ATOMIC allocations can use this),
2) The nearest enclosing mem_exclusive cpuset (GFP_KERNEL allocations can use),
3) The current task's cpuset (GFP_USER allocations are constrained to here), and
4) Specific node placement, using mbind and set_mempolicy.
These nest: each layer is a subset (the same as, or contained within) of the previous.
Layer (2) above is new with this patch. The call used to check whether a zone
(its node, actually) is in a cpuset (in its mems_allowed, actually),
cpuset_zone_allowed(), is extended to take a gfp_mask argument, and its logic is
extended so that, when __GFP_HARDWALL is not set in the flag bits, it looks up
the cpuset hierarchy for the nearest enclosing mem_exclusive cpuset to determine
whether placement is allowed. The definition of GFP_USER, which used to be
identical to GFP_KERNEL, was changed in the previous cpuset_gfp_hardwall_flag
patch to also set the __GFP_HARDWALL bit.
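For reference, the relevant flag definitions after that preceding patch look
roughly as follows (abbreviated from include/linux/gfp.h of that era; only the
combinations relevant to the layering above are shown):

    #define GFP_ATOMIC	(__GFP_HIGH)
    #define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)
    #define GFP_USER	(__GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
    #define GFP_HIGHUSER	(__GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_HARDWALL | \
    			 __GFP_HIGHMEM)

Only __GFP_HARDWALL matters for the layering: allocations carrying it (GFP_USER,
GFP_HIGHUSER) are confined to layer (3), while allocations without it may fall
back to layer (2).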
GFP_ATOMIC and GFP_KERNEL allocations will stay within the current task's
cpuset, so long as some node therein is not too tight on memory, but will escape
to a larger layer if need be.
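To make that escape path concrete, here is a simplified sketch of the extended
check on the kernel/cpuset.c side of this series. It is an abbreviation, not the
verbatim kernel code: locking, the exact interrupt-context handling, and the
helper name nearest_exclusive_ancestor() are shorthand for what the real
implementation does.

    /* Sketch only -- abbreviated from the kernel/cpuset.c side of this series. */
    static const struct cpuset *nearest_exclusive_ancestor(const struct cpuset *cs)
    {
    	/* Walk up until a mem_exclusive cpuset is found (the root qualifies). */
    	while (!is_mem_exclusive(cs))
    		cs = cs->parent;
    	return cs;
    }

    int cpuset_zone_allowed(struct zone *z, unsigned int gfp_mask)
    {
    	int node = z->zone_pgdat->node_id;
    	const struct cpuset *cs;

    	if (in_interrupt())
    		return 1;		/* layer (1): whole system */
    	if (node_isset(node, current->mems_allowed))
    		return 1;		/* layer (3): task's cpuset */
    	if (gfp_mask & __GFP_HARDWALL)
    		return 0;		/* hardwall: no escape past layer (3) */

    	/* layer (2): nearest enclosing mem_exclusive cpuset */
    	cs = nearest_exclusive_ancestor(current->cpuset);
    	return node_isset(node, cs->mems_allowed);
    }

The mm/page_alloc.c hunks below then choose, per scan of the zonelist, whether
to pass the hardwall-forcing __GFP_HARDWALL or the caller's gfp_mask to this
check, which is what produces the layered fallback described above.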
The intended use is to allow something like a batch manager to handle several
jobs, each job in its own cpuset, but using common kernel memory for caches
and such. Swapper and oom_kill activity is also constrained to Layer (2). A
task in or below one mem_exclusive cpuset should not cause swapping on nodes
in another non-overlapping mem_exclusive cpuset, nor provoke oom_killing of a
task in another such cpuset. Heavy use of kernel memory for i/o caching and
such by one job should not impact the memory available to jobs in other
non-overlapping mem_exclusive cpusets.
This patch makes it possible to provide hardwall, inescapable cpusets for each
job's memory allocations, while sharing kernel memory allocations among several
jobs within an enclosing mem_exclusive cpuset.
Like Dinakar's earlier patch enabling the administration of sched domains via
the cpu_exclusive flag, this patch gives a useful meaning to a cpuset flag that
previously did little beyond restricting which cpuset configurations were
allowed.
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r-- | mm/page_alloc.c | 16 |
1 files changed, 10 insertions, 6 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 14d7032..3974fd8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -806,11 +806,14 @@ __alloc_pages(unsigned int __nocast gfp_mask, unsigned int order,
 	classzone_idx = zone_idx(zones[0]);
 
 restart:
-	/* Go through the zonelist once, looking for a zone with enough free */
+	/*
+	 * Go through the zonelist once, looking for a zone with enough free.
+	 * See also cpuset_zone_allowed() comment in kernel/cpuset.c.
+	 */
 	for (i = 0; (z = zones[i]) != NULL; i++) {
 		int do_reclaim = should_reclaim_zone(z, gfp_mask);
 
-		if (!cpuset_zone_allowed(z))
+		if (!cpuset_zone_allowed(z, __GFP_HARDWALL))
 			continue;
 
 		/*
@@ -845,6 +848,7 @@ zone_reclaim_retry:
 	 *
 	 * This is the last chance, in general, before the goto nopage.
 	 * Ignore cpuset if GFP_ATOMIC (!wait) rather than fail alloc.
+	 * See also cpuset_zone_allowed() comment in kernel/cpuset.c.
 	 */
 	for (i = 0; (z = zones[i]) != NULL; i++) {
 		if (!zone_watermark_ok(z, order, z->pages_min,
@@ -852,7 +856,7 @@ zone_reclaim_retry:
 				       gfp_mask & __GFP_HIGH))
 			continue;
 
-		if (wait && !cpuset_zone_allowed(z))
+		if (wait && !cpuset_zone_allowed(z, gfp_mask))
 			continue;
 
 		page = buffered_rmqueue(z, order, gfp_mask);
@@ -867,7 +871,7 @@ zone_reclaim_retry:
 	if (!(gfp_mask & __GFP_NOMEMALLOC)) {
 		/* go through the zonelist yet again, ignoring mins */
 		for (i = 0; (z = zones[i]) != NULL; i++) {
-			if (!cpuset_zone_allowed(z))
+			if (!cpuset_zone_allowed(z, gfp_mask))
 				continue;
 			page = buffered_rmqueue(z, order, gfp_mask);
 			if (page)
@@ -903,7 +907,7 @@ rebalance:
 				       gfp_mask & __GFP_HIGH))
 			continue;
 
-		if (!cpuset_zone_allowed(z))
+		if (!cpuset_zone_allowed(z, gfp_mask))
 			continue;
 
 		page = buffered_rmqueue(z, order, gfp_mask);
@@ -922,7 +926,7 @@ rebalance:
 				classzone_idx, 0, 0))
 			continue;
 
-		if (!cpuset_zone_allowed(z))
+		if (!cpuset_zone_allowed(z, __GFP_HARDWALL))
 			continue;
 
 		page = buffered_rmqueue(z, order, gfp_mask);