author | Christoph Lameter <clameter@sgi.com> | 2008-04-14 19:13:29 +0300 |
---|---|---|
committer | Pekka Enberg <penberg@cs.helsinki.fi> | 2008-04-27 18:28:40 +0300 |
commit | c124f5b54f879e5870befcc076addbd5d614663f (patch) | |
tree | bedc4ce7ae68ea6de4c8aa6696b30801fedb15f6 /mm | |
parent | 9b2cd506e5f2117f94c28a0040bf5da058105316 (diff) | |
slub: pack objects denser
Since we now have more orders available, use a denser packing.
Increase the slab order if more than 1/16th of a slab would be wasted.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
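
For reference, the rule being tightened here is SLUB's leftover check: an allocation order is accepted only when the unused tail of the slab is at most slab_size / fraction, so raising the default fraction from 8 to 16 halves the tolerated waste. Below is a minimal userspace sketch of that rule, not the kernel's slab_order(); the 4 KiB page size and the 920-byte object size are illustrative assumptions.

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed page size for illustration */

/*
 * Hypothetical, simplified model of SLUB's waste rule: an allocation
 * order is acceptable when the space left over after packing whole
 * objects into the slab is at most slab_size / fraction.
 */
static int order_acceptable(unsigned long size, int order, int fraction)
{
	unsigned long slab_size = PAGE_SIZE << order;
	unsigned long waste = slab_size % size;	/* leftover bytes */

	return waste <= slab_size / fraction;
}

int main(void)
{
	unsigned long size = 920;	/* example object size in bytes */

	/* Old default: tolerate up to 1/8 of the slab as waste. */
	printf("order 0, fraction 8:  %s\n",
	       order_acceptable(size, 0, 8) ? "ok" : "too much waste");

	/* New default: tolerate only 1/16, pushing toward a denser, higher order. */
	printf("order 0, fraction 16: %s\n",
	       order_acceptable(size, 0, 16) ? "ok" : "too much waste");

	return 0;
}
```

With the 920-byte example, an order 0 slab wastes 416 of 4096 bytes: under the old 1/8 rule (512 bytes allowed) that passes, under the new 1/16 rule (256 bytes allowed) it does not, so the allocator keeps looking for a denser higher-order layout.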
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slub.c | 4 |
1 file changed, 2 insertions, 2 deletions
@@ -1818,7 +1818,7 @@ static int slub_nomerge;
  * system components. Generally order 0 allocations should be preferred since
  * order 0 does not cause fragmentation in the page allocator. Larger objects
  * be problematic to put into order 0 slabs because there may be too much
- * unused space left. We go to a higher order if more than 1/8th of the slab
+ * unused space left. We go to a higher order if more than 1/16th of the slab
  * would be wasted.
  *
  * In order to reach satisfactory performance we must ensure that a minimum
@@ -1883,7 +1883,7 @@ static inline int calculate_order(int size)
 	if (!min_objects)
 		min_objects = 4 * (fls(nr_cpu_ids) + 1);
 	while (min_objects > 1) {
-		fraction = 8;
+		fraction = 16;
 		while (fraction >= 4) {
 			order = slab_order(size, min_objects,
 						slub_max_order, fraction);
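
The second hunk matters because calculate_order() tries the strictest waste fraction first and only relaxes it, and then the minimum object count, when no order within slub_max_order qualifies. The following is a rough userspace sketch of that backoff, assuming a 4 KiB page and an order cap of 3; find_order() and pick_order() are simplified stand-ins, not the kernel's slab_order()/calculate_order().

```c
#include <stdio.h>

#define PAGE_SIZE      4096UL	/* assumed page size */
#define SLUB_MAX_ORDER 3	/* assumed default order cap */

/* Smallest order whose slab holds min_objects with waste <= slab_size/fraction. */
static int find_order(unsigned long size, int min_objects,
		      int max_order, int fraction)
{
	for (int order = 0; order <= max_order; order++) {
		unsigned long slab_size = PAGE_SIZE << order;

		if (slab_size < (unsigned long)min_objects * size)
			continue;			/* too few objects fit */
		if (slab_size % size <= slab_size / fraction)
			return order;			/* waste acceptable */
	}
	return max_order + 1;				/* nothing qualified */
}

/* Back off like calculate_order(): strictest fraction first, then fewer objects. */
static int pick_order(unsigned long size, int min_objects)
{
	while (min_objects > 1) {
		for (int fraction = 16; fraction >= 4; fraction /= 2) {
			int order = find_order(size, min_objects,
					       SLUB_MAX_ORDER, fraction);
			if (order <= SLUB_MAX_ORDER)
				return order;
		}
		min_objects /= 2;
	}
	return 0;	/* fall back to order 0 in this sketch */
}

int main(void)
{
	/* 4 is an illustrative min_objects value, not the computed default. */
	printf("chosen order for 920-byte objects: %d\n", pick_order(920, 4));
	return 0;
}
```

Starting the inner loop at 16 means denser layouts are preferred whenever they fit within the order cap; the fraction only drops back toward 8 and 4 for sizes that cannot meet the stricter bound.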