path: root/mm
Commit log (each entry: subject, author, date; files changed, lines -/+)
* Merge branch 'core-memblock-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2010-10-21; 4 files, -316/+631)
      * 'core-memblock-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (74 commits)
        x86-64: Only set max_pfn_mapped to 512 MiB if we enter via head_64.S
        xen: Cope with unmapped pages when initializing kernel pagetable
        memblock, bootmem: Round pfn properly for memory and reserved regions
        memblock: Annotate memblock functions with __init_memblock
        memblock: Allow memblock_init to be called early
        memblock/arm: Fix memblock_region_is_memory() typo
        x86, memblock: Remove __memblock_x86_find_in_range_size()
        memblock: Fix wraparound in find_region()
        x86-32, memblock: Make add_highpages honor early reserved ranges
        x86, memblock: Fix crashkernel allocation
        arm, memblock: Fix the sparsemem build
        memblock: Fix section mismatch warnings
        powerpc, memblock: Fix memblock API change fallout
        memblock, microblaze: Fix memblock API change fallout
        x86: Remove old bootmem code
        x86, memblock: Use memblock_memory_size()/memblock_free_memory_size() to get correct dma_reserve
        x86: Remove not used early_res code
        x86, memblock: Replace e820_/_early string with memblock_
        x86: Use memblock to replace early_res
        x86, memblock: Use memblock_debug to control debug message print out
        ...
      Fix up trivial conflicts in arch/x86/kernel/setup.c and kernel/Makefile
| * Merge branch 'x86/urgent' into core/memblock (H. Peter Anvin, 2010-10-11; 3 files, -11/+15)
      Reason for merge: forward-port an urgent change to arch/x86/mm/srat_64.c to the
      memblock tree.
      Resolved conflicts: arch/x86/mm/srat_64.c
      Originally-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | memblock: Annotate memblock functions with __init_memblock (Yinghai Lu, 2010-10-11; 1 file, -2/+2)
      Stephen found:
        WARNING: mm/built-in.o(.text+0x25ab8): Section mismatch in reference from the
        function memblock_find_base() to the function .init.text:memblock_find_region()
        The function memblock_find_base() references the function __init
        memblock_find_region(). This is often because memblock_find_base lacks a __init
        annotation or the annotation of memblock_find_region is wrong.
      So make memblock_find_region() use __init_memblock instead of __init. Also fix one
      function that had no __init* annotation at all to be __init_memblock.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4CB366B1.40405@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | memblock: Allow memblock_init to be called early (Jeremy Fitzhardinge, 2010-10-11; 1 file, -0/+6)
      The Xen setup code needs to call memblock_x86_reserve_range() very early, so allow it
      to initialize the memblock subsystem before doing so. The second memblock_init() is
      ignored.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      LKML-Reference: <4CACFDAD.3090900@goop.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
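      A minimal sketch of the idempotence described above ("the second memblock_init() is
      ignored"); the guard variable is an assumption, not the literal patch:

        void memblock_init(void)
        {
                static int memblock_initialized;        /* assumed guard */

                if (memblock_initialized)
                        return;
                memblock_initialized = 1;
                /* ... set up the initial memory/reserved region arrays ... */
        }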
| * | Merge commit 'v2.6.36-rc7' into core/memblock (Ingo Molnar, 2010-10-08; 20 files, -181/+269)
      Merge reason: update from -rc3 to -rc7.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | memblock: Fix wraparound in find_region() (Yinghai Lu, 2010-10-05; 1 file, -1/+6)
      When trying to find a huge range for crashkernel, we get:
        [ 0.000000] ------------[ cut here ]------------
        [ 0.000000] WARNING: at arch/x86/mm/memblock.c:248 memblock_x86_reserve_range+0x40/0x7a()
        [ 0.000000] Hardware name: Sun Fire x4800
        [ 0.000000] memblock_x86_reserve_range: wrong range [0xffffffff37000000, 0x137000000)
        [ 0.000000] Modules linked in:
        [ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.36-rc5-tip-yh-01876-g1cac214-dirty #59
        [ 0.000000] Call Trace:
        [ 0.000000] [<ffffffff82816f7e>] ? memblock_x86_reserve_range+0x40/0x7a
        [ 0.000000] [<ffffffff81078c2d>] warn_slowpath_common+0x85/0x9e
        [ 0.000000] [<ffffffff81078d38>] warn_slowpath_fmt+0x6e/0x70
        [ 0.000000] [<ffffffff8281e77c>] ? memblock_find_region+0x40/0x78
        [ 0.000000] [<ffffffff8281eb1f>] ? memblock_find_base+0x9a/0xb9
        [ 0.000000] [<ffffffff82816f7e>] memblock_x86_reserve_range+0x40/0x7a
        [ 0.000000] [<ffffffff8280452c>] setup_arch+0x99d/0xb2a
        [ 0.000000] [<ffffffff810a3e02>] ? trace_hardirqs_off+0xd/0xf
        [ 0.000000] [<ffffffff81cec7d8>] ? _raw_spin_unlock_irqrestore+0x3d/0x4c
        [ 0.000000] [<ffffffff827ffcec>] start_kernel+0xde/0x3f1
        [ 0.000000] [<ffffffff827ff2d4>] x86_64_start_reservations+0xa0/0xa4
        [ 0.000000] [<ffffffff827ff3de>] x86_64_start_kernel+0x106/0x10d
        [ 0.000000] ---[ end trace a7919e7f17c0a725 ]---
        [ 0.000000] Reserving 8192MB of memory at 17592186041200MB for crashkernel (System RAM: 526336MB)
      This is caused by a wraparound in the test due to size > end; explicitly check for
      this condition and fail.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4CAA4DD3.1080401@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
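      A minimal userspace sketch of the wraparound-safe check this entry describes
      (hypothetical names; the real patch touches memblock_find_region()):

        #include <stdint.h>
        #include <stdio.h>

        #define MEMBLOCK_ERROR 0

        /* Scanning downward for `size` bytes at or below `end`: if size > end,
         * the unsigned subtraction end - size wraps to a huge address, which is
         * exactly the bogus 0xffffffff37000000 range in the warning above.
         * Reject that case explicitly. `align` must be a power of two. */
        static uint64_t find_region(uint64_t start, uint64_t end,
                                    uint64_t size, uint64_t align)
        {
                if (size > end)                         /* would wrap around */
                        return MEMBLOCK_ERROR;
                uint64_t base = (end - size) & ~(align - 1);
                if (base < start)
                        return MEMBLOCK_ERROR;
                return base;
        }

        int main(void)
        {
                /* an 8 GiB request against an ~880 MiB window now fails cleanly */
                printf("%#llx\n", (unsigned long long)
                       find_region(0x1000, 0x37000000, 0x200000000ULL, 0x1000));
                return 0;
        }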
| * | | memblock: Fix section mismatch warnings (Yinghai Lu, 2010-09-15; 1 file, -7/+7)
      Stephen found a bunch of section mismatch warnings with the new memblock changes.
      Use __init_memblock to replace __init in memblock.c and remove __init in memblock.h;
      we should not use __init in header files.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Yinghai Lu <Yinghai@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      LKML-Reference: <4C912709.2090201@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | Merge commit 'v2.6.36-rc3' into x86/memblock (Ingo Molnar, 2010-08-31; 32 files, -1520/+2369)
      Conflicts:
        arch/x86/kernel/trampoline.c
        mm/memblock.c
      Merge reason: resolve the conflicts, update to latest upstream.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | | x86, memblock: Replace e820_/_early string with memblock_ (Yinghai Lu, 2010-08-27; 1 file, -2/+2)
      1. Include linux/memblock.h directly, so e820.h references can be reduced later.
      2. This patch was done mainly with sed scripts.
      -v2: use MEMBLOCK_ERROR instead of -1ULL or -1UL
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | | x86: Use memblock to replace early_res (Yinghai Lu, 2010-08-27; 3 files, -46/+18)
      1. Replace find_e820_area with memblock_find_in_range.
      2. Replace reserve_early with memblock_x86_reserve_range.
      3. Replace free_early with memblock_x86_free_range.
      4. NO_BOOTMEM will switch to use memblock too.
      5. Use _e820/_early wrappers in this patch; a following patch will replace them all.
      6. Because memblock_x86_free_range supports partial free, we can remove some
         special-case code.
      7. Make sure that memblock_find_in_range() is called after memblock_x86_fill(), so
         adjust some calls later in setup.c::setup_arch(): corruption_check and
         mptable_update.
      -v2: Move reserve_brk() earlier, before fill_memblock_area(), to avoid an overlap
           between brk and memblock_find_in_range(). With more than 128 RAM entries in the
           E820 table, memblock_x86_fill() could use memblock_find_in_range() to find a new
           place for the memblock.memory.region array, and we don't need extend_brk() after
           fill_memblock_area().
      -v3: Move find_smp_config earlier, to make sure memblock_find_in_range does not find
           the wrong place if the BIOS doesn't put the mptable in the right place.
      -v4: Treat RESERVED_KERN as RAM in memblock.memory (those ranges are already in
           memblock.reserved). Use __NOT_KEEP_MEMBLOCK to make sure memblock-related code
           can be freed later.
      -v5: The generic __memblock_find_in_range() goes from high to low, and the 32-bit
           active_region includes high pages, so replace the limit with
           memblock.default_alloc_limit, aka get_max_mapped().
      -v6: Use current_limit instead.
      -v7: Check against MEMBLOCK_ERROR instead of -1ULL or -1L.
      -v8: Set memblock_can_resize early to handle EFI with more RAM entries.
      -v9: Update after the kmemleak changes in mainline.
      Suggested-by: David S. Miller <davem@davemloft.net>
      Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | | memblock: Add find_memory_core_early() (Yinghai Lu, 2010-08-27; 1 file, -0/+36)
      Walk the node ranges in early_node_map[] and use __memblock_find_in_range to find a
      free range. It will be used by memblock_x86_find_in_range_node(), which in turn will
      be used to find the right buffer for NODE_DATA.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | | bootmem, x86: Add weak version of reserve_bootmem_generic (Yinghai Lu, 2010-08-27; 1 file, -0/+6)
      It will be used for the memblock_x86_to_bootmem conversion. It is a wrapper for
      reserve_bootmem, and x86-64 uses a special version. Also clean up that version for
      x86_64; we don't need to take care of the NUMA path there, bootmem can handle it.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | | memblock: Add memblock_free/reserve_reserved_regions() (Yinghai Lu, 2010-08-27; 1 file, -0/+24)
      So we can avoid exporting memblock_reserved_init_regions(). Suggested by Ben.
      -v2: use the __init_memblock attribute
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | | memblock: Add memblock_find_in_range() (Yinghai Lu, 2010-08-05; 1 file, -0/+8)
      This is a wrapper for memblock_find_base() using slightly different arguments
      (start,end instead of start,size, for example) in order to make it easier to convert
      existing arch/x86 code.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
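      A plausible shape for the wrapper (a sketch assuming memblock_find_base() takes
      size/align/start/end, as suggested by the surrounding entries; not quoted code):

        typedef unsigned long long phys_addr_t;   /* stand-in for the kernel type */

        phys_addr_t memblock_find_base(phys_addr_t size, phys_addr_t align,
                                       phys_addr_t start, phys_addr_t end);

        /* Same search, friendlier calling convention for the x86 conversion:
         * callers pass a [start, end) window rather than start + window size. */
        phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
                                           phys_addr_t size, phys_addr_t align)
        {
                return memblock_find_base(size, align, start, end);
        }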
| * | | | memblock: Option for the architecture to put memblock into the .init section (Yinghai Lu, 2010-08-05; 1 file, -24/+24)
      Arch code can define ARCH_DISCARD_MEMBLOCK in asm/memblock.h, which in turn causes
      memblock code and data to go into the .init and .initdata sections respectively. This
      will be used by the x86 architecture. If ARCH_DISCARD_MEMBLOCK is defined, the
      debugfs files to inspect the memblock arrays after boot are not created.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Make MEMBLOCK_ERROR be 0 (Benjamin Herrenschmidt, 2010-08-05; 1 file, -0/+6)
      And ensure we don't hand out 0 as a valid allocation. We put the low limit at
      PAGE_SIZE arbitrarily.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
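      A sketch of the convention this establishes (the clamp below is modelled on the
      description; treat the details as assumptions):

        #include <stdint.h>

        typedef uint64_t phys_addr_t;
        #define PAGE_SIZE      4096ULL
        #define MEMBLOCK_ERROR 0          /* 0 now doubles as the failure value */

        /* Because 0 means "allocation failed", the allocator must never hand
         * out physical address 0; clamp the search floor to PAGE_SIZE. */
        static phys_addr_t find_base(phys_addr_t size, phys_addr_t align,
                                     phys_addr_t start, phys_addr_t end)
        {
                if (start == 0)
                        start = PAGE_SIZE;
                /* ... search [start, end) for size bytes at align ... */
                return MEMBLOCK_ERROR;    /* nothing found */
        }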
| * | | | memblock: Export MEMBLOCK_ERROR (Yinghai Lu, 2010-08-05; 1 file, -2/+0)
      Will be used by the x86 memblock_x86_find_in_range_node and the nobootmem
      replacement.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Improve debug output when resizing the reserve array (Yinghai Lu, 2010-08-05; 1 file, -3/+4)
      Print out the location info in addition to which array is being resized. Also use
      memblock_dbg() to put that under the control of the memblock_debug flag.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Expose some memblock bits for use by x86 (Yinghai Lu, 2010-08-05; 1 file, -1/+2)
      This exposes memblock_debug and the associated memblock_dbg() macro, along with
      memblock_can_resize, so that x86 can use these when ported to use memblock.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Add debugfs files to dump the arrays content (Benjamin Herrenschmidt, 2010-08-05; 1 file, -0/+51)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Make memblock_alloc_try_nid() fall back to MEMBLOCK_ALLOC_ANYWHERE (Benjamin Herrenschmidt, 2010-08-05; 1 file, -1/+1)
      memblock_alloc_nid() used to fall back to allocating anywhere by using
      memblock_alloc() as a fallback. However, some of my previous patches limit
      memblock_alloc() to the region covered by MEMBLOCK_ALLOC_ACCESSIBLE, which is not
      quite what we want for memblock_alloc_try_nid(). So we fix it by explicitly using
      MEMBLOCK_ALLOC_ANYWHERE.
      Note that so far only sparc uses memblock_alloc_nid() and it hasn't been updated to
      clamp the accessible zone yet. Thus the temporary "breakage" should have no effect.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Separate memblock_alloc_nid() and memblock_alloc_try_nid() (Benjamin Herrenschmidt, 2010-08-05; 1 file, -0/+14)
      The former is now strict: it will fail if it cannot honor the allocation within the
      node, while the latter implements the previous semantics, which fall back to
      allocating anywhere.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
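      A sketch of the split described here, with the fallback limit from the entry above
      (assumed signatures, not the exact kernel code):

        typedef unsigned long long phys_addr_t;
        #define MEMBLOCK_ALLOC_ANYWHERE ((phys_addr_t)~0ULL)

        phys_addr_t memblock_alloc_nid(phys_addr_t size, phys_addr_t align, int nid);
        phys_addr_t memblock_alloc_base(phys_addr_t size, phys_addr_t align,
                                        phys_addr_t max_addr);

        /* The strict variant fails (returns 0) if the node can't satisfy it; the
         * _try_ variant keeps the old semantics and falls back to anywhere. */
        phys_addr_t memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
        {
                phys_addr_t res = memblock_alloc_nid(size, align, nid);

                if (res)
                        return res;
                return memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE);
        }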
| * | | | memblock: NUMA allocate can now use early_pfn_map (Benjamin Herrenschmidt, 2010-08-05; 1 file, -1/+27)
      We now provide a default (weak) implementation of memblock_nid_range() which uses
      early_pfn_map[] if CONFIG_ARCH_POPULATES_NODE_MAP is set. Sparc still needs to use
      its own method due to the way its pages can be scattered between nodes.
      This implementation is inefficient because our main algorithm and callback construct
      want to work on an ascending address basis, while early_pfn_map[] would rather work
      with nids (it's unsorted at that stage). But it should work, and we can look into
      improving it subsequently, possibly using arch compile options to choose a different
      algorithm altogether.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Add "start" argument to memblock_find_base() (Benjamin Herrenschmidt, 2010-08-05; 1 file, -11/+16)
      To constrain the search for a region between two boundaries, which will be used by
      the new NUMA-aware allocator among others.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Add arch function to control coalescing of memblock memory regions (Benjamin Herrenschmidt, 2010-08-05; 1 file, -1/+18)
      Some archs such as ARM want to avoid coalescing across things such as the
      lowmem/highmem boundary or similar. This provides the option to control it via an
      arch callback, for which a weak default is provided that always allows coalescing.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
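      What such a weak default can look like (the function name below is an assumption
      based on the description):

        typedef unsigned long long phys_addr_t;

        /* Weak default: always allow merging adjacent regions. An arch can
         * provide a strong definition that refuses to merge across, e.g.,
         * its lowmem/highmem boundary. */
        int __attribute__((weak))
        memblock_memory_can_coalesce(phys_addr_t addr1, phys_addr_t size1,
                                     phys_addr_t addr2, phys_addr_t size2)
        {
                return 1;
        }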
| * | | | memblock: Add array resizing support (Benjamin Herrenschmidt, 2010-08-05; 1 file, -2/+102)
      When one of the arrays gets full, we resize it. After much thinking and a few
      iterations of that code, I went back to on-demand resizing using the (new) internal
      memblock_find_base() function, which is pretty much what Yinghai initially proposed,
      though there are some differences in the details.
      For this to work, it relies on the default alloc limit being set sensibly by the
      architecture.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
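      A hedged sketch of the on-demand doubling described (illustrative shapes; in the
      kernel the new space comes from memblock_find_base() itself, malloc stands in here):

        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        typedef uint64_t phys_addr_t;

        struct region       { phys_addr_t base, size; };
        struct region_array { struct region *regions; unsigned long cnt, max; };

        /* When the array fills up, find room for twice as many entries, copy
         * the live ones across, and switch over; the original static array is
         * simply abandoned. Returns 0 on success, -1 if no room was found. */
        static int double_array(struct region_array *type)
        {
                unsigned long new_max = type->max * 2;
                struct region *new = malloc(new_max * sizeof(*new));

                if (!new)
                        return -1;
                memcpy(new, type->regions, type->cnt * sizeof(*new));
                type->regions = new;
                type->max = new_max;
                return 0;
        }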
| * | | | memblock: Move functions around into a more sensible order (Benjamin Herrenschmidt, 2010-08-05; 1 file, -142/+159)
      Some shuffling is needed for doing the array resize, so we may as well put some sense
      into the ordering of the functions in the whole memblock.c file. No code change.
      Added some comments.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Split memblock_find_base() out of __memblock_alloc_base() (Benjamin Herrenschmidt, 2010-08-05; 1 file, -20/+38)
      This will be used by the array resize code, and might prove useful to some arch code
      as well, at which point it can be made non-static. Also add a comment as to why
      aligning the size is important.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      v2. Fix loss of size alignment
      v3. Fix result code
| * | | | memblock: Move memblock_init() to the bottom of the file (Benjamin Herrenschmidt, 2010-08-05; 1 file, -27/+27)
      It's a real PITA to have to search for it in the middle.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Define MEMBLOCK_ERROR internally instead of using ~(phys_addr_t)0 (Benjamin Herrenschmidt, 2010-08-05; 1 file, -5/+7)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Make memblock_find_region() out of memblock_alloc_region() (Benjamin Herrenschmidt, 2010-08-05; 1 file, -11/+9)
      This function will be used to locate a free area to put the new memblock arrays when
      attempting to resize them. memblock_alloc_region() is gone; the two call sites now
      call memblock_add_region().
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      v2. Fix membase_alloc_nid_region() conversion
| * | | | memblock: Add debug markers at the end of the array (Benjamin Herrenschmidt, 2010-08-05; 1 file, -0/+11)
      Since we allocate one more than needed, why not do a bit of sanity checking here to
      ensure we don't walk past the end of the array?
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Move memblock arrays to static storage in memblock.c and make their size a variable (Benjamin Herrenschmidt, 2010-08-05; 1 file, -1/+9)
      This is in preparation for having resizable arrays. Note that we still allocate one
      more than needed; this is unchanged from the previous implementation.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Remove memblock_type.size and add memblock.memory_size instead (Benjamin Herrenschmidt, 2010-08-05; 1 file, -4/+4)
      Right now, both the "memory" and "reserved" memblock_type structures have a "size"
      member. It represents the calculated memory size in the former case and is unused in
      the latter. This moves it out to the main memblock structure instead.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Change u64 to phys_addr_t (Benjamin Herrenschmidt, 2010-08-05; 1 file, -58/+60)
      Let's not waste space and cycles on archs that don't support a >32-bit physical
      address space.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Remove rmo_size, bury it in arch/powerpc where it belongs (Benjamin Herrenschmidt, 2010-08-05; 1 file, -8/+0)
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact, server ppc64,
      though I hijack it on embedded ppc64 for similar purposes) and represents the area of
      memory that can be accessed in real mode (i.e., with the MMU off), or on embedded,
      from the exception vectors (which are bolted in the TLB), which pretty much boils
      down to the same thing.
      We take that out of the generic MEMBLOCK data structure and move it into arch/powerpc
      where it belongs, renaming it to "RMA" while at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Introduce default allocation limit and use it to replace explicit ones (Benjamin Herrenschmidt, 2010-08-05; 1 file, -8/+11)
      This introduces memblock.current_limit, which is used to limit allocations from
      memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE).
      The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still be used
      with memblock_alloc_base() to allocate really anywhere. It is no longer cropped to
      MEMBLOCK_REAL_LIMIT, which disappears.
      Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE. I strongly
      recommend that you set an appropriate limit during boot in order to guarantee that a
      memblock_alloc() at any time results in something that is accessible with a simple
      __va(). The reason is that a subsequent patch will introduce the ability for the
      array to resize itself by reallocating itself. The MEMBLOCK core will honor the
      current limit when performing those allocations.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
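      A sketch of the two limits as described (the sentinel values follow the text; where
      exactly the clamp happens is an assumption):

        #include <stdint.h>

        typedef uint64_t phys_addr_t;

        #define MEMBLOCK_ALLOC_ANYWHERE   ((phys_addr_t)~0ULL) /* truly no upper bound */
        #define MEMBLOCK_ALLOC_ACCESSIBLE 0                    /* "use current_limit" */

        static phys_addr_t current_limit = MEMBLOCK_ALLOC_ANYWHERE;

        static phys_addr_t alloc_base(phys_addr_t size, phys_addr_t align,
                                      phys_addr_t max_addr)
        {
                /* The sentinel requests the arch-set default limit, so results
                 * stay reachable through a simple __va(). */
                if (max_addr == MEMBLOCK_ALLOC_ACCESSIBLE)
                        max_addr = current_limit;
                /* ... search below max_addr ... */
                return 0;
        }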
| * | | | memblock: Expose MEMBLOCK_ALLOC_ANYWHERE (Benjamin Herrenschmidt, 2010-08-05; 1 file, -2/+0)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Factor the lowest level alloc function (Benjamin Herrenschmidt, 2010-08-05; 1 file, -32/+27)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Remove nid_range argument, arch provides memblock_nid_range() instead (Benjamin Herrenschmidt, 2010-08-05; 1 file, -5/+8)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Remove memblock_find() (Benjamin Herrenschmidt, 2010-08-05; 1 file, -32/+0)
      Nobody uses it anymore. Its semantics were ... weird.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Implement memblock_is_memory and memblock_is_region_memory (Benjamin Herrenschmidt, 2010-08-04; 1 file, -8/+34)
      To make it fast, we steal ARM's binary search for memblock_is_memory(), and we use it
      to also replace the existing implementation of memblock_is_reserved().
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
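      A sketch of the binary search this refers to, over regions kept sorted by base and
      non-overlapping (illustrative names):

        #include <stdint.h>

        typedef uint64_t phys_addr_t;
        struct region { phys_addr_t base, size; };

        /* O(log n) membership test: returns the index of the region containing
         * addr, or -1 if addr is not covered by any region. */
        static int region_search(const struct region *r, unsigned int cnt,
                                 phys_addr_t addr)
        {
                unsigned int left = 0, right = cnt;

                while (left < right) {
                        unsigned int mid = left + (right - left) / 2;

                        if (addr < r[mid].base)
                                right = mid;
                        else if (addr >= r[mid].base + r[mid].size)
                                left = mid + 1;
                        else
                                return (int)mid;
                }
                return -1;
        }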
| * | | | memblock: Rename memblock_region to memblock_type and memblock_property to memblock_region (Benjamin Herrenschmidt, 2010-08-04; 1 file, -85/+83)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | | memblock: Fix memblock_is_region_reserved() to return a boolean (Benjamin Herrenschmidt, 2010-08-04; 1 file, -1/+1)
      All callers expect a boolean result which is true if the region overlaps a reserved
      region. However, the implementation actually returns -1 if there is no overlap, and a
      region index (0-based) if there is. Make it behave as callers (and common sense)
      expect.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
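      The shape of the fix implied by the description: collapse the index-or-minus-one
      result into a boolean with a >= 0 comparison (helper names are assumptions):

        typedef unsigned long long phys_addr_t;
        struct region_array;

        /* Returns -1 if nothing overlaps, else the 0-based index of the first
         * overlapping region; note index 0 is falsy, hence the original bug. */
        int overlaps_region(struct region_array *type,
                            phys_addr_t base, phys_addr_t size);

        int is_region_reserved(struct region_array *reserved,
                               phys_addr_t base, phys_addr_t size)
        {
                return overlaps_region(reserved, base, size) >= 0;
        }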
* | | | | Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2010-10-21; 2 files, -1/+10)
      * 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
        x86-32, percpu: Correct the ordering of the percpu readmostly section
        x86, mm: Enable ARCH_DMA_ADDR_T_64BIT with X86_64 || HIGHMEM64G
        x86: Spread tlb flush vector between nodes
        percpu: Introduce a read-mostly percpu API
        x86, mm: Fix incorrect data type in vmalloc_sync_all()
        x86, mm: Hold mm->page_table_lock while doing vmalloc_sync
        x86, mm: Fix bogus whitespace in sync_global_pgds()
        x86-32: Fix sparse warning for the __PHYSICAL_MASK calculation
        x86, mm: Add RESERVE_BRK_ARRAY() helper
        mm, x86: Saving vmcore with non-lazy freeing of vmas
        x86, kdump: Change copy_oldmem_page() to use cached addressing
        x86, mm: fix uninitialized addr in kernel_physical_mapping_init()
        x86, kmemcheck: Remove double test
        x86, mm: Make spurious_fault check explicitly check the PRESENT bit
        x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping changes
        x86, mm: Separate x86_64 vmalloc_sync_all() into separate functions
        x86, mm: Avoid unnecessary TLB flush
| * | | | mm, x86: Saving vmcore with non-lazy freeing of vmas (Cliff Wickman, 2010-09-17; 1 file, -0/+9)
      During the reading of /proc/vmcore the kernel is doing ioremap()/iounmap()
      repeatedly, and the buildup of unflushed vm_area_structs is causing a great deal of
      overhead (rb_next() is chewing up most of that time).
      The solution is to provide the function set_iounmap_nonlazy(). It causes a subsequent
      call to iounmap() to immediately purge the vma area (with try_purge_vmap_area_lazy()).
      With this patch we have seen the time for writing a 250MB compressed dump drop from
      71 seconds to 44 seconds.
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: kexec@lists.infradead.org
      Cc: <stable@kernel.org>
      LKML-Reference: <E1OwHZ4-0005WK-Tw@eag09.americas.sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
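      One plausible implementation of such a switch, sketched from the description (the
      counter, threshold, and stand-in types are assumptions about the vmalloc internals):

        typedef struct { long counter; } atomic_t;               /* stand-in type */
        static void atomic_set(atomic_t *v, long i) { v->counter = i; }

        static atomic_t vmap_lazy_nr;                  /* pages pending lazy free */
        static unsigned long lazy_max_pages(void) { return 32UL * 1024; } /* assumed */

        /* Lazy unmapping defers TLB flushes until vmap_lazy_nr passes
         * lazy_max_pages(); pre-loading the counter past that threshold makes
         * the very next iounmap() purge its area immediately. */
        void set_iounmap_nonlazy(void)
        {
                atomic_set(&vmap_lazy_nr, (long)(lazy_max_pages() + 1));
        }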
| * | | | x86, mm: Avoid unnecessary TLB flush (Shaohua Li, 2010-08-23; 1 file, -1/+1)
      In x86, access and dirty bits are set automatically by the CPU when it accesses
      memory. By the time we reach the code path below flush_tlb_fix_spurious_fault(), we
      have already set the dirty bit for the pte and don't need to flush the TLB. This
      might mean the TLB entry on some CPUs doesn't have the dirty bit set, but that
      doesn't matter: when those CPUs do a page write, they will check the bit
      automatically, with no software involved.
      On the other hand, flushing the TLB at this point is harmful. A test creates one
      thread per CPU; each thread writes to the same but random address in the same vma
      range, and we measure the total time. On a 4-socket system, the original time is
      1.96s, while with the patch it is 0.8s. On a 2-socket system there is a 20% time cut
      too. perf shows a lot of time is spent sending and handling IPIs for TLB flushes.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <20100816011655.GA362@sli10-desk.sh.intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Andrea Archangeli <aarcange@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
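      On such hardware the per-arch hook can simply compile away; a sketch of that idea
      (the hook name comes from the text above, the empty body is the assumed x86 choice):

        /* Generic mm code invokes this after a spurious-fault fixup. Archs
         * whose CPUs recheck the PTE access/dirty bits on write, as described
         * above, can make it a no-op and skip the IPI-heavy remote flush. */
        #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)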
* | | | | Merge branch 'hwpoison-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6 (Linus Torvalds, 2010-10-07; 1 file, -6/+6)
      * 'hwpoison-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6:
        HWPOISON: Stop shrinking at right page count
        HWPOISON: Report correct address granularity for AO huge page errors
        HWPOISON: Copy si_addr_lsb to user
        page-types.c: fix name of unpoison interface
| * | | | | HWPOISON: Stop shrinking at right page count (Andi Kleen, 2010-10-07; 1 file, -1/+1)
      When we call the slab shrinker to free a page, we need to stop at a page count of
      one, because the caller always holds a single reference, not zero. This avoids
      useless looping over slab shrinkers and freeing too much memory.
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
| * | | | | HWPOISON: Report correct address granularity for AO huge page errors (Andi Kleen, 2010-10-07; 1 file, -5/+5)
      The SIGBUS user space signalling is supposed to report the address granularity of a
      corruption. Pass this information correctly for huge pages by querying the hpage
      order.
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>