author		Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2016-03-01 09:45:14 +0530
committer	Michael Ellerman <mpe@ellerman.id.au>			2016-03-03 21:18:29 +1100
commit		ff20c2e0acc5ad7e27c68592ade135efee399549 (patch)
tree		ba2078e9b2def032e9722112e5ab9d82b5ac0de3 /mm
parent		368ced78e6ed3d72c2acc61233b58487071ec289 (diff)
mm: Some arch may want to use HPAGE_PMD related values as variables
With the next generation Power processor we have a new MMU model [1]
that requires us to maintain a different Linux page table format. In
order to support both current and future ppc64 systems with a single
kernel, the kernel needs to be able to select between page table
formats at runtime. With the new radix MMU added, we will have two
different PMD hugepage sizes: 16MB for the hash model and 2MB for the
radix model. Hence make the HPAGE_PMD related values variables.

The actual conversion of HPAGE_PMD to a variable for ppc64 happens in a
follow-up patch.
[1] http://ibm.biz/power-isa3 (Needs registration).
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
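[Editor's illustration] Today the HPAGE_PMD_* values in include/linux/huge_mm.h are
derived from PMD_SHIFT as compile-time constants. The sketch below is a hypothetical
illustration of what "HPAGE_PMD related values as variables" means in practice, not
the actual ppc64 headers: the shift expands to a variable that early boot code fills
in once the MMU model is known, so the derived size, order and NR values become
runtime quantities. The identifiers __pmd_hugepage_shift and early_init_mmu_model()
are made up for this sketch; the HPAGE_PMD_* macro names and PAGE_SHIFT are real.

/*
 * Hypothetical sketch only -- not the actual ppc64 implementation.
 * HPAGE_PMD_SHIFT expands to a variable set during early boot, so
 * every value derived from it stops being a compile-time constant.
 */
extern unsigned int __pmd_hugepage_shift;	/* made-up name for this sketch */

#define HPAGE_PMD_SHIFT	__pmd_hugepage_shift
#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
#define HPAGE_PMD_ORDER	(HPAGE_PMD_SHIFT - PAGE_SHIFT)
#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)

unsigned int __pmd_hugepage_shift __read_mostly;

/* Called once the MMU model is known: 16MB PMD hugepages for hash, 2MB for radix. */
void __init early_init_mmu_model(bool radix)
{
	__pmd_hugepage_shift = radix ? 21 : 24;	/* 2MB vs. 16MB */
}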
Diffstat (limited to 'mm')
-rw-r--r--	mm/huge_memory.c	17
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index aea8f7a..36c22a8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -83,7 +83,7 @@ unsigned long transparent_hugepage_flags __read_mostly =
 	(1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
 
 /* default scan 8*512 pte (or vmas) every 30 second */
-static unsigned int khugepaged_pages_to_scan __read_mostly = HPAGE_PMD_NR*8;
+static unsigned int khugepaged_pages_to_scan __read_mostly;
 static unsigned int khugepaged_pages_collapsed;
 static unsigned int khugepaged_full_scans;
 static unsigned int khugepaged_scan_sleep_millisecs __read_mostly = 10000;
@@ -98,7 +98,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
  * it would have happened if the vma was large enough during page
  * fault.
  */
-static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
+static unsigned int khugepaged_max_ptes_none __read_mostly;
 
 static int khugepaged(void *none);
 static int khugepaged_slab_init(void);
@@ -660,6 +660,18 @@ static int __init hugepage_init(void)
 		return -EINVAL;
 	}
 
+	khugepaged_pages_to_scan = HPAGE_PMD_NR * 8;
+	khugepaged_max_ptes_none = HPAGE_PMD_NR - 1;
+	/*
+	 * hugepages can't be allocated by the buddy allocator
+	 */
+	MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER >= MAX_ORDER);
+	/*
+	 * we use page->mapping and page->index in second tail page
+	 * as list_head: assuming THP order >= 2
+	 */
+	MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER < 2);
+
 	err = hugepage_init_sysfs(&hugepage_kobj);
 	if (err)
 		goto err_sysfs;
@@ -764,7 +776,6 @@ void prep_transhuge_page(struct page *page)
 	 * we use page->mapping and page->indexlru in second tail page
 	 * as list_head: assuming THP order >= 2
 	 */
-	BUILD_BUG_ON(HPAGE_PMD_ORDER < 2);
 	INIT_LIST_HEAD(page_deferred_list(page));
 	set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
 }
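[Editor's note] The shape of the hunks follows from the commit message: once
HPAGE_PMD_NR and HPAGE_PMD_ORDER can be runtime values, they can no longer appear in
static initializers or in BUILD_BUG_ON(), which requires a compile-time constant. The
khugepaged tunables are therefore filled in from hugepage_init(), and the order checks
switch to MAYBE_BUILD_BUG_ON(), which degrades to a runtime check when the condition is
not constant. The sketch below shows the intended semantics; it is modelled on the
helper in the kernel's include/linux/bug.h, so verify against the tree you build
against rather than treating it as the authoritative definition.

/*
 * Sketch of the semantics expected from MAYBE_BUILD_BUG_ON(): fail at
 * compile time when the condition folds to a constant, otherwise fall
 * back to a runtime BUG_ON().  Modelled on include/linux/bug.h; check
 * your kernel tree before relying on this exact form.
 */
#define MAYBE_BUILD_BUG_ON(cond)				\
	do {							\
		if (__builtin_constant_p((cond)))		\
			BUILD_BUG_ON(cond);			\
		else						\
			BUG_ON(cond);				\
	} while (0)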