path: root/mm
Commit message | Author | Age | Files | Lines
* mm, THP: clean up return value of madvise_free_huge_pmd | Huang Ying | 2016-07-28 | 1 | -7/+8
* mm/zsmalloc: use helper to clear page->flags bit | Ganesh Mahendran | 2016-07-28 | 1 | -2/+2
* mm/zsmalloc: add __init,__exit attribute | Ganesh Mahendran | 2016-07-28 | 1 | -1/+1
* mm/zsmalloc: keep comments consistent with code | Ganesh Mahendran | 2016-07-28 | 1 | -4/+3
* mm/zsmalloc: avoid calculate max objects of zspage twice | Ganesh Mahendran | 2016-07-28 | 1 | -16/+10
* mm/zsmalloc: use class->objs_per_zspage to get num of max objects | Ganesh Mahendran | 2016-07-28 | 1 | -11/+7
* mm/zsmalloc: take obj index back from find_alloced_obj | Ganesh Mahendran | 2016-07-28 | 1 | -2/+6
* mm/zsmalloc: use obj_index to keep consistent with others | Ganesh Mahendran | 2016-07-28 | 1 | -7/+7
* mm: bail out in shrink_inactive_list() | Minchan Kim | 2016-07-28 | 1 | -0/+27
* mm, vmscan: account for skipped pages as a partial scan | Mel Gorman | 2016-07-28 | 1 | -2/+18
* mm: consider whether to decivate based on eligible zones inactive ratio | Mel Gorman | 2016-07-28 | 1 | -5/+29
* mm: remove reclaim and compaction retry approximations | Mel Gorman | 2016-07-28 | 6 | -58/+37
* mm, vmscan: remove highmem_file_pages | Mel Gorman | 2016-07-28 | 1 | -8/+4
* mm: add per-zone lru list stat | Minchan Kim | 2016-07-28 | 3 | -9/+15
* mm, vmscan: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-28 | 1 | -11/+10
* mm, vmscan: remove redundant check in shrink_zones() | Mel Gorman | 2016-07-28 | 1 | -3/+0
* mm, vmscan: Update all zone LRU sizes before updating memcg | Mel Gorman | 2016-07-28 | 2 | -11/+34
* mm: show node_pages_scanned per node, not zone | Minchan Kim | 2016-07-28 | 1 | -3/+3
* mm, pagevec: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-28 | 1 | -10/+10
* mm, page_alloc: fix dirtyable highmem calculation | Minchan Kim | 2016-07-28 | 1 | -6/+10
* mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-28 | 6 | -42/+67
* mm, vmstat: print node-based stats in zoneinfo file | Mel Gorman | 2016-07-28 | 1 | -0/+24
* mm: vmstat: account per-zone stalls and pages skipped during reclaim | Mel Gorman | 2016-07-28 | 2 | -3/+15
* mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-28 | 1 | -2/+11
* mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-28 | 3 | -78/+2
* mm, vmscan: add classzone information to tracepoints | Mel Gorman | 2016-07-28 | 1 | -5/+9
* mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads... | Mel Gorman | 2016-07-28 | 1 | -8/+14
* mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep() | Mel Gorman | 2016-07-28 | 1 | -8/+4
* mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready | Mel Gorman | 2016-07-28 | 1 | -20/+7
* mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node | Mel Gorman | 2016-07-28 | 1 | -11/+9
* mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-28 | 4 | -53/+60
* mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm, vmscan: only wakeup kswapd once per node for the requested classzone | Mel Gorman | 2016-07-28 | 2 | -4/+17
* mm: move vmscan writes and file write accounting to the node | Mel Gorman | 2016-07-28 | 3 | -9/+9
* mm: move most file-based accounting to the node | Mel Gorman | 2016-07-28 | 12 | -117/+107
* mm: rename NR_ANON_PAGES to NR_ANON_MAPPED | Mel Gorman | 2016-07-28 | 2 | -5/+5
* mm: move page mapped accounting to the node | Mel Gorman | 2016-07-28 | 4 | -13/+13
* mm, page_alloc: consider dirtyable memory in terms of nodes | Mel Gorman | 2016-07-28 | 2 | -45/+72
* mm, workingset: make working set detection node-aware | Mel Gorman | 2016-07-28 | 2 | -40/+23
* mm, memcg: move memcg limit enforcement from zones to nodes | Mel Gorman | 2016-07-28 | 3 | -120/+95
* mm, vmscan: make shrink_node decisions more node-centric | Mel Gorman | 2016-07-28 | 4 | -32/+41
* mm: vmscan: do not reclaim from kswapd if there is any eligible zone | Mel Gorman | 2016-07-28 | 1 | -32/+27
* mm, vmscan: remove duplicate logic clearing node congestion and dirty state | Mel Gorman | 2016-07-28 | 1 | -12/+12
* mm, vmscan: by default have direct reclaim only shrink once per node | Mel Gorman | 2016-07-28 | 1 | -8/+14
* mm, vmscan: simplify the logic deciding whether kswapd sleeps | Mel Gorman | 2016-07-28 | 3 | -54/+54
* mm, vmscan: remove balance gap | Mel Gorman | 2016-07-28 | 1 | -11/+8
* mm, vmscan: make kswapd reclaim in terms of nodes | Mel Gorman | 2016-07-28 | 1 | -191/+101
* mm, vmscan: have kswapd only scan based on the highest requested zone | Mel Gorman | 2016-07-28 | 1 | -5/+2
* mm, vmscan: begin reclaiming pages on a per-node basis | Mel Gorman | 2016-07-28 | 1 | -24/+55