path: root/mm/vmscan.c
Commit message | Author | Date | Files | Lines
* mm: slowly shrink slabs with a relatively small number of objects | Roman Gushchin | 2018-09-20 | 1 | -0/+11
* mm: fix page_freeze_refs and page_unfreeze_refs in comments | Jiang Biao | 2018-08-22 | 1 | -1/+1
* mm: check shrinker is memcg-aware in register_shrinker_prepared() | Kirill Tkhai | 2018-08-22 | 1 | -1/+2
* mm: use special value SHRINKER_REGISTERING instead of list_empty() check | Kirill Tkhai | 2018-08-17 | 1 | -22/+21
* mm/vmscan.c: move check for SHRINKER_NUMA_AWARE to do_shrink_slab() | Kirill Tkhai | 2018-08-17 | 1 | -3/+3
* mm/vmscan.c: clear shrinker bit if there are no objects related to memcg | Kirill Tkhai | 2018-08-17 | 1 | -2/+24
* mm: add SHRINK_EMPTY shrinker methods return value | Kirill Tkhai | 2018-08-17 | 1 | -3/+9
* mm/vmscan.c: generalize shrink_slab() calls in shrink_node() | Vladimir Davydov | 2018-08-17 | 1 | -15/+6
* mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab() | Kirill Tkhai | 2018-08-17 | 1 | -9/+75
* mm, memcg: assign memcg-aware shrinkers bitmap to memcg | Kirill Tkhai | 2018-08-17 | 1 | -1/+7
* mm: assign id to every memcg-aware shrinker | Kirill Tkhai | 2018-08-17 | 1 | -0/+63
* mm/vmscan.c: condense scan_control | Greg Thelen | 2018-08-17 | 1 | -12/+20
* memcg: introduce memory.min | Roman Gushchin | 2018-06-07 | 1 | -1/+17
* lockdep: fix fs_reclaim annotation | Omar Sandoval | 2018-06-07 | 1 | -7/+13
* mm: fix the NULL mapping case in __isolate_lru_page() | Hugh Dickins | 2018-06-02 | 1 | -1/+1
* mm,vmscan: Allow preallocating memory for register_shrinker(). | Tetsuo Handa | 2018-04-16 | 1 | -1/+20
* page cache: use xa_lock | Matthew Wilcox | 2018-04-11 | 1 | -6/+6
* mm: memcg: make sure memory.events is uptodate when waking pollers | Johannes Weiner | 2018-04-11 | 1 | -1/+1
* mm, vmscan, tracing: use pointer to reclaim_stat struct in trace event | Steven Rostedt | 2018-04-11 | 1 | -17/+1
* mm/vmscan: don't mess with pgdat->flags in memcg reclaim | Andrey Ryabinin | 2018-04-11 | 1 | -24/+72
* mm/vmscan: don't change pgdat state on base of a single LRU list state | Andrey Ryabinin | 2018-04-11 | 1 | -51/+75
* mm/vmscan: remove redundant current_may_throttle() check | Andrey Ryabinin | 2018-04-11 | 1 | -1/+1
* mm/vmscan: update stale comments | Andrey Ryabinin | 2018-04-11 | 1 | -5/+5
* mm, page_alloc: wakeup kcompactd even if kswapd cannot free more memory | David Rientjes | 2018-04-05 | 1 | -9/+23
* mm,vmscan: don't pretend forward progress upon shrinker_rwsem contention | Tetsuo Handa | 2018-04-05 | 1 | -9/+1
* mm: fix races between address_space dereference and free in page_evicatable | Huang Ying | 2018-04-05 | 1 | -1/+7
* mm/vmscan: wake up flushers for legacy cgroups too | Andrey Ryabinin | 2018-03-22 | 1 | -15/+16
* mm, mlock, vmscan: no more skipping pagevecs | Shakeel Butt | 2018-02-21 | 1 | -58/+1
* mm: docs: add blank lines to silence sphinx "Unexpected indentation" errors | Mike Rapoport | 2018-02-06 | 1 | -0/+1
* mm: pin address_space before dereferencing it while isolating an LRU page | Mel Gorman | 2018-01-31 | 1 | -2/+12
* mm: remove unused pgdat_reclaimable_pages() | Jan Kara | 2018-01-31 | 1 | -16/+0
* mm: do not stall register_shrinker() | Minchan Kim | 2018-01-31 | 1 | -0/+9
* mm: use sc->priority for slab shrink targets | Josef Bacik | 2018-01-31 | 1 | -34/+13
* mm,vmscan: Make unregister_shrinker() no-op if register_shrinker() failed. | Tetsuo Handa | 2017-12-18 | 1 | -0/+3
* mm: remove cold parameter from free_hot_cold_page* | Mel Gorman | 2017-11-15 | 1 | -3/+3
* mm: remove unused pgdat->inactive_ratio | Andrey Ryabinin | 2017-11-15 | 1 | -1/+1
* Merge branch 'for-4.15/block' of git://git.kernel.dk/linux-block | Linus Torvalds | 2017-11-14 | 1 | -1/+1
|\
| * fs: kill 'nr_pages' argument from wakeup_flusher_threads() | Jens Axboe | 2017-10-03 | 1 | -1/+1
* | License cleanup: add SPDX GPL-2.0 license identifier to files with no license | Greg Kroah-Hartman | 2017-11-02 | 1 | -0/+1
|/
* mm, THP, swap: add THP swapping out fallback counting | Huang Ying | 2017-09-06 | 1 | -0/+3
* mm, THP, swap: delay splitting THP after swapped out | Huang Ying | 2017-09-06 | 1 | -43/+52
* mm, vmscan: do not loop on too_many_isolated for ever | Michal Hocko | 2017-09-06 | 1 | -1/+7
* mm: track actual nr_scanned during shrink_slab() | Chris Wilson | 2017-09-06 | 1 | -3/+4
* locking/lockdep: Rework FS_RECLAIM annotation | Peter Zijlstra | 2017-08-10 | 1 | -7/+6
* mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful s... | Michal Hocko | 2017-07-12 | 1 | -4/+4
* mm, vmscan: avoid thrashing anon lru when free + file is low | David Rientjes | 2017-07-10 | 1 | -2/+11
* mm: vmstat: move slab statistics from zone to node counters | Johannes Weiner | 2017-07-06 | 1 | -1/+1
* mm: per-cgroup memory reclaim stats | Roman Gushchin | 2017-07-06 | 1 | -7/+23
* mm, THP, swap: enable THP swap optimization only if has compound map | Huang Ying | 2017-07-06 | 1 | -4/+13
* mm, THP, swap: check whether THP can be split firstly | Huang Ying | 2017-07-06 | 1 | -0/+4