path: root/mm
author     KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>   2009-03-31 15:19:37 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>     2009-04-01 08:59:12 -0700
commit  bd775c42ea5f7c766d03a287083837cf05e7e738 (patch)
tree    40084f399068bed56c3061afd5e1175c679160df /mm
parent  9786bf841da57fac3457a1dac41acb4c1f2eced6 (diff)
mm: add comment why mark_page_accessed() would be better than pte_mkyoung() in follow_page()
At first glance, the use of mark_page_accessed() in follow_page() looks a bit strange: pte_mkyoung() would seem more consistent with other kernel code. However, the choice is intentional. The original commit log explains:

------------------------------------------------
commit 9e45f61d69be9024a2e6bef3831fb04d90fac7a8
Author: akpm <akpm>
Date:   Fri Aug 15 07:24:59 2003 +0000

    [PATCH] Use mark_page_accessed() in follow_page()

    Touching a page via follow_page() counts as a reference so we should be
    either setting the referenced bit in the pte or running mark_page_accessed().

    Altering the pte is tricky because we haven't implemented an atomic
    pte_mkyoung().  And mark_page_accessed() is better anyway because it has
    more aging state: it can move the page onto the active list.

    BKrev: 3f3c8acbplT8FbwBVGtth7QmnqWkIw
------------------------------------------------

The atomicity concern still holds today. Adding a comment makes the intention of the code easier to understand.

[akpm@linux-foundation.org: clarify text]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
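To make the "atomic pte_mkyoung()" concern concrete, here is a minimal user-space sketch of the lost-update race the commit log alludes to: a non-atomic read-modify-write that sets an "accessed" bit can overwrite a "dirty" bit set concurrently by another agent (in the real case, the MMU hardware). The names pte_t, PTE_YOUNG, PTE_DIRTY, pte_mkyoung_racy() and hw_set_dirty() are illustrative stand-ins, not the kernel's actual definitions.

    /* Conceptual model only; build with: gcc -pthread race.c */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PTE_YOUNG  (1u << 0)   /* "accessed" bit */
    #define PTE_DIRTY  (1u << 1)   /* "dirty" bit, set by hardware on write */

    typedef struct { uint32_t flags; } pte_t;

    static pte_t pte;              /* the shared page-table entry being raced on */

    /* Non-atomic read-modify-write: sets the young bit the naive way. */
    static void *pte_mkyoung_racy(void *arg)
    {
        (void)arg;
        pte_t old = pte;           /* read               */
        old.flags |= PTE_YOUNG;    /* modify             */
        pte = old;                 /* write back; a DIRTY bit set by the
                                      other thread in between is lost */
        return NULL;
    }

    /* Models the MMU atomically setting the dirty bit on a page write. */
    static void *hw_set_dirty(void *arg)
    {
        (void)arg;
        __atomic_fetch_or(&pte.flags, PTE_DIRTY, __ATOMIC_SEQ_CST);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, pte_mkyoung_racy, NULL);
        pthread_create(&b, NULL, hw_set_dirty, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Depending on interleaving, DIRTY may have been lost. */
        printf("young=%d dirty=%d\n",
               !!(pte.flags & PTE_YOUNG), !!(pte.flags & PTE_DIRTY));
        return 0;
    }

Depending on scheduling, the final state can come out as young=1 dirty=0, i.e. the concurrently set dirty bit was overwritten. Avoiding that would require an atomic update of the pte, which is exactly the complication the commit sidesteps by calling mark_page_accessed() (which also feeds the LRU aging state) instead of open-coding a pte update.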
Diffstat (limited to 'mm')
-rw-r--r--  mm/memory.c  5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0017111..5b4ad5e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1151,6 +1151,11 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 		if ((flags & FOLL_WRITE) &&
 		    !pte_dirty(pte) && !PageDirty(page))
 			set_page_dirty(page);
+		/*
+		 * pte_mkyoung() would be more correct here, but atomic care
+		 * is needed to avoid losing the dirty bit: it is easier to use
+		 * mark_page_accessed().
+		 */
 		mark_page_accessed(page);
 	}
 unlock: