author | Linus Torvalds <torvalds@linux-foundation.org> | 2008-07-28 17:54:21 -0700 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-07-28 17:54:21 -0700 |
commit | 9b79022ca909b66e2cd0cfd9248f832fc165f77f (patch) | |
tree | bcd8c3204886fcbc8422aeb482f8e42c0b5b6124 /arch | |
parent | 34ee55014283a60efa3534c06e010579ffdd3756 (diff) | |
Fix 'get_user_pages_fast()' with non-page-aligned start address
Alexey Dobriyan reported trouble running LTP against the new fast-gup code,
and Johannes Weiner debugged it down to non-page-aligned start addresses,
where the new get_user_pages_fast() code would do all the wrong things,
including just traversing past the end of the requested area because 'addr'
never matches 'end' exactly.
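To illustrate the failure mode (this is a stand-alone sketch, not the kernel
walker itself; the addresses and the step-capped loop are made-up values for
demonstration): a page-table walk that advances by PAGE_SIZE and terminates on
an exact 'addr != end' comparison can never hit a page-aligned boundary if it
starts from an unaligned address, whereas masking the start address first makes
the comparison land exactly.

```c
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical unaligned user address: page 1, offset 0x10. */
	unsigned long start = 0x1010UL;
	/* A page-aligned boundary such as the end of a page-table range. */
	unsigned long boundary = 0x3000UL;
	unsigned long addr = start;
	int steps = 0;

	/*
	 * Stepping an unaligned addr by PAGE_SIZE never compares equal to
	 * the aligned boundary, so an "addr != end" loop keeps walking past
	 * it (capped at 8 steps here so the demo terminates).
	 */
	while (addr != boundary && steps < 8) {
		addr += PAGE_SIZE;
		steps++;
	}
	printf("unaligned walk: stopped at %#lx after %d steps (never hit %#lx)\n",
	       addr, steps, boundary);

	/* With the start masked down to a page boundary, the walk terminates. */
	addr = start & PAGE_MASK;
	steps = 0;
	while (addr != boundary) {
		addr += PAGE_SIZE;
		steps++;
	}
	printf("aligned walk:   stopped at %#lx after %d steps\n", addr, steps);
	return 0;
}
```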
This is not a pretty fix, and we may actually want to move the alignment
into generic code, leaving just the core code per-arch, but Alexey
verified that the vmsplice01 LTP test doesn't crash with this.
Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com>
Debugged-by: Johannes Weiner <hannes@saeurebad.de>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/mm/gup.c | 9 |
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 3085f25..007bb06 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -223,14 +223,17 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			struct page **pages)
 {
 	struct mm_struct *mm = current->mm;
-	unsigned long end = start + (nr_pages << PAGE_SHIFT);
-	unsigned long addr = start;
+	unsigned long addr, len, end;
 	unsigned long next;
 	pgd_t *pgdp;
 	int nr = 0;
 
+	start &= PAGE_MASK;
+	addr = start;
+	len = (unsigned long) nr_pages << PAGE_SHIFT;
+	end = start + len;
 	if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ,
-					start, nr_pages*PAGE_SIZE)))
+					start, len)))
 		goto slow_irqon;
 
 	/*