author     Linus Torvalds <torvalds@linux-foundation.org>    2009-06-22 10:25:25 -0700
committer  Greg Kroah-Hartman <gregkh@suse.de>               2009-07-30 14:40:13 -0700
commit     4752849e0a4b09acb68b029b01d8eb2250e92f6c
tree       c63818ed73225ab12e66b9a79d8252d7bd292fdd
parent     0e52a8524e849c00034c8cb2422e40da6fac5e08
x86: don't use 'access_ok()' as a range check in get_user_pages_fast()
[ Upstream commit 7f8189068726492950bf1a2dcfd9b51314560abf - modified for stable to not use the sloppy __VIRTUAL_MASK_SHIFT ]

It's really not right to use 'access_ok()', since that is meant for the normal "get_user()" and "copy_from/to_user()" accesses, which are done through the TLB, rather than through the page tables.

Why? access_ok() does both too few, and too many checks. Too many, because it is meant for regular kernel accesses that will not honor the 'user' bit in the page tables, and because it honors the USER_DS vs KERNEL_DS distinction that we shouldn't care about in GUP. And too few, because it doesn't do the 'canonical' check on the address on x86-64, since the TLB will do that for us.

So instead of using a function that isn't meant for this, and does something else and much more complicated, just do the real rules: we don't want the range to overflow, and on x86-64, we want it to be a canonical low address (on 32-bit, all addresses are canonical).

Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
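To make the "real rules" concrete, here is a minimal standalone sketch of the two checks the patch below adds. It is illustrative only: the helper name gup_range_ok and the example addresses are invented, and the hard-coded 47-bit shift follows this stable backport rather than the upstream __VIRTUAL_MASK_SHIFT version.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: mirrors the checks the stable patch adds to
 * get_user_pages_fast(). A range [start, start + len) is acceptable
 * for the fast page-table walk when:
 *   1. it does not wrap around the top of the address space, and
 *   2. on x86-64, it stays in the low canonical half, i.e. no bits
 *      at or above bit 47 are set (the patch tests 'end >> 47').
 */
static int gup_range_ok(uint64_t start, uint64_t len)
{
	uint64_t end = start + len;

	if (end < start)	/* range wrapped past 2^64 */
		return 0;
	if (end >> 47)		/* non-canonical or kernel-half address */
		return 0;
	return 1;
}

int main(void)
{
	/* Typical low user-space range: accepted. */
	printf("%d\n", gup_range_ok(0x00007f0000000000ULL, 4096));
	/* Kernel-half address (bits above 47 set): not accepted. */
	printf("%d\n", gup_range_ok(0xffff880000000000ULL, 4096));
	/* Range that wraps around 2^64: caught by the overflow check. */
	printf("%d\n", gup_range_ok(0xfffffffffffff000ULL, 0x2000));
	return 0;
}

In the actual patch, a range that fails either test is not rejected outright; it simply falls back to the slow path via goto slow_irqon.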
 arch/x86/mm/gup.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 6340cef6798a..312e8ebd284b 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -247,10 +247,15 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
+
 	end = start + len;
-	if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ,
-					(void __user *)start, len)))
+	if (end < start)
+		goto slow_irqon;
+
+#ifdef CONFIG_X86_64
+	if (end >> 47)
 		goto slow_irqon;
+#endif
 
 	/*
 	 * XXX: batch / limit 'nr', to avoid large irq off latency