author		Dexuan Cui <decui@microsoft.com>			2014-10-29 03:53:37 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2014-11-14 08:47:54 -0800
commit		04157ab004f712e514bdfbd4f59c3a45f562496c (patch)
tree		37b04eeab1f994116de4d17d2b9269176974a5ab /arch/x86
parent		621be26198eba8272e702ccbd27648c6aae01bc0 (diff)
x86, pageattr: Prevent overflow in slow_virt_to_phys() for X86_PAE
commit d1cd1210834649ce1ca6bafe5ac25d2f40331343 upstream.

pte_pfn() returns a PFN as an unsigned long (32 bits on 32-bit PAE), so
"pfn << PAGE_SHIFT" overflows for PFNs above 4GB. Because of this, some
32-bit PAE Linux distros running as guests on Hyper-V with 5GB of memory
assigned can't load the netvsc driver, and hence the synthetic network
device doesn't work (the kernel parameter mem=3000M can be used to work
around the issue).

Cast the result of pte_pfn() to phys_addr_t before shifting.

Fixes: d76565344512 ("x86, mm: Create slow_virt_to_phys()")
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: gregkh@linuxfoundation.org
Cc: linux-mm@kvack.org
Cc: olaf@aepfle.de
Cc: apw@canonical.com
Cc: jasowang@redhat.com
Cc: dave.hansen@intel.com
Cc: riel@redhat.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1414580017-27444-1-git-send-email-decui@microsoft.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
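To make the overflow concrete, here is a minimal user-space sketch, not kernel code: the PAGE_SHIFT value and the example PFN are illustrative, and uint32_t stands in for the 32-bit unsigned long that pte_pfn() returns under X86_PAE. It shows how shifting in 32-bit arithmetic truncates a physical address above 4GB, while widening to a 64-bit type (as phys_addr_t is with PAE) before the shift preserves it.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4KB pages, as on x86; illustrative constant */

int main(void)
{
	/* A PFN for a page just above the 4GB boundary (illustrative value). */
	uint32_t pfn = 0x100001;

	/* Buggy pattern: the shift happens in 32-bit arithmetic and the
	 * high bits of the physical address are lost. */
	uint32_t truncated = pfn << PAGE_SHIFT;

	/* Fixed pattern: widen to a 64-bit type before shifting, so the
	 * full physical address survives. */
	uint64_t correct = (uint64_t)pfn << PAGE_SHIFT;

	printf("truncated: 0x%08x\n", truncated);                      /* 0x00001000 */
	printf("correct:   0x%016llx\n", (unsigned long long)correct); /* 0x0000000100001000 */
	return 0;
}

Note that the cast must widen the operand before the shift; casting the already-shifted 32-bit result would only widen a value that has already been truncated.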
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/mm/pageattr.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index bb32480c2d71..aabdf762f592 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -389,7 +389,7 @@ phys_addr_t slow_virt_to_phys(void *__virt_addr)
 	psize = page_level_size(level);
 	pmask = page_level_mask(level);
 	offset = virt_addr & ~pmask;
-	phys_addr = pte_pfn(*pte) << PAGE_SHIFT;
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
 	return (phys_addr | offset);
 }
 EXPORT_SYMBOL_GPL(slow_virt_to_phys);