author	Michael Ellerman <mpe@ellerman.id.au>	2017-08-08 17:06:32 +1000
committer	Michael Ellerman <mpe@ellerman.id.au>	2017-08-15 20:30:58 +1000
commit	63b85621d9aa6bdc410f01b22f7821cea3d7bdc6
tree	05fbafb3604884b73e7b90caafa8954059c46b83 /arch/powerpc/kernel/iommu.c
parent	7efbae90892b7858f1d4873d34ffffbeb460ed8b
powerpc/iommu: Avoid undefined right shift in iommu_range_alloc()
In iommu_range_alloc() we generate a mask by right shifting ~0,
however if the specified alignment is 0 then we right shift by 64,
which is undefined. UBSAN tells us so:

  UBSAN: Undefined behaviour in ../arch/powerpc/kernel/iommu.c:193:35
  shift exponent 64 is too large for 64-bit type 'long unsigned int'

We can avoid it by instead generating the mask with:

  align_mask = (1ull << align_order) - 1;

That will also generate an undefined shift if align_order is 64 or
greater, but that shouldn't be a problem for a while.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diffstat (limited to 'arch/powerpc/kernel/iommu.c')
-rw-r--r--  arch/powerpc/kernel/iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 0e49a4560cff..e0af6cd7ba4f 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -190,7 +190,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
 	unsigned int pool_nr;
 	struct iommu_pool *pool;
 
-	align_mask = 0xffffffffffffffffl >> (64 - align_order);
+	align_mask = (1ull << align_order) - 1;
 
 	/* This allocator was derived from x86_64's bit string search */
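A minimal userspace sketch of the new mask formula follows; the
align_mask() helper and the test values are illustrative only, not
part of the patch. It shows why the replacement is well-defined for
the align_order == 0 case that tripped UBSAN: the shift yields 1 and
the subtraction gives an empty mask.

#include <stdio.h>

/* Sketch, not kernel code: (1ull << align_order) - 1 is well-defined
 * for align_order in [0, 63]. At 0 it produces an empty mask, the
 * exact case where the old "~0 >> (64 - align_order)" form shifted
 * by 64 and hit undefined behaviour.
 */
static unsigned long long align_mask(unsigned int align_order)
{
	return (1ull << align_order) - 1;
}

int main(void)
{
	printf("order 0  -> %#llx\n", align_mask(0));   /* 0, no alignment */
	printf("order 12 -> %#llx\n", align_mask(12));  /* 0xfff, 4K alignment */
	return 0;
}

As the commit message notes, the new form merely moves the undefined
case from align_order == 0 to align_order >= 64, which is not expected
to occur in practice.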