author    Grygorii Strashko <grygorii.strashko@ti.com>  2014-01-21 15:50:12 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2014-01-21 16:19:46 -0800
commit    79f40fab0b3a78e0e41fac79a65a9870f4b05652 (patch)
tree      279b0416e82fee4f65646304da075d4e7668540d
parent    869a84e1ca163b737236dae997db4a6a1e230b9b (diff)
mm/memblock: drop WARN and use SMP_CACHE_BYTES as a default alignment
Don't produce a warning; instead interpret 0 as "default align", equal to SMP_CACHE_BYTES, when the caller of memblock_alloc_base_nid() doesn't specify an alignment for the block (align == 0). This is done in preparation for introducing a common memblock alloc interface, to make code behavior consistent. More details are in the thread below: https://lkml.org/lkml/2013/10/13/117

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--  mm/memblock.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index de4d9c352fd6..6aca54812db0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -969,8 +969,8 @@ static phys_addr_t __init memblock_alloc_base_nid(phys_addr_t size,
 {
 	phys_addr_t found;
 
-	if (WARN_ON(!align))
-		align = __alignof__(long long);
+	if (!align)
+		align = SMP_CACHE_BYTES;
 
 	/* align @size to avoid excessive fragmentation on reserved array */
 	size = round_up(size, align);
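
For illustration only, here is a minimal user-space sketch of the behavior the patch adopts: an alignment of 0 is silently treated as a request for the default (cache-line) alignment instead of triggering a WARN_ON(). The DEMO_SMP_CACHE_BYTES value and the demo_request_size()/demo_round_up() helpers are stand-ins invented for this sketch, not the kernel's implementation.

#include <stdio.h>
#include <stdint.h>

/* Stand-in for the kernel's SMP_CACHE_BYTES; 64 bytes is assumed for the demo. */
#define DEMO_SMP_CACHE_BYTES 64

/* Round @size up to a multiple of @align (align must be a power of two),
 * mirroring the round_up() call in the hunk above. */
static uint64_t demo_round_up(uint64_t size, uint64_t align)
{
	return (size + align - 1) & ~(align - 1);
}

/* Hypothetical helper: align == 0 now means "use the default alignment"
 * rather than being reported as a caller error. */
static uint64_t demo_request_size(uint64_t size, uint64_t align)
{
	if (!align)
		align = DEMO_SMP_CACHE_BYTES;

	/* align @size to avoid excessive fragmentation, as in the patch */
	return demo_round_up(size, align);
}

int main(void)
{
	/* A caller that passes no alignment gets the cache-line default. */
	printf("size 100, align 0  -> %llu bytes\n",
	       (unsigned long long)demo_request_size(100, 0));
	/* An explicit alignment is still honoured. */
	printf("size 100, align 16 -> %llu bytes\n",
	       (unsigned long long)demo_request_size(100, 16));
	return 0;
}

With the default of 64 assumed above, a 100-byte request with align == 0 is rounded to 128 bytes, while an explicit align of 16 yields 112 bytes; the point of the patch is only that the zero-alignment case no longer warns before falling back to the default.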