author      Minchan Kim <minchan@kernel.org>                2012-07-31 16:43:53 -0700
committer   Linus Torvalds <torvalds@linux-foundation.org>  2012-07-31 18:42:45 -0700
commit      2cfed0752808625d30aca7fc9f383af386fd8a13 (patch)
tree        5e92da86e42e55b253cc04e10545e05b6b472542 /mm/page_alloc.c
parent      ee6f509c3274014d1f52e7a7a10aee9f85393c5e (diff)
mm: fix free page check in zone_watermark_ok()
__zone_watermark_ok() currently compares free_pages, which is a signed type,
with z->lowmem_reserve[classzone_idx], which is unsigned. The usual arithmetic
conversions make the comparison unsigned, so if free_pages does not satisfy the
given order (or was already negative) its value wraps to a huge positive one,
and we then rely on the following order loop to fix it up, which does not work
for order-0. Fix the type conversion so the check no longer depends on the
incoming value of free_pages or on follow-up fixups.
This fix is needed now because "memory-hotplug: fix kswapd looping forever
problem" depends on it.
As an additional benefit, the high-order case no longer relies on the loop to
exit __zone_watermark_ok(), and the first test (i.e. if (free_pages <= min +
lowmem_reserve)) becomes effective.
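To make the conversion problem concrete, here is a minimal standalone sketch
(user-space C with made-up values, not kernel code) of how the old mixed-sign
comparison misjudges a negative free_pages and how the all-signed comparison
from the patch behaves:

#include <stdio.h>

int main(void)
{
	long free_pages = -5;		/* can go negative after the (1 << order) - 1 adjustment */
	long min = 16;
	unsigned long lowmem_reserve = 32;	/* unsigned, like z->lowmem_reserve[] */

	/*
	 * Old check: the usual arithmetic conversions turn the whole
	 * comparison unsigned, so -5 becomes a huge positive value and the
	 * test wrongly claims there is enough free memory.
	 */
	if (free_pages <= min + lowmem_reserve)
		printf("old check: below watermark (correct)\n");
	else
		printf("old check: above watermark (wrong!)\n");

	/*
	 * Fixed check: snapshot the reserve into a signed long first, so the
	 * comparison stays signed and a negative free_pages fails it.
	 */
	long reserve = lowmem_reserve;
	if (free_pages <= min + reserve)
		printf("new check: below watermark (correct)\n");

	return 0;
}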
Aaditya reported this problem while testing my hotplug patch.
Reported-by: Aaditya Kumar <aaditya.kumar@ap.sony.com>
Tested-by: Aaditya Kumar <aaditya.kumar@ap.sony.com>
Signed-off-by: Aaditya Kumar <aaditya.kumar@ap.sony.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 228194728ccd..2e6635993558 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1595,6 +1595,7 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 {
 	/* free_pages my go negative - that's OK */
 	long min = mark;
+	long lowmem_reserve = z->lowmem_reserve[classzone_idx];
 	int o;
 
 	free_pages -= (1 << order) - 1;
@@ -1603,7 +1604,7 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 	if (alloc_flags & ALLOC_HARDER)
 		min -= min / 4;
 
-	if (free_pages <= min + z->lowmem_reserve[classzone_idx])
+	if (free_pages <= min + lowmem_reserve)
 		return false;
 	for (o = 0; o < order; o++) {
 		/* At the next order, this order's pages become unavailable */
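For reference, the check after the patch can be read as the following
self-contained sketch. It is an approximation, not the kernel function:
watermark_ok_sketch() and the nr_free[] parameter stand in for
__zone_watermark_ok() and zone->free_area[o].nr_free, and the
ALLOC_HIGH/ALLOC_HARDER adjustments to min are omitted:

#include <stdbool.h>

/*
 * Simplified sketch of the fixed check, not the kernel function itself:
 * nr_free[] stands in for zone->free_area[o].nr_free and the alloc_flags
 * adjustments to 'min' are left out.
 */
static bool watermark_ok_sketch(int order, unsigned long mark,
				unsigned long reserve, long free_pages,
				const unsigned long *nr_free)
{
	long min = mark;
	long lowmem_reserve = reserve;	/* snapshot as signed, as in the patch */
	int o;

	free_pages -= (1 << order) - 1;

	/*
	 * Both operands are signed now, so a negative free_pages fails the
	 * test immediately, even for order-0, instead of being converted to
	 * a huge unsigned value.
	 */
	if (free_pages <= min + lowmem_reserve)
		return false;

	for (o = 0; o < order; o++) {
		/* At the next order, this order's pages become unavailable */
		free_pages -= nr_free[o] << o;

		/* Require fewer higher-order pages to be free */
		min >>= 1;

		if (free_pages <= min)
			return false;
	}
	return true;
}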