| author | Matt Fleming <matt@console-pimps.org> | 2009-10-06 21:22:34 +0000 |
|---|---|---|
| committer | Paul Mundt <lethal@linux-sh.org> | 2009-10-09 11:26:35 +0900 |
| commit | a2767cfb1d9d97c3f861743f1ad595a80b75ec99 (patch) | |
| tree | a00fa2f5873c331656410bd90b42ac5f7f6a63b2 /arch/sh | |
| parent | 2bea7ea7d57fd0022f4cd08ed3d4eb2d39a2920d (diff) | |
sh: Don't allocate smaller sized mappings on every iteration
Currently, we've got the less-than-ideal situation where, if we need to
allocate a 256MB mapping, we'll allocate four entries like so:
entry 1: 128MB
entry 2: 64MB
entry 3: 16MB
entry 4: 16MB
This is because, as we execute the loop in pmb_remap(), we progressively
try to map the remaining address space with smaller and smaller sizes. This
isn't good because the size we use on one iteration may be exactly the right
size to use on the next iteration, for instance when the initial size is
evenly divisible by one of the PMB mapping sizes.
With this patch, we now only need two entries in the PMB to map 256MB of
address space:
entry 1: 128MB
entry 2: 128MB
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Diffstat (limited to 'arch/sh')
| -rw-r--r-- | arch/sh/mm/pmb.c | 7 |
1 file changed, 7 insertions, 0 deletions
```diff
diff --git a/arch/sh/mm/pmb.c b/arch/sh/mm/pmb.c
index 58f935896b44..aade31102112 100644
--- a/arch/sh/mm/pmb.c
+++ b/arch/sh/mm/pmb.c
@@ -269,6 +269,13 @@ again:
 			pmbp->link = pmbe;
 
 		pmbp = pmbe;
+
+		/*
+		 * Instead of trying smaller sizes on every iteration
+		 * (even if we succeed in allocating space), try using
+		 * pmb_sizes[i].size again.
+		 */
+		i--;
 	}
 
 	if (size >= 0x1000000)
```
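For illustration only, here is a minimal, hypothetical userspace sketch (not kernel code) of a single pass through a pmb_remap()-style size-selection loop, with and without the `i--` retry added by the patch. The `one_pass()` helper and its `retry_same_size` flag are invented names, and the size table mirrors the SH PMB mapping sizes (512MB, 128MB, 64MB and 16MB); the sketch only models which sizes get picked and omits allocation, linking and error handling.

```c
/*
 * Hypothetical userspace sketch: one pass through a pmb_remap()-style
 * size-selection loop, with and without the "i--" retry from the patch.
 * It only models which sizes get picked; it does no real mapping and
 * omits the restart pass that handles any leftover space.
 */
#include <stdio.h>

/* Largest-to-smallest, mirroring the SH PMB mapping sizes. */
static const unsigned long pmb_sizes[] = {
	0x20000000,	/* 512MB */
	0x08000000,	/* 128MB */
	0x04000000,	/*  64MB */
	0x01000000,	/*  16MB */
};
#define NR_PMB_SIZES (sizeof(pmb_sizes) / sizeof(pmb_sizes[0]))

static void one_pass(unsigned long size, int retry_same_size)
{
	int i;

	for (i = 0; i < (int)NR_PMB_SIZES; i++) {
		if (size < pmb_sizes[i])
			continue;

		/* "Allocate" one PMB entry of this size. */
		printf("\tentry: %3luMB\n", pmb_sizes[i] >> 20);
		size -= pmb_sizes[i];

		/*
		 * The patched behaviour: retry the same index instead of
		 * falling through to the next (smaller) size.  The loop's
		 * i++ cancels this decrement, so pmb_sizes[i] is tried
		 * again on the next iteration.
		 */
		if (retry_same_size)
			i--;
	}

	printf("\tleftover: %luMB\n", size >> 20);
}

int main(void)
{
	printf("old behaviour (256MB request):\n");
	one_pass(0x10000000, 0);

	printf("patched behaviour (256MB request):\n");
	one_pass(0x10000000, 1);

	return 0;
}
```

For a 256MB request, the pass without the retry picks 128MB + 64MB + 16MB and leaves 48MB for the restart pass that the real code performs, whereas the patched selection picks 128MB + 128MB and maps everything in a single pass.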