author | Kaitao Cheng <pilgrimtao@gmail.com> | 2020-01-30 22:13:42 -0800
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-01-31 10:30:38 -0800
commit | 92855270ff0827865109447b59e7e5df9c2bef68 (patch)
tree | f1e71292138614bee4992993a33c30d0dc9f90b5 /mm/memcontrol.c
parent | 10c8d69f314d557d94d74ec492575ae6a4f1eb1c (diff)
download | lwn-92855270ff0827865109447b59e7e5df9c2bef68.tar.gz lwn-92855270ff0827865109447b59e7e5df9c2bef68.zip
mm/memcontrol.c: cleanup some useless code
Compound page handling in mem_cgroup_migrate() is more convoluted than
necessary. The state is duplicated in the local compound variable, and
the same result can be obtained with a trivial PageTransHuge() check,
since hpage_nr_pages() is already PageTransHuge()-aware.

It is much simpler to use hpage_nr_pages() for nr_pages directly and to
replace the local variable with a PageTransHuge() check.
Link: http://lkml.kernel.org/r/20191210160450.3395-1-pilgrimtao@gmail.com
Signed-off-by: Kaitao Cheng <pilgrimtao@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
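For context, hpage_nr_pages() in kernels of this era reduces to roughly the following (a sketch based on include/linux/huge_mm.h with CONFIG_TRANSPARENT_HUGEPAGE enabled; the non-THP stub simply evaluates to 1). Because the helper already returns 1 for ordinary pages, the open-coded "compound ? hpage_nr_pages(newpage) : 1" in the old code was redundant:

/*
 * Sketch of the helper the commit message relies on (based on
 * include/linux/huge_mm.h of that era, CONFIG_TRANSPARENT_HUGEPAGE=y).
 * It is already PageTransHuge()-aware and returns 1 for non-huge pages,
 * so callers do not need their own compound-page guard.
 */
static inline int hpage_nr_pages(struct page *page)
{
	if (unlikely(PageTransHuge(page)))
		return HPAGE_PMD_NR;	/* base pages in a PMD-sized THP */
	return 1;
}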
Diffstat (limited to 'mm/memcontrol.c')
-rw-r--r-- | mm/memcontrol.c | 7
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 27c231bf4565..6f6dc8712e39 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6633,7 +6633,6 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 {
 	struct mem_cgroup *memcg;
 	unsigned int nr_pages;
-	bool compound;
 	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
@@ -6655,8 +6654,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 		return;
 
 	/* Force-charge the new page. The old one will be freed soon */
-	compound = PageTransHuge(newpage);
-	nr_pages = compound ? hpage_nr_pages(newpage) : 1;
+	nr_pages = hpage_nr_pages(newpage);
 
 	page_counter_charge(&memcg->memory, nr_pages);
 	if (do_memsw_account())
@@ -6666,7 +6664,8 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	commit_charge(newpage, memcg, false);
 
 	local_irq_save(flags);
-	mem_cgroup_charge_statistics(memcg, newpage, compound, nr_pages);
+	mem_cgroup_charge_statistics(memcg, newpage, PageTransHuge(newpage),
+				     nr_pages);
 	memcg_check_events(memcg, newpage);
 	local_irq_restore(flags);
 }