path: root/drivers/android/binder_alloc.h
Age  Commit message  Author
2024-12-24  binder: rename alloc->buffer to vm_start  (Carlos Llamas)
The alloc->buffer field in struct binder_alloc stores the starting address of the mapped vma. Rename this field to alloc->vm_start to better reflect its purpose and to avoid confusion with the binder buffer concept, e.g. transaction->buffer. No functional changes in this patch. Reviewed-by: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20241210143114.661252-7-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-24  binder: replace alloc->vma with alloc->mapped  (Carlos Llamas)
It is unsafe to use alloc->vma outside of the mmap_sem. Instead, add a new boolean alloc->mapped to save the vma state (mapped or unmapped) and use this as a replacement for alloc->vma to validate several paths. Using alloc->vma caused several performance and security issues in the past. Now that it has been replaced with either vm_lookup() or the alloc->mapped state, we can finally remove it. Cc: Minchan Kim <minchan@kernel.org> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Suren Baghdasaryan <surenb@google.com> Reviewed-by: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20241210143114.661252-6-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
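A rough sketch of the pattern described above, assuming simplified names (this is not the driver's actual struct or locking):

#include <linux/mutex.h>
#include <linux/types.h>

/* Sketch: a boolean snapshot of the vma state, updated at mmap/unmap time
 * and consulted instead of dereferencing a cached vma pointer. */
struct example_alloc {
	struct mutex lock;	/* initialized elsewhere with mutex_init() */
	bool mapped;		/* true between ->mmap() and unmap */
};

static bool example_in_mapped_state(struct example_alloc *alloc)
{
	bool mapped;

	mutex_lock(&alloc->lock);
	mapped = alloc->mapped;	/* no vma pointer is touched here */
	mutex_unlock(&alloc->lock);

	return mapped;
}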
2024-12-24  binder: store shrinker metadata under page->private  (Carlos Llamas)
Instead of pre-allocating an entire array of struct binder_lru_page in alloc->pages, install the shrinker metadata under page->private. This ensures the memory is allocated and released as needed alongside pages. By converting the alloc->pages[] into an array of struct page pointers, we can access these pages directly and only reference the shrinker metadata where it's being used (e.g. inside the shrinker's callback). Rename struct binder_lru_page to struct binder_shrinker_mdata to better reflect its purpose. Add convenience functions that wrap the allocation and freeing of pages along with their shrinker metadata. Note I've reworked this patch to avoid using page->lru and page->index directly, as Matthew pointed out that these are being removed [1]. Link: https://lore.kernel.org/all/ZzziucEm3np6e7a0@casper.infradead.org/ [1] Cc: Matthew Wilcox <willy@infradead.org> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20241210143114.661252-5-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
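A minimal sketch of the allocate/free pairing this describes, with illustrative field and helper names (the real binder_shrinker_mdata layout differs):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct example_shrinker_mdata {
	struct list_head lru;		/* entry on the shrinker's list_lru */
	unsigned long page_index;	/* position within the mapping */
};

/* Allocate a page and hang its shrinker metadata off page->private. */
static struct page *example_page_alloc(unsigned long index)
{
	struct example_shrinker_mdata *mdata;
	struct page *page;

	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!page)
		return NULL;

	mdata = kzalloc(sizeof(*mdata), GFP_KERNEL);
	if (!mdata) {
		__free_page(page);
		return NULL;
	}
	mdata->page_index = index;
	set_page_private(page, (unsigned long)mdata);
	return page;
}

/* Release the metadata together with its page. */
static void example_page_free(struct page *page)
{
	kfree((void *)page_private(page));
	set_page_private(page, 0);
	__free_page(page);
}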
2024-12-24  Revert "binder: switch alloc->mutex to spinlock_t"  (Carlos Llamas)
This reverts commit 7710e2cca32e7f3958480e8bd44f50e29d0c2509. In preparation for concurrent page installations, restore the original alloc->mutex which will serialize zap_page_range_single() against page installations in subsequent patches (instead of the mmap_sem). Resolved trivial conflicts with commit 2c10a20f5e84a ("binder_alloc: Fix sleeping function called from invalid context") and commit da0c02516c50 ("mm/list_lru: simplify the list_lru walk callback function"). Cc: Mukesh Ojha <quic_mojha@quicinc.com> Reviewed-by: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20241210143114.661252-2-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-11-11  mm/list_lru: simplify the list_lru walk callback function  (Kairui Song)
Isolation no longer takes the global list_lru node lock; it only uses the per-cgroup lock instead. Since this lock lives inside the list_lru_one being walked, there is no longer any need to pass the lock explicitly. Link: https://lkml.kernel.org/r/20241104175257.60853-7-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Cc: Chengming Zhou <zhouchengming@bytedance.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Waiman Long <longman@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-12-05  binder: switch alloc->mutex to spinlock_t  (Carlos Llamas)
The alloc->mutex is a highly contended lock that causes performance issues on Android devices. When a low-priority task is given this lock and it sleeps, it becomes difficult for the task to wake up and complete its work. This delays other tasks that are also waiting on the mutex. The problem gets worse when there is memory pressure in the system, because this increases the contention on the alloc->mutex while the shrinker reclaims binder pages. Switching to a spinlock helps to keep the waiters running and avoids the overhead of waking up tasks. This significantly improves the transaction latency when the problematic scenario occurs. The performance impact of this patchset was measured by stress-testing the binder alloc contention. In this test, several clients of different priorities send thousands of transactions of different sizes to a single server. In parallel, pages get reclaimed using the shrinker's debugfs. The test was run on a Pixel 8, Pixel 6 and qemu machine. The results were similar on all three devices:

after:
| sched | prio | average | max | min |
|--------+------+---------+-----------+---------|
| fifo | 99 | 0.135ms | 1.197ms | 0.022ms |
| fifo | 01 | 0.136ms | 5.232ms | 0.018ms |
| other | -20 | 0.180ms | 7.403ms | 0.019ms |
| other | 19 | 0.241ms | 58.094ms | 0.018ms |

before:
| sched | prio | average | max | min |
|--------+------+---------+-----------+---------|
| fifo | 99 | 0.350ms | 248.730ms | 0.020ms |
| fifo | 01 | 0.357ms | 248.817ms | 0.024ms |
| other | -20 | 0.399ms | 249.906ms | 0.020ms |
| other | 19 | 0.477ms | 297.756ms | 0.022ms |

The key metrics above are the average and max latencies (wall time). These improvements should roughly translate to p95-p99 latencies on real workloads. The response time is up to 200x faster in these scenarios and there is no penalty in the regular path. Note that it is only possible to convert this lock after a series of changes made by previous patches. These mainly include refactoring the sections that might_sleep() and changing the locking order with the mmap_lock amongst others. Reviewed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20231201172212.1813387-29-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-12-05  binder: rename lru shrinker utilities  (Carlos Llamas)
Now that the page allocation step is done separately, we should rename the binder_free_page_range() and binder_allocate_page_range() functions to provide a more accurate description of what they do. Let's borrow the freelist concept used in other parts of the kernel for this. No functional change here. Signed-off-by: Carlos Llamas <cmllamas@google.com> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Link: https://lore.kernel.org/r/20231201172212.1813387-23-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-12-05  binder: remove pid param in binder_alloc_new_buf()  (Carlos Llamas)
Binder attributes the buffer allocation to the current->tgid every time. There is no need to pass this as a parameter, so drop it. Also add a few touchups to follow the coding guidelines. No functional changes are introduced in this patch. Reviewed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20231201172212.1813387-13-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-12-05  binder: keep vma addresses type as unsigned long  (Carlos Llamas)
The vma addresses in binder are currently stored as void __user *. This requires casting back and forth between the mm/ API, which uses unsigned long. Since we also do internal arithmetic on these addresses, we end up having to cast them _again_ to an integer type. Let's stop all the unnecessary casting, which kills code readability, and store the virtual addresses as the native unsigned long from mm/. Note that this approach is preferred over uintptr_t, as Linus explains in [1]. Opportunistically add a few cosmetic touchups. Link: https://lore.kernel.org/all/CAHk-=wj2OHy-5e+srG1fy+ZU00TmZ1NFp6kFLbVLMXHe7A1d-g@mail.gmail.com/ [1] Signed-off-by: Carlos Llamas <cmllamas@google.com> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Link: https://lore.kernel.org/r/20231201172212.1813387-10-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
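The practical difference is small but shows up everywhere addresses are manipulated; a hedged, standalone illustration with made-up helper names:

#include <linux/types.h>

/* With void __user *, every bit of address arithmetic needs casting... */
static size_t offset_from_ptr(void __user *ptr, void __user *start)
{
	return (size_t)((uintptr_t)ptr - (uintptr_t)start);
}

/* ...while unsigned long matches the mm/ API and needs no casts. */
static size_t offset_from_ulong(unsigned long ptr, unsigned long start)
{
	return ptr - start;
}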
2023-12-05  binder: remove extern from function prototypes  (Carlos Llamas)
The kernel coding style does not require 'extern' in function prototypes in .h files, so remove them from drivers/android/binder_alloc.h as they are not needed. No functional changes in this patch. Reviewed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20231201172212.1813387-9-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
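For instance, a declaration in binder_alloc.h along these lines simply drops the redundant keyword (prototype shown for illustration):

struct binder_alloc;

/* before */
extern void binder_alloc_init(struct binder_alloc *alloc);

/* after: 'extern' is already implicit for function declarations */
void binder_alloc_init(struct binder_alloc *alloc);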
2023-08-04  binder: fix memory leak in binder_init()  (Qi Zheng)
In binder_init(), the cleanup for binder_alloc_shrinker_init() is not performed in the error path, which causes a memory leak. So this commit introduces binder_alloc_shrinker_exit() and calls it in the error path to fix that. Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com> Acked-by: Carlos Llamas <cmllamas@google.com> Fixes: f2517eb76f1f ("android: binder: Add global lru shrinker to binder") Cc: stable <stable@kernel.org> Link: https://lore.kernel.org/r/20230625154937.64316-1-qi.zheng@linux.dev Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
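The shape of such a cleanup helper is roughly the following sketch, assuming module-level lru and shrinker objects with illustrative names (and the unregister_shrinker()/list_lru_destroy() API of that kernel era):

#include <linux/list_lru.h>
#include <linux/shrinker.h>

static struct list_lru example_alloc_lru;	/* registered by the init path */
static struct shrinker example_shrinker;

/* Undo what the shrinker init set up, for binder_init()'s error path. */
void example_alloc_shrinker_exit(void)
{
	unregister_shrinker(&example_shrinker);
	list_lru_destroy(&example_alloc_lru);
}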
2023-05-13  Revert "android: binder: stop saving a pointer to the VMA"  (Carlos Llamas)
This reverts commit a43cfc87caaf46710c8027a8c23b8a55f1078f19. This patch fixed an issue reported by syzkaller in [1]. However, it turned out to be only a band-aid in binder. The root cause, as bisected by syzkaller, was fixed by commit 5789151e48ac ("mm/mmap: undo ->mmap() when mas_preallocate() fails"). We no longer need the patch for binder. Reverting it allows us to have lockless access to alloc->vma in specific cases where the mmap_lock is not required. This approach avoids the contention that caused a performance regression. [1] https://lore.kernel.org/all/0000000000004a0dbe05e1d749e0@google.com [cmllamas: resolved conflicts with rework of alloc->mm and removal of binder_alloc_set_vma() also fixed comment section] Fixes: a43cfc87caaf ("android: binder: stop saving a pointer to the VMA") Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: stable@vger.kernel.org Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20230502201220.1756319-2-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-06  binder: fix binder_alloc kernel-doc warnings  (Carlos Llamas)
Update the kernel-doc section of struct binder_alloc to fix the following warnings reported by ./scripts/kernel-doc: warning: Function parameter or member 'mutex' not described in 'binder_alloc' warning: Function parameter or member 'vma_addr' not described in 'binder_alloc' No functional changes in this patch. Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org> Acked-by: Todd Kjos <tkjos@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20220906135948.3048225-4-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-06  binder: rename alloc->vma_vm_mm to alloc->mm  (Carlos Llamas)
Rename ->vma_vm_mm to ->mm to reflect the fact that we no longer cache this reference from vma->vm_mm but from current->mm instead. No functional changes in this patch. Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org> Acked-by: Todd Kjos <tkjos@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20220906135948.3048225-2-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-01  binder: remove unused binder_alloc->buffer_free  (Carlos Llamas)
The ->buffer_free member was introduced in the first revision of the driver under staging but it appears like it was never actually used according to git's history. Remove it from binder_alloc. Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org> Acked-by: Todd Kjos <tkjos@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20220829201254.1814484-6-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-01  binder: fix trivial kernel-doc typo  (Carlos Llamas)
Correct the misspelling of 'invariant' in kernel-doc section. No functional changes in this patch. Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org> Acked-by: Todd Kjos <tkjos@google.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Link: https://lore.kernel.org/r/20220829201254.1814484-3-cmllamas@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-07-29  android: binder: stop saving a pointer to the VMA  (Liam R. Howlett)
Do not record a pointer to a VMA outside of the mmap_lock for later use. This is unsafe and there are a number of failure paths *after* the recorded VMA pointer may be freed during setup. There is no callback to the driver to clear the saved pointer from generic mm code. Furthermore, the VMA pointer may become stale if any number of VMA operations end up freeing the VMA, so saving it was fragile to begin with. Instead, change the binder_alloc struct to record the start address of the VMA and use vma_lookup() to get the vma when needed. Add lockdep mmap_lock checks on updates to the vma pointer to ensure the lock is held and depend on that lock for synchronization of readers and writers - which was already the case anyways, so the smp_wmb()/smp_rmb() was not necessary. [akpm@linux-foundation.org: fix drivers/android/binder_alloc_selftest.c] Link: https://lkml.kernel.org/r/20220621140212.vpkio64idahetbyf@revolver Fixes: da1b9564e85b ("android: binder: fix the race mmap and alloc_new_buf_locked") Reported-by: syzbot+58b51ac2b04e388ab7b0@syzkaller.appspotmail.com Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Christian Brauner (Microsoft) <brauner@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hridya Valsaraju <hridya@google.com> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Martijn Coenen <maco@android.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Todd Kjos <tkjos@android.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
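The lookup-on-demand pattern this describes looks roughly like the sketch below (the helper name is made up; only the recorded start address is kept):

#include <linux/mm.h>
#include <linux/mmap_lock.h>

/* Re-find the VMA from its recorded start address under the mmap lock,
 * instead of trusting a cached vm_area_struct pointer that may be stale. */
static bool example_mapping_present(struct mm_struct *mm, unsigned long vm_start)
{
	struct vm_area_struct *vma;
	bool present;

	mmap_read_lock(mm);
	vma = vma_lookup(mm, vm_start);
	present = vma != NULL;
	mmap_read_unlock(mm);

	return present;
}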
2021-04-10  binder: tell userspace to dump current backtrace when detected oneway spamming  (Hang Lu)
When the async binder buffer is exhausted, some normal oneway transactions will also be discarded and may cause system or application failures. By that time, the binder debug information we dump may no longer be relevant to the root cause, and the issue is difficult to debug without the backtrace of the thread sending the spam. This change sends BR_ONEWAY_SPAM_SUSPECT to userspace when oneway spamming is detected, requesting a dump of the current backtrace. Oneway spamming is reported only once when the threshold is exceeded (the target process dips below 80% of its oneway space, and the current process is responsible for either more than 50 transactions, or more than 50% of the oneway space), and the detection restarts once the async buffer has returned to a healthy state. Acked-by: Todd Kjos <tkjos@google.com> Signed-off-by: Hang Lu <hangl@codeaurora.org> Link: https://lore.kernel.org/r/1617961246-4502-3-git-send-email-hangl@codeaurora.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-12-09  binder: add flag to clear buffer on txn complete  (Todd Kjos)
Add a per-transaction flag to indicate that the buffer must be cleared when the transaction is complete to prevent copies of sensitive data from being preserved in memory. Signed-off-by: Todd Kjos <tkjos@google.com> Link: https://lore.kernel.org/r/20201120233743.3617529-1-tkjos@google.com Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
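Conceptually the flag maps to a per-buffer bit honored when the buffer is released; a hedged sketch (the struct and helper below are illustrative, not the driver's definitions):

#include <linux/string.h>
#include <linux/types.h>

struct example_buffer {
	unsigned clear_on_free:1;	/* set when the sender requested clearing */
	void *data;
	size_t data_size;
};

/* Wipe the sensitive payload before the buffer is recycled. */
static void example_release_buffer(struct example_buffer *buf)
{
	if (buf->clear_on_free)
		memset(buf->data, 0, buf->data_size);
	/* ...then return the buffer to the allocator's free list... */
}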
2020-09-03  binder: print warnings when detecting oneway spamming.  (Martijn Coenen)
The most common cause of the binder transaction buffer filling up is a client rapidly firing oneway transactions into a process, before it has a chance to handle them. Yet the root cause of this is often hard to debug, because either the system or the app will stop, and by that time binder debug information we dump in bugreports is no longer relevant. This change warns as soon as a process dips below 80% of its oneway space (less than 100kB available in the configuration), when any one process is responsible for either more than 50 transactions, or more than 50% of the oneway space. Signed-off-by: Martijn Coenen <maco@android.com> Acked-by: Todd Kjos <tkjos@google.com> Link: https://lore.kernel.org/r/20200821122544.1277051-1-maco@android.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
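Modelled on the thresholds quoted above (field names and the exact comparisons are paraphrases of this message, not the driver source), the reporting condition is roughly:

#include <linux/types.h>

/*
 * Warn when less than ~20% of the oneway space is still free (i.e. usage
 * above 80%) and a single sender accounts for more than 50 buffers or more
 * than half of the oneway space.
 */
static bool example_oneway_spam_suspected(size_t free_async_space,
					  size_t total_async_space,
					  size_t sender_buffers,
					  size_t sender_bytes)
{
	bool low_space = free_async_space < total_async_space / 5;
	bool dominant = sender_buffers > 50 ||
			sender_bytes > total_async_space / 2;

	return low_space && dominant;
}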
2019-07-01  binder: return errors from buffer copy functions  (Todd Kjos)
The buffer copy functions assumed the caller would ensure correct alignment and that the memory to be copied was completely within the binder buffer. There have been a few cases discovered by syzkaller where a malformed transaction created by a user could violate these assumptions and result in a BUG_ON. The fix is to remove the BUG_ON and always return the error to be handled appropriately by the caller. Acked-by: Martijn Coenen <maco@android.com> Reported-by: syzbot+3ae18325f96190606754@syzkaller.appspotmail.com Fixes: bde4a19fc04f ("binder: use userspace pointer as base of buffer space") Suggested-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Todd Kjos <tkjos@google.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 282  (Thomas Gleixner)
Based on 1 normalized pattern(s): this software is licensed under the terms of the gnu general public license version 2 as published by the free software foundation and may be copied distributed and modified under those terms this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 285 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190529141900.642774971@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  binder: use userspace pointer as base of buffer space  (Todd Kjos)
Now that alloc->buffer points to the userspace vm_area, rename buffer->data to buffer->user_data and rename local pointers that hold user addresses. Also use the "__user" tag to annotate all user pointers so sparse can flag cases where user pointer values are copied to kernel pointers. Refactor code to use offsets instead of user pointers. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  binder: remove user_buffer_offset  (Todd Kjos)
Remove user_buffer_offset since there is no kernel buffer pointer anymore. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  binder: add functions to copy to/from binder buffers  (Todd Kjos)
Avoid vm_area when copying to or from binder buffers. Instead, new copy functions are added that copy from kernel space to binder buffer space. These use kmap_atomic() and kunmap_atomic() to create temporary mappings and then memcpy() is used to copy within that page. Also, kmap_atomic() / kunmap_atomic() use the appropriate cache flushing to support VIVT cache architectures. Allow binder to build if CPU_CACHE_VIVT is defined. Several uses of the new functions are added here. More to follow in subsequent patches. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  binder: create userspace-to-binder-buffer copy function  (Todd Kjos)
The binder driver uses a vm_area to map the per-process binder buffer space. For 32-bit android devices, this is now taking too much vmalloc space. This patch removes the use of vm_area when copying the transaction data from the sender to the buffer space. Instead of using copy_from_user() for multi-page copies, it now uses binder_alloc_copy_user_to_buffer() which uses kmap() and kunmap() to map each page, and uses copy_from_user() for copying to that page. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
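The per-page copy pattern being described is roughly as follows (a simplified sketch, not the exact binder_alloc_copy_user_to_buffer() implementation; the page-array indexing is illustrative):

#include <linux/highmem.h>
#include <linux/minmax.h>
#include <linux/uaccess.h>

/* Copy userspace data into already-allocated pages one page at a time,
 * returning the number of bytes that could not be copied. */
static unsigned long example_copy_user_to_pages(struct page **pages,
						unsigned long offset,
						const void __user *from,
						size_t bytes)
{
	while (bytes) {
		unsigned long pg_off = offset & ~PAGE_MASK;
		size_t size = min_t(size_t, bytes, PAGE_SIZE - pg_off);
		struct page *page = pages[offset >> PAGE_SHIFT];
		void *kptr = kmap(page) + pg_off;	/* temporary kernel mapping */
		unsigned long left = copy_from_user(kptr, from, size);

		kunmap(page);
		if (left)
			return bytes - size + left;	/* bytes left uncopied */
		bytes -= size;
		offset += size;
		from += size;
	}
	return 0;
}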
2018-12-06  binder: fix kerneldoc header for struct binder_buffer  (Todd Kjos)
Fix the incomplete kerneldoc header for struct binder_buffer. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-26  binder: fix race that allows malicious free of live buffer  (Todd Kjos)
Malicious code can attempt to free buffers using the BC_FREE_BUFFER ioctl to binder. There are protections against a user freeing a buffer while in use by the kernel, however there was a window where BC_FREE_BUFFER could be used to free a recently allocated buffer that was not completely initialized. This resulted in a use-after-free detected by KASAN with a malicious test program. This window is closed by setting the buffer's allow_user_free attribute to 0 when the buffer is allocated or when the user has previously freed it instead of waiting for the caller to set it. The problem was that when the struct buffer was recycled, allow_user_free was stale and set to 1 allowing a free to go through. Signed-off-by: Todd Kjos <tkjos@google.com> Acked-by: Arve Hjønnevåg <arve@android.com> Cc: stable <stable@vger.kernel.org> # 4.14 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-18  android: binder: Check for errors in binder_alloc_shrinker_init().  (Tetsuo Handa)
Both list_lru_init() and register_shrinker() might return an error. Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Sherry Yang <sherryy@android.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
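The fix amounts to propagating both return values; a minimal sketch with illustrative names (using the register_shrinker() signature of that era):

#include <linux/list_lru.h>
#include <linux/shrinker.h>

static struct list_lru example_alloc_lru;
static struct shrinker example_shrinker;	/* callbacks set up elsewhere */

int example_alloc_shrinker_init(void)
{
	int ret = list_lru_init(&example_alloc_lru);

	if (ret)
		return ret;

	ret = register_shrinker(&example_shrinker);
	if (ret)
		list_lru_destroy(&example_alloc_lru);	/* undo on failure */

	return ret;
}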
2017-11-28  ANDROID: binder: show high watermark of alloc->pages.  (Martijn Coenen)
Show the high watermark of the index into the alloc->pages array, to facilitate sizing the buffer on a per-process basis. Signed-off-by: Martijn Coenen <maco@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  android: binder: Don't get mm from task  (Sherry Yang)
Use binder_alloc struct's mm_struct rather than getting a reference to the mm struct through get_task_mm to avoid a potential deadlock between lru lock, task lock and dentry lock, since a thread can be holding the task lock and the dentry lock while trying to acquire the lru lock. Acked-by: Arve Hjønnevåg <arve@android.com> Signed-off-by: Sherry Yang <sherryy@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01  android: binder: Add page usage in binder stats  (Sherry Yang)
Add the number of active, lru, and free pages for each binder process in binder stats Signed-off-by: Sherry Yang <sherryy@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-28  android: binder: Add global lru shrinker to binder  (Sherry Yang)
Hold on to the pages allocated and mapped for transaction buffers until the system is under memory pressure. When that happens, use the Linux shrinker to free pages. Without the shrinker, the patch "android: binder: Move buffer out of area shared with user space" would cause a significant slowdown for small transactions that fit into the first page, because the free list buffer header used to be inlined with the buffer data. In addition to preventing this performance regression for small transactions, this patch improves the performance for transactions that take up more than one page. Modify the alloc selftest to work with the shrinker change. Test: Run memory intensive applications (Chrome and Camera) to trigger shrinker callbacks. Binder frees memory as expected. Test: Run binderThroughputTest with high memory pressure option enabled. Signed-off-by: Sherry Yang <sherryy@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
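The registration follows the standard shrinker shape; a hedged sketch of the hookup (names and callback bodies are placeholders, not the driver's code):

#include <linux/list_lru.h>
#include <linux/shrinker.h>

static struct list_lru example_freelist;	/* pages kept around for reuse */

static unsigned long example_shrink_count(struct shrinker *shrink,
					  struct shrink_control *sc)
{
	return list_lru_count(&example_freelist);	/* pages we could free */
}

static unsigned long example_shrink_scan(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	unsigned long freed = 0;

	/* Walk up to sc->nr_to_scan lru entries here, unmapping and freeing
	 * each page and incrementing 'freed' (the driver does this with a
	 * list_lru walk). */
	return freed;
}

static struct shrinker example_shrinker = {
	.count_objects = example_shrink_count,
	.scan_objects = example_shrink_scan,
	.seeks = DEFAULT_SEEKS,
};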
2017-08-28  android: binder: Move buffer out of area shared with user space  (Sherry Yang)
Binder driver allocates buffer meta data in a region that is mapped in user space. These meta data contain pointers in the kernel. This patch allocates buffer meta data on the kernel heap that is not mapped in user space, and uses a pointer to refer to the data mapped. Signed-off-by: Sherry Yang <sherryy@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-28  android: binder: Add allocator selftest  (Sherry Yang)
binder_alloc_selftest tests that alloc_new_buf handles page allocation and deallocation properly when allocating and freeing buffers. The test allocates 5 buffers of various sizes to cover all possible page alignment cases, and frees the buffers using an exhaustive list of freeing orders. Test: boot the device with the ANDROID_BINDER_IPC_SELFTEST config option enabled. Allocator selftest passes. Signed-off-by: Sherry Yang <sherryy@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17  binder: protect against two threads freeing buffer  (Todd Kjos)
Adds protection against malicious user code freeing the same buffer at the same time which could cause a crash. Cannot happen under normal use. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17  binder: move binder_alloc to separate file  (Todd Kjos)
Move the binder allocator functionality to its own file. Continuation of splitting the binder allocator from the binder driver. Split binder_alloc functions from normal binder functions. Add kernel doc comments to functions declared extern in binder_alloc.h. Signed-off-by: Todd Kjos <tkjos@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>