author | Miklos Szeredi <mszeredi@redhat.com> | 2018-10-01 10:07:06 +0200
---|---|---
committer | Miklos Szeredi <mszeredi@redhat.com> | 2018-10-01 10:07:06 +0200
commit | e52a8250480acd3b26534793c61816e30d85fbb6 (patch) |
tree | be2ae4d947e7fb876eae27926f70cf9c702243b2 /fs/fuse/file.c |
parent | 5da784cce4308ae10a79e3c8c41b13fb9568e4e0 (diff) |
download | lwn-e52a8250480acd3b26534793c61816e30d85fbb6.tar.gz lwn-e52a8250480acd3b26534793c61816e30d85fbb6.zip |
fuse: realloc page array
Writeback caching currently allocates requests with the maximum possible number of
pages, while the actual number of pages per request depends on a couple of
factors that cannot be determined when the request is allocated (whether the
page is already under writeback, and whether it is contiguous with the pages
already added to the request).
This patch allows such requests to start with no page allocation (all pages
inline) and grow the page array on demand.
If the max_pages tunable remains at its default value, this will mean just one
allocation of the same size as before. If the tunable is larger, this adds at
most 3 additional memory allocations (a cost generously compensated for by the
improved performance of the larger request).
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
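For illustration, here is a minimal userspace model of the grow-on-demand page array described in the message above. It is a sketch only: the `REQ_INLINE_PAGES`/`DEFAULT_MAX_PAGES` constants and the growth policy assumed here (jump to a 32-page default on the first growth, then double up to an assumed connection-wide limit of 256) are illustrative assumptions chosen to match the "one allocation plus at most 3 more" arithmetic; the real helper, fuse_req_realloc_pages(), is added outside the fs/fuse/file.c portion of the diff shown below and takes gfp flags (GFP_NOFS at this call site) so it can allocate from writeback context.

```c
/*
 * Minimal userspace model of a request page array that starts inline and
 * grows on demand.  Constants and growth policy are assumptions for
 * illustration; the real kernel helper is fuse_req_realloc_pages() and is
 * not part of the fs/fuse/file.c hunk shown on this page.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REQ_INLINE_PAGES  1	/* page pointers embedded in the request */
#define DEFAULT_MAX_PAGES 32	/* assumed default request size ("as before") */

struct page;			/* opaque stand-in for the kernel's struct page */

struct req {
	struct page **pages;	/* current array, may point at inline_pages */
	unsigned int max_pages;	/* capacity of the current array */
	unsigned int num_pages;	/* entries actually in use */
	struct page *inline_pages[REQ_INLINE_PAGES];
};

static void req_init(struct req *req)
{
	req->pages = req->inline_pages;	/* no separate allocation up front */
	req->max_pages = REQ_INLINE_PAGES;
	req->num_pages = 0;
}

/*
 * Grow a full array: first to the old default size, then by doubling,
 * capped at the per-connection limit.  Returns false on failure, leaving
 * the request usable with its current (full) array.
 */
static bool req_realloc_pages(struct req *req, unsigned int conn_max_pages)
{
	unsigned int new_max = req->max_pages * 2;
	struct page **pages;

	if (new_max < DEFAULT_MAX_PAGES)
		new_max = DEFAULT_MAX_PAGES;
	if (new_max > conn_max_pages)
		new_max = conn_max_pages;
	if (new_max <= req->max_pages)
		return false;		/* already at the limit */

	pages = malloc(new_max * sizeof(*pages));
	if (!pages)
		return false;

	memcpy(pages, req->pages, req->max_pages * sizeof(*pages));
	if (req->pages != req->inline_pages)
		free(req->pages);	/* inline storage is never freed */

	req->pages = pages;
	req->max_pages = new_max;
	return true;
}

int main(void)
{
	struct req req;
	unsigned int i, grows = 0;

	req_init(&req);
	for (i = 0; i < 256; i++) {
		if (req.num_pages == req.max_pages) {
			if (!req_realloc_pages(&req, 256))
				break;	/* the kernel flushes the request instead */
			grows++;
		}
		req.pages[req.num_pages++] = NULL;	/* pretend to add a page */
	}
	/* With a 256-page limit: 256 pages, 4 array allocations (1 + 3). */
	printf("%u pages, %u array allocations\n", req.num_pages, grows);

	if (req.pages != req.inline_pages)
		free(req.pages);
	return 0;
}
```

With a connection limit equal to the assumed 32-page default, the loop performs exactly one array allocation, the same as before the patch; with the limit raised to 256 it performs four, i.e. one plus the "at most 3 additional" growths mentioned above.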
Diffstat (limited to 'fs/fuse/file.c')
-rw-r--r-- | fs/fuse/file.c | 8 |
1 file changed, 7 insertions, 1 deletion
```diff
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 035843b501fe..f5507198ea00 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1827,7 +1827,13 @@ static int fuse_writepages_fill(struct page *page,
 	     data->orig_pages[req->num_pages - 1]->index + 1 != page->index)) {
 		fuse_writepages_send(data);
 		data->req = NULL;
+	} else if (req && req->num_pages == req->max_pages) {
+		if (!fuse_req_realloc_pages(fc, req, GFP_NOFS)) {
+			fuse_writepages_send(data);
+			req = data->req = NULL;
+		}
 	}
+
 	err = -ENOMEM;
 	tmp_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
 	if (!tmp_page)
@@ -1850,7 +1856,7 @@ static int fuse_writepages_fill(struct page *page,
 		struct fuse_inode *fi = get_fuse_inode(inode);
 
 		err = -ENOMEM;
-		req = fuse_request_alloc_nofs(fc->max_pages);
+		req = fuse_request_alloc_nofs(FUSE_REQ_INLINE_PAGES);
 		if (!req) {
 			__free_page(tmp_page);
 			goto out_unlock;
```
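Two details of the hunks above are worth noting. New requests are now allocated with FUSE_REQ_INLINE_PAGES instead of fc->max_pages, so the worst-case page array is no longer reserved up front. And when a request's array fills up (req->num_pages == req->max_pages), the code first tries fuse_req_realloc_pages(); only if that allocation fails is the partially built request flushed with fuse_writepages_send() and a fresh one started, so memory pressure degrades into smaller writeback requests rather than an error.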