path: root/fs/fuse/fuse_i.h
author     Miklos Szeredi <mszeredi@redhat.com>    2018-10-01 10:07:06 +0200
committer  Miklos Szeredi <mszeredi@redhat.com>    2018-10-01 10:07:06 +0200
commit     e52a8250480acd3b26534793c61816e30d85fbb6 (patch)
tree       be2ae4d947e7fb876eae27926f70cf9c702243b2 /fs/fuse/fuse_i.h
parent     5da784cce4308ae10a79e3c8c41b13fb9568e4e0 (diff)
fuse: realloc page array
Writeback caching currently allocates requests with the maximum number of
possible pages, while the actual number of pages per request depends on a
couple of factors that cannot be determined when the request is allocated
(whether page is already under writeback, whether page is contiguous with
previous pages already added to a request).

This patch allows such requests to start with no page allocation (all pages
inline) and grow the page array on demand.

If the max_pages tunable remains the default value, then this will mean just
one allocation that is the same size as before.  If the tunable is larger,
then this adds at most 3 additional memory allocations (which is generously
compensated by the improved performance from the larger request).

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
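As a rough illustration of the grow-on-demand scheme the message describes,
here is a minimal userspace-flavoured sketch.  It is not the kernel's
fuse_req_realloc_pages(); every identifier (demo_req, DEMO_INLINE_PAGES,
demo_req_realloc_pages, demo_req_add_page) is invented for the example.  The
request starts with a few inline slots and doubles the slot array only when a
caller actually needs more pages, capped at the per-connection max_pages
limit:

	/* Illustrative sketch only -- not the actual FUSE implementation. */
	#include <stdbool.h>
	#include <stdlib.h>
	#include <string.h>

	#define DEMO_INLINE_PAGES 8	/* slots embedded in the request itself */

	struct demo_req {
		unsigned int max_pages;	/* per-connection limit (tunable) */
		unsigned int num_pages;	/* pages currently in the request */
		unsigned int capacity;	/* allocated slots */
		void **pages;		/* points at inline_pages or a heap array */
		void *inline_pages[DEMO_INLINE_PAGES];
	};

	static void demo_req_init(struct demo_req *req, unsigned int max_pages)
	{
		req->max_pages = max_pages;
		req->num_pages = 0;
		req->capacity = DEMO_INLINE_PAGES;
		req->pages = req->inline_pages;	/* no allocation up front */
	}

	/* Grow the slot array; false if the limit is reached or memory runs out. */
	static bool demo_req_realloc_pages(struct demo_req *req)
	{
		unsigned int new_cap;
		void **new_pages;

		if (req->capacity >= req->max_pages)
			return false;

		new_cap = req->capacity * 2;
		if (new_cap > req->max_pages)
			new_cap = req->max_pages;

		new_pages = malloc(new_cap * sizeof(*new_pages));
		if (!new_pages)
			return false;

		memcpy(new_pages, req->pages, req->num_pages * sizeof(*new_pages));
		if (req->pages != req->inline_pages)
			free(req->pages);
		req->pages = new_pages;
		req->capacity = new_cap;
		return true;
	}

	/* Add one page, growing the array only when the current slots run out. */
	static bool demo_req_add_page(struct demo_req *req, void *page)
	{
		if (req->num_pages == req->capacity &&
		    !demo_req_realloc_pages(req))
			return false;	/* caller should send this request as-is */
		req->pages[req->num_pages++] = page;
		return true;
	}

With 8 inline slots and geometric doubling, a default-sized limit needs no
extra allocation at all, and even a much larger max_pages is reached after
only a few grow steps, which is the "at most 3 additional memory allocations"
trade-off the commit message refers to.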
Diffstat (limited to 'fs/fuse/fuse_i.h')
-rw-r--r--   fs/fuse/fuse_i.h   4
1 files changed, 4 insertions, 0 deletions
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index 3d578745c852..b7d96e7b5e0f 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -879,6 +879,10 @@ struct fuse_req *fuse_request_alloc(unsigned npages);
 
 struct fuse_req *fuse_request_alloc_nofs(unsigned npages);
 
+bool fuse_req_realloc_pages(struct fuse_conn *fc, struct fuse_req *req,
+			    gfp_t flags);
+
+
 /**
  * Free a request
  */
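For context, a hypothetical caller of the declaration added above might look
like the sketch below.  Only the fuse_req_realloc_pages() prototype comes from
this patch; the wrapper name, its parameters, and the surrounding logic are
assumptions meant to show the intended pattern (try to grow the array, and if
that fails, send the request built so far and start a new one):

	/* Hypothetical caller sketch, not taken from the patch. */
	#include <linux/gfp.h>
	#include "fuse_i.h"

	static bool demo_try_grow_for_next_page(struct fuse_conn *fc,
						struct fuse_req *req,
						unsigned int used,
						unsigned int capacity)
	{
		/* Array full: try to grow it under GFP_NOFS (writeback context). */
		if (used == capacity && !fuse_req_realloc_pages(fc, req, GFP_NOFS))
			return false;	/* cannot grow: flush req, start fresh */
		return true;
	}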