author     Vishal Verma <vishal.l.verma@intel.com>    2016-04-21 15:13:46 -0400
committer  Vishal Verma <vishal.l.verma@intel.com>    2016-05-18 12:16:57 -0600
commit     4b0228fa1d753f77fe0e6cf4c41398ec77dfbd2a (patch)
tree       323a881d26a983060f82477b8391717fa5b10e9d /fs/dax.c
parent     679c8bd3b29428e736eabb7fc66a978f312f0c86 (diff)
dax: for truncate/hole-punch, do zeroing through the driver if possible
In the truncate or hole-punch path in dax, we clear out sub-page ranges.
If these sub-page ranges are sector-aligned and sector-sized, we can do the
zeroing through the driver instead, so that error-clearing is handled
automatically.
For sub-sector ranges, we still have to rely on clear_pmem and have the
possibility of tripping over errors.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
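
For readers skimming the change, the decision the commit message describes boils down to an alignment check followed by a byte-to-sector conversion. The standalone sketch below is only an illustration of that dispatch, not the kernel code: IS_ALIGNED is re-defined locally, and zero_via_driver()/zero_via_memcpy() are hypothetical stand-ins for blkdev_issue_zeroout() and the clear_pmem() fallback in the patch.

```c
/*
 * Standalone illustration of the dispatch added by this patch (not kernel
 * code).  A sub-page zeroing request is routed to the block driver only
 * when both its offset and length are multiples of the device's logical
 * block size; otherwise it falls back to a CPU memory clear.
 */
#include <stdbool.h>
#include <stdio.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Hypothetical stand-in for blkdev_issue_zeroout() */
static void zero_via_driver(unsigned long long start_sector,
			    unsigned int nr_sectors)
{
	printf("driver zeroout: start sector %llu, %u sectors\n",
	       start_sector, nr_sectors);
}

/* Hypothetical stand-in for the clear_pmem() fallback */
static void zero_via_memcpy(unsigned int offset, unsigned int length)
{
	printf("clear_pmem fallback: offset %u, length %u\n", offset, length);
}

static void zero_sub_page(unsigned long long page_sector, unsigned int offset,
			  unsigned int length, unsigned int logical_block_size)
{
	if (IS_ALIGNED(offset, logical_block_size) &&
	    IS_ALIGNED(length, logical_block_size))
		/* block-layer sectors are 512 bytes, hence the shift by 9 */
		zero_via_driver(page_sector + (offset >> 9), length >> 9);
	else
		zero_via_memcpy(offset, length);
}

int main(void)
{
	/* whole page, 512-byte logical blocks: takes the driver path */
	zero_sub_page(1024, 0, 4096, 512);
	/* odd 412-byte range at offset 100: still uses clear_pmem */
	zero_sub_page(1024, 100, 412, 512);
	return 0;
}
```

The point of sending the aligned case through the driver is the one the commit message makes: a write submitted to the pmem block driver can clear known media errors in the zeroed range, while the memcpy-style clear_pmem() path can only trip over them.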
Diffstat (limited to 'fs/dax.c')
-rw-r--r--   fs/dax.c   30
 1 file changed, 25 insertions(+), 5 deletions(-)
@@ -947,6 +947,19 @@ int dax_pfn_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
 }
 EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
 
+static bool dax_range_is_aligned(struct block_device *bdev,
+				 unsigned int offset, unsigned int length)
+{
+	unsigned short sector_size = bdev_logical_block_size(bdev);
+
+	if (!IS_ALIGNED(offset, sector_size))
+		return false;
+	if (!IS_ALIGNED(length, sector_size))
+		return false;
+
+	return true;
+}
+
 int __dax_zero_page_range(struct block_device *bdev, sector_t sector,
 		unsigned int offset, unsigned int length)
 {
@@ -955,11 +968,18 @@ int __dax_zero_page_range(struct block_device *bdev, sector_t sector,
 		.size = PAGE_SIZE,
 	};
 
-	if (dax_map_atomic(bdev, &dax) < 0)
-		return PTR_ERR(dax.addr);
-	clear_pmem(dax.addr + offset, length);
-	wmb_pmem();
-	dax_unmap_atomic(bdev, &dax);
+	if (dax_range_is_aligned(bdev, offset, length)) {
+		sector_t start_sector = dax.sector + (offset >> 9);
+
+		return blkdev_issue_zeroout(bdev, start_sector,
+				length >> 9, GFP_NOFS, true);
+	} else {
+		if (dax_map_atomic(bdev, &dax) < 0)
+			return PTR_ERR(dax.addr);
+		clear_pmem(dax.addr + offset, length);
+		wmb_pmem();
+		dax_unmap_atomic(bdev, &dax);
+	}
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__dax_zero_page_range);