| author | Junchao Sun <sunjunchao2870@gmail.com> | 2024-06-07 22:30:21 +0800 |
|---|---|---|
| committer | David Sterba <dsterba@suse.com> | 2024-09-10 16:51:18 +0200 |
| commit | 3cce39a8ca4e7565b6ac1fbcf858171bf07c3757 (patch) | |
| tree | d4b39b6b9c5029c51ae0c1198821e844e6d5ef11 /fs/btrfs/delayed-ref.c | |
| parent | 14ed830d10322007565af3a0da39948f229a72d6 (diff) | |
btrfs: qgroup: use xarray to track dirty extents in transaction
Use an xarray to track dirty extents, reducing the size of struct
btrfs_qgroup_extent_record from 64 bytes to 40 bytes. The xarray is
more cache-line friendly, and it also reduces the complexity of the
insertion and search code compared to the rb-tree.
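
As a rough illustration of the change described above, here is a hypothetical sketch; the struct fields and helper names below are simplified stand-ins, not the real struct btrfs_qgroup_extent_record or btrfs helpers. Dropping the embedded rb_node (24 bytes on 64-bit) accounts for the 64-to-40-byte shrink, and insertion/lookup become single indexed xarray calls keyed by bytenr instead of rb-tree link-and-rebalance code.

```c
/* Hypothetical sketch only; not the actual btrfs definitions. */
#include <linux/types.h>
#include <linux/rbtree.h>
#include <linux/gfp.h>
#include <linux/xarray.h>

struct extent_record_old {
        struct rb_node node;    /* 24 bytes on 64-bit; removed by this change */
        u64 bytenr;
        u64 num_bytes;
        /* ... other qgroup bookkeeping fields ... */
};

struct extent_record_new {
        /* No rb_node: the record is found through the xarray index (bytenr). */
        u64 bytenr;
        u64 num_bytes;
        /* ... other qgroup bookkeeping fields ... */
};

/* Insertion is one call; -EBUSY means the extent is already tracked. */
static int track_extent(struct xarray *dirty_extents, struct extent_record_new *rec)
{
        return xa_insert(dirty_extents, rec->bytenr, rec, GFP_NOFS);
}

/* Lookup is a single indexed load rather than an rb-tree search loop. */
static struct extent_record_new *find_extent(struct xarray *dirty_extents, u64 bytenr)
{
        return xa_load(dirty_extents, bytenr);
}
```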
Another change concerns error handling. Before this patch,
btrfs_qgroup_trace_extent_nolock() always succeeded. It now calls
xa_store(), which can fail, so mark the qgroup as inconsistent if an
error occurs and then free the preallocated memory. Memory is also
preallocated before taking the spin_lock(); if preallocation fails,
the error handling is the same as in the existing code.
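
A minimal sketch of that flow, under assumed names (the real code splits it across add_delayed_ref(), add_delayed_ref_head(), and btrfs_qgroup_trace_extent_nolock(), as the diff below shows):

```c
/* Sketch only; the lock and function names here are illustrative assumptions. */
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/xarray.h>

static int record_dirty_extent(struct xarray *dirty_extents, spinlock_t *lock,
                               u64 bytenr, void *record)
{
        int ret;

        /* Preallocate the slot while sleeping allocations are still allowed. */
        ret = xa_reserve(dirty_extents, bytenr, GFP_NOFS);
        if (ret)
                return ret;     /* handled like any other allocation failure */

        spin_lock(lock);
        /*
         * The reservation means this store should not need to allocate;
         * GFP_ATOMIC keeps it legal under the spinlock in any case.
         */
        ret = xa_err(xa_store(dirty_extents, bytenr, record, GFP_ATOMIC));
        spin_unlock(lock);

        /* On failure, drop the unused reservation so the slot is not leaked. */
        if (ret)
                xa_release(dirty_extents, bytenr);
        return ret;
}
```

On failure the caller would then free the preallocated record and mark qgroup tracking inconsistent, as described above.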
Suggested-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'fs/btrfs/delayed-ref.c')
-rw-r--r-- | fs/btrfs/delayed-ref.c | 17 |
1 file changed, 14 insertions, 3 deletions
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 0bfa014b796d..ad9ef8312e41 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -855,11 +855,17 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans,
 
 	/* Record qgroup extent info if provided */
 	if (qrecord) {
-		if (btrfs_qgroup_trace_extent_nolock(trans->fs_info,
-						     delayed_refs, qrecord))
+		int ret;
+
+		ret = btrfs_qgroup_trace_extent_nolock(trans->fs_info,
+						       delayed_refs, qrecord);
+		if (ret) {
+			/* Clean up if insertion fails or item exists. */
+			xa_release(&delayed_refs->dirty_extents, qrecord->bytenr);
 			kfree(qrecord);
-		else
+		} else {
 			qrecord_inserted = true;
+		}
 	}
 
 	trace_add_delayed_ref_head(trans->fs_info, head_ref, action);
@@ -1012,6 +1018,9 @@ static int add_delayed_ref(struct btrfs_trans_handle *trans,
 		record = kzalloc(sizeof(*record), GFP_NOFS);
 		if (!record)
 			goto free_head_ref;
+		if (xa_reserve(&trans->transaction->delayed_refs.dirty_extents,
+			       generic_ref->bytenr, GFP_NOFS))
+			goto free_record;
 	}
 
 	init_delayed_ref_common(fs_info, node, generic_ref);
@@ -1048,6 +1057,8 @@ static int add_delayed_ref(struct btrfs_trans_handle *trans,
 		return btrfs_qgroup_trace_extent_post(trans, record);
 	return 0;
 
+free_record:
+	kfree(record);
 free_head_ref:
 	kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref);
 free_node: