path: root/MAINTAINERS
author     Nick Piggin <npiggin@suse.de>                           2009-01-06 14:40:44 +1100
committer  Lachlan McIlroy <lachlan@redback.melbourne.sgi.com>     2009-01-09 17:09:25 +1100
commit     958f8c0e4fc311e23a40635a530c01aec366a6e8 (patch)
tree       bc497954825bc438b3765e641e62c3e1740ed8a8 /MAINTAINERS
parent     058652a37dd9eac18d6b8c1a311137c679de9dae (diff)
[XFS] remove old vmap cache
XFS's vmap batching simply defers a number (up to 64) of vunmaps, and keeps track of them in a list. To purge the batch, it just goes through the list and calls vunmap on each one.

This is pretty poor: a global TLB flush is generally still performed on each vunmap, with the most expensive parts of the operation being the broadcast IPIs and locking involved in the SMP callouts, and the locking involved in the vmap management -- none of these are avoided by just batching up the calls. I'm actually surprised it ever made much difference.

(Now that the lazy vmap allocator is upstream, this description is not quite right, but the vunmap batching still doesn't seem to do much.)

Rip all this logic out of XFS completely. I will improve vmap performance and scalability directly in a subsequent patch.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
Diffstat (limited to 'MAINTAINERS')
0 files changed, 0 insertions, 0 deletions