author	Vitaly Kuznetsov <vkuznets@redhat.com>	2021-01-28 13:01:31 -0500
committer	Paolo Bonzini <pbonzini@redhat.com>	2021-02-09 08:17:08 -0500
commit	4fc096a99e01dd06dc55bef76ade7f8d76653245 (patch)
tree	d02ee91f59c3b01892a974066c6ee3f962dcee89 /arch/s390
parent	281d9cd9b471a28382ac79be9b5cd59b72ae5c87 (diff)
KVM: Raise the maximum number of user memslots
Current KVM_USER_MEM_SLOTS limits are arch specific (512 on Power, 509 on x86, 32 on s390, 16 on MIPS) but they don't really need to be. Memory slots are allocated dynamically in KVM when added, so the only real limitation is the 'id_to_index' array, which is 'short'. We don't have any other statically defined structures sized by KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS.

A low KVM_USER_MEM_SLOTS can be a limiting factor for some configurations. In particular, when QEMU tries to start a Windows guest with Hyper-V SynIC enabled and e.g. 256 vCPUs, the limit is hit: SynIC requires two pages per vCPU, and the guest is free to pick any GFN for each of them. This fragments memslots, as QEMU wants a separate memslot for each of these pages (which are supposed to act as 'overlay' pages).

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210127175731.2020089-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
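The diffstat below is limited to arch/s390, so only the removal of the per-arch define is visible here. As a rough sketch of the generic side implied by the commit message (the 'short' id_to_index array being the only real bound), an arch-independent cap could look roughly like the following; the exact macro names, the SHRT_MAX derivation and the KVM_PRIVATE_MEM_SLOTS reservation are illustrative assumptions, not taken from the hunk shown on this page:

/*
 * Illustrative sketch only: with slot lookup going through a 'short'
 * id_to_index array, the limit can be derived from SHRT_MAX instead of
 * being hard-coded per architecture.  KVM_PRIVATE_MEM_SLOTS stands for
 * internal (non-user) slots an architecture may reserve for itself.
 */
#include <linux/limits.h>	/* SHRT_MAX */

#define KVM_MEM_SLOTS_NUM	SHRT_MAX
#define KVM_USER_MEM_SLOTS	(KVM_MEM_SLOTS_NUM - KVM_PRIVATE_MEM_SLOTS)

/*
 * Why the old per-arch limits hurt: a Windows guest with Hyper-V SynIC
 * and 256 vCPUs may need up to 256 * 2 = 512 separate 'overlay'
 * memslots, already more than the previous x86 limit of 509.
 */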
Diffstat (limited to 'arch/s390')
-rw-r--r--	arch/s390/include/asm/kvm_host.h	1
1 file changed, 0 insertions, 1 deletion
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 74f9a036bab2..6bcfc5614bbc 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -28,7 +28,6 @@
#define KVM_S390_BSCA_CPU_SLOTS 64
#define KVM_S390_ESCA_CPU_SLOTS 248
#define KVM_MAX_VCPUS 255
-#define KVM_USER_MEM_SLOTS 32
/*
* These seem to be used for allocating ->chip in the routing table, which we