For KVM to use Transparent Huge Pages (THP) we have to ensure that the
userspace address of the KVM memory slot and the IPA that the guest
sees for a memory region have the same offset from a 2M huge page
boundary.
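
As an illustration of the requirement (the helper below is hypothetical
and not part of this patch), THP can only back the mapping when the two
addresses are congruent modulo 2M:

    #include <stdbool.h>
    #include <stdint.h>

    #define THP_SIZE (2UL * 1024 * 1024)    /* 2M huge page size */

    /* Hypothetical helper: THP can back the mapping only if the host
     * userspace address and the guest IPA share the same offset
     * within a 2M huge page. */
    static bool thp_alignment_ok(uint64_t uaddr, uint64_t ipa)
    {
        return (uaddr & (THP_SIZE - 1)) == (ipa & (THP_SIZE - 1));
    }
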
One way to achieve this is to always align the IPA region at a 2M
boundary and ensure that the mmap alignment is also at 2M.
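
A minimal sketch of the usual way to get a 2M-aligned anonymous
mapping, by over-allocating and rounding up inside the mapping (the
function name is illustrative, and the sketch skips unmapping the
unused head and tail that real code would take care of):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define ALIGN_2M (2UL * 1024 * 1024)

    /* Illustrative only: map size + 2M, then return the first
     * 2M-aligned address inside the mapping. */
    static void *alloc_2m_aligned(size_t size)
    {
        size_t total = size + ALIGN_2M;
        void *ptr = mmap(NULL, total, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (ptr == MAP_FAILED) {
            return NULL;
        }
        return (void *)(((uintptr_t)ptr + ALIGN_2M - 1) & ~(ALIGN_2M - 1));
    }
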
Unfortunately, we were only doing this for __arm__, not for __aarch64__,
so add this simple condition.
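
The change itself only extends an architecture check; a simplified
sketch of the kind of condition involved (the macro name and the set of
architectures shown are illustrative rather than the exact lines
changed):

    /* Sketch: make the 2M allocation alignment used for __arm__ apply
     * to __aarch64__ as well, so KVM can use THP for guest RAM. */
    #if defined(__linux__) && (defined(__arm__) || defined(__aarch64__))
    #  define QEMU_VMALLOC_ALIGN (512 * 4096)   /* 2M */
    #endif
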
This fixes a performance regression using KVM/ARM on AArch64 platforms
that showed a performance penalty of more than 50%, introduced by the
following commit:
9fac18f (oslib: allocate PROT_NONE pages on top of RAM, 2015-09-10)
We only got lucky before the above commit because we were allocating
large regions, and those allocations happened to come back naturally
aligned to 2M.
Cc: qemu-stable@nongnu.org
Reported-by: Shih-Wei Li <shihwei@cs.columbia.edu>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: wrapped long line]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>