Mirror of https://github.com/edk2-porting/linux-next.git
Commit e26a9e00af
virt_to_page() is incredibly inefficient when virt-to-phys patching is
enabled. This is because we end up with this calculation:

    page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly. The asm virt_to_phys() is equivalent to this operation:

    addr - PAGE_OFFSET + __pv_phys_offset

and we can see that because this is assembly, the compiler has no chance
to optimise some of that away. This should reduce down to:

    page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases. Permit the compiler to make this optimisation by
giving it more of the information it needs - do this by providing a
virt_to_pfn() macro.

Another issue which makes this more complex is that __pv_phys_offset is
a 64-bit type on all platforms. This is needlessly wasteful - if we
store the physical offset as a PFN, we can save a lot of work having to
deal with 64-bit values, which sometimes ends up producing incredibly
horrid code:

    a4c:  e3009000  movw  r9, #0
          a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
    a50:  e3409000  movt  r9, #0        ; r9 = &__pv_phys_offset
          a50: R_ARM_MOVT_ABS     __pv_phys_offset
    a54:  e3002000  movw  r2, #0
          a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
    a58:  e3402000  movt  r2, #0        ; r2 = &__pv_phys_offset
          a58: R_ARM_MOVT_ABS     __pv_phys_offset
    a5c:  e5999004  ldr   r9, [r9, #4]  ; r9 = high word of __pv_phys_offset
    a60:  e3001000  movw  r1, #0
          a60: R_ARM_MOVW_ABS_NC  mem_map
    a64:  e592c000  ldr   ip, [r2]      ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
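To make the optimisation concrete, here is a minimal C sketch of the macro
shape the commit message describes. It is not the actual kernel patch:
PAGE_OFFSET, PAGE_SHIFT, PHYS_PFN_OFFSET and the flat mem_map model are
simplified stand-ins, and the real pfn_to_page() depends on the configured
memory model. The point it illustrates is that once the conversion is plain
C arithmetic on PFNs held in an unsigned long (rather than an opaque asm
virt_to_phys() on a 64-bit offset), the compiler can fold the
+PHYS_PFN_OFFSET in virt_to_pfn() against the -PHYS_PFN_OFFSET in
pfn_to_page(), leaving just &mem_map[(addr - PAGE_OFFSET) >> PAGE_SHIFT].

```c
#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_OFFSET     0xc0000000UL                   /* toy direct-map base */
#define PHYS_PFN_OFFSET (0x80000000UL >> PAGE_SHIFT)   /* RAM base stored as a PFN,
                                                          an unsigned long, not 64-bit */

struct page { unsigned long flags; };
static struct page mem_map[16];                        /* toy flat memory map */

/* Plain C arithmetic: visible to the optimiser, unlike an asm helper. */
#define virt_to_pfn(kaddr) \
	((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET)

/* The +/- PHYS_PFN_OFFSET pair cancels when these two are composed. */
#define pfn_to_page(pfn)    (&mem_map[(pfn) - PHYS_PFN_OFFSET])
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))

int main(void)
{
	/* A virtual address three pages above the direct-map base. */
	unsigned long vaddr = PAGE_OFFSET + 3 * (1UL << PAGE_SHIFT);

	/* Reduces to &mem_map[(vaddr - PAGE_OFFSET) >> PAGE_SHIFT]: prints 3. */
	printf("page index: %td\n", virt_to_page(vaddr) - mem_map);
	return 0;
}
```

Storing the offset as a PFN in an unsigned long also avoids the paired
low/high-word loads of __pv_phys_offset shown in the disassembly above.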
Directory listing:

- asm
- debug
- uapi/asm