Mirror of https://sourceware.org/git/glibc.git
b998e16e71
cases: small copies of up to 16 bytes, medium copies of 17..96 bytes which are fully unrolled, and large copies of more than 96 bytes which align the destination and use an unrolled loop processing 64 bytes per iteration. In order to share code with memmove, small and medium copies read all data before writing, allowing any kind of overlap. All memmoves except for the large backwards case fall into memcpy for optimal performance. On a random copy test memcpy/memmove are 40% faster on Cortex-A57 and 28% faster on Cortex-A53.

	* sysdeps/aarch64/memcpy.S (memcpy): Rewrite of optimized memcpy
	and memmove.
	* sysdeps/aarch64/memmove.S (memmove): Remove memmove code
	(merged into memcpy.S).
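For illustration, here is a rough C sketch of the strategy the commit message describes: the size split and the read-everything-before-writing trick that lets memmove share the memcpy path. It is not the actual assembly; copy_le96, copy_large and the temporary buffer are invented for this sketch, and the real code keeps the data in registers, fully unrolling the 17..96-byte cases with possibly overlapping loads and stores.

/* Rough C sketch of the copy strategy above; not the real AArch64 assembly.
   copy_le96 and copy_large are invented names for this example.  */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Small/medium copies (n <= 96): read every byte before writing any byte,
   so the copy is correct for any overlap between src and dst.  This is why
   memmove can simply fall into memcpy for these sizes.  */
static void
copy_le96 (unsigned char *dst, const unsigned char *src, size_t n)
{
  unsigned char buf[96];
  for (size_t i = 0; i < n; i++)   /* read all data first ...  */
    buf[i] = src[i];
  for (size_t i = 0; i < n; i++)   /* ... then write it out.   */
    dst[i] = buf[i];
}

/* Large copies (n > 96): copy a 16-byte head, align dst to 16 bytes, then
   loop moving 64 bytes per iteration, and finish with the tail.  Only valid
   for non-overlapping (memcpy-style) buffers.  */
static void
copy_large (unsigned char *dst, const unsigned char *src, size_t n)
{
  memcpy (dst, src, 16);                       /* unaligned head         */
  size_t skew = 16 - ((uintptr_t) dst & 15);   /* advance to aligned dst */
  dst += skew; src += skew; n -= skew;
  while (n > 64)
    {
      memcpy (dst, src, 64);                   /* 64 bytes per iteration */
      dst += 64; src += 64; n -= 64;
    }
  memcpy (dst, src, n);                        /* remaining 1..64 bytes  */
}

int
main (void)
{
  char buf[256] = "an overlap test string for the small/medium path";
  copy_le96 ((unsigned char *) buf + 4, (unsigned char *) buf, 50);   /* overlapping */
  copy_large ((unsigned char *) buf + 128, (unsigned char *) buf, 100);
  printf ("%s\n%s\n", buf, buf + 128);
  return 0;
}

The backward copy used for large overlapping memmoves is omitted from this sketch; per the commit message it is the only case that does not fall through to the memcpy path.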
sysdeps/aarch64/memmove.S: 2 lines, 36 B, ArmAsm
/* memmove is part of memcpy.S. */