migration/ram: Fix populate_read_range()
Unfortunately, commit f7b9dcfbcf broke populate_read_range(): the loop end condition is very wrong, resulting in that function not populating the full range. Let's fix that.

Fixes: f7b9dcfbcf ("migration/ram: Factor out populating pages readable in ram_block_populate_pages()")
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 5f19a44919)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
parent ee2ec0ac52
commit eca533b60a
@@ -1765,13 +1765,15 @@ out:
 static inline void populate_read_range(RAMBlock *block, ram_addr_t offset,
                                        ram_addr_t size)
 {
+    const ram_addr_t end = offset + size;
+
     /*
      * We read one byte of each page; this will preallocate page tables if
      * required and populate the shared zeropage on MAP_PRIVATE anonymous memory
      * where no page was populated yet. This might require adaption when
      * supporting other mappings, like shmem.
      */
-    for (; offset < size; offset += block->page_size) {
+    for (; offset < end; offset += block->page_size) {
         char tmp = *((char *)block->host + offset);
 
         /* Don't optimize the read out */
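To make the off-by-bound behaviour concrete, here is a minimal standalone C sketch (not part of the patch; the PAGE_SIZE constant and the count_pages_* helpers are invented purely for illustration). With a non-zero starting offset, a loop bounded by "offset < size" exits after covering only part of the range, while precomputing "end = offset + size", as the fix does, visits every page in [offset, offset + size).

#include <stdio.h>

#define PAGE_SIZE 4096UL /* hypothetical page size for this demo */

/* Buggy bound: stops as soon as offset reaches 'size', which is too
 * early whenever the range starts at a non-zero offset. */
static unsigned long count_pages_buggy(unsigned long offset, unsigned long size)
{
    unsigned long pages = 0;

    for (; offset < size; offset += PAGE_SIZE) {
        pages++;
    }
    return pages;
}

/* Fixed bound: precompute the exclusive end of the range, so every
 * page in [offset, offset + size) is visited. */
static unsigned long count_pages_fixed(unsigned long offset, unsigned long size)
{
    const unsigned long end = offset + size;
    unsigned long pages = 0;

    for (; offset < end; offset += PAGE_SIZE) {
        pages++;
    }
    return pages;
}

int main(void)
{
    /* A 16-page range starting 8 pages into the block. */
    unsigned long offset = 8 * PAGE_SIZE;
    unsigned long size = 16 * PAGE_SIZE;

    printf("buggy: %lu pages, fixed: %lu pages\n",
           count_pages_buggy(offset, size),  /* prints 8: range cut short */
           count_pages_fixed(offset, size)); /* prints 16: full range */
    return 0;
}

In the buggy variant only the first 8 pages of the 16-page range are touched, which mirrors how the broken populate_read_range() left the tail of a RAMBlock range unpopulated whenever offset was non-zero.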