These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.
The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Invoke xen_modified_memory from cpu_physical_memory_set_dirty_range_nocode;
it is akin to DIRTY_MEMORY_MIGRATION, so set it together with that bitmap.
The remaining call from invalidate_and_set_dirty's "else" branch will go
away soon.
Second, fix the second argument to the function in the
cpu_physical_memory_set_dirty_lebitmap call site. That function is only used
by KVM, but it is better to be clean anyway.
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.
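For illustration, the RAM initialization paths can fold the TCG bit into the
region's mask once, roughly like this (a sketch, not the exact hunk):

    /* Sketch: done once when the RAM region is created, so later checks
     * only need to look at mr->dirty_log_mask. */
    mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;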
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Once the dirty log mask also covers bits other than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.
For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.
For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.
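For reference, the listener callbacks now receive both masks, along these
lines (a sketch; see the patch for the exact prototypes):

    void (*log_start)(MemoryListener *listener, MemoryRegionSection *section,
                      int old, int new);
    void (*log_stop)(MemoryListener *listener, MemoryRegionSection *section,
                     int old, int new);

so KVM can test whether old or new is zero, while Xen can test whether
(old ^ new) touches the DIRTY_MEMORY_VGA bit.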
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.
While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
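For illustration, the two call patterns after the split look roughly like
this (sketch):

    /* VGA-style users (and the Xen listener) keep checking a single bit: */
    if (memory_region_is_logging(mr, DIRTY_MEMORY_VGA)) {
        /* ... */
    }

    /* KVM and vhost check "any bit except migration" via the whole mask: */
    if (memory_region_get_dirty_log_mask(mr) & ~(1 << DIRTY_MEMORY_MIGRATION)) {
        /* ... */
    }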
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
DIRTY_MEMORY_MIGRATION is triggered by memory_global_dirty_log_start
and memory_global_dirty_log_stop, so it cannot be used with
memory_region_set_log.
Specify this in the documentation and assert it.
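In practice the assertion amounts to something like this inside
memory_region_set_log (sketch):

    void memory_region_set_log(MemoryRegion *mr, bool log, unsigned client)
    {
        /* DIRTY_MEMORY_MIGRATION is driven only by
         * memory_global_dirty_log_start/stop. */
        assert(client == DIRTY_MEMORY_VGA);
        /* ... */
    }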
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half---there is still a measurable speedup on PPC with the
next patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1424436345-37924-3-git-send-email-pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
GICv3 ITS distinguishes between devices by using hardwired device IDs passed on the bus.
This patch implements passing these IDs in QEMU.
The SMMU is also known to use stream IDs, therefore this addition can also be useful for
implementing platforms with an SMMU.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Changes from v1:
- Added the bus number to the stream ID
- Added the stream ID not only to MSI-X, but also to plain MSI. Some common code was
factored out into an msi_send_message() function.
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Once address_space_translate is called outside the BQL, the returned
MemoryRegion might disappear as soon as the RCU read-side critical section
ends. Avoid this by moving the critical section to the callers.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1426684909-95030-3-git-send-email-pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
- miscellaneous cleanups for TCG (Emilio) and NBD (Bogdan)
- next part in the thread-safe address_space_* saga: atomic access
to the bounce buffer and the map_clients list, from Fam
- optional support for linking with tcmalloc, also from Fam
- reapplying Peter Crosthwaite's "Respect as_translate_internal
length clamp" after fixing the SPARC fallout.
- build system fix from Wei Liu
- small acpi-build and ioport cleanup by myself
# gpg: Signature made Wed Apr 29 09:34:00 2015 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (22 commits)
nbd/trivial: fix type cast for ioctl
translate-all: use bitmap helpers for PageDesc's bitmap
target-i386: disable LINT0 after reset
Makefile.target: prepend $libs_softmmu to $LIBS
milkymist: do not modify libs-softmmu
configure: Add support for tcmalloc
exec: Respect as_translate_internal length clamp
ioport: reserve the whole range of an I/O port in the AddressSpace
ioport: loosen assertions on emulation of 16-bit ports
ioport: remove wrong comment
ide: there is only one data port
gus: clean up MemoryRegionPortio
sb16: remove useless mixer_write_indexw
sun4m: fix slavio sysctrl and led register sizes
acpi-build: remove dependency from ram_addr.h
memory: add memory_region_ram_resize
dma-helpers: Fix race condition of continue_after_map_failure and dma_aio_cancel
exec: Notify cpu_register_map_client caller if the bounce buffer is available
exec: Protect map_client_list with mutex
linux-user, bsd-user: Remove two calls to cpu_exec_init_all
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If DMA's owning thread cancels the IO while the bounce buffer's owning thread
is notifying the "cpu client list", a use-after-free happens:
   continue_after_map_failure               dma_aio_cancel
   ------------------------------------------------------------------
   aio_bh_new
                                            qemu_bh_delete
   qemu_bh_schedule (use after free)
Also, the old code doesn't run the bh in the right AioContext.
Fix both problems by passing a QEMUBH to cpu_register_map_client.
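A caller-side sketch of the new interface (names here are illustrative, not
from the patch):

    /* Create the bottom half in the caller's own AioContext, so the retry
     * runs in the right context, then hand it to the map-client list. */
    QEMUBH *bh = aio_bh_new(my_aio_context, retry_map_cb, opaque);
    cpu_register_map_client(bh);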
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1426496617-10702-6-git-send-email-famz@redhat.com>
[Remove unnecessary forward declaration. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a transaction attribute indicating that a memory access is being
done from user-mode (unprivileged). This corresponds to an equivalent
signal in ARM AMBA buses.
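For example, a target that wants to mark an access as unprivileged might
build the attributes like this (sketch):

    /* Unprivileged (user-mode) access; other fields set as appropriate. */
    MemTxAttrs attrs = { .user = 1 };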
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Honour the NS bit in ARM page tables:
* when adding entries to the TLB, include the Secure/NonSecure
transaction attribute
* set the NS bit in the PAR when doing ATS operations
Note that we don't yet correctly use the NSTable bit to
cause the page table walk itself to use the right attributes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Add new address_space_ld*/st* functions which allow transaction
attributes and error reporting for basic load and stores. These
are named to be in line with the address_space_read/write/rw
buffer operations.
The existing ld/st*_phys functions are now wrappers around
the new functions.
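Typical usage of the new accessors looks like this (sketch):

    MemTxResult result;
    uint32_t val = address_space_ldl(as, addr, MEMTXATTRS_UNSPECIFIED, &result);
    if (result != MEMTX_OK) {
        /* the transaction failed; report or ignore as appropriate */
    }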
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Make address_space_rw take transaction attributes, rather
than always using the 'unspecified' attributes.
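Call sites that do not care about attributes now simply pass the
'unspecified' value, e.g. (sketch):

    address_space_rw(&address_space_memory, addr, MEMTXATTRS_UNSPECIFIED,
                     buf, len, is_write);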
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
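Target code that has computed per-page attributes can install them along
with the translation, roughly like this (sketch):

    tlb_set_page_with_attrs(cs, vaddr & TARGET_PAGE_MASK,
                            paddr & TARGET_PAGE_MASK,
                            attrs, prot, mmu_idx, TARGET_PAGE_SIZE);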
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.
(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)
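A converted call site therefore looks roughly like this (sketch):

    uint64_t val;
    MemTxResult r = memory_region_dispatch_read(mr, addr, &val, size,
                                                MEMTXATTRS_UNSPECIFIED);
    if (r != MEMTX_OK) {
        /* previously: io_mem_read() returned true for error */
    }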
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Define an API so that devices can register MemoryRegionOps whose read
and write callback functions are passed an arbitrary pointer to some
transaction attributes and can return a success-or-failure status code.
This will allow us to model devices which:
* behave differently for ARM Secure/NonSecure memory accesses
* behave differently for privileged/unprivileged accesses
* may return a transaction failure (causing a guest exception)
for erroneous accesses
This patch defines the new API and plumbs the attributes parameter through
to the memory.c public level functions io_mem_read() and io_mem_write(),
where it is currently dummied out.
The success/failure response indication is also propagated out to
io_mem_read() and io_mem_write(), which retain the old-style
boolean true-for-error return.
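A device using the new hooks might look like this (a sketch; the device and
callback names are made up for illustration):

    static MemTxResult mydev_read(void *opaque, hwaddr addr, uint64_t *data,
                                  unsigned size, MemTxAttrs attrs)
    {
        if (attrs.user) {
            return MEMTX_ERROR;          /* fault unprivileged accesses */
        }
        *data = 0;                       /* device-specific value */
        return MEMTX_OK;
    }

    static const MemoryRegionOps mydev_ops = {
        .read_with_attrs  = mydev_read,
        .write_with_attrs = mydev_write, /* defined along the same lines */
        .endianness = DEVICE_NATIVE_ENDIAN,
    };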
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.
Note that the code generating backends still manipulate labels as int.
With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The global parameter 'ram_size' does not take into account
the hotplugged memory.
In some code we use 'ram_size' as the VM's current real RAM size,
which is not correct.
Add a function 'get_current_ram_size' to calculate the VM's current RAM size;
it enumerates the present memory devices and adds their sizes to ram_size.
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
- vhost-scsi: add bootindex property
- RCU: fix MemoryRegion lifetime issues in PCI; document the rules;
conversion of AddressSpaceDispatch and RAMList
- KVM: add kvm_exit reasons for aarch64
# gpg: Signature made Mon Feb 16 16:32:32 2015 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (21 commits)
Convert ram_list to RCU
exec: convert ram_list to QLIST
cosmetic changes preparing for the following patches
exec: protect mru_block with RCU
rcu: add g_free_rcu
rcu: introduce RCU-enabled QLIST
exec: RCUify AddressSpaceDispatch
exec: make iotlb RCU-friendly
exec: introduce cpu_reload_memory_map
docs: clarify memory region lifecycle
pci: split shpc_cleanup and shpc_free
pcie: remove mmconfig memory leak and wrap mmconfig update with transaction
memory: keep the owner of the AddressSpace alive until do_address_space_destroy
rcu: run RCU callbacks under the BQL
rcu: do not let RCU callbacks pile up indefinitely
vhost-scsi: set the bootable value of channel/target/lun
vhost-scsi: add a property for booting
vhost-scsi: expose the TYPE_FW_PATH_PROVIDER interface
vhost-scsi: add bootindex property
qdev: support to get a device firmware path directly
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Allow "unlocked" reads of the ram_list by using an RCU-enabled QLIST.
The ramlist mutex is kept. call_rcu callbacks are run with the iothread
lock taken, but that may change in the future. Writers still take the
ramlist mutex, but they no longer need to assume that the iothread lock
is taken.
Readers of the list, instead, no longer require either the iothread
or ramlist mutex, but they need to use rcu_read_lock() and
rcu_read_unlock().
One place in arch_init.c was downgrading from write side to read side
like this:
qemu_mutex_lock_iothread()
qemu_mutex_lock_ramlist()
...
qemu_mutex_unlock_iothread()
...
qemu_mutex_unlock_ramlist()
and the equivalent idiom is:
qemu_mutex_lock_ramlist()
rcu_read_lock()
...
qemu_mutex_unlock_ramlist()
...
rcu_read_unlock()
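A read-side traversal of the list then becomes, roughly (sketch):

    rcu_read_lock();
    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
        /* read-only access to the block */
    }
    rcu_read_unlock();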
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QLIST has RCU-friendly primitives, so switch to it.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Note that even after this patch, most callers of address_space_*
functions must still be under the big QEMU lock, otherwise the memory
region returned by address_space_translate can disappear as soon as
address_space_translate returns. This will be fixed in the next part
of this series.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This for now is a simple TLB flush. This can change later for two
reasons:
1) an AddressSpaceDispatch will be cached in the CPUState object
2) it will not be possible to do tlb_flush once the TCG-generated code
runs outside the BQL.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The previous setup required ops and args to be completely sequential,
and was error prone when it came to both iteration and optimization.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
address_space_destroy_dispatch is called from an RCU callback and hence
outside the iothread mutex (BQL). However, after address_space_destroy
no new accesses can hit the destroyed AddressSpace so it is not necessary
to observe changes to the memory map. Move the memory_listener_unregister
call earlier, to make it thread-safe again.
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Fixes: 374f2981d1
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Support guest CPUs which need 7 MMU index values.
Add a comment about what would be required to raise the limit
further (trivial for 8, TCG backend rework for 9 or more).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Replace the flat_view_mutex with RCU, avoiding futex contention for
dataplane on large systems and many iothreads.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Not all targets define a full set of suffix strings for the
NB_MMU_MODES that they have. In this situation, don't define any
helper functions for that mode, rather than defining helper functions
with no suffix at all. The MMU mode is still functional; it is merely
not directly accessible via cpu_ld*_MODE from target helper functions.
Also add an "NB_MMU_MODES >= 2" check to the definition of the mode 1
helpers -- some targets only define one MMU mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1421432008-6786-1-git-send-email-peter.maydell@linaro.org
Add documentation of what the cpu_*_* accessors look like.
Correct some minor errors in the existing documentation of the
direct _p accessor family. Remove the near-duplicate comment
on the _p accessors from cpu-all.h and replace it with a reference
to the comment in bswap.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-16-git-send-email-peter.maydell@linaro.org
The cpu_ldfq/stfq/ldfl/stfl accessors for loading and storing
float32 and float64 are completely unused, so delete them.
(The union they use for converting from the float32/float64
type to uint32_t or uint64_t is the wrong way to do it anyway:
they should be using make_float* and float*_val.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-15-git-send-email-peter.maydell@linaro.org
The _raw macros and their helpers saddr() and laddr() are now
totally unused -- delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-14-git-send-email-peter.maydell@linaro.org
The ld*_raw and st*_raw macros are now only used within the code
produced by cpu_ldst_template.h, and only in three places.
Expand these out to just call the ld_p and st_p functions directly.
Note that in all the callsites the address argument is a uintptr_t,
so we can drop that part of the double-cast used in the saddr() and
laddr() macros.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-13-git-send-email-peter.maydell@linaro.org
Use inline functions rather than macros for cpu_ld/st accessors
for the *-user configurations, as we already do for softmmu.
This has two advantages:
* we can actually typecheck our arguments
* we don't need to leak the _raw macros everywhere
Since the _kernel functions were only used by target-i386/seg_helper.c,
put the definitions for them in that file too. (It already has the
similar template include code to define them for the softmmu case,
so it makes sense to have it deal with defining them for user-only.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-12-git-send-email-peter.maydell@linaro.org
The very short ld*/st* defines are now not used anywhere; delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-11-git-send-email-peter.maydell@linaro.org
The ld*_kernel and st*_kernel defines are not used anywhere;
delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-10-git-send-email-peter.maydell@linaro.org
The five ldul_ macros are not used anywhere and are marked up with an XXX
comment. "ldul" is a non-standard prefix for our family of load instructions:
we don't mark 32-bit accesses for signedness because they return a 32 bit
quantity. So just delete them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1421334118-3287-2-git-send-email-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc: resizeable ROM blocks
This makes ROM blocks resizeable. This infrastructure is required for other
functionality we have queued.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 08 Jan 2015 11:19:24 GMT using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
acpi-build: make ROMs RAM blocks resizeable
memory: API to allocate resizeable RAM MR
arch_init: support resizing on incoming migration
exec: qemu_ram_alloc_resizeable, qemu_ram_resize
exec: split length -> used_length/max_length
exec: cpu_physical_memory_set/clear_dirty_range
memory: add memory_region_set_size
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add API to allocate resizeable RAM MR.
This looks just like regular RAM generally, but
has a special property that only a portion of it
(used_length) is actually used, and migrated.
This used_length size can change across reboots.
Follow up patches will change used_length for such blocks at migration,
making it easier to extend devices using such RAM (notably ACPI,
but in the future conceivably other ROMs) without breaking migration
compatibility or wasting ROM (guest) memory.
Device is notified on resize, so it can adjust if necessary.
Note: nothing prevents making all RAM resizeable in this way.
However, reviewers felt that only enabling this selectively will
make some class of errors easier to detect.
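The API added by this patch is along these lines (a sketch; see the header
for the exact prototype):

    void memory_region_init_resizeable_ram(MemoryRegion *mr, Object *owner,
                                           const char *name,
                                           uint64_t size, uint64_t max_size,
                                           void (*resized)(const char *id,
                                                           uint64_t length,
                                                           void *host),
                                           Error **errp);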
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Add API to allocate "resizeable" RAM.
This looks just like regular RAM generally, but
has a special property that only a portion of it
(used_length) is actually used, and migrated.
This used_length size can change across reboots.
Follow up patches will change used_length for such blocks at migration,
making it easier to extend devices using such RAM (notably ACPI,
but in the future conceivably other ROMs) without breaking migration
compatibility or wasting ROM (guest) memory.
Device is notified on resize, so it can adjust if necessary.
qemu_ram_alloc_resizeable allocates this memory, qemu_ram_resize resizes
it.
Note: nothing prevents making all RAM resizeable in this way.
However, reviewers felt that only enabling this selectively will
make some class of errors easier to detect.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This patch allows us to distinguish between two
length values for each block:
max_length - length of memory block that was allocated
used_length - length of block used by QEMU/guest
Currently, we set used_length = max_length, unconditionally.
Follow-up patches allow used_length <= max_length.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Make cpu_physical_memory_set/clear_dirty_range
behave symmetrically.
To clear range for a given client type only, add
cpu_physical_memory_clear_dirty_range_type.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Add API to change MR size.
Will be used internally for RAM resize.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently 'info jit' outputs half of the information to the monitor and the
rest to the qemu log. Dumping opcode counts to the monitor as part of 'info
jit' doesn't sound useful. Add a new monitor command 'info opcount' that
only dumps the opcode counters.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Merge remote-tracking branch 'remotes/amit-migration/tags/for-2.3-2' into staging
Migration pull for 2.3. Mostly moving the code to the migration/
directory, and updating MAINTAINERS.
I've also folded my other MAINTAINERS update patches into this, as
they're small by themselves.
# gpg: Signature made Tue 16 Dec 2014 12:21:24 GMT using RSA key ID 854083B6
# gpg: Good signature from "Amit Shah <amit@amitshah.net>"
# gpg: aka "Amit Shah <amit@kernel.org>"
# gpg: aka "Amit Shah <amitshah@gmx.net>"
* remotes/amit-migration/tags/for-2.3-2:
MAINTAINERS: Update for migrated migration code
Split the QEMU buffered file code out
Split struct QEMUFile out
Remove migration- pre/post fixes off files in migration/ dir
Start migrating migration code into a migration directory
qmp-command.hx: add missing docs for migration capabilites
cpu: verify that block->host is set
cpu: assert host pointer offset within block
exec: add wrapper for host pointer access
MAINTAINERS: add include files to virtio-serial entry
MAINTAINERS: add entry for virtio-rng
MAINTAINERS: migration: add vmstate static checker files
MAINTAINERS: Add myself to migration maintainers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If block->host isn't set, access at an offset will cause memory corruption.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Make accesses safer in case we missed some
check somewhere.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Host pointer accesses force pointer math; let's
add a wrapper to make them safer.
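The wrapper is essentially the following (a sketch, modulo the exact asserts
added by the previous patches):

    static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
    {
        assert(block->host);          /* verified by the previous patches */
        return block->host + offset;
    }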
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
- Migration and linuxboot fixes for 2.2 regressions
- valgrind/KVM support
- small i386 patches
- PCI SD host controller support
- malloc/free cleanups from Markus (x86/scsi)
- IvyBridge model
- XSAVES support for KVM
- initial patches from record/replay
# gpg: Signature made Mon 15 Dec 2014 16:35:08 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (47 commits)
sdhci: Support SDHCI devices on PCI
sdhci: Define SDHCI PCI ids
sdhci: Add "sysbus" to sdhci QOM types and methods
sdhci: Remove class "virtual" methods
sdhci: Set a default frequency clock
serial: only resample THR interrupt on rising edge of IER.THRI
serial: update LSR on enabling/disabling FIFOs
serial: clean up THRE/TEMT handling
serial: reset thri_pending on IER writes with THRI=0
linuxboot: fix loading old kernels
kvm/apic: fix 2.2->2.1 migration
target-i386: add Ivy Bridge CPU model
target-i386: add f16c and rdrand to Haswell and Broadwell
target-i386: add VME to all CPUs
pc: add 2.3 machine types
i386: do not cross the pages boundaries in replay mode
cpus: make icount warp behave well with respect to stop/cont
timer: introduce new QEMU_CLOCK_VIRTUAL_RT clock
cpu-exec: invalidate nocache translation if they are interrupted
icount: introduce cpu_get_icount_raw
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If execution of a non-cached translation block is interrupted, QEMU might
longjmp out of cpu-exec.c and miss the final cleanup in cpu_exec_nocache.
Do this cleanup manually through a new compile flag.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The usual semihosting behaviour is to process the system calls locally and
return; unfortunately the initial implementation dynamically changed the
target to GDB during debug sessions, which, for the usual arm-none-eabi-gdb,
is not implemented. The result was that during debug sessions the semihosting
calls were discarded.
This patch adds a configuration variable and an option to set it on the
command line:
-semihosting-config [enable=on|off,]target=native|gdb|auto
This option enables semihosting and defines where the semihosting calls will
be addressed, to QEMU ('native') or to GDB ('gdb'). The default is auto, which
means 'gdb' during debug sessions and 'native' otherwise.
Signed-off-by: Liviu Ionescu <ilg@livius.net>
Message-id: 1416341957-9796-1-git-send-email-ilg@livius.net
[PMM: moved declaration and definition of semihosting_target to
gdbstub.h and gdbstub.c to fix build failure on linux-user]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
introduce memory_region_get_alignment() that returns
underlying memory block alignment or 0 if it's not
relevant/implemented for backend.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The code in invalidate_and_set_dirty() needs to handle addr/length
combinations which cross guest physical page boundaries. This can happen,
for example, when disk I/O reads large blocks into guest RAM which previously
held code that we have cached translations for. Unfortunately we were only
checking the clean/dirty status of the first page in the range, and then
were calling a tb_invalidate function which only handles ranges that don't
cross page boundaries. Fix the function to deal with multipage ranges.
The symptoms of this bug were that guest code would misbehave (eg segfault),
in particular after a guest reboot but potentially any time the guest
reused a page of its physical RAM for new code.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1416167061-13203-1-git-send-email-peter.maydell@linaro.org
New MIPS features depend on the access type, and an enum is more convenient
than using the numbers directly.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
The PCI MMIO might be disabled or the device in the reset state.
Make sure we do not dump these memory regions.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The initial base address is miscalculated in walk_memory_regions();
it has to be shifted by TARGET_PAGE_BITS more. Holder variables are
extended to target_ulong size, otherwise they don't fit for MIPS N32
(a 32-bit ABI with a 64-bit address space) and qemu won't compile.
The issue led to incorrect debug output of memory maps and a
malformed core dump file.
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
- Memory: improve error reporting and avoid crashes on hotplug
- Build: fixing block/iscsi.so and ranlib warnings on Mac OS X
- Migration fixes for x86
- The odd KVM patch.
# gpg: Signature made Thu 11 Sep 2014 11:21:10 BST using RSA key ID 9B4D86F2
# gpg: Good signature from "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: aka "Paolo Bonzini <bonzini@gnu.org>"
* remotes/bonzini/tags/for-upstream: (21 commits)
gdbstub: init mon_chr through qemu_chr_alloc
pckbd: adding new fields to vmstate
mc146818rtc: add missed field to vmstate
piix: do not set irq while loading vmstate
serial: fixing vmstate for save/restore
parallel: adding vmstate for save/restore
fdc: adding vmstate for save/restore
cpu: init vmstate for ticks and clock offset
apic_common: vapic_paddr synchronization fix
vl: use QLIST_FOREACH_SAFE to visit change state handlers
exec: add parameter errp to gethugepagesize
exec: report error when memory < hpagesize
hostmem-ram: don't exit qemu if size of memory-backend-ram is way too big
memory: add parameter errp to memory_region_init_rom_device
memory: add parameter errp to memory_region_init_ram
exec: add parameter errp to qemu_ram_alloc and qemu_ram_alloc_from_ptr
rules.mak: Fix DSO build by pulling in archive symbols
util: Don't link host-utils.o if it's empty
util: Move general qemu_getauxval to util/getauxval.c
trace: Only link generated-tracers.o with "simple" backend
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add parameter errp to memory_region_init_rom_device and update all call
sites to propagate the error.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
[Propagate the error out of realize. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add parameter errp to memory_region_init_ram and update all call sites
to pass in &error_abort.
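A converted call site thus looks like this (sketch; the region name is
illustrative):

    /* &error_abort keeps the old "abort on failure" behaviour;
     * callers that can recover pass a real Error ** instead. */
    memory_region_init_ram(mr, NULL, "mydev.ram", size, &error_abort);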
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add parameter errp to qemu_ram_alloc and qemu_ram_alloc_from_ptr so that
we can handle errors.
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Assert ptr != NULL in memory_region_init_ram_ptr. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pci, pc fixes, features
A bunch of bugfixes - these will make sense for 2.1.1
Initial Intel IOMMU support.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Tue 02 Sep 2014 16:05:04 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
vhost_net: start/stop guest notifiers properly
pci: avoid losing config updates to MSI/MSIX cap regs
virtio-net: don't run bh on vm stopped
ioh3420: remove unused ioh3420_init() declaration
vhost_net: cleanup start/stop condition
intel-iommu: add IOTLB using hash table
intel-iommu: add context-cache to cache context-entry
intel-iommu: add supports for queued invalidation interface
intel-iommu: fix coding style issues around in q35.c and machine.c
intel-iommu: add Intel IOMMU emulation to q35 and add a machine option "iommu" as a switch
intel-iommu: add DMAR table to ACPI tables
intel-iommu: introduce Intel IOMMU (VT-d) emulation
iommu: add is_write as a parameter to the translate function of MemoryRegionIOMMUOps
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU system mode page table walks are expensive. Measurements taken by
running qemu-system-x86_64 under Intel PIN show that a TLB miss and walk of
a 4-level page table in a guest Linux OS take ~450 x86 instructions on
average.
The QEMU system mode TLB is implemented using a directly-mapped hashtable.
This structure suffers from conflict misses. Increasing the
associativity of the TLB may not be the solution to conflict misses, as
all the ways may have to be walked in serial.
A victim TLB is a TLB used to hold translations evicted from the
primary TLB upon replacement. The victim TLB lies between the main TLB
and its refill path. The victim TLB is of greater associativity (fully
associative in this patch). It takes longer to look up the victim TLB,
but it is likely better than a full page table walk. The memory
translation path is changed as follows:
Before Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. TLB refill.
5. Do the memory access.
6. Return to code cache.
After Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. Victim TLB lookup.
5. If victim TLB misses, TLB refill
6. Do the memory access.
7. Return to code cache
The advantage is that a victim TLB can offer more associativity to a
directly mapped TLB, and thus potentially fewer page table walks, while
still keeping the time taken to flush within reasonable limits.
However, placing a victim TLB before the refill path increases the TLB
refill latency, as the victim TLB is consulted before the TLB refill. The
performance results demonstrate that the pros outweigh the cons.
Some performance results, taken on SPECINT2006 train datasets, a kernel
boot, and the qemu configure script on an
Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown at the
Google Docs link below.
https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing
In summary, the victim TLB improves the performance of qemu-system-x86_64 by
11% on average on SPECINT2006, kernel boot and the qemu configure script, with
the highest improvement of 26% in 456.hmmer. The victim TLB does not result
in any performance degradation in any of the measured benchmarks. Furthermore,
the implemented victim TLB is architecture independent and is expected to
benefit other architectures in QEMU as well.
Although there are measurement fluctuations, the performance
improvement is very significant and by no means within the range of
noise.
Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a bool variable is_write as a parameter to the translate function of
MemoryRegionIOMMUOps to indicate the operation of the access. It can be
used for correct fault reporting from within the callback.
Change the interface of related functions.
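The callback prototype becomes, roughly (sketch):

    IOMMUTLBEntry (*translate)(MemoryRegion *iommu, hwaddr addr, bool is_write);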
Signed-off-by: Le Tan <tamlokveer@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Build /proc/self/maps by matching against the guest memory translation table.
Output only those map records which are valid for the guest memory layout.
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
SCSI changes that enable sending vendor-specific commands via virtio-scsi.
Memory changes for QOMification and automatic tracking of MR lifetime.
# gpg: Signature made Mon 18 Aug 2014 13:03:09 BST using RSA key ID 9B4D86F2
# gpg: Good signature from "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: aka "Paolo Bonzini <bonzini@gnu.org>"
* remotes/bonzini/tags/for-upstream:
mtree: remove write-only field
memory: Use canonical path component as the name
memory: Use memory_region_name for name access
memory: constify memory_region_name
exec: Abstract away ref to memory region names
loader: Abstract away ref to memory region names
tpm_tis: remove instance_finalize callback
memory: remove memory_region_destroy
memory: convert memory_region_destroy to object_unparent
ioport: split deletion and destruction
nic: do not destroy memory regions in cleanup functions
vga: do not dynamically allocate chain4_alias
sysbus: remove unused function sysbus_del_io
qom: object: move unparenting to the child property's release callback
qom: object: delete properties before calling instance_finalize
virtio-scsi: implement parse_cdb
scsi-block, scsi-generic: implement parse_cdb
scsi-block: extract scsi_block_is_passthrough
scsi-bus: introduce parse_cdb in SCSIDeviceClass and SCSIBusInfo
scsi-bus: prepare scsi_req_new for introduction of parse_cdb
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the canonical path component as the name, rather than having it as
separate state. This prepares support
for creating a MemoryRegion dynamically (i.e. without
memory_region_init() and friends) and the MemoryRegion still getting
a usable name.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It doesn't change the MR and some prospective call sites will have
const MRs at hand.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The function is empty after the previous patch, so remove it.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Devices that use address_space_rw to write large areas to memory
(as opposed to address_space_map/unmap) were broken with respect
to migration since fe680d0 (exec: Limit translation limiting in
address_space_translate to xen, 2014-05-07). Such devices include
IDE CD-ROMs.
The reason is that invalidate_and_set_dirty (called by address_space_rw
but not address_space_map/unmap) was only setting the dirty bit for
the first page in the translation.
To fix this, introduce cpu_physical_memory_set_dirty_range_nocode that
is the same as cpu_physical_memory_set_dirty_range except it does not
muck with the DIRTY_MEMORY_CODE bitmap. This function can be used if
the caller invalidates translations with tb_invalidate_phys_page_range.
There is another difference between cpu_physical_memory_set_dirty_range
and cpu_physical_memory_set_dirty_flag; the former includes a call
to xen_modified_memory. This is handled separately in
invalidate_and_set_dirty, and is not needed in other callers of
cpu_physical_memory_set_dirty_range_nocode, so leave it alone.
Just one nit: now that invalidate_and_set_dirty takes care of handling
multiple pages, there is no need for address_space_unmap to wrap it
in a loop. In fact that loop would now be O(n^2).
Reported-by: Dave Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QOM propertyify the .may-overlap and .priority fields. The setters
will re-add the memory as a subregion if needed (i.e. the values change
when the memory region is already contained).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Remove setters. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QOMify memory regions as an Object. The former init() and destroy()
routines become instance_init() and instance_finalize(), respectively.
memory_region_init() is re-implemented to be:
object_initialize() + set fields
memory_region_destroy() is re-implemented to call unparent().
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Add newly-created MR as child, unparent on destruction. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Old code was affected by memory gaps which resulted in buffer pointers
pointing to addresses outside of the mapped regions.
Here we are introducing the following changes:
- new function qemu_get_ram_block_host_ptr() returns the host pointer
to the ram block; it is needed to calculate the offset of a specific
region in the host memory
- new field mmap_offset is added to the VhostUserMemoryRegion. It
contains the offset where the specific region starts in the mapped memory.
As there is still no wider adoption of vhost-user, an agreement was made
that we will not bump the version number due to this change
- other fields in the VhostUserMemoryRegion struct are not changed, as
they are all needed for the usermode app implementation
- region data is not taken from ram_list.blocks anymore; instead we
use the region data which is already calculated for use in vhost-net
- now multiple regions can have the same FD, and the user application can
call mmap() multiple times with the same FD but with a different offset
(the user needs to take care of offset page alignment)
Signed-off-by: Damjan Marion <damarion@cisco.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Damjan Marion <damarion@cisco.com>
A new "share" property can be used with the "memory-file" backend to
map memory with MAP_SHARED instead of MAP_PRIVATE.
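Assuming the backend's full QOM name is memory-backend-file, usage on the
command line would look like this (sketch):

    -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on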
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
And allow preallocation of file-based memory even without -mem-prealloc.
Some care is necessary because -mem-prealloc does not allow disabling
preallocation for hostmem-file.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Right now, -mem-path will fall back to RAM-based allocation in some
cases. This should never happen with "-object memory-file", prepare
the code by adding correct error propagation.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
MST: drop \n at end of error messages
Like the previous patch did in exec.c, split memory_region_init_ram and
memory_region_init_ram_from_file, and push mem_path one step further up.
Other RAM regions than system memory will now be backed by regular RAM.
Also, boards that do not use memory_region_allocate_system_memory will
not support -mem-path anymore. This can be changed before the patches
are merged by migrating boards to use the function.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Prepare for adding more flags. The "_MASK" suffix is unique, kill it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Split the internal interface in exec.c to a separate function, and
push the check on mem_path up to memory_region_init_ram.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
MST: comment tweaks
which allows checking whether a MemoryRegion is already mapped.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
See commit fbeadf50 (bitops: unify bitops_ffsl with the one in
host-utils.h, call it bitops_ctzl) on why ctzl should be used instead
of ffsl.
This is also needed for musl libc which does not implement ffsl.
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unify pieces of cpu-all.h, exec-all.h, softmmu_exec.h and tcg/tcg.h
into a single new header file with all helpers.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will collect all load and store helpers soon. For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly, but we also include it where this will
be necessary in order to simplify the next patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They do not need to be in op_helper.c. Because cputlb.c now includes
softmmu_template.h twice for each size, io_readX must be elided the
second time through.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We will reference it from more files in the next patch. To avoid
ruining the small steps we're making towards multi-target, make
it a method of CPU rather than just a global.
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This preprocessor symbol is already used in softmmu_template.h. We
will use it to distinguish the two "fake" ACCESS_TYPEs
NB_MMU_MODES and NB_MMU_MODES + 1.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tidying the initialization of the args arrays at the same time.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than special casing them, use the standard mechanisms
for tcg helper generation.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
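Roughly, the result is a set of thin wrapper headers, each of which defines
the DEF_HELPER_* macros one way and then includes helper.h (a sketch with
simplified names, not the real headers):

    /* helper-proto-sketch.h: expand helper.h into plain C prototypes. */
    #define DEF_HELPER_1(name, ret, t1)  ret helper_##name(t1);
    #include "helper.h"
    #undef DEF_HELPER_1

    /* translate.c and friends then pull in only the expansion they need: */
    #include "helper-proto-sketch.h"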
Now that the code_gen_buffer is constrained to not cross 256mb
regions, we are assured that we can use J to reach another TB.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-s390-20140515' into staging
tcg/s390 updates
# gpg: Signature made Thu 15 May 2014 17:24:40 BST using RSA key ID 4DD0279B
# gpg: Can't check signature: public key not found
* remotes/rth/tags/pull-tcg-s390-20140515:
tcg-s390: Implement direct chaining of TBs
tcg-s390: Don't force -march=z990
tcg-s390: Improve setcond
tcg-s390: Allow immediate operands to add2 and sub2
tcg-s390: Implement tcg_register_jit
tcg-s390: Use more risbg in the tlb sequence
tcg-s390: Move ldst helpers out of line
tcg-s390: Convert to new ldst opcodes
tcg-s390: Integrate endianness into TCGMemOp
tcg-s390: Convert to TCGMemOp
tcg-s390: Fix off-by-one in wraparound andi
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* remotes/kvm/uq/master:
pc: port 92 reset requires a low->high transition
cpu: make CPU_INTERRUPT_RESET available on all targets
apic: do not accept SIPI on the bootstrap processor
target-i386: preserve FPU and MSR state on INIT
target-i386: fix set of registers zeroed on reset
kvm: forward INIT signals coming from the chipset
kvm: reset state from the CPU's reset method
target-i386: the x86 CPL is stored in CS.selector - auto update hflags accordingly.
target-i386: set eflags prior to calling cpu_x86_load_seg_cache() in seg_helper.c
target-i386: set eflags and cr0 prior to calling cpu_x86_load_seg_cache() in smm_helper.c
target-i386: set eflags prior to calling svm_load_seg_cache() in svm_helper.c
pci-assign: limit # of msix vectors
pci-assign: Fix a bug when map MSI-X table memory failed
kvm: make one_reg helpers available for everyone
target-i386: Remove unused data from local array
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We got the wrong version of stl_p, the one that bswaps as appropriate
for the target. Since x86 is always little-endian, the "_le_" routine
will resolve to what we want.
Signed-off-by: Richard Henderson <rth@twiddle.net>
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets. Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.
Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1. Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
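A minimal sketch of how a caller can use the now-generic reset interrupt
(illustrative only; it assumes QEMU's cpu_interrupt()/cpu_reset_interrupt()
API and the CPU_INTERRUPT_RESET flag described above):

    /* Sketch: assert or deassert the CPU reset line from a device model. */
    static void cpu_reset_line_sketch(CPUState *cpu, bool level)
    {
        if (level) {
            cpu_interrupt(cpu, CPU_INTERRUPT_RESET);
        } else {
            cpu_reset_interrupt(cpu, CPU_INTERRUPT_RESET);
        }
    }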
To be defined by the tcg backend based on the elemental unit of the ISA.
During the transition, allow TCG_TARGET_INSN_UNIT_SIZE to be undefined,
which allows us to default tcg_insn_unit to the current uint8_t.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The code which patches x86 jump instructions assumes it can do an
unaligned write of a uint32_t. This is actually safe on x86, but it's
still undefined behaviour. We have infrastructure for doing efficient
unaligned accesses which doesn't engage in undefined behaviour, so
use it.
This is technically fractionally less efficient, at least with gcc 4.6;
instead of one instruction:
7b2: 89 3e mov %edi,(%rsi)
we get an extra spurious store to the stack slot:
7b2: 89 7c 24 64 mov %edi,0x64(%rsp)
7b6: 89 3e mov %edi,(%rsi)
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
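The well-defined alternative boils down to a memcpy-based store helper; a
self-contained illustration (not QEMU's actual stl_le_p implementation, and
jmp_addr/disp are only placeholder names):

    #include <stdint.h>
    #include <string.h>

    /* Store a 32-bit value at a possibly unaligned address without
     * undefined behaviour: memcpy is byte-based, and compilers lower it
     * to a single store on targets that allow unaligned access. */
    static inline void store_u32_unaligned(void *p, uint32_t v)
    {
        memcpy(p, &v, sizeof(v));
    }

    /* i.e. instead of the undefined-behaviour cast
     *     *(uint32_t *)jmp_addr = disp;
     * the patched code does the equivalent of
     *     store_u32_unaligned(jmp_addr, disp);
     */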
.impl.valid should be .impl.unaligned, and the description needs some
fixes.
.old_portio was removed in commit b40acf99b (ioport: Switch
dispatching to memory core layer).
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Implement the DC ZVA instruction, which clears a block of memory.
The fast path obtains a pointer to the underlying RAM via the TCG TLB
data structure so we can do a direct memset(), with fallback to a
simple byte-store loop in the slow path.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
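In outline, the helper's fast/slow split looks something like this (a sketch
only; the helper names and signatures are approximations of the QEMU
internals, not the actual patch):

    /* Sketch: zero one DC ZVA block at guest virtual address vaddr. */
    static void dc_zva_sketch(CPUARMState *env, uint64_t vaddr, int blocklen)
    {
        /* Try to resolve the block to a host pointer via the TCG TLB. */
        void *host = tlb_vaddr_to_host(env, vaddr, 1 /* store */,
                                       cpu_mmu_index(env));
        if (host) {
            /* Fast path: the block is backed by plain host RAM. */
            memset(host, 0, blocklen);
        } else {
            /* Slow path: byte stores through the normal softmmu path,
             * which handles MMIO, watchpoints and faults. */
            for (int i = 0; i < blocklen; i++) {
                cpu_stb_data(env, vaddr + i, 0);
            }
        }
    }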
The ARM A64 decoder's worst case number of TCG ops per instruction
is 266 (for insn 0x4c800000, a post-indexed ST4 multiple-structures
store). Raise the MAX_OP_PER_INSTR define accordingly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1394822294-14837-17-git-send-email-peter.maydell@linaro.org
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
Signed-off-by: Andreas Färber <afaerber@suse.de>
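Concretely, the reset code changes along these lines (the "after" field name
is illustrative, not an actual structure member):

    /* Before: assumes 'breakpoints' is the last field shared by all CPUs. */
    memset(env, 0, offsetof(CPUFooState, breakpoints));

    /* After: clear up to the first target-specific field following
     * CPU_COMMON, or the whole structure if there is none. */
    memset(env, 0, offsetof(CPUFooState, first_field_after_cpu_common));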
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20140310' into staging
target-arm queue:
* implement WFE as yield (improves performance with emulated SMP)
* fixes to avoid undefined behaviour shifting left into sign bit
* libvixl format string fixes for 32 bit hosts
* fix build error when intptr_t and tcg_target_long are different
sizes (eg x32)
* implement PMCCNTR register
* fix incorrect setting of E bit in CPSR (broke booting under
KVM on ARM)
# gpg: Signature made Mon 10 Mar 2014 15:05:25 GMT using RSA key ID 14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
* remotes/pmaydell/tags/pull-target-arm-20140310:
target-arm: Implement WFE as a yield operation
hw/arm/musicpal: Avoid shifting left into sign bit
hw/ssi/xilinx_spips.c: Avoid shifting left into sign bit
hw/arm/omap1.c: Avoid shifting left into sign bit
pxa2xx: Don't shift into sign bit
libvixl: Fix format strings for several int64_t values
target-arm: Fix intptr_t vs tcg_target_long
target-arm: Implements the ARM PMCCNTR register
target-arm: Fix incorrect setting of E bit in CPSR
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement WFE to yield our timeslice to the next CPU.
This avoids slowdowns in multicore configurations caused
by one core busy-waiting on a spinlock which can't possibly
be unlocked until the other core has an opportunity to run.
This speeds up my test case A15 dual-core boot by a factor
of three (though it is still four or five times slower than
a single-core boot).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1393339545-22111-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Rob Herring <rob.herring@linaro.org>
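The yield itself is tiny: end the TB at WFE and drop back to the main loop so
another vCPU can run. Roughly (a sketch of the helper, not necessarily the
exact code):

    void HELPER(wfe)(CPUARMState *env)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));

        /* Nothing to deliver; just leave the execution loop so the
         * main loop picks the next vCPU. */
        cs->exception_index = EXCP_INTERRUPT;
        cpu_loop_exit(cs);
    }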
Windows XP shows the COM2 port as non-functional in "Device Manager",
although no COM2 backing device is present in QEMU.
This regression is due to commit 3bb28b7208b349e7a1b326e3c6ef9efac1d462bf
(memory: Provide separate handling of unassigned io ports accesses).
That is caused by QEMU reporting to OSPM that the device is present, by
setting the 5th bit in the PII4XPM.pci_conf[0x67] register, even though
COM2 doesn't exist.
It happens because memory_region_present(io_as, 0x2f8) returns a false
positive: the 0x2f8 address eventually translates into the catch-all
io_as address space.
Fix memory_region_present(parent, addr) by returning
true only if addr maps into a MemoryRegion within
parent (excluding parent itself), to match its
doc comment.
While at it, fix a copy/paste error in the memory_region_present() doc
comment.
Note: this is a temporary hack. We really need better handling for
unassigned regions; fallback regions should be avoided, since they are bad
for performance (they break the radix tree's assumption that the data
structure is sparsely populated). For memory, we need to fix this anyway
in order to implement PCI master abort properly.
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
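The fix is essentially a lookup that rejects the container itself; roughly
(a sketch of the idea):

    bool memory_region_present(MemoryRegion *container, hwaddr addr)
    {
        MemoryRegion *mr = memory_region_find(container, addr, 1).mr;

        if (!mr || mr == container) {
            return false;        /* nothing mapped there but the container */
        }
        memory_region_unref(mr); /* memory_region_find took a reference */
        return true;
    }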
Merge remote-tracking branch 'remotes/juanquintela/tags/migration/20140204-1' into staging
migration/next for 20140204
# gpg: Signature made Tue 04 Feb 2014 15:52:00 GMT using RSA key ID 5872D723
# gpg: Can't check signature: public key not found
* remotes/juanquintela/tags/migration/20140204-1:
Don't abort on memory allocation error
Don't abort on out of memory when creating page cache
XBZRLE cache size should not be larger than guest memory size
migration:fix free XBZRLE decoded_buf wrong
Add check for cache size smaller than page size
Set xbzrle buffers to NULL after freeing them to avoid double free errors
exec: fix ram_list dirty map optimization
vmstate: Make VMSTATE_STRUCT_POINTER take type, not ptr-to-type
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ae2810c4bb patch introduced an optimization for the
ram_list.dirty_memory update. However, it can only work correctly when
hpratio is 1, because the @bitmap parameter stores one bit per system page
(which may vary, e.g. 4K or 64K on PPC64) while ram_list.dirty_memory
stores one bit per TARGET_PAGE_SIZE (which is hardcoded to 4K).
This fixes the hpratio != 1 case by falling back to the slow path.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
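A self-contained sketch of the underlying mismatch (not QEMU code): with an
hpratio host-to-target page ratio, one source bit must fan out to hpratio
destination bits, so a plain word-wise OR is only valid when hpratio == 1.

    #include <stddef.h>

    /* dirty[]: one byte per TARGET_PAGE_SIZE page; bitmap[]: one bit per
     * host page, i.e. per hpratio target pages. */
    static void set_dirty_from_host_bitmap(unsigned char *dirty,
                                           const unsigned long *bitmap,
                                           size_t host_pages,
                                           unsigned long hpratio)
    {
        for (size_t i = 0; i < host_pages; i++) {
            size_t word = i / (8 * sizeof(unsigned long));
            size_t bit  = i % (8 * sizeof(unsigned long));

            if (bitmap[word] & (1UL << bit)) {
                for (unsigned long j = 0; j < hpratio; j++) {
                    dirty[i * hpratio + j] = 1;
                }
            }
        }
    }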
Do not rely on int8_t (and friends) not being preprocessor symbols (or
being symbols that expand to themselves). On NetBSD, for example,
glue(u, SDATA_TYPE) results in u__int8_t, which is undefined. There is no
way to stop cpp from expanding inner macros, so just add the few lines
explicitly and get rid of the magic.
Signed-off-by: Martin Husemann <martin@NetBSD.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
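A self-contained illustration of the breakage (not QEMU code): once int8_t
is itself a macro, it is expanded before the token paste, so the pasted name
no longer matches uint8_t.

    #define glue2(a, b) a##b
    #define glue(a, b)  glue2(a, b)

    #define int8_t __int8_t          /* what a libc header may effectively do */
    #define SDATA_TYPE int8_t

    typedef glue(u, SDATA_TYPE) utype;   /* -> "typedef u__int8_t utype;" */

    /* SDATA_TYPE expands to int8_t and then to __int8_t before pasting,
     * yielding u__int8_t -- an undefined type.  The fix is to stop pasting
     * and spell out the unsigned DATA_TYPE (e.g. uint8_t) explicitly. */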
There is a HOST_PAGE_ALIGN macro which makes sense for the KVM
accelerator, but it uses qemu_host_page_size/qemu_host_page_mask, which
are initialized for TCG only.
This moves the qemu_host_page_size/qemu_host_page_mask initialization out
of TCG's page_init() and adds a call to it from kvm_init().
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
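The resulting shape is a small shared initializer, along these lines (a
sketch only, details simplified):

    /* Called from both TCG's page_init() and kvm_init(). */
    void page_size_init(void)
    {
        if (qemu_host_page_size == 0) {
            qemu_host_page_size = getpagesize();
        }
        if (qemu_host_page_size < TARGET_PAGE_SIZE) {
            qemu_host_page_size = TARGET_PAGE_SIZE;
        }
        qemu_host_page_mask = ~(qemu_host_page_size - 1);
    }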
cpu_physical_memory_set_dirty_lebitmap calls getpageaddr and ffsl, which
are unavailable on MinGW. As the function is unused on MinGW, it can
simply be excluded from compilation.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
If the bitmaps are aligned properly, use bitmap operations. If they are
not, just use the old bit-at-a-time code.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
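In other words, something like this (a sketch of the idea, not the exact
function; page, nr and dirty_bitmap are placeholder names):

    if (((page % BITS_PER_LONG) == 0) && ((nr % BITS_PER_LONG) == 0)) {
        /* Offset and length are long-aligned: set whole words at once. */
        bitmap_set(dirty_bitmap, page, nr);
    } else {
        /* Misaligned: fall back to the old bit-at-a-time loop. */
        for (long i = 0; i < nr; i++) {
            set_bit(page + i, dirty_bitmap);
        }
    }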
We want to have all the functions that directly handle the dirty bitmap
close together. We will change it later.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
All the functions that use ram_addr_t should be here.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
We have an end parameter in all the callers, and this makes it coherent
with the rest of the cpu_physical_memory_* functions, which also take a
length parameter.
While here, move the start/end calculation into
tlb_reset_dirty_range_all(), as we don't need it here anymore.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
All uses except one really want the other meaning.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
We were clearing a range of bits, so use bitmap_clear().
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
We were setting a range of bits, so use bitmap_set().
Note: xen has always been wrong, and should have used start instead
of addr from the beginning.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
This operation is way faster than doing it bit by bit.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Now all functions use the same wording as the bitops/bitmap operations.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
And make cpu_physical_memory_get_dirty_flag() use it. It used to be the
other way around.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
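That is, the single-page query becomes a thin wrapper over the range query;
roughly (a sketch, not necessarily the exact code):

    static inline bool cpu_physical_memory_get_dirty_flag(ram_addr_t addr,
                                                          unsigned client)
    {
        return cpu_physical_memory_get_dirty(addr, 1, client);
    }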
After all the previous patches, splitting the bitmap is straightforward.
Note: for some reason, I have to move the DIRTY_MEMORY_* definitions to
the beginning of memory.h to make compilation work.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
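The end result is one bitmap per dirty-memory client instead of per-page
flag bytes; schematically (structure and name simplified):

    struct RAMListSketch {
        unsigned long *dirty_memory[DIRTY_MEMORY_NUM];  /* one bitmap per client */
    };

    /* callers then index by client, e.g. */
    set_bit(page, ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);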
For historical reasons it was bit 3. While at it, create a constant for
the number of clients.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Document it
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
So remove the flag argument and do it directly. After this change,
there is nothing else using cpu_physical_memory_set_dirty_flags() so
remove it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
So cpu_physical_memory_get_dirty_flags is not needed anymore
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
So return void.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>