The MO_8/16/32/64 constants have the same encoding and meaning
as OT_BYTE/WORD/LONG/QUAD. Since we rely on them being the same
for the qemu_ld/st helpers, standardize on the common names.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preference to the older helpers. Stores only in this patch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preference to the older helpers. Loads only in this patch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Now that we don't combine mem_index with operand size info,
we don't need to encode it, which tidies many places that
access it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than add s->mem_index into a combined size+mem_index
argument, pass the context down. This will allow cleaning
up s->mem_index later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
After commit b1bbfe7 (aio / timers: On timer modification, qemu_notify
or aio_notify, 2013-08-21) FreeBSD guests report a huge slowdown.
The problem shows up as soon as FreeBSD turns on its periodic (~1 ms)
tick, but the timers are only the trigger for a pre-existing problem.
Before the offending patch, setting a timer did a timer_settime system call.
After, setting the timer exits the event loop (which uses poll) and
reenters it with a new deadline. This does not cause any slowdown; the
difference is between one system call (timer_settime) plus a signal
delivery (SIGALRM) before the patch, and two system calls afterwards
(a write to a pipe or eventfd, plus calling poll again when re-entering the
event loop).
Unfortunately, the exit/enter causes the main loop to grab the iothread
lock, which in turn kicks the VCPU thread out of execution. This
causes TCG to execute the next VCPU in its round-robin scheduling of
VCPUs. When the second VCPU is mostly unused, FreeBSD runs a "pause"
instruction in its idle loop which only burns cycles without any
progress. As soon as the timer tick expires, the first VCPU runs
the interrupt handler, but very soon sets the timer again---and QEMU
then goes back doing nothing in the second VCPU.
The fix is to make the pause instruction do "cpu_loop_exit".
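The shape of the fix, as a minimal sketch (helper plumbing assumed):
finish the insn, then leave the CPU loop so the scheduler picks the
next VCPU.

    void helper_pause(CPUX86State *env, int next_eip_addend)
    {
        CPUState *cs = CPU(x86_env_get_cpu(env));

        env->eip += next_eip_addend;  /* complete the pause instruction */
        cpu_loop_exit(cs);            /* yield: let another VCPU run */
    }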
Cc: Richard Henderson <rth@twiddle.net>
Reported-by: Luigi Rizzo <rizzo@iet.unipi.it>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1384948442-24217-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
The instructions CMOVcc, FCMOVcc and F[U]COMI[P] should only be
present if the CMOV feature bit is set. Add missing feature bit
checks so we correctly fault if emulating a 486 or 586.
This fixes bug LP:1201446.
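The shape of the added check is a simple gate on the decoded feature
bits, e.g. for CMOVcc (exact placement varies per opcode):

    if (!(s->cpuid_features & CPUID_CMOV)) {
        goto illegal_op;
    }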
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Prepares for changing cpu_single_step() argument to CPUState.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Also use bool type while at it.
Prepares for moving singlestep_enabled field to CPUState.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The code reorganization in commit 4a6fd938 broke handling of PREFIX_ADR.
While fixing this, tidy and comment the code so that it's more obvious
what's going on in setting both aflag and dflag.
The TARGET_X86_64 ifdef can be eliminated because CODE64 expands to the
constant zero when TARGET_X86_64 is undefined.
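For context, the relevant macro in translate.c:

    #ifdef TARGET_X86_64
    #define CODE64(s) ((s)->code64)
    #else
    #define CODE64(s) 0
    #endif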
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1369855851-21400-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Fix EFLAGS corruption by a ROR r8/r16 imm instruction located at the end
of the TB, similarly to commit 089305ac for the non-immediate case.
Reported-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This replaces the feature-bit fields on both X86CPU and x86_def_t
structs with an array.
With this, we will be able to simplify code that simply does the same
operation on all feature words (e.g. kvm_check_features_against_host(),
filter_features_for_kvm(), add_flagname_to_bitmaps(), CPU feature-bit
property lookup/registration, and the proposed "feature-words" property)
The following field replacements were made on X86CPU and x86_def_t:
(cpuid_)features -> features[FEAT_1_EDX]
(cpuid_)ext_features -> features[FEAT_1_ECX]
(cpuid_)ext2_features -> features[FEAT_8000_0001_EDX]
(cpuid_)ext3_features -> features[FEAT_8000_0001_ECX]
(cpuid_)ext4_features -> features[FEAT_C000_0001_EDX]
(cpuid_)kvm_features -> features[FEAT_KVM]
(cpuid_)svm_features -> features[FEAT_SVM]
(cpuid_)7_0_ebx_features -> features[FEAT_7_0_EBX]
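Roughly, the new layout (enumerator order illustrative):

    typedef enum FeatureWord {
        FEAT_1_EDX,          /* CPUID[1].EDX */
        FEAT_1_ECX,          /* CPUID[1].ECX */
        FEAT_7_0_EBX,        /* CPUID[EAX=7,ECX=0].EBX */
        FEAT_8000_0001_EDX,  /* CPUID[8000_0001].EDX */
        FEAT_8000_0001_ECX,  /* CPUID[8000_0001].ECX */
        FEAT_C000_0001_EDX,  /* CPUID[C000_0001].EDX */
        FEAT_KVM,            /* CPUID[4000_0001].EAX (KVM_CPUID_FEATURES) */
        FEAT_SVM,            /* CPUID[8000_000A].EDX */
        FEATURE_WORDS,
    } FeatureWord;

    uint32_t features[FEATURE_WORDS];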
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Fixed EFLAGS corruption by a ROR r8/r16 instruction located at the end of the TB.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
gen_op_mov_TN_reg() loads the value into cpu_T[0], so this temporary should
be used instead of cpu_tmp0.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When starting from CC_OP_DYNAMIC, and issuing adox before adcx,
a typo used the wrong value for the resulting CC_OP.
Cc: Blue Swirl <blauwirbel@gmail.com>
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Fix various typos and misspellings. The bulk of these were found with
codespell.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
These correspond very closely to the insns that we're emulating.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The shift and rotate insns use movcond to set CC_OP, and thus
achieve a conditional EFLAGS setting. By discarding CC_OP in
a later flags setting insn, we can discard that movcond.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We weren't computing flags for lzcnt at all. At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.
Signed-off-by: Richard Henderson <rth@twiddle.net>
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add another slot in ENV and store two of the three inputs. This lets us
do less work when carry-out is not needed, and avoids the unpredictable
CC_OP after translating these insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
Signed-off-by: Richard Henderson <rth@twiddle.net>
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends because gen_prepare_cc will
always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp local will
(almost) never be spilled.
Example (cmp $0x21,%rax + jbe):
Before                              After
----------------------------------- -----------------------------------
movi_i64 tmp1,$0x21                 movi_i64 tmp1,$0x21
movi_i64 cc_src,$0x21               movi_i64 cc_src,$0x21
sub_i64 cc_dst,rax,tmp1             sub_i64 cc_dst,rax,tmp1
add_i64 tmp7,cc_dst,cc_src
movi_i32 cc_op,$0x11                movi_i32 cc_op,$0x11
brcond_i64 tmp7,cc_src,leu,$0x0     discard loc11
                                    brcond_i64 rax,cc_src,leu,$0x0

Before                              After
----------------------------------- -----------------------------------
mov (%r14),%rbp                     mov (%r14),%rbp
mov %rbp,%rbx                       mov %rbp,%rbx
sub $0x21,%rbx                      sub $0x21,%rbx
lea 0x21(%rbx),%r12
movl $0x11,0xa0(%r14)               movl $0x11,0xa0(%r14)
movq $0x21,0x90(%r14)               movq $0x21,0x90(%r14)
mov %rbx,0x98(%r14)                 mov %rbx,0x98(%r14)
cmp $0x21,%r12                      cmp $0x21,%rbp
jbe ...                             jbe ...
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Placing the CC_OP_DYNAMIC at the join is less effective than
before the branch, as the branch will have forced global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of gen_jcc1),
redo it now also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag. In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).
However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code. For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.
[rth: Use ctz with the proper width rather than ffs.]
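The struct, per this description (a sketch; exact fields per the patch):

    typedef struct CCPrepare {
        TCGCond cond;      /* condition to test */
        TCGv reg;          /* first operand */
        TCGv reg2;         /* second operand, if use_reg2 */
        target_ulong imm;  /* immediate second operand otherwise */
        target_ulong mask; /* mask to apply to reg before the test */
        bool use_reg2;
        bool no_setcond;   /* result is already 0/1, no setcond needed */
    } CCPrepare;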
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE,
L, LE). For the others, do it via setcond and gen_setcc_slow (which is
not that slow in many cases).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And allow gen_setcc_slow to operate on cpu_cc_src.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is looking at EFLAGS, but it can do so more efficiently with
setcond.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do not hard code the destination register.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do the switch at translation time, converting the helper templates to
TCG opcodes. In some cases CF can be computed with a single setcond,
though in others it may require a little more work.
In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0. We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway. It is
not worthwhile for PF, as usual.
shr+and+xor could be replaced by and+setcond. I'm not doing it yet.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
ZF, SF and PF can always be computed from CC_DST except in the
CC_OP_EFLAGS case (and CC_OP_DYNAMIC, which just resolves to CC_OP_EFLAGS
in gen_compute_eflags). Use setcond to compute ZF and SF.
We could also use a table lookup to compute PF.
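A sketch of the idea (t0 is an assumed temporary; cpu_cc_dst holds the
properly sized result):

    /* ZF: result == 0 */
    tcg_gen_setcondi_tl(TCG_COND_EQ, t0, cpu_cc_dst, 0);

    /* SF: top bit of the (full-width) result */
    tcg_gen_shri_tl(t0, cpu_cc_dst, TARGET_LONG_BITS - 1);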
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This gets us universal coverage, rather than scattering discards
around at various places. As a bonus, we do not emit redundant
discards e.g. between sequential logic insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes code more similar to the other callers of gen_eob, especially
loopz/loopnz/jcxz.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
After calling gen_compute_eflags, leave the computed value in cc_reg_src
and set cc_op to CC_OP_EFLAGS. The next few patches will remove most
calls to gen_compute_eflags anyway.
As a result of this change it is more natural to remove the register
argument from gen_compute_eflags and change all the callers.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce new functions to extract PF, SF, OF, ZF in addition to CF.
These provide single entry points for optimizing accesses to a single
flag.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
All of the conditional calls to gen_op_set_cc_op go away, and
gen_op_set_cc_op itself gets inlined into its only remaining caller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a dirty flag to know whether env->cc_op is up to date,
rather than forcing s->cc_op to DYNAMIC and losing info.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Before computing flags we need to store the cc_op to memory. Move this
to gen_compute_eflags_c and gen_compute_eflags rather than doing it all
over the place.
Also, after computing the flags in cpu_cc_src we are in EFLAGS mode.
Set s->cc_op and discard cpu_cc_dst in gen_compute_eflags, rather than
doing it all over the place.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Discard CC_DST and set s->cc_op immediately after computing EFLAGS.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Always compute EFLAGS first since it is needed whenever
the shift is non-zero, i.e. most of the time. This makes it possible
to remove some writes of CC_OP_EFLAGS to cpu_cc_op and more importantly
removes cases where s->cc_op becomes CC_OP_DYNAMIC. Also, we can
remove cc_tmp and just modify cc_src from within the helper.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Always compute EFLAGS first since it is needed whenever the shift is
non-zero, i.e. most of the time. This makes it possible to remove some
writes of CC_OP_EFLAGS to cpu_cc_op and more importantly removes cases
where s->cc_op becomes CC_OP_DYNAMIC. These are slow and we want to
avoid them: CC_OP_EFLAGS is quite efficient once we paid the initial
cost of computing the flags.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As in the gen_repz_scas/gen_repz_cmps case, delay setting
CC_OP_DYNAMIC in gen_jcc until after code generation. All of
gen_jcc1/is_fast_jcc/gen_setcc_slow_T0 now work on s->cc_op, which makes
things a bit easier to follow and to patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Set it to the appropriate CC_OP_SUBx constant in gen_scas/gen_cmps.
In the repz case it can be overridden to CC_OP_DYNAMIC after generating
the code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a function that abstracts extracting an 8, 16, 32 or 64-bit value
with or without sign, generalizing gen_extu and gen_exts.
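A sketch of the generalization (signature assumed; the real helper may
instead return the extended TCGv):

    static void gen_ext_tl(TCGv dst, TCGv src, int size, bool sign)
    {
        switch (size) {
        case OT_BYTE:
            if (sign) {
                tcg_gen_ext8s_tl(dst, src);
            } else {
                tcg_gen_ext8u_tl(dst, src);
            }
            break;
        case OT_WORD:
            if (sign) {
                tcg_gen_ext16s_tl(dst, src);
            } else {
                tcg_gen_ext16u_tl(dst, src);
            }
            break;
    #ifdef TARGET_X86_64
        case OT_LONG:
            if (sign) {
                tcg_gen_ext32s_tl(dst, src);
            } else {
                tcg_gen_ext32u_tl(dst, src);
            }
            break;
    #endif
        default:
            tcg_gen_mov_tl(dst, src);
            break;
        }
    }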
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rename the public-facing function cpu_set_log to qemu_set_log. This
requires us to rename the internal-only qemu_set_log() to
do_qemu_set_log().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Pass around CPUArchState instead of using global cpu_single_env.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Pass around CPUArchState instead of using global cpu_single_env.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
This patch implements Supervisor Mode Execution Prevention (SMEP) and
Supervisor Mode Access Prevention (SMAP) for x86. The purpose of the
patch, obviously, is to help kernel developers debug the support for
those features.
A fair bit of the code relates to the handling of CPUID features. The
CPUID code probably would get greatly simplified if all the feature
bit words were unified into a single vector object, but in the
interest of producing a minimal patch for SMEP/SMAP, and because I had
very limited time for this project, I followed the existing style.
[ v2: don't change the definition of the qemu64 CPU shorthand, since
that breaks loading old snapshots. Per Anthony Liguori this can be
fixed once the CPU feature set is snapshotted.
Change the coding style slightly to conform to checkpatch.pl. ]
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
For all targets that currently call tcg_gen_debug_insn_start,
add CPU_LOG_TB_OP_OPT to the condition that gates it.
This is useful for comparing optimization dumps, when the
pre-optimization dump is merely noise.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Per the Intel manual:
> This instruction is always treated as a register-to-register (MOD = 11)
> instruction, regardless of the encoding of the MOD field in the MODR/M
> byte.
Also, Microport UNIX System V/386 v 2.1 (ca. 1987) runs fine on
real Intel 386 and 486 CPUs (at least), but does not run in qemu without
this patch.
Signed-off-by: Matthew Ogilvie <mmogilvi_qemu@miniinfo.net>
Signed-off-by: malc <av1474@comtv.ru>
Add an explicit CPUX86State parameter instead of relying on AREG0.
Remove temporary wrappers and switch to AREG0 free mode.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Add an explicit CPUX86State parameter instead of relying on AREG0.
Rename remains of op_helper.c to seg_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Make FPU helpers take a parameter for CPUState instead
of relying on global env.
Introduce temporary wrappers for FPU load and store ops. Remove
wrappers for non-AREG0 code. Don't call unconverted helpers
directly.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
According to the Intel manual
"Intel® 64 and IA-32 Architectures Software Developer’s Manual
Volume 3", "3.4.4 Segment Loading Instructions in IA-32e Mode":
"When in compatibility mode, FS and GS overrides operate as defined by
32-bit mode behavior regardless of the value loaded into the upper 32
linear-address bits of the hidden descriptor register base field.
Compatibility mode ignores the upper 32 bits when calculating an effective address."
However, the code misses the 64-bit mode case, where an instruction with
address and segment size override would be translated incorrectly. For example,
inc dword ptr gs:260h[ebx*4] gets incorrectly translated to:
(uint32_t)(gs.base + ebx * 4 + 0x260)
instead of
gs.base + (uint32_t)(ebx * 4 + 0x260)
Signed-off-by: Vitaly Chipounov <vitaly.chipounov@epfl.ch>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Rephrase some of the expressions used to select an entry
in the SSE op table arrays so that it's clearer that they
don't overrun the op table array size.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The X86_64_DEF macro is a confusing way of making some terms
in a conditional only appear if TARGET_X86_64 is defined. We
only use it in two places, and in both cases this is for making
the same test, so abstract that check out into a function
where we can use a more conventional #ifdef.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Commit 11f8cdb removed all the uses of the X86_64_ONLY
macro. The BUGGY_64() macro has been unused for a long time:
it originally marked some ops which couldn't be enabled
because of issues with the pre-TCG code generation scheme.
Remove the now-unnecessary definitions of both macros.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Commit c4baa0503d improved SSE table type safety, which now raises
compiler errors when QEMU is configured with --enable-debug.
Fix this by splitting the SSE tables even further to separate
helper functions with different signatures.
Instead of crashing by calling address 0, the code now jumps to
label illegal_op.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
SSE function tables could easily be corrupted because of use
of void pointers.
Introduce function pointer types and helper variables in order
to improve type safety.
Split sse_op_table3 according to types used.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Add an explicit CPUX86State parameter instead of relying on AREG0.
Merge raise_exception_env() to raise_exception(), likewise with
raise_exception_err_env() and raise_exception_err().
Introduce cpu_svm_check_intercept_param() and cpu_vmexit()
as wrappers.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Scripted conversion:
sed -i "s/CPUState/CPUX86State/g" target-i386/*.[hc]
sed -i "s/#define CPUX86State/#define CPUState/" target-i386/cpu.h
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
Commit 2355c16e74 introduced a new ldmxcsr
helper taking an i32 argument, but the helper is actually passed a long.
Fix that by truncating the long to i32.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
SSE rounding and flush to zero control has never been implemented. However,
given that softfloat-native was using a single state for FPU and SSE, and
given that glibc sets both FPU and SSE state in fesetround(), this
was working correctly up to the switch to softfloat.
Fix that by adding an update_sse_status() function similar to
update_fpu_status(), and calling it on writes to mxcsr.
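A sketch of update_sse_status(), mapping MXCSR rounding control
(bits 13-14) and flush-to-zero (bit 15) onto the softfloat status
(field names assumed):

    static void update_sse_status(CPUX86State *env)
    {
        /* rounding control */
        switch ((env->mxcsr >> 13) & 3) {
        case 0:
            set_float_rounding_mode(float_round_nearest_even, &env->sse_status);
            break;
        case 1:
            set_float_rounding_mode(float_round_down, &env->sse_status);
            break;
        case 2:
            set_float_rounding_mode(float_round_up, &env->sse_status);
            break;
        case 3:
            set_float_rounding_mode(float_round_to_zero, &env->sse_status);
            break;
        }
        /* flush-to-zero (denormals-are-zero on bit 6 is analogous) */
        set_flush_to_zero((env->mxcsr >> 15) & 1, &env->sse_status);
    }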
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When the i386 cmpxchg instruction is executed with a memory operand
and the comparison result is "unequal", do the memory write before
changing the accumulator instead of the other way around, because
otherwise the new accumulator value will incorrectly be used in the
comparison when the instruction is restarted after a page fault.
This bug was originally reported on 2010-04-25 as
https://bugs.launchpad.net/qemu/+bug/569760
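An illustrative C model of the corrected ordering (not the QEMU code
itself):

    static void cmpxchg32(uint32_t *mem, uint32_t *accum, uint32_t src)
    {
        uint32_t old = *mem;

        if (*accum == old) {
            *mem = src;
        } else {
            *mem = old;    /* this store can page-fault and restart the insn, */
            *accum = old;  /* so the accumulator must still be unmodified here */
        }
    }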
Signed-off-by: Andreas Gustafsson <gson@gson.org>
T0 was already masked to 16 bits when loading it.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Also remove two assert statements, which were the last remaining users.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
The (x << (cl - 1)) quantity is only used if CL != 0. Move the
computation of that quantity nearer its use.
This avoids the creation of undefined TCG operations when the
constant propagation optimization proves that CL == 0, and thus
CL-1 is outside the range [0-wordsize).
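An illustrative model of the reordering (the real change is at the TCG
level):

    static uint32_t shl32_carry(uint32_t x, unsigned cl)
    {
        if (cl == 0) {
            return 0;                  /* flags unchanged; value unused */
        }
        /* cl - 1 is now provably in [0, 31], so the shift is defined */
        return (x << (cl - 1)) >> 31;
    }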
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: malc <av1474@comtv.ru>
While trying to use qemu -cpu pentium3 to test for incorrect uses of certain
SSE2 instructions, I found that QEMU allowed the mfence and lfence
instructions to be executed even though Pentium 3 doesn't support them.
According to the processor specs (and experience on a real Pentium 3), these
instructions are only available with SSE2, but QEMU is checking for SSE. The
check for the related sfence instruction is correct (it works with SSE).
This trivial patch fixes the test.
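The corrected gates in the 0f ae group then look roughly like (modrm
handling per the surrounding code):

    case 5: /* lfence */
    case 6: /* mfence */
        if ((modrm & 0xc7) != 0xc0 || !(s->cpuid_features & CPUID_SSE2)) {
            goto illegal_op;
        }
        break;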
Signed-off-by: Martin Simmons <martin@lispworks.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Function gen_pc_load was introduced in commit
d2856f1ad4.
The only reason for parameter searched_pc was
a debug statement in target-i386/translate.c.
Parameter puc was needed by target-sparc until
commit d7da2a1040.
Remove searched_pc from the debug statement and remove both
parameters from the parameter list of gen_pc_load.
As the function name gen_pc_load was also misleading,
it is now called restore_state_to_opc. This new name
was suggested by Peter Maydell, thanks.
v2: Remove last parameter, too, and rename the function.
v3: Fix [] typo in target-arm/translate.c.
Fix wrong SHA1 object name in commit message (copy+paste error).
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
tcg_gen_exit_tb takes a parameter of type tcg_target_long,
so the type casts of pointer to long should be replaced by
type casts of pointer to tcg_target_long (suggested by Blue Swirl).
These changes are needed for build environments where
sizeof(long) != sizeof(void *), especially for w64.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Use this for assignment to the low byte or low word of a register.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
This patch simplifies target-i386/translate.c a bit by replacing some
code with gen_update_cc_op().
Signed-off-by: Jun Koi <junkoi2004@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This patch replaces the constant value assigned to (DisasContext
*)->is_jmp with DISAS_TB_JUMP.
Signed-off-by: Jun Koi <junkoi2004@gmail.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
ssse3 uses tables with only two entries per op, but they are indexed
with b1, which can contain values up to 3. This happens when ssse3
or sse4 instructions are used with REP* prefixes.
Add boundary checking for this case.
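The shape of the check (variable names assumed from context):

    if (b1 >= 2) {
        goto illegal_op;  /* REP-prefixed form would index past op[2] */
    }
    sse_op2 = sse_op_table6[b].op[b1];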
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We were ignoring REX_B while special-casing NOP, i.e. xchg eax,eax.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Negative four byte displacements need to be sign-extended after
c086b783eb. Do so.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The proper logging for -d cpu is done in generic code.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Commit c22549204a caused movntps and
movntdq to be translated incorrectly.
Signed-off-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
A SIB byte with an index of 4 means "no scaled index", even if the scale
value is not 0. In 64-bit mode, if REX.X is used, an index of 4 selects
%r12. This is correctly handled by the computation of the index variable,
which includes the index bits, and also the REX.X prefix:
index = ((code >> 3) & 7) | REX_X(s);
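An illustrative model of the scaled-index part (not the decoder itself):

    static uint64_t sib_index_part(uint8_t sib, int rex_x, const uint64_t *regs)
    {
        int scale = (sib >> 6) & 3;
        int index = ((sib >> 3) & 7) | (rex_x << 3);

        if (index == 4) {
            return 0;  /* "no scaled index", whatever the scale bits say */
        }
        /* with REX.X set, the same encoding selected %r12 (index == 12) */
        return regs[index] << scale;
    }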
Thanks to Avi Kivity, Jamie Lokier and Malc for the analysis of the
problem and the initial patch.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Similarly to what is done in 32938e127f
for "jmp im", trunc the immediate to 32-bit when not running in 64-bit
mode.
Reported-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
lzcnt is an AMD Phenom/Barcelona instruction returning the
number of leading zero bits in a word.
As this is similar to the "bsr" instruction, reuse the existing
code. Some more changes are needed, though, as lzcnt always
returns a valid value (unlike bsr, which has a special
case when the operand is 0).
lzcnt is guarded by the ABM CPUID bit (Fn8000_0001:ECX_5).
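Reference semantics, illustrative only (the 16- and 64-bit forms are
analogous):

    static int lzcnt32(uint32_t x)
    {
        int n = 0;

        if (x == 0) {
            return 32;  /* defined result; bsr leaves its destination undefined */
        }
        while (!(x & 0x80000000u)) {
            x <<= 1;
            n++;
        }
        return n;
    }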
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The arpl implementation in target-i386/translate.c uses the cpu_A0
temporary across a brcond op. This patch fixes that issue.
Signed-off-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This patch corrects the following aspects of exception generation in
fxsave/fxrstor:
* Generate #GP if the operand is not aligned to a 16 byte boundary
* Generate #UD if the LOCK prefix is used
* For CR0.EM = 1 #NM is generated, not #UD
Signed-off-by: Kevin Wolf <mail@kevin-wolf.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
RDTSCP atomically reads the time stamp counter together with the content
of a 32-bit MSR, which can be freely set by the OS. This allows CPU
local data to be queried by userspace.
Linux uses this to allow a fast implementation of the getcpu()
syscall, which uses the vsyscall page to avoid a context switch.
AMD CPUs since K8RevF and Intel CPUs since Nehalem support this
instruction.
RDTSCP is guarded by the RDTSCP CPUID bit (Fn8000_0001:EDX[27]).
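A sketch of the helper (calling convention assumed; the era's helpers
actually used the global env):

    void helper_rdtscp(CPUX86State *env)
    {
        helper_rdtsc(env);                          /* EDX:EAX = TSC */
        env->regs[R_ECX] = (uint32_t)env->tsc_aux;  /* ECX = MSR_TSC_AUX */
    }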
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This adds support for the AMD Phenom/Barcelona's SSE4a instructions.
Those include insertq and extrq, which do shift and mask operations on
XMM registers, in two versions (with immediate shift/length values, or
with those values stored in another XMM register).
Additionally it implements movntss, movntsd, which are scalar
non-temporal stores (avoiding cache thrashing). These are implemented
as normal stores, though.
SSE4a is guarded by the SSE4A CPUID bit (Fn8000_0001:ECX[6]).
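Reference semantics for the immediate form of extrq, illustrative only
(a length of 0 encodes 64 bits):

    static uint64_t extrq(uint64_t src, unsigned shift, unsigned len)
    {
        uint64_t mask = len ? (1ULL << len) - 1 : ~0ULL;

        return (src >> shift) & mask;
    }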
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
AMD CPUs feature a shortcut to access CR8 even from 32-bit mode:
if you use the LOCK prefix with "mov CR0", it accesses CR8 instead.
This behavior is guarded by the CR8_LEGACY CPUID bit
(Fn8000_0001:ECX[4]).
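The shape of the redirect in the mov-to/from-CR decode (flag name as
defined in cpu.h):

    if (reg == 0 && (prefixes & PREFIX_LOCK) &&
        (s->cpuid_ext3_features & CPUID_EXT3_CR8LEG)) {
        reg = 8;  /* LOCK mov CR0 really means CR8 */
    }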
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc923584,
f40d753718,
96555a96d7 and
3990d09adf but the fixes were fragile.
Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Include assert.h from qemu-common.h and remove other direct uses.
cpu-all.h still needs to include it because of the dyngen-exec.h hacks.
Signed-off-by: Paul Brook <paul@codesourcery.com>
This replaces a compile time option for some targets and adds
this feature to targets which did not have a compile time option.
Add monitor command to enable or disable single step mode.
Modify monitor command "info status" to display single step mode.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7004 c046a42c-6fe2-441c-8c8c-71466251a162
Rename bswap_i32 into bswap32_i32 and bswap_i64 into bswap64_i64
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6829 c046a42c-6fe2-441c-8c8c-71466251a162
These are references to 'loglevel' that aren't part of a simple
'if (loglevel & X) qemu_log()' statement.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6340 c046a42c-6fe2-441c-8c8c-71466251a162
This is a large patch that changes all occurrences of logfile/loglevel
global variables to use the new qemu_log*() macros.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6338 c046a42c-6fe2-441c-8c8c-71466251a162
The attached patch updates the FSF address in the GPL/LGPL boilerplate
in most GPL/LGPLed files, and also in COPYING.LIB.
Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6162 c046a42c-6fe2-441c-8c8c-71466251a162
When running grub-install (32-bit) on an x86_64 Linux system in qemu, it
hangs on a page fault forever, because an integer overflow occurs on the
IP on "jmp im". This patch masks overflows for 32 bit IPs on a 64 bit
system, just like it is done for 16 bit IPs already.
Using this patch, x86_64 openSUSE installation works again.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Kevin Wolf <kwolf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5963 c046a42c-6fe2-441c-8c8c-71466251a162
The icebp instruction can be abused to terminate the emulation,
resulting in denial of service.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5921 c046a42c-6fe2-441c-8c8c-71466251a162
This switches cpu_break/watchpoint_* to TAILQ wrappers, simplifying the
code and also fixing a use-after-release issue in
cpu_break/watchpoint_remove_all.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5799 c046a42c-6fe2-441c-8c8c-71466251a162
This patch prepares the QEMU cpu_watchpoint/breakpoint API to allow the
succeeding enhancements this series comes with.
First of all, it overcomes MAX_BREAKPOINTS/MAX_WATCHPOINTS by switching
to dynamically allocated data structures that are kept in linked lists.
This also allows returning a stable reference to the related objects,
required for later introduced x86 debug register support.
Breakpoints and watchpoints are stored with their full information set
and an additional flag field that makes them easily extensible for use
beyond pure guest debugging.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5738 c046a42c-6fe2-441c-8c8c-71466251a162
Forced the constant's width to long long so that it doesn't overflow;
problem spotted by C. W. Betts.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5417 c046a42c-6fe2-441c-8c8c-71466251a162
This adds support for the CPUID_EXT_SSE41, CPUID_EXT_SSE42, CPUID_EXT_POPCNT
extensions. Most instructions haven't been tested yet.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5411 c046a42c-6fe2-441c-8c8c-71466251a162
On Intel CPUs, sysenter and sysexit are valid in 64-bit mode. This patch
makes both 64-bit aware and enables them for Intel CPUs.
Add cpu save/load for 64-bit wide sysenter variables.
Signed-off-by: Alexander Graf <agraf@suse.de>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5318 c046a42c-6fe2-441c-8c8c-71466251a162