Replace the "nvdimm-cap" option which took numeric arguments such as "2"
with a more user friendly "nvdimm-persistence" option which takes symbolic
arguments "cpu" or "mem-ctrl".
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This commit:
commit aa78a16d86 ("hw/i386: Rename 2.13 machine types to 3.0")
updated the name used to create the q35 machine, which in turn changed the
SSDT table generated when we run "make check":
acpi-test: Warning! SSDT mismatch. Actual [asl:/tmp/asl-QZDWJZ.dsl,
aml:/tmp/aml-T8JYJZ], Expected [asl:/tmp/asl-DTWVJZ.dsl,
aml:tests/acpi-test-data/q35/SSDT.dimmpxm].
Here's the only difference, aside from the checksum:
< Name (MEMA, 0x07FFF000)
---
> Name (MEMA, 0x07FFE000)
Update the binary table that we compare against so it reflects this name
change.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Fixes: commit aa78a16d86 ("hw/i386: Rename 2.13 machine types to 3.0")
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2018-06-11' into staging
Block patches:
- Various bug fixes
- Removal of qemu-img convert's deprecated -s option
- qemu-io now exits with an error when a command failed
# gpg: Signature made Mon 11 Jun 2018 15:23:42 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2018-06-11: (29 commits)
iotests: Add case for a corrupted inactive image
qcow2: Do not mark inactive images corrupt
block: Make bdrv_is_writable() public
throttle: Fix crash on reopen
block/qcow2-bitmap: fix free_bitmap_clusters
qemu-img: Remove deprecated -s snapshot_id_or_name option
iotests: Fix 219's timing
iotests: improve pause_job
iotests: Test post-backing convert target behavior
qemu-img: Special post-backing convert handling
iotests: Add test for rebasing with relative paths
qemu-img: Resolve relative backing paths in rebase
iotests: Let 216 make use of qemu-io's exit code
iotests.py: Add qemu_io_silent
qemu-io: Exit with error when a command failed
qemu-io: Let command functions return error code
qemu-io: Drop command functions' return values
iotests: Repairing error during snapshot deletion
qcow2: Repair OFLAG_COPIED when fixing leaks
iotests: Rework 113
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
When signaling a corruption on a read-only image, qcow2 already makes
fatal events non-fatal (i.e., they will not result in the image being
closed, and the image header's corrupt flag will not be set). This is
necessary because we cannot set the corrupt flag on read-only images,
and it is possible because further corruption of read-only images is
impossible.
Inactive images are effectively read-only, too, so we should do the same
for them. bdrv_is_writable() can tell us whether an image can actually
be written to, so use its result instead of !bs->read_only.
(Otherwise, the assert(!(bs->open_flags & BDRV_O_INACTIVE)) in
bdrv_co_pwritev() will fail, crashing qemu.)
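In sketch form, the change in the corruption handler is simply the following
(assuming a local 'fatal' flag, as the prose above implies):

    /* Sketch: never escalate to a fatal corruption on a node we cannot
     * write to; bdrv_is_writable() also covers inactive images, which a
     * plain !bs->read_only check would treat as writable. */
    fatal = fatal && bdrv_is_writable(bs);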
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This is a useful function for the whole block layer, so make it public.
At the same time, users outside of block.c probably do not need to make
use of the reopen functionality, so rename the current function to
bdrv_is_writable_after_reopen() and create a new bdrv_is_writable() function
that just passes NULL to it for the reopen queue.
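The new public wrapper presumably amounts to no more than this (sketch):

    bool bdrv_is_writable(BlockDriverState *bs)
    {
        /* No reopen queue: query the node's current state only. */
        return bdrv_is_writable_after_reopen(bs, NULL);
    }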
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The throttle block filter can be reopened, and with this it is
possible to change the throttle group that the filter belongs to.
The way the code does that is the following:
- On throttle_reopen_prepare(): create a new ThrottleGroupMember
and attach it to the new throttle group.
- On throttle_reopen_commit(): detach the old ThrottleGroupMember,
delete it and replace it with the new one.
The problem with this is that by replacing the ThrottleGroupMember the
previous value of io_limits_disabled is lost, causing an assertion
failure in throttle_co_drain_end().
This problem can be reproduced by reopening a throttle node:
$QEMU -monitor stdio
-object throttle-group,id=tg0,x-iops-total=1000 \
-blockdev node-name=hd0,driver=qcow2,file.driver=file,file.filename=hd.qcow2 \
-blockdev node-name=root,driver=throttle,throttle-group=tg0,file=hd0,read-only=on
(qemu) block_stream root
block/throttle.c:214: throttle_co_drain_end: Assertion `tgm->io_limits_disabled' failed.
Since we only want to change the throttle group on reopen there's no
need to create a ThrottleGroupMember and discard the old one. It's
easier if we simply detach it from its current group and attach it to
the new one.
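In sketch form, the commit step then boils down to moving the member, using
the registration helpers throttle-groups.c already provides (the helper and
its argument names here are hypothetical):

    /* Sketch: on reopen commit, keep the existing ThrottleGroupMember (and
     * thus its io_limits_disabled counter) and only move it to the new
     * group, instead of replacing it with a freshly created member. */
    static void throttle_switch_group(BlockDriverState *bs,
                                      ThrottleGroupMember *tgm,
                                      const char *new_group)
    {
        throttle_group_unregister_tgm(tgm);
        throttle_group_register_tgm(tgm, new_group,
                                    bdrv_get_aio_context(bs));
    }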
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180608151536.7378-1-berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This assert may fail because bitmap_table is not initialized. Just drop it:
it is obvious that bitmap_table_load sets the bitmap_table parameter only
when returning zero.
Reported-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180608101225.2575-1-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
It has been marked as deprecated since QEMU v2.0, so now it is finally
time to remove it.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 1528288551-31641-1-git-send-email-thuth@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
219 has two issues that may lead to sporadic failure, both of which are
the result of issuing query-jobs too early after a job has been
modified. This can then lead to different results based on whether the
modification has taken effect already or not.
First, query-jobs is issued right after the job has been created.
Besides its current progress possibly being in any random state (which
has already been taken care of), its total progress too is basically
arbitrary, because the job may not yet have been able to determine it.
This patch addresses this by just filtering the total progress, like
what has been done for the current progress already. However, for more
clarity, the filtering is changed to replace the values by a string
'FILTERED' instead of deleting them.
Secondly, query-jobs is issued right after a job has been resumed. The
job may or may not yet have had the time to actually perform any I/O,
and thus its current progress may or may not have advanced. To make
sure it has indeed advanced (which is what the reference output already
assumes), keep querying it until it has.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606190628.8170-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
It is possible that the job finishes while we are still waiting for it. In
that case we only get the error message "Timeout waiting for job to pause",
which is not very informative. So let's check during each waiting iteration
that the job still exists.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180601115923.17159-1-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This adds a test case to 122 for what happens when you convert to a
target with a backing file that is shorter than the target, and the
image format does not support efficient zero writes (as is the case with
qcow2 v2).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180501165750.19242-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, qemu-img convert writes zeroes when it reads zeroes.
Sometimes it does not because the target is initialized to zeroes
anyway, so we do not need to overwrite (and thus potentially allocate)
it. This is never the case for targets with backing files, though. But
even they may have an area that is initialized to zeroes, and that is
the area past the end of the backing file (if that is shorter than the
overlay).
So if the target format's unallocated blocks are zero and there is a gap
between the target's backing file's end and the target's end, we do not
have to explicitly write zeroes there.
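Expressed as a sketch (hypothetical helper and parameter names; the real
logic sits in qemu-img convert's copy loop):

    /* Sketch: a zero read from the source can be skipped on the target if
     * the target already reads as zero there.  With a backing file, that
     * is only true past the backing file's end. */
    static bool can_skip_zero_write(int64_t offset, int64_t backing_end,
                                    bool target_has_backing,
                                    bool unallocated_is_zero)
    {
        if (!unallocated_is_zero) {
            return false;
        }
        return !target_has_backing || offset >= backing_end;
    }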
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1527898
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180501165750.19242-2-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509182002.8044-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, rebase interprets a relative path for the new backing image
as follows:
(1) Open the new backing image with the given relative path (thus relative to
qemu-img's working directory).
(2) Write it directly into the overlay's backing path field (thus
relative to the overlay).
If the overlay is not in qemu-img's working directory, these two
interpretations differ. That may lead to an error somewhere (either rebase
fails because it cannot open the new backing image, or your overlay becomes
unusable because its backing path does not point to a file), or, even worse,
the rebase may silently be performed against a different backing file than
the one your overlay will point to after the rebase.
Fix this by interpreting the target backing path as relative to the
overlay, like qemu-img does everywhere else.
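A sketch of the core of the fix (assuming block.c's existing helper for
resolving backing paths; the variable names are made up):

    Error *local_err = NULL;
    char *full_backing;

    /* Resolve the new backing path relative to the overlay's filename,
     * not relative to qemu-img's working directory, before opening it. */
    full_backing = bdrv_get_full_backing_filename_from_filename(
        overlay_filename, out_baseimg, &local_err);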
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1569835
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180509182002.8044-2-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
As a showcase of how you can use qemu-io's exit code to determine
success or failure (same for qemu-img), this test is changed to use
qemu_io_silent() instead of qemu_io(), and to assert the exit code
instead of logging the filtered result.
One real advantage of this is that in case of an error, you get a
backtrace that helps you locate the issue in the test file quickly.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509194302.21585-6-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
With qemu-io now returning a useful exit code, some tests may find it
sufficient to just query that instead of logging (and filtering) the
whole output.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509194302.21585-5-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, qemu-io basically always returns success when it gets to
interactive mode (so once the whole command line has been parsed; even
before the commands on the command line are interpreted). That is not
very useful.
This patch makes qemu-io return failure when any of the executed
commands failed.
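A minimal sketch of the idea (the variable and its placement are
hypothetical):

    /* Sketch: remember whether any executed command reported an error and
     * reflect that in qemu-io's exit status. */
    static bool command_failed;

    /* after each command's return value 'ret' is known: */
    if (ret < 0) {
        command_failed = true;
    }

    /* at the end of main(): */
    return command_failed ? 1 : 0;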
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1519617
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509194302.21585-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This is basically what everything else in the qemu code base does, so we
can do it here, too.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509194302.21585-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
In qemu-io, each command function returns an integer with two possible
values: 0 for "qemu-io may continue execution", or 1 for "qemu-io should
exit". However, the only command that ever returns 1 is "quit".
So let's track that case in a global variable instead, which lets us make
better use of the return value in a later patch.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509194302.21585-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This adds a test for an I/O error during snapshot deletion, and maybe
more importantly, for how to repair the resulting image. If the
snapshot has been deleted before the error occurs, the only negative
result will be leaked clusters -- and those should be repairable with
qemu-img check -r leaks.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509200059.31125-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Repairing OFLAG_COPIED is usually safe because it is done after the
refcounts have been repaired. Therefore, if we did not find anyone else
referencing a data or L2 cluster, it makes no sense not to set
OFLAG_COPIED -- and the other direction (clearing OFLAG_COPIED) is always
safe anyway, it may just induce leaks.
Furthermore, if OFLAG_COPIED is actually consistent with a wrong (leaky)
refcount, -r leaks will decrement the refcount, but OFLAG_COPIED will then
be wrong. qemu-img check should not produce images that are more corrupted
afterwards than they were before.
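In sketch form, the repair direction is simply (hypothetical local names;
the real code lives in qcow2's check/repair pass, after the refcounts have
been fixed):

    /* Sketch: with repaired refcounts, OFLAG_COPIED must simply mirror
     * whether the cluster's refcount is exactly 1. */
    if (refcount == 1) {
        l2_entry |= QCOW_OFLAG_COPIED;
    } else {
        l2_entry &= ~QCOW_OFLAG_COPIED;
    }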
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1527085
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509200059.31125-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This test case has been broken since 398e6ad014 (roughly half a year).
qemu-img amend requires its output image to be R/W, so it opens it as such;
the node is then automatically turned into a read-only node, which is now
accompanied by a warning. That warning has never been part of the
reference output.
For one thing, this warning shows that we cannot keep the test case as
it is. We would need a format that has no create_opts but that does
have write support -- we do not have such a format, though.
Another thing is that qemu now actually checks whether an image format
supports amendment instead of whether it has create_opts (since the
former always implies the latter). So we can now use any format that
does not support amendment (even if it supports creation) and thus test
the same code path.
The reason nobody has noticed the breakage until now is of course that
nobody runs the iotests for nbd+bochs. There was actually never any real
reason to set the protocol to "nbd" other than that it was technically
correct; functionally it made no difference. So that is the first thing we
are going to change: make the protocol "file" instead, so that people might
actually notice breakage here.
Secondly, now that bochs no longer works for the amend test case, we have
to change the format there anyway. So let us just bend the truth a bit and
declare this a raw test. In fact, that does not even affect the bochs test
cases, other than the output now reading 'bochs' instead of 'IMGFMT'.
So with this test now being a raw test, we can rework the amend test
case to use raw instead.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180509210023.20283-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This adds test cases to 082 for qemu-img create/convert/amend "-o help"
on formats that do not support creation or amendment, respectively.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509210023.20283-7-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The only users of print_block_option_help() are qemu-img create and
qemu-img convert for the output image, so this function is always used
for image creation (it used to be used for amendment also, but that is
no longer the case).
So if image creation is not supported by either the format or the
protocol, there is no need to print any option description, because the
user cannot create an image like this anyway.
This also fixes an assertion failure:
$ qemu-img create -f bochs -o help
Supported options:
qemu-img: util/qemu-option.c:219:
qemu_opts_print_help: Assertion `list' failed.
[1] 24831 abort (core dumped) qemu-img create -f bochs -o help
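A sketch of the early-out that avoids this (variable names are hypothetical;
the real check sits in print_block_option_help()):

    /* Sketch: refuse to print creation options for a driver that cannot
     * create images, so qemu_opts_print_help() never sees a NULL list. */
    if (!drv->create_opts) {
        error_report("Format driver '%s' does not support image creation",
                     format);
        return 1;
    }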
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509210023.20283-6-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The more generic print_block_option_help() function is not really
suitable for qemu-img amend, for a couple of reasons:
(1) We do not need to append the protocol-level options, as amendment
happens only on one node and does not descend downwards to its
children.
(2) print_block_option_help() says those options are "supported". For
option amendment, we do not really know that. So this new function
explicitly says that those options are the creation options, and not
all of them may be supported.
(3) If the driver does not support option amendment, we should not print
anything (except for an error message that amendment is not
supported).
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1537956
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509210023.20283-5-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
It really is up to the caller to decide what this list of options means.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509210023.20283-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Looking at the qcow2 code that is riddled with error_report() calls,
this is really how it should have been from the start.
Along the way, turn the target_version/current_version comparisons at
the beginning of qcow2_downgrade() into assertions (the caller has to
make sure these conditions are met), and rephrase the error message on
using compat=1.1 to get refcount widths other than 16 bits.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180509210023.20283-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Instead of checking whether a driver has a non-NULL create_opts we
should check whether it supports image amendment in the first place. If
it does, it must have create_opts.
On the other hand, if it does not have create_opts (so it does not
support amendment either), the error message "does not support any
options" is a bit useless. Stating clearly that the driver has no
amendment support whatsoever is probably better.
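The corresponding check might look roughly like this (a sketch; 'drv' and
'format' are hypothetical locals at the qemu-img amend call site):

    /* Sketch: test for amendment support directly rather than inferring
     * it from the presence of create_opts. */
    if (!drv->bdrv_amend_options) {
        error_report("Format driver '%s' does not support option amendment",
                     format);
        return 1;
    }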
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509210023.20283-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch adds a test case to 153 which tries to overwrite an image
(using qemu-img create) while it is in use. Without the original user
explicitly sharing the necessary permissions (writing and truncation),
this should not be allowed.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180509215336.31304-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
When creating a file, we should take the WRITE and RESIZE permissions.
We do not need either for the creation itself, but we do need them for
clearing and resizing it. So we can take the proper permissions by
replacing O_TRUNC with an explicit truncation to 0, and by taking the
appropriate file locks between those two steps.
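A sketch of the intended sequence (error handling and the lock helpers are
elided/hypothetical):

    /* Sketch: create without O_TRUNC, take the WRITE/RESIZE file locks,
     * and only then truncate explicitly so the clearing happens under
     * the proper permissions. */
    fd = qemu_open(filename, O_RDWR | O_CREAT, 0644);
    /* ... apply the WRITE and RESIZE byte-range locks on fd here ... */
    if (ftruncate(fd, 0) < 0) {
        /* report the error and bail out */
    }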
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180509215336.31304-3-mreitz@redhat.com
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
raw_apply_lock_bytes() and raw_check_lock_bytes() currently take a
BDRVRawState *, but they only use the lock_fd field. During image
creation, we do not have a BDRVRawState, but we do have an FD; so if we
want to reuse the functions there, we should modify them to receive only
the FD.
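After the change, the helpers presumably take the bare file descriptor,
along these lines (a sketch of the prototypes; the exact parameter lists are
an assumption):

    static int raw_apply_lock_bytes(int fd, uint64_t perm_lock_bits,
                                    uint64_t shared_perm_lock_bits,
                                    bool unlock, Error **errp);
    static int raw_check_lock_bytes(int fd, uint64_t perm,
                                    uint64_t shared_perm, Error **errp);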
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180509215336.31304-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Rather than limit total TB size to PAGE-32 bytes, end the TB when
near the end of a page. This should provide proper semantics of
SIGSEGV when executing near the end of a page.
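In the common translator-loop pattern this ends up as a per-instruction
check of roughly this shape (page_start and MAX_INSN_BYTES are assumptions
about the target's DisasContext, not names taken from the patch):

    /* Sketch: stop the TB once the next instruction could cross into the
     * following page, rather than capping the TB size up front. */
    if (dc->base.is_jmp == DISAS_NEXT
        && dc->base.pc_next - page_start >= TARGET_PAGE_SIZE - MAX_INSN_BYTES) {
        dc->base.is_jmp = DISAS_TOO_MANY;
    }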
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-9-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Remove ctx->insn_pc in favour of ctx->base.pc_next.
Yes, it is annoying, but we didn't want to waste its 4 bytes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-7-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The name gen_lookup_tb is at odds with tcg_gen_lookup_and_goto_tb.
For these cases, we do indeed want to exit back to the main loop.
Similarly, DISAS_UPDATE performs no actual update, whereas DISAS_EXIT
does what it says.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-6-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
These are all indirect or out-of-page direct jumps.
We can indirectly chain to the next TB without going
back to the main loop.
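Chaining through the TB lookup helper typically looks like this at the end
of such a jump (a sketch of the usual pattern, not the exact hunk):

    /* Sketch: instead of exiting to the main loop, look up the next TB
     * and jump to it directly; execution does not return to this TB. */
    tcg_gen_lookup_and_goto_ptr();
    dc->base.is_jmp = DISAS_NORETURN;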
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-5-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We have exited the TB after using goto_tb; there is no
distinction from DISAS_NORETURN.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The raise_exception helper does not return. Do not generate
any code following that.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
For the case where the end_transfer_func is also the caller of
ide_transfer_start, the mutual recursion can lead to unlimited
stack usage. Introduce a new version that can be used to change
tail recursion into a loop, and use it in ide_atapi_cmd_reply_end.
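A sketch of the new entry point's shape (the argument list mirrors
ide_transfer_start; the return convention is an assumption made for the
caller's loop):

    /* Hypothetical prototype: set up the transfer without re-entering the
     * end_transfer_func, so the caller can iterate instead of recursing. */
    bool ide_transfer_start_norecurse(IDEState *s, uint8_t *buf, int size,
                                      EndTransferFunc *end_transfer_func);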
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-8-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
The ATAPI_INT_REASON_IO interrupt is raised when I/O starts, but in the
AHCI case ide_set_irq was actually called at the end of a mutual recursion.
Move it early, with the side effect that ide_transfer_start becomes a tail
call in ide_atapi_cmd_reply_end.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-7-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
There is code checking s->end_transfer_func and it was not taught about
ide_transfer_cancel. We can just use ide_transfer_stop because
s->end_transfer_func is only ever called in the DRQ phase.
ide_transfer_cancel can then be removed, since it would just be
calling ide_transfer_halt.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-6-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
The code can simply be moved to the sole caller that has notify == true.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-5-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Now that end_transfer_func is a tail call in ahci_start_transfer,
formalize the fact that the callback (of which ahci_start_transfer is
the sole implementation) takes care of the transfer too: rename it to
pio_transfer and, if it is present, call the end_transfer_func as soon
as it returns.
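Roughly, the IDE core's side of the new contract could look like this (a
sketch; the field paths follow the existing DMA ops pattern and may differ
in detail):

    /* Sketch: let the HBA-specific callback move the PIO data, then have
     * the IDE core finish the transfer itself instead of the callback
     * recursing back into the core. */
    if (s->bus->dma->ops->pio_transfer) {
        s->bus->dma->ops->pio_transfer(s->bus->dma);
        s->end_transfer_func(s);
        return;
    }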
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
The PIO Setup FIS is written in the PIO:Entry state, which comes before
the ATA and ATAPI data transfer states. As a result, the PIO Setup FIS
interrupt is now raised before DMA ends for ATAPI commands, and tests have
to be adjusted.
This is also hinted by the description of the command header in the AHCI
specification, where the "A" bit is described as
When ‘1’, indicates that a PIO setup FIS shall be sent by the device
indicating a transfer for the ATAPI command.
and also by the description of the ACMD (ATAPI command region):
The ATAPI command must be either 12 or 16 bytes in length. The length
transmitted by the HBA is determined by the PIO setup FIS that is sent
by the device requesting the ATAPI command.
QEMU, which conflates the "generator" and the "receiver" of the FIS into
one device, always uses ATAPI_PACKET_SIZE, aka 12, for the length.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
It's not always 512, and it does wind up mattering for PIO transfers,
because this means DRQ blocks are four times as big for ATAPI.
Replace an instance of 2048 with the correct define, too.
This patch by itself winds up changing no behavior: fis->count is ignored
for CMD_PACKET, and sect_count only gets used in non-ATAPI cases.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606190955.20845-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>