Replace an unnecessary subtraction with a bitwise AND when determining the
value of ext_tcode in fw_fill_transaction() to save a cpu cycle or two in a
somewhat critical path.
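Roughly, the conversion (a sketch, not the verbatim fw-transaction.c hunk):

        if (tcode > 0x10) {
                /* combined lock tcodes are 0x10 | ext_tcode, so masking off
                 * bit 4 gives the same result as subtracting 0x10 */
                ext_tcode = tcode & ~0x10;      /* was: tcode - 0x10 */
                tcode = TCODE_LOCK_REQUEST;
        } else
                ext_tcode = 0;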
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
read_rom() obtained a fresh new fw_device.generation for each read
transaction. Hence it was able to continue reading in the middle of the
ROM even if a bus reset happened. However the device may have modified
the ROM during the reset. We would end up with a corrupt fetched ROM
image then.
Although all of this is quite unlikely, it is not impossible.
Therefore we now restart reading the ROM if the bus generation changed.
Note, the memory barrier in read_rom() is still necessary according to
tests by Jarod Wilson, despite the ->generation access being moved up
in the call chain.
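Roughly, the fetch now loops until it completes within a single
generation (a sketch; read_whole_rom() is a stand-in for the actual
reading code, not a real function name):

        for (;;) {
                generation = device->generation;
                smp_rmb();      /* node_id must not be older than generation */
                if (read_whole_rom(device, generation, rom) < 0)
                        return -EIO;
                if (device->generation == generation)
                        break;  /* no bus reset during the read, image is consistent */
                /* a bus reset happened; the device may have changed the ROM, restart */
        }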
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This is essentially what I've been beating on locally, and I've yet to hit
another config rom read failure with it.
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
fw_device.node_id and fw_device.generation are accessed without mutexes.
We have to ensure that all readers will get to see node_id updates
before generation updates.
Fixes an inability to recognize devices after "giving up on config rom",
https://bugzilla.redhat.com/show_bug.cgi?id=429950
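The pattern being relied upon, sketched (this is the idea, not a
verbatim quote of the driver):

        /* updater, during bus reset handling */
        device->node_id = new_node_id;
        smp_wmb();              /* publish node_id before generation */
        device->generation = new_generation;

        /* reader, when submitting a transaction */
        generation = device->generation;
        smp_rmb();              /* pairs with the smp_wmb() above */
        node_id = device->node_id;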
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Reviewed by Nick Piggin <nickpiggin@yahoo.com.au>.
Verified to fix 'giving up on config rom' issues on multiple system and
drive combinations that were previously affected.
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
We have to use the fw_device.generation here, not the fw_card.generation,
because the generation must never be newer than the node ID when we emit
a transaction. This cannot be guaranteed with fw_card.generation.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Verified in concert with subsequent memory barriers patch to fix 'giving
up on config rom' issues on multiple system and drive combinations that
were previously affected.
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
There was a small window where a login or reconnect job could use an
already updated card generation with an outdated node ID. We have to
use the fw_device.generation here, not the fw_card.generation, because
the generation must never be newer than the node ID when we emit a
transaction. This cannot be guaranteed with fw_card.generation.
Furthermore, the target's and initiator's node IDs can be obtained from
fw_device and fw_card. Dereferencing their underlying topology objects
is not necessary.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Verified in concert with subsequent memory barriers patch to fix 'giving
up on config rom' issues on multiple system and drive combinations that
were previously affected.
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
Ask the target to grant a 4 second window after bus reset for
reconnection instead of the standard and minimum of 1 second. This
accelerates reconnection if there is more than one target on the bus:
if a login and inquiry to one target blocks the fw-sbp2 workqueue for
more than 1s after bus reset, we can now still reconnect to the other
target.
Before that, fw-sbp2's reconnect attempts would be rejected with "error
status: 0:9" (function rejected), and fw-sbp2 would finally re-login.
All those futile reconnect attempts cost extra time until the target
which needs re-login is ready for I/O again.
The reconnect timeout field in the login ORB doesn't have to be honored
by the target though. I found that we could get up to
- allegedly 32768s from an old OXFW911 firmware,
- 256s from LSI bridges,
- 4s from OXUF922 and OXFW912 bridges,
- 2s from TI bridges,
- only the standard 1s from Initio and Prolific bridges and from
Apple OpenFirmware in target mode.
We just try to get 4 seconds which already covers the case of a few
HDDs on the same bus quite nicely.
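The request itself is a single field in the login ORB; sketched below.
The macro name and the power-of-two encoding reflect my reading of
fw-sbp2 and the SBP-2 spec, so treat the details as assumptions:

        if (function == SBP2_LOGIN_REQUEST)
                /* reconnect field requests 2^n seconds: 2^2 = 4 s */
                orb->request.misc |= MANAGEMENT_ORB_RECONNECT(2);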
A minor drawback occurs in the following (rare and impractical) corner
case:
- two initiators are there, initiator 1 holds an exclusive login to
a target,
- initiator 1 goes off the bus,
- target refuses login attempts from initiator 2 until reconnect_hold
seconds after bus reset.
An alternative approach to the issue at hand would be to parallelize
fw-sbp2's reconnect and login work.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Acked-by: Jarod Wilson <jwilson@redhat.com>
Don't attempt to send a logout ORB if the target was already unplugged
or had its link switched off. If two targets are attached, this
enhances the chance to quickly reconnect to the remaining target when
one target is plugged out.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Acked-by: Jarod Wilson <jwilson@redhat.com>
Previously, the fw-ohci driver used fixed-length buffers for storing
descriptors for isochronous receive DMA programs. If an application
(such as libdc1394) generated a DMA program that was too large, fw-ohci
would reach the limit of its fixed-sized buffer and return an error to
userspace.
This patch replaces the fixed-length ring-buffer with a linked-list of
page-sized buffers. Additional buffers can be dynamically allocated and
appended to the list when necessary. For a particular context, buffers
are kept around after use and reused as necessary, so there is no
allocation taking place after the DMA program is generated for the first
time.
In addition, the buffers it uses are coherent for DMA so there is no
syncing required before and after writes. This syncing wasn't properly
done in the previous version of the code.
-
This is the fourth version of my patch that replaces a fixed-length
buffer for DMA descriptors with a dynamically allocated linked-list of
buffers.
As we discovered with the last attempt, new context programs are
sometimes queued from interrupt context, making it unacceptable to call
tasklet_disable() from context_get_descriptors().
This version of the patch uses ohci->lock for all locking needs instead
of tasklet_disable/enable. There is a new requirement that
context_get_descriptors() be called while holding ohci->lock. It was
already held for the AT context, so adding the requirement for the iso
context did not seem particularly onerous. In addition, this has the
side benefit of allowing iso queue to be safely called from concurrent
user-space threads, which previously was not safe.
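For reference, the per-context buffer list looks roughly like this (a
sketch from my reading of the patch, details approximate):

        struct descriptor_buffer {
                struct list_head list;
                dma_addr_t buffer_bus;  /* coherent DMA address of this page */
                size_t buffer_size;
                size_t used;            /* bytes handed out so far */
                struct descriptor buffer[0];
        };

context_get_descriptors() walks this list under ohci->lock and appends
a freshly allocated page-sized buffer when the current one cannot hold
the requested descriptors.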
Signed-off-by: David Moore <dcm@acm.org>
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
-
Fixes the following issues:
- Isochronous reception stopped prematurely if an application used a
larger buffer. (Reproduced with coriander.)
- Isochronous reception stopped after one or a few frames on VT630x
in OHCI 1.0 mode. (Fixes reception in coriander, but dvgrab still
doesn't work with these chips.)
Patch update: struct member alignment, whitespace nits
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
The firewire-ohci driver so far lacked the ability to resume cycle
master duty after a "cycle too long" error happened, as was added to
ohci1394 in Linux 2.6.18 by commit 57fdb58fa5. This ports that fix to
fw-ohci.
The "cycle too long" condition has been seen in practice
- with IIDC cameras if a mode with packets too large for a speed is
chosen,
- sporadically when capturing DV on a VIA VT6306 card with ohci1394/
ieee1394/ raw1394/ dvgrab 2.
https://bugzilla.redhat.com/show_bug.cgi?id=415841#c7
(This does not fix Fedora bug 415841.)
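What the port amounts to, roughly (a sketch of the interrupt-handler
hunk; register names as in the OHCI definitions the driver uses):

        if (unlikely(event & OHCI1394_cycleTooLong)) {
                if (printk_ratelimit())
                        fw_notify("isochronous cycle too long\n");
                /* the controller has cleared cycle master; turn it back on */
                reg_write(ohci, OHCI1394_LinkControlSet,
                          OHCI1394_LinkControl_cycleMaster);
        }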
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Fix extraction of the source node id from the packet header.
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This patch corrects a number of bugs in the current OHCI 1.0
packet-per-buffer support:
1. Correctly deal with payloads that cross a page boundary. The
previous version would not split the descriptor at such a boundary,
potentially corrupting unrelated memory.
2. Allow user-space to specify multiple packets per struct
fw_cdev_iso_packet in the same way that dual-buffer allows. This is
signaled by header_length being a multiple of header_size. This
multiple determines the number of packets. The payload size allocated
per packet is determined by dividing the total payload size by the
number of packets.
3. Make sync support work properly for packet-per-buffer.
I have tested this patch with libdc1394 by forcing my OHCI 1.1
controller to use the packet-per-buffer support instead of dual-buffer.
I would greatly appreciate testing by those who have DV devices and
other types of iso streamers to make sure I didn't cause any
regressions.
Stefan, with this patch, I'm hoping that libdc1394 will work with all
your OHCI 1.0 controllers now.
The one bit of future work that remains for packet-per-buffer support is
the automatic compaction of short payloads that I discussed with
Kristian.
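The user-space contract from point 2 boils down to simple arithmetic
(a sketch; p stands for the fw_cdev_iso_packet and header_size for the
context's header size):

        /* header_length must be a non-zero multiple of header_size */
        packet_count = p->header_length / header_size;
        /* the payload is split evenly across those packets */
        payload_per_packet = p->payload_length / packet_count;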
Signed-off-by: David Moore <dcm@acm.org>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This patch fixes the problem where different OHCI 1.1 controllers behave
differently when a received iso packet straddles three or more buffers
when using the dual-buffer receive mode. Two changes are made in order
to handle this situation:
1. The packet sync DMA descriptor is given a non-zero header length and
non-zero payload length. This is because zero-payload descriptors are
not discussed in the OHCI 1.1 specs and their behavior is thus
undefined. Instead we use a header size just large enough for a single
header and a payload length of 4 bytes for this first descriptor.
2. As we process received packets in the context's tasklet, read the
packet length out of the headers. Keep track of the running total of
the packet length as "excess_bytes", so we can ignore any descriptors
where no packet starts or ends. These descriptors may not have had
their first_res_count or second_res_count fields updated by the
controller so we cannot rely on those values.
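Illustrative only (the names and structure below are mine, not the
exact fw-ohci code), but this is the bookkeeping described in point 2:

        static bool descriptor_is_mid_packet(size_t *excess_bytes,
                                             size_t desc_payload,
                                             size_t pkt_len_from_header)
        {
                if (*excess_bytes > 0) {
                        /* still inside an earlier packet: this descriptor's
                         * res_count fields may be stale, so skip it */
                        *excess_bytes -= min(*excess_bytes, desc_payload);
                        return true;
                }
                /* a packet starts here; note how far it runs past this buffer */
                if (pkt_len_from_header > desc_payload)
                        *excess_bytes = pkt_len_from_header - desc_payload;
                return false;
        }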
The main drawback of this patch is that the excess_bytes value might get
"out of sync" with the packet descriptors if something strange happens
to the DMA program. I'm not sure if such a thing could ever happen,
but I would appreciate any suggestions for making it more robust.
Also, the packet-per-buffer support may need a similar fix to deal with
issue 1, but I haven't done any work on that yet.
Stefan, I'm hoping that with this patch, all your OHCI 1.1 controllers
will work properly with an unmodified version of libdc1394.
Signed-off-by: David Moore <dcm@acm.org>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
SBP2_MAX_SECTORS is nowhere used in fw-sbp2.
It merely got copied over from sbp2 where it played a role in the past.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Bug noted by Pieter Palmers: Isochronous transmit tasklets were
scheduled on isochronous receive events, in addition to the proper
isochronous receive tasklets.
http://marc.info/?l=linux1394-devel&m=119783196222802
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This patch speeds up sbp2 a little bit --- but more importantly, it
brings the behavior of sbp2 and fw-sbp2 closer to each other. Like
fw-sbp2, sbp2 now does not limit the size of single transfers to 255
sectors anymore, unless told so by a blacklist flag or by module load
parameters.
Only very old bridge chips have been known to need the 255 sectors
limit, and we have got one such chip in our hardwired blacklist. There
certainly is a danger that more bridges need that limit; but I prefer to
have this issue present in both fw-sbp2 and sbp2 rather than just one of
them.
An OXUF922 with 400GB 7200RPM disk on an S400 controller is sped up by
this patch from 22.9 to 23.5 MB/s according to hdparm. The same effect
could be achieved before by setting a higher max_sectors module
parameter. On buses which use 1394b beta mode, sbp2 and fw-sbp2 will
now achieve virtually the same bandwidth. Fw-sbp2 only remains faster
on 1394a buses due to fw-core's gap count optimization.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Convert ieee1394 from nopage to fault.
Remove redundant vma range checks (correct resource range check is retained).
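The handler ends up in the usual .fault shape (a sketch with
illustrative names; lookup_page() stands in for whatever the old
->nopage callback computed, and the explicit address-range check goes
away because the fault API already hands us a pgoff within the vma):

        static int dma_region_pagefault(struct vm_area_struct *vma,
                                        struct vm_fault *vmf)
        {
                struct page *page = lookup_page(vma->vm_private_data, vmf->pgoff);

                if (!page)
                        return VM_FAULT_SIGBUS;
                get_page(page);
                vmf->page = page;
                return 0;
        }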
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Replace sg->length by sg_dma_len(sg). Rename a variable for shorter
line lengths and eliminate some superfluous local variables.
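For reference, the accessor pair that matters after dma_map_sg() (a
sketch; sgl/count are whatever dma_map_sg() was called with and
returned):

        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, count, i) {
                dma_addr_t addr = sg_dma_address(sg);
                unsigned int len = sg_dma_len(sg); /* mapped length, not sg->length */

                /* ... program the hardware with addr/len ... */
        }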
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
scsi_init_queue is expected to clean up allocated things when it
fails.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Needs to call kmem_cache_destroy for scsi_bidi_sdb_cache in
scsi_exit_queue.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This enables fill_from_dev_buffer and fetch_to_dev_buffer to handle
bidi commands.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Douglas Gilbert <dougg@torque.net>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This adds a get_data_transfer_info helper function that gets the lba and
sectors for READ_* and WRITE_* commands (and XDWRITEREAD_10 later).
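For the 10-byte variants this boils down to the following (a sketch;
the real helper also covers the 6-, 12- and 16-byte CDBs and its actual
signature may differ):

        static void get_data_transfer_info(unsigned char *cmd,
                                           unsigned long long *lba,
                                           unsigned int *num)
        {
                /* READ_10/WRITE_10: bytes 2-5 hold the LBA, bytes 7-8 the length */
                *lba = ((unsigned long long)cmd[2] << 24) | (cmd[3] << 16) |
                       (cmd[4] << 8) | cmd[5];
                *num = (cmd[7] << 8) | cmd[8];
        }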
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Douglas Gilbert <dougg@torque.net>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
With the sg table code, every SCSI driver is now either chain capable
or broken (or has sg_tablesize set so chaining is never activated), so
there's no need to have a check in the host template.
Also tidy up the code by moving the scatterlist size defines into the
SCSI includes and permit the last entry of the scatterlist pools not
to be a power of two.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
At the block level a bidi request uses the req->next_rq pointer for a
second bidi_read request.
At the SCSI midlayer a second scsi_data_buffer structure is used for
the bidi_read part. This bidi scsi_data_buffer is put on
request->next_rq->special. Struct scsi_cmnd is not changed.
- Define scsi_bidi_cmnd() to return true if it is a bidi request and a
second sgtable was allocated.
- Define scsi_in()/scsi_out() to return the in or out scsi_data_buffer
from this command. This API isolates users from the mechanics of bidi
(see the sketch below).
- Define scsi_end_bidi_request() to do what scsi_end_request() does but
for a bidi request. This is necessary because bidi commands are a bit
tricky here. (See comments in body)
- scsi_release_buffers() will also release the bidi_read scsi_data_buffer
- scsi_io_completion() on bidi commands will now call
scsi_end_bidi_request() and return.
- The previous work done in scsi_init_io() is now done in a new
scsi_init_sgtable() (which is 99% identical to old scsi_init_io())
The new scsi_init_io() will call the above twice if needed also for
the bidi_read command. Only at this point is a command bidi.
- In scsi_error.c at scsi_eh_prep/restore_cmnd() make sure bidi-lld is not
confused by a get-sense command that looks like bidi. This is done
by putting NULL at request->next_rq, and restoring it.
[jejb: update to sg_table and resolve conflicts
also update to blk-end-request and resolve conflicts]
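For reference, the three accessors described above look roughly like
this (a sketch; see scsi_cmnd.h for the authoritative versions):

        static inline int scsi_bidi_cmnd(struct scsi_cmnd *cmd)
        {
                return blk_bidi_rq(cmd->request) &&
                        (cmd->request->next_rq->special != NULL);
        }

        static inline struct scsi_data_buffer *scsi_in(struct scsi_cmnd *cmd)
        {
                return scsi_bidi_cmnd(cmd) ?
                        cmd->request->next_rq->special : &cmd->sdb;
        }

        static inline struct scsi_data_buffer *scsi_out(struct scsi_cmnd *cmd)
        {
                return &cmd->sdb;
        }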
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
In preparation for bidi we abstract all IO members of scsi_cmnd that
will need to be duplicated into a substructure.
- Group all IO members of scsi_cmnd into a scsi_data_buffer
structure (see the sketch below, after this list).
- Adjust accessors to new members.
- scsi_{alloc,free}_sgtable receive a scsi_data_buffer instead of a
scsi_cmnd, and work on it.
- Adjust scsi_init_io() and scsi_release_buffers() for above
change.
- Adapt other parts of scsi_lib/scsi.c to the member migration. Use
accessors where appropriate.
- Fix the documentation about scsi_cmnd in scsi_host.h
- scsi_error.c
* Changed needed members of struct scsi_eh_save.
* Careful considerations in scsi_eh_prep/restore_cmnd.
- sd.c and sr.c
* sd and sr would adjust the IO size to align on the device's block
size, so the code needs to change once we move to the
scsi_data_buffer implementation.
* Convert code to use scsi_for_each_sg
* Use data accessors where appropriate.
- tgt: convert libsrp to use scsi_data_buffer
- isd200: This driver still bangs on scsi_cmnd IO members directly,
so it needs changing
[jejb: rebased on top of sg_table patches fixed up conflicts
and used the synergy to eliminate use_sg and sg_count]
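For reference, the substructure and the accessors built on top of it,
roughly (a sketch; see scsi_cmnd.h for the authoritative versions):

        struct scsi_data_buffer {
                struct sg_table table;
                unsigned length;
                int resid;
        };

        static inline unsigned scsi_sg_count(struct scsi_cmnd *cmd)
        {
                return cmd->sdb.table.nents;
        }

        static inline struct scatterlist *scsi_sglist(struct scsi_cmnd *cmd)
        {
                return cmd->sdb.table.sgl;
        }

        static inline unsigned scsi_bufflen(struct scsi_cmnd *cmd)
        {
                return cmd->sdb.length;
        }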
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
If we export scsi_init_io()/scsi_release_buffers() instead of
scsi_{alloc,free}_sgtable() from scsi_lib, then the tgt code is much more
insulated from scsi_lib changes. As a bonus it will also gain bidi
capability when it comes.
[jejb: rebase on to sg_table and fix up rejections]
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
CC [M] drivers/scsi/aic7xxx/aic7xxx_osm_pci.o
drivers/scsi/aic7xxx/aic7xxx_osm_pci.c:148: warning: 'ahc_linux_pci_dev_suspend' defined but not used
drivers/scsi/aic7xxx/aic7xxx_osm_pci.c:166: warning: 'ahc_linux_pci_dev_resume' defined but not used
This moves aic7xxx_pci_driver struct, removes some forward declarations,
and adds some ifdef CONFIG_PM.
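The shape of the fix, sketched (only the suspend/resume handlers are
known from the warnings above; the other member names are assumptions):

        static struct pci_driver aic7xxx_pci_driver = {
                .name           = "aic7xxx",
                .probe          = ahc_linux_pci_dev_probe,
        #ifdef CONFIG_PM
                .suspend        = ahc_linux_pci_dev_suspend,
                .resume         = ahc_linux_pci_dev_resume,
        #endif
                .remove         = ahc_linux_pci_dev_remove,
                .id_table       = ahc_linux_pci_id_table,
        };

with the definitions of the two handlers themselves wrapped in the same
#ifdef CONFIG_PM / #endif pair.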
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
CC [M] drivers/scsi/aic7xxx/aic79xx_osm_pci.o
drivers/scsi/aic7xxx/aic79xx_osm_pci.c:101: warning: 'ahd_linux_pci_dev_suspend' defined but not used
drivers/scsi/aic7xxx/aic79xx_osm_pci.c:121: warning: 'ahd_linux_pci_dev_resume' defined but not used
This moves aic79xx_pci_driver struct, removes some forward
declarations, and adds some ifdef CONFIG_PM.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The driver only needs to check the SCB_ACTIVE flag if the SCB is not
in the untagged queue.
If the driver is in error recovery, you may end up panicking on a TUR
that is in the untagged queue.
Attempting to queue an ABORT message
CDB: 0x0 0x0 0x0 0x0 0x0 0x0
SCB 3 done'd twice
This patch is included in Adaptec's 6.3.11 driver on their website.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
SGI IP28 machines would need special treatment (enabling additional
wait states) when accessing memory uncached. To avoid this pain I
changed the driver to use only cached access to memory.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The commit de25deb180 changed
scsi_cmnd.sense_buffer from a static array to a dynamically allocated
buffer. We can no longer access the sense buffer via '&cmd->sense_buffer'.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The commit de25deb180 changed
scsi_cmnd.sense_buffer from a static array to a dynamically allocated
buffer. We can no longer access the sense buffer via '&cmd->sense_buffer'.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The commit de25deb180 changed
scsi_cmnd.sense_buffer from a static array to a dynamically allocated
buffer. We can no longer access the sense buffer via '&cmd->sense_buffer'.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
&cmnd->sense_buffer now zeroes the wrong thing.
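For example (a sketch of the pattern these fixes address), with
sense_buffer now a pointer:

        /* wrong: overwrites scsi_cmnd fields starting at the pointer member */
        memset(&cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);

        /* correct: clears the buffer the pointer refers to */
        memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);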
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This paves the way for multiple architecture support. Note that while
ioapic.c could potentially be shared with ia64, it is also moved.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Add printk_ratelimit check in front of printk. This prevents spamming
of the message during a 32-bit Ubuntu 6.06 server install. Previously, it
would hang during the partition formatting stage.
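The pattern is simply (the message text here is illustrative):

        if (printk_ratelimit())
                printk(KERN_WARNING "kvm: emulation failed\n");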
Signed-off-by: Ryan Harper <ryanh@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch moves kvm_vm_stat to x86.h, and every arch
can define its own kvm_vm_stat in $arch.h
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch moves two fields, round_robin_prev_vcpu and tss, to kvm_arch.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch moves two fields, vpid and vioapic, to kvm_arch.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch creates kvm_arch to hold arch-specific kvm fields
and moves the naliases and aliases fields to kvm_arch.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch moves kvm_vcpu_stat to x86.h, so every
arch can define its own kvm_vcpu_stat structure.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch removes the KVM_COMM macro, which originally held the
kvm_vcpu common fields.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch moves the kvm_vcpu definition to kvm.h, and finally
kvm.h includes x86.h.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Since these functions need to know the details of the kvm or kvm_vcpu
structure, they can't be put in x86.h. Create mmu.h to hold them.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Move all the architecture-specific fields in kvm_vcpu into a new struct
kvm_vcpu_arch.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Emulate cmpxchg8b atomically on i386. This is required to prevent the
guest pte walker from seeing a split write.
[avi: make it compile]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This lets SVM ignore writes of the value 0 to the performance counter control
registers. Thus enabling them will still fail in the guest, but a write of 0
which keeps them disabled is accepted. This is required to boot Windows
Vista 64bit.
[avi: avoid fall-thru in switch statement]
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Markus Rechberger <markus.rechberger@amd.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch fixes a compile error of the LAPIC code with APIC debugging enabled.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Markus Rechberger <markus.rechberger@amd.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
There is a race where VCPU0 is shadowing a pagetable entry while VCPU1
is updating it, which results in a stale shadow copy.
Fix that by comparing the contents of the cached guest pte with the
current guest pte after write-protecting the guest pagetable.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
With this patch KVM on SVM will exit to userspace if the guest writes to CR8
and the in-kernel APIC is disabled.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Markus Rechberger <markus.rechberger@amd.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
In addition to removing some duplicated code, this also handles the unlikely
case of real-mode code updating a guest page table. This can happen when
one vcpu (in real mode) touches a second vcpu's (in protected mode) page
tables, or if a vcpu switches to real mode, touches page tables, and switches
back.
Signed-off-by: Avi Kivity <avi@qumranet.com>
As set_pte() no longer references either a gpte or the guest walker, we can
move it out of paging mode dependent code (which compiles twice and is
generally nasty).
Signed-off-by: Avi Kivity <avi@qumranet.com>
When we emulate a guest pte write, we fail to apply the correct inherited
permissions from the parent ptes. Now that we store inherited permissions
in the shadow page, we can use that to update the pte permissions correctly.
Signed-off-by: Avi Kivity <avi@qumranet.com>
While the page table walker correctly generates a guest page fault
if a guest tries to execute a non-executable page, the shadow code does
not mark it non-executable. This means that if a guest accesses an nx
page first with a read access, then subsequent code fetch accesses will
succeed.
Fix by setting the nx bit on shadow ptes.
Signed-off-by: Avi Kivity <avi@qumranet.com>
The nx bit is awkwardly placed in the 63rd bit position; furthermore it
has a reversed meaning compared to the other bits, which means we can't use
a bitwise and to calculate compounded access masks.
So, we simplify things by creating a new 3-bit exec/write/user access word,
and doing all calculations in that.
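Sketched (the values are illustrative; the point is that all three are
plain positive bits, so permissions compound with a bitwise AND):

        #define ACC_EXEC_MASK   1
        #define ACC_WRITE_MASK  2
        #define ACC_USER_MASK   4
        #define ACC_ALL         (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)

        /* compounding parent and child pte permissions: */
        access = parent_access & pte_access;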
Signed-off-by: Avi Kivity <avi@qumranet.com>
In preparation for multi-threaded guest pte walking, use cmpxchg()
when updating guest pte's. This guarantees that the assignment of the
dirty bit can't be lost if two CPU's are faulting the same address
simultaneously.
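The update happens on a mapped guest page-table page, roughly like this
(a sketch close to the walker's cmpxchg helper; error handling omitted):

        pt_element_t *table, ret;
        struct page *page;

        page = gfn_to_page(kvm, table_gfn);
        table = kmap_atomic(page, KM_USER0);
        ret = cmpxchg(&table[index], orig_pte, orig_pte | PT_DIRTY_MASK);
        kunmap_atomic(table, KM_USER0);
        kvm_release_page_dirty(page);
        /* ret != orig_pte means another cpu changed the gpte: redo the walk */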
[avi: fix kunmap_atomic() parameters]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Stack instructions are always 64-bit on 64-bit mode; many of the
emulated stack instructions did not take that into account. Fix by
adding a 'Stack' bitflag and setting the operand size appropriately
during the decode stage (except for 'push r/m', which is in a group
with a few other instructions, so it gets its own treatment).
This fixes random crashes on Vista x64.
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch adds code to emulate the access to the cr8 register to the x86
instruction emulator in kvm. This is needed on svm, where there is no
hardware decode for control register access.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Markus Rechberger <markus.rechberger@amd.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
With apic in userspace, we must exit to userspace after a cr8 write in order
to update the tpr. But if the apic is in the kernel, the exit is unnecessary.
Noticed by Joerg Roedel.
Signed-off-by: Avi Kivity <avi@qumranet.com>
We prepare eflags for the emulated instruction, then clobber it with an 'andl'.
Fix by popping eflags as the last thing in the sequence.
Patch taken from Xen (16143:959b4b92b6bf)
Signed-off-by: Avi Kivity <avi@qumranet.com>
Instead of each subarch doing its own thing, add an API for queuing an
injection, and manage failed exception injection centrally (i.e., if
an injection failed due to a shadow page fault, we need to requeue it).
Signed-off-by: Avi Kivity <avi@qumranet.com>
This abstracts the detail of x86 hlt and INIT modes into a function.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Change
dest_LowestPrio -> IOAPIC_LOWEST_PRIORITY
dest_Fixed -> IOAPIC_FIXED
The original names are x86-specific, while the ioapic code will be reused
for ia64.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch replaces lapic structure with kvm_vcpu in ioapic.c, making ioapic
independent of the local apic, as required by ia64.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
This patch removes the KVM specific defines for MSR_EFER that were being used
in the svm support file and migrates all references to use instead the ones
from the kernel headers that are used everywhere else and that have the same
values.
Signed-off-by: Carlo Marcelo Arenas Belon <carenas@sajinet.com.pe>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Currently, make headers_check barfs due to <asm/kvm.h>, which <linux/kvm.h>
includes, not existing. Rather than add a zillion <asm/kvm.h>s, export kvm.h
only if the arch actually supports it.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Unify the special instruction switch with the regular instruction switch,
and the two byte special instruction switch with the regular two byte
instruction switch. That makes it much easier to find an instruction or
the place where an instruction needs to be added.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Currently rep processing is handled somewhere in the middle of instruction
processing. Move it to a sensible place.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Add emulation for the cmps instruction. This lets OpenBSD boot on kvm.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Previous patches have removed the dependency on cr2; we can now stop passing
it to the emulator and rename uses to 'memop'.
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Mark guest pages as accessed when removed from the shadow page tables for
better lru processing.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
The cmps and scas instructions accept the repeat prefixes F3 and F2. So
in order to emulate those prefixed instructions we need to know whether
the prefix is REP/REPE/REPZ or REPNE/REPNZ. Currently kvm doesn't make
this distinction; this patch introduces it.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Non-x86 archs don't need this mechanism. Move it to arch, and
keep its interface in common.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
The state of SECONDARY_VM_EXEC_CONTROL shouldn't depend on the
in-kernel IRQ chip; this patch fixes that.
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
The current cpuid management suffers from several problems, which inhibit
passing through the host feature set to the guest:
- No way to tell which features the host supports
While some features can be supported with no changes to kvm, others
need explicit support. That means kvm needs to vet the feature set
before it is passed to the guest.
- No support for indexed or stateful cpuid entries
Some cpuid entries depend on ecx as well as on eax, or on internal
state in the processor (running cpuid multiple times with the same
input returns different output). The current cpuid machinery only
supports keying on eax.
- No support for save/restore/migrate
The internal state above needs to be exposed to userspace so it can
be saved or migrated.
This patch adds extended cpuid support by means of three new ioctls:
- KVM_GET_SUPPORTED_CPUID: get all cpuid entries the host (and kvm)
supports
- KVM_SET_CPUID2: sets the vcpu's cpuid table
- KVM_GET_CPUID2: gets the vcpu's cpuid table, including hidden state
[avi: fix original KVM_SET_CPUID not removing nx on non-nx hosts as it did
before]
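The new table entry carries the index and per-entry flags needed for
indexed and stateful leaves; roughly (a sketch of the ioctl structure
layout):

        struct kvm_cpuid_entry2 {
                __u32 function;         /* cpuid input eax */
                __u32 index;            /* cpuid input ecx, for indexed leaves */
                __u32 flags;            /* significant-index / stateful markers */
                __u32 eax, ebx, ecx, edx;
                __u32 padding[3];
        };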
Signed-off-by: Dan Kenigsberg <danken@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
These are traditionally named 'page', but even more traditionally, that name
is reserved for variables that point to a 'struct page'. Rename them to 'sp'
(for "shadow page").
Signed-off-by: Avi Kivity <avi@qumranet.com>
Converting a frame number to an address is tricky since the data type changes
size. Introduce a function to do it. This fixes an actual bug when
accessing guest ptes.
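The pitfall, in one line (a sketch of what the helper amounts to):

        static inline gpa_t gfn_to_gpa(gfn_t gfn)
        {
                /* widen before shifting, or the high bits are silently lost */
                return (gpa_t)gfn << PAGE_SHIFT;
        }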
Signed-off-by: Avi Kivity <avi@qumranet.com>
If all we're doing is increasing permissions on a pte (typical for demand
paging), then there's no need to flush remote tlbs. Worst case they'll
get a spurious page fault.
Signed-off-by: Avi Kivity <avi@qumranet.com>
I spent an hour worrying why I see so many guest page faults on FC6 i386.
Turns out bypass wasn't implemented for nonpae. Implement it so it doesn't
happen again.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Split kvm_arch_vcpu_create() into kvm_arch_vcpu_create() and
kvm_arch_vcpu_setup(), enabling preemption notification between the two.
This means that we can now do vcpu_load() within kvm_arch_vcpu_setup().
Signed-off-by: Avi Kivity <avi@qumranet.com>
Move the !user_alloc case to kvm_arch to avoid unnecessary
code logic on non-x86 platforms.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Instead of incrementally changing the mmu cache size for every memory slot
operation, recalculate it from scratch. This is simpler and safer.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Instead of fetching one byte at a time, prefetch 15 bytes (or until the next
page boundary) to avoid guest page table walks.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Theoretically used to access memory known to be ordinary RAM, it was
never implemented. It is questionable whether it is possible to implement
it correctly.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Improve dirty bit setting for pages that kvm releases: until now, every
page we released was marked dirty; from now on, only pages that have the
potential to get dirty are marked dirty.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
When we map a page, we check whether some other vcpu mapped it for us and if
so, bail out. But we should decrease the refcount on the page as we do so.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Use kvm_write_guest_page() with empty_zero_page, instead of doing
kmap and memset.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Ensure that segment.base == segment.selector << 4 when entering the real
mode on Intel so that the CPU will not bark at us. This fixes some old
protected mode demo from http://www.x86.org/articles/pmbasics/tspec_a1_doc.htm.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Move kvm_mmu init and exit functionality out of kvm_main.c.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Meanwhile keep the interface in common code, and leave as much logic in
common code as possible.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Instead of having each architecture do it individually, we
do this in the arch-independent code (just x86 as of now).
[avi: add svm to the mix, which was added to mainline during the
2.6.24-rc process]
Signed-off-by: Amit Shah <amit.shah@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>