net_tap_init() always sets vnet_hdr using qemu_opt_get_bool(), but
initialize it in net_init_tap() anyway, just to reduce confusion.
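In sketch form, the intent is something like this (signatures simplified
and the tap device setup elided; not the literal diff):

    /* Sketch only: assumes the QemuOpts API's qemu_opt_get_bool();
     * the real net_tap_init()/net_init_tap() take more parameters. */
    static int net_tap_init(QemuOpts *opts, int *vnet_hdr)
    {
        /* Always derives vnet_hdr from the option, so the caller's
         * initialization below is redundant... */
        *vnet_hdr = qemu_opt_get_bool(opts, "vnet_hdr", 1);
        return 0; /* tap fd setup elided */
    }

    static int net_init_tap(QemuOpts *opts)
    {
        int vnet_hdr = 0; /* ...but an explicit starting value reads clearer */

        return net_tap_init(opts, &vnet_hdr);
    }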
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add a NetClientInfo pointer to VLANClientState and use that
for the typecode and function pointers.
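As a rough sketch, the struct could look something like this (the exact
member set is an assumption for illustration):

    typedef struct NetClientInfo {
        net_client_type type;       /* the typecode, moved out of each client */
        size_t size;                /* size of the client state to allocate */
        NetReceive *receive;        /* function pointers shared by every */
        NetCanReceive *can_receive; /* instance of a given client type */
        NetCleanup *cleanup;
    } NetClientInfo;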
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Okay, let's try re-enabling the drain-entire-queue behaviour, with a
difference - before each subsequent packet, use qemu_can_send_packet()
to check that we can send it. This is similar to how we check before
polling the tap fd and avoids having to drop a packet if the receiver
cannot handle it.
This patch should be a performance improvement since we no longer have
to go through the mainloop for each packet.
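A minimal sketch of the resulting loop, assuming the usual TAPState/fd
plumbing (buffer sizing, vnet header handling and the queue-full case
are elided):

    static void tap_send(void *opaque)
    {
        TAPState *s = opaque;
        ssize_t size;

        do {
            uint8_t buf[4096]; /* assumed size; the real buffer is larger */

            size = read(s->fd, buf, sizeof(buf));
            if (size <= 0) {
                break; /* tap queue drained (or read error) */
            }
            qemu_send_packet_async(s->vc, buf, size, tap_send_completed);
            /* Only read another packet if the receiver can take it,
             * mirroring the check done before polling the tap fd. */
        } while (qemu_can_send_packet(s->vc));
    }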
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
If qemu_send_packet_async() returns zero, it means the packet has been
queued and the sent callback will be invoked once it has been flushed.
This is only possible where the NIC's receive() handler returns zero
and promises to notify the networking core that room is available in its
queue again.
In the case where the receive handler does not have this capability
(and its queue fills up), it returns -1 and the networking core does not
queue the packet. This condition is indicated by a -1 return from
qemu_send_packet_async().
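From the caller's side, the contract sketches out like this (the
callback name and its exact signature are assumptions for illustration):

    static void my_send_done(VLANClientState *vc)
    {
        /* Invoked once a previously queued (zero-return) packet has
         * been flushed; it is safe to produce packets again. */
    }

    static void send_one(VLANClientState *vc, const uint8_t *buf, int size)
    {
        ssize_t ret = qemu_send_packet_async(vc, buf, size, my_send_done);

        if (ret == 0) {
            /* Queued: wait for my_send_done() before sending more. */
        } else if (ret == -1) {
            /* The receiver's queue is full and it cannot signal
             * readiness: the core did not queue the packet. */
        }
    }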
Currently, tap handles this condition simply by dropping the packet. It
should do its best to avoid getting into this situation by checking
that such NICs have room for a packet before copying the packet from
the tap interface.
tap_send() used to achieve this by only reading a single packet before
returning to the mainloop. That way, tap_can_send() is called before
reading each packet.
tap_send() was changed to completely drain the tap interface queue
without taking into account the situation where the NIC returns an
error and the packet is not queued. Let's start fixing this by
reverting to the previous behaviour of reading one packet at a time.
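In sketch form, the reverted shape is roughly this (simplified; it is
tap_can_send() that gates polling of the tap fd):

    static int tap_can_send(void *opaque)
    {
        TAPState *s = opaque;

        /* The mainloop only polls the tap fd while this returns true. */
        return qemu_can_send_packet(s->vc);
    }

    static void tap_send(void *opaque)
    {
        TAPState *s = opaque;
        uint8_t buf[4096]; /* assumed size */
        int size;

        size = read(s->fd, buf, sizeof(buf));
        if (size <= 0) {
            return;
        }
        /* One packet per invocation: return to the mainloop, which
         * re-checks tap_can_send() before calling here again. */
        qemu_send_packet_async(s->vc, buf, size, tap_send_completed);
    }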
Reported-by: Scott Tsai <scottt.tw@gmail.com>
Tested-by: Sven Rudolph <Sven_Rudolph@drewag.de>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Currently, compiling the tap sources breaks on Mac OS X. This is because of:
1) tap-linux.h requiring Linux includes
2) typos
3) missing #includes
This patch adds what's necessary to compile tap happily on Mac OS X.
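For the first item, the fix amounts to the usual kind of include guard;
an illustrative sketch, not the literal diff:

    /* Illustrative only: keep the Linux-only header away from other
     * hosts (e.g. Darwin) and use portable includes there instead. */
    #if defined(__linux__)
    #include "tap-linux.h"  /* requires Linux includes */
    #else
    #include <sys/ioctl.h>
    #include <net/if.h>
    #endif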
I haven't tested whether using tap actually works, but I don't think
that's a major issue, as that code was probably largely untested before
anyway.
I didn't split the patch, because it's only a few lines of code and
splitting is probably not worth the effort here.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Okay, this makes the tap options available on AIX even though there's
no support there, but if we want to do it right, we should not compile
the tap code on AIX at all, using e.g. CONFIG_TAP.
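The CONFIG_TAP idea amounts to something like this (table shape and
names are illustrative, not the exact tree):

    /* Hypothetical: only register the tap backend when the host
     * actually supports it, so -net tap disappears entirely on AIX. */
    typedef int (*net_client_init_func)(QemuOpts *opts);

    static const struct {
        const char *type;
        net_client_init_func init;
    } net_client_types[] = {
        { "nic", net_init_nic },
    #ifdef CONFIG_TAP
        { "tap", net_init_tap }, /* compiled out on hosts without tap */
    #endif
    };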
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>