Currently we only allocate a fixed default number of descriptors for the tx
and rx sides. We should dynamically resize them to the number of descriptors
that reside in the transport rings. We know the number of transmit
descriptors at initialization. We will allocate the default number of
descriptors for the receive side and allocate additional ones once we know
the actual max entries for receive.
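A minimal sketch of the idea, with invented placeholder types (the real
driver keeps its own queue entry structures and free lists):

  #include <linux/slab.h>
  #include <linux/list.h>
  #include <linux/errno.h>

  struct ring_entry {                 /* invented stand-in for the real entry */
      struct list_head entry;
  };

  /* Grow a free-entry pool up to the ring's actual entry count. */
  static int grow_entry_pool(struct list_head *free_q, unsigned int have,
                             unsigned int want, int node)
  {
      struct ring_entry *e;

      while (have++ < want) {
          e = kzalloc_node(sizeof(*e), GFP_KERNEL, node);
          if (!e)
              return -ENOMEM;
          list_add_tail(&e->entry, free_q);
      }
      return 0;
  }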
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <allen.hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Instead of repeatedly going through the init routine when we are unable to
allocate memory, we should just stop and bring the link down.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
We can leave the tasklet spinning forever if we disable it during qp
shutdown while tasklets are still being kicked off. This should hopefully
avoid that race condition.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reported-by: Alex Depoutovitch <alex@pernixdata.com>
Tested-by: Alex Depoutovitch <alex@pernixdata.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The transport currently does not handle the case where we run out of DMA
descriptors; it simply fails. Add code that retries the DMA engine for a
short while before falling back to a CPU copy.
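A hedged sketch of the retry idea (the retry count, delay, and fallback
handling here are assumptions, not the exact patch):

  #include <linux/dmaengine.h>
  #include <linux/delay.h>

  #define DMA_MEMCPY_RETRIES 20  /* assumed bound */

  /* Retry descriptor allocation briefly; NULL tells the caller to CPU-copy. */
  static struct dma_async_tx_descriptor *
  prep_memcpy_with_retry(struct dma_chan *chan, dma_addr_t dst,
                         dma_addr_t src, size_t len)
  {
      struct dma_device *device = chan->device;
      struct dma_async_tx_descriptor *txd = NULL;
      int i;

      for (i = 0; i < DMA_MEMCPY_RETRIES; i++) {
          txd = device->device_prep_dma_memcpy(chan, dst, src, len,
                                                DMA_PREP_INTERRUPT);
          if (txd)
              break;
          udelay(10);  /* give in-flight descriptors time to complete */
      }
      return txd;
  }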
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The transmit overrun avoidance error path in ntb_process_tx accidentally
swapped the first two values being passed to the tx_handler client.
This could result in crashes in the ntb_netdev (or other out-of-tree NTB
clients).
Reported-by: Alex Depoutovitch <alex@pernixdata.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
resource_size_t may be 32-bit wide on some architectures, which causes
this warning when building the NTB code:
drivers/ntb/ntb_transport.c: In function 'ntb_transport_link_work':
drivers/ntb/ntb_transport.c:828:46: warning: right shift count >= width of type [-Wshift-count-overflow]
The warning is harmless but can be avoided by using the upper_32_bits()
macro.
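An illustrative sketch of why the macro helps:

  #include <linux/kernel.h>  /* upper_32_bits() */
  #include <linux/types.h>

  /* Illustrative only: split a possibly 32-bit resource_size_t safely. */
  static u32 mw_size_high(resource_size_t size)
  {
      /*
       * (u32)(size >> 32) triggers -Wshift-count-overflow when
       * resource_size_t is 32 bits wide; upper_32_bits() does not.
       */
      return upper_32_bits(size);
  }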
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: e26a5843f7 ("NTB: Split ntb_hw_intel and ntb_transport drivers")
Signed-off-by: Jon Mason <jdmason@kudzu.us>
There was an order-of-operations issue with the QP num and MW count, which
would result in the receive buffer pointer being invalid if there is more
than one MW. Corrected with parentheses to enforce the proper order of
operations.
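A hypothetical illustration of the precedence problem (identifiers
invented; the driver's actual expression may differ):

  /*
   * With integer arithmetic, rx_size * qp_num / mw_count evaluates as
   * (rx_size * qp_num) / mw_count, which is not the intended per-window
   * offset once mw_count > 1.  The parentheses force the right grouping.
   */
  static void *qp_rx_buff(char *mw_virt, unsigned int rx_size,
                          unsigned int qp_num, unsigned int mw_count)
  {
      return mw_virt + rx_size * (qp_num / mw_count);
  }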
Reported-by: John I. Kading <John.Kading@gd-ms.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
These variables were not used anywhere, so remove them.
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
We were accessing nt->mw_vec after freeing it. Fix the error path so
that we free nt->mw_vec after we have finished using it.
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
smatch detected an issue in the function ntb_transport_max_size() where
we could be dereferencing a dma channel pointer when it is NULL.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Allocate two DMA channels, one for TX operation and one for RX
operation, instead of having one DMA channel for everything. This
provides slightly better performance, and also will make error handling
cleaner later on.
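A hedged sketch of requesting the separate channels (error handling and
channel release omitted):

  #include <linux/dmaengine.h>

  /* Request distinct memcpy-capable channels for the TX and RX paths. */
  static void request_qp_dma_chans(struct dma_chan **tx_chan,
                                   struct dma_chan **rx_chan)
  {
      dma_cap_mask_t dma_mask;

      dma_cap_zero(dma_mask);
      dma_cap_set(DMA_MEMCPY, dma_mask);

      *tx_chan = dma_request_channel(dma_mask, NULL, NULL);
      *rx_chan = dma_request_channel(dma_mask, NULL, NULL);
      /* a NULL channel means that direction falls back to CPU memcpy */
  }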
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The dma_sync_wait can hurt the performance of workloads mixed with both
large and small frames. Large frames will be copied using the dma
engine. Small frames will be copied by the cpu. The dma_sync_wait
prevents the cpu and dma engine copying in parallel.
In the period where the cpu is copying, the dma engine is stopped. The
dma engine is not doing any useful work to copy large frames during that
time, and there is additional time to restart the dma engine for the next
large frame. This will decrease the throughput for the portion of a
workload with large frames.
In the period where the dma engine is copying, the cpu is held up
waiting for dma to complete. The small frames processing will be
delayed until the dma is complete. The RX frames are completed
in-order, and the processing of small frames takes very little time, so
dma_sync_wait may have an insignificant impact on the response time of
frames. The more significant impact is to the system, because the delay
in dma_sync_wait is implemented as busy non-blocking wait. This can
prevent the delayed core from doing any useful work, even if it could be
processing work for other drivers, unrelated to transport RX processing.
After applying the earlier patch to fix out-of-order RX acknowledgement,
the dma_sync_wait is no longer necessary. Remove it, so that cpu memcpy
will proceed immediately for small frames, in parallel with ongoing dma
for large frames. Do not hold up the cpu from doing work while dma is
in progress. The prior fix will continue to ensure in-order completion
of the RX frames to the upper layer, and in-order delivery of the RX
acknowledgement.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Make QP stats info more readable for debugging purposes. Also add an
entry to indicate whether DMA is being used.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The transport should be added to the tail of the list rather than the head,
in order to ensure transports are provided to clients in the same order that
ntb devices are discovered.
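A minimal sketch of the change (list and helper names assumed):

  #include <linux/list.h>

  static LIST_HEAD(ntb_transport_list);

  /* Append new transports so clients see them in device discovery order. */
  static void ntb_transport_list_add(struct list_head *entry)
  {
      list_add_tail(entry, &ntb_transport_list);  /* previously list_add() */
  }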
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Right now, if we push the NTB really hard, we start dropping packets because
we cannot process them fast enough. We need to stop the upper layer from
flooding us when that happens.
A timer is necessary in order to restart the queue once the resource has
been processed on the receive side. Due to the way NTB is setup, the
resources on the tx side are tied to the processing of the rx side and
there's no async way to know when the rx side has released those
resources.
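A hedged sketch of the polling timer, using the modern timer API and
invented minimal types:

  #include <linux/timer.h>
  #include <linux/jiffies.h>

  struct qp_flow_ctl {                    /* invented for illustration */
      struct timer_list txq_timer;
      bool (*tx_has_room)(struct qp_flow_ctl *qp);
      void (*wake_queue)(struct qp_flow_ctl *qp);
  };

  /* Periodically check whether the rx side has freed tx resources. */
  static void txq_poll_timer(struct timer_list *t)
  {
      struct qp_flow_ctl *qp = from_timer(qp, t, txq_timer);

      if (qp->tx_has_room(qp))
          qp->wake_queue(qp);             /* restart the upper layer */
      else
          mod_timer(&qp->txq_timer, jiffies + msecs_to_jiffies(10));
  }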
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Remove early dereference of a pointer that is checked later in the code.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
A plain 32 bit integer will overflow for values over 4GiB.
Change the plain integer size to the appropriate size type in
ntb_set_mw. Change the type of the size parameter and two local
variables used for size.
Even if there is no overflow, a size of zero is invalid here.
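A hedged signature sketch (alignment and allocation details omitted):

  #include <linux/kernel.h>
  #include <linux/errno.h>
  #include <linux/types.h>

  struct ntb_transport_ctx;               /* opaque here */

  /* Carry the window size as resource_size_t and reject a zero size. */
  static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw,
                        resource_size_t size)
  {
      resource_size_t xlat_size;

      if (!size)
          return -EINVAL;

      xlat_size = round_up(size, 4096);
      pr_debug("mw %d: xlat_size %pa\n", num_mw, &xlat_size);
      /* ... allocate xlat_size bytes and program the NTB translation ... */
      return 0;
  }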
Reported-by: Juyoung Jung <jjung@micron.com>
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Schedule to receive on QP link up, to make sure that the doorbell is
properly cleared for interrupts.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
When the remote side is not up, we do not have all the context for the
transport, and that causes a NULL pointer access. Have the debugfs reads
check that the transport is up before accessing it.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Currently the debugfs does not have files for all NTB transport queue
pairs. When there are multiple NTBs present in a system, the QP names
of the last transport clobber the names of previously added transport
QPs. Only the last added QPs can be observed via debugfs.
Create a directory per NTB transport to associate the QPs with that
transport. Name the directory the same as the PCI device.
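A minimal sketch of the per-transport directory; the QP files are then
created underneath it:

  #include <linux/debugfs.h>
  #include <linux/pci.h>

  /* One debugfs directory per transport, named after its PCI device. */
  static struct dentry *nt_debugfs_dir(struct dentry *root,
                                       struct pci_dev *pdev)
  {
      return debugfs_create_dir(pci_name(pdev), root);
  }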
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
It was possible for a synchronous update of the RX index in the error
case to get ahead of the asynchronous RX index update in the normal
case. Change the RX processing to preserve an RX completion order.
There were two error cases. First, if a buffer is not present to
receive data, there would be no queue entry to preserve the RX
completion order. Instead of dropping the RX frame, leave the RX frame
in the ring. Schedule RX processing when RX entries are enqueued, in
case there are RX frames waiting in the ring to be received.
Second, if a buffer is too small to receive data, drop the frame in the
ring, mark the RX entry as done, and indicate the error in the RX entry
length. Check for a negative length in the receive callback in
ntb_netdev, and count occurrences as rx_length_errors.
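A hedged sketch of the ntb_netdev side of the check (buffer re-posting
omitted):

  #include <linux/netdevice.h>

  /* A negative length from the transport marks a too-small RX buffer. */
  static bool ntb_netdev_rx_len_error(struct net_device *ndev, int len)
  {
      if (len >= 0)
          return false;

      ndev->stats.rx_length_errors++;
      return true;                /* caller re-posts the receive buffer */
  }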
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Print out the driver name and version to indicate what is being loaded.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Benchmarking showed a significant performance increase with an MTU size of
64k instead of 16k. Change the driver default to 64k.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Disable DMA usage by default, since the CPU provides much better
performance with write combining. Provide a module parameter to enable
DMA usage when offloading the memcpy is preferred.
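A sketch of the opt-in switch (parameter name and permissions assumed):

  #include <linux/module.h>

  static bool use_dma;            /* default off: CPU copy + write combining */
  module_param(use_dma, bool, 0644);
  MODULE_PARM_DESC(use_dma, "Use DMA engine to offload the data copy");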
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Changing the memory window BAR mappings to write combining significantly
boosts performance. We will also use a memcpy that uses non-temporal stores,
which showed a performance improvement when doing non-cached memcpys.
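A hedged sketch of the mapping change (the non-temporal copy itself is
arch-specific and not shown):

  #include <linux/io.h>

  /* Prefer a write-combining mapping for the MW; fall back to uncached. */
  static void __iomem *ntb_map_mw(phys_addr_t base, resource_size_t len)
  {
      void __iomem *vbase = ioremap_wc(base, len);

      return vbase ? vbase : ioremap(base, len);
  }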
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Allocate memory and request the DMA channel for the same NUMA node as
the NTB device.
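A minimal sketch of node-local allocation; the DMA channel is chosen
similarly, with a filter that compares dev_to_node():

  #include <linux/slab.h>
  #include <linux/device.h>

  /* Allocate transport memory on the NTB device's NUMA node. */
  static void *ntb_alloc_local(struct device *dev, size_t size)
  {
      return kzalloc_node(size, GFP_KERNEL, dev_to_node(dev));
  }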
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
When the ntb transport is connecting and waiting for the peer, the debug
console receives lots of debug level messages about the remote qp link
status being down. Rate limit those messages.
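A sketch of the ratelimited variant (message text assumed):

  #include <linux/device.h>

  /* Keep the console quiet while waiting for the peer to come up. */
  static void report_remote_qp_down(struct device *dev, int qp_num)
  {
      dev_dbg_ratelimited(dev, "qp %d: remote qp link is down\n", qp_num);
  }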
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Reset the link stats when the link goes down. In particular, the TX and
RX index and count must be reset, or else the TX side will be sending
packets to the RX side where the RX side is not expecting them. Reset
all the stats, to be consistent.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
On link down, don't advance RX index to the next entry. The next entry
should never be valid after receiving the link down flag.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The same message "qp %d: Link Down\n" was printed at two locations in
ntb_transport. Change the messages so they are distinct.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The transport was writing and then reading the peer scratch pad,
essentially reading what it just wrote instead of exchanging any
information with the peer. The transport expects the peer values to be
the same as the local values, so this issue was not obvious.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Change ntb_hw_intel to use the new NTB hardware abstraction layer.
Split ntb_transport into its own driver. Change it to use the new NTB
hardware abstraction layer.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
This patch only moves files to their new locations, before applying the
next two patches adding the NTB Abstraction layer. Splitting this patch
from the next is intended to make clear which code is changed only due to
moving the files, versus which are substantial code changes in adding the
NTB Abstraction layer.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The value in the NTB translate register must be BAR-size aligned. This
alignment check makes sure that the allocated DMA memory has the proper
alignment. Another requirement for NTB to function properly with a memory
window BAR size greater than or equal to 4M is to use the CMA feature in the
3.16 kernel with the appropriate CONFIG_CMA_ALIGNMENT and
CONFIG_CMA_SIZE_MBYTES set.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
The detection of an uneven number of queues on the given memory windows
was not correct. The mw_num is zero-based, and the modulo should be a
division in order to spread the queues evenly over the MWs.
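An illustrative way to spread the queues evenly (the driver's exact
expression may differ): the first qp_count % mw_count windows each take
one extra queue.

  /* Number of queue pairs assigned to zero-based window mw_num. */
  static unsigned int qps_for_mw(unsigned int qp_count,
                                 unsigned int mw_count,
                                 unsigned int mw_num)
  {
      unsigned int num_qps = qp_count / mw_count;

      if (mw_num < qp_count % mw_count)
          num_qps++;

      return num_qps;
  }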
Signed-off-by: Jon Mason <jon.mason@intel.com>
Merge tag 'ntb-3.13' of git://github.com/jonmason/ntb
Pull non-transparent bridge updates from Jon Mason:
"NTB driver bug fixes to address a missed call to pci_enable_msix,
NTB-RP Link Up issue, Xeon Doorbell errata workaround, ntb_transport
link down race, and correct dmaengine_get/put usage.
Also, clean-ups to remove duplicate defines and document a hardware
errata. Finally, some changes to improve performance"
* tag 'ntb-3.13' of git://github.com/jonmason/ntb:
NTB: Disable interrupts and poll under high load
NTB: Enable Snoop on Primary Side
NTB: Document HW errata
NTB: remove duplicate defines
NTB: correct dmaengine_get/put usage
NTB: Fix ntb_transport link down race
ntb: Fix missed call to pci_enable_msix()
NTB: Fix NTB-RP Link Up
NTB: Xeon Doorbell errata workaround
dmaengine_get() causes the initialization of the per-cpu channel tables.
It needs to be called prior to dma_find_channel().
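A minimal sketch of the required ordering:

  #include <linux/dmaengine.h>

  /* dmaengine_get() populates the per-cpu channel tables used below. */
  static struct dma_chan *ntb_grab_memcpy_chan(void)
  {
      dmaengine_get();
      return dma_find_channel(DMA_MEMCPY); /* may be NULL: CPU fallback */
  }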
Initial version by Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jon Mason <jon.mason@intel.com>
A WARN_ON is being hit in ntb_qp_link_work due to the NTB transport link
being down while the ntb qp link is still active. This is caused by the
transport link being brought down prior to the qp link worker thread
being terminated. To correct this, shut down the qps prior to bringing
the transport link down. Also, only schedule the qp worker if in
interrupt context, otherwise call the function directly.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Use the generic unmap object to unmap dma buffers.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Jon Mason <jon.mason@intel.com>
[djbw: fix up unmap len, and GFP flags]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Many variable names in the NTB driver refer to the primary or secondary
side. However, these variables will be used to access the reverse case
when in NTB-RP mode. Make these names more generic in anticipation of
NTB-RP support.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Allocate and use a DMA engine channel to transmit and receive data over
NTB. If none is allocated, fall back to using the CPU to transfer data.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
There is a Xeon hardware errata related to writes to SDOORBELL or
B2BDOORBELL in conjunction with inbound access to NTB MMIO Space, which
may hang the system. To workaround this issue, use one of the memory
windows to access the interrupt and scratch pad registers on the remote
system. This bypasses the issue, but removes one of the memory windows
from use by the transport. This reduction of MWs necessitates adding
some logic to determine the number of available MWs.
Since some NTB usage methodologies may have unidirectional traffic, the
ability to disable the workaround via modparm has been added.
See BF113 in
http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-c5500-c3500-spec-update.pdf
See BT119 in
http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-family-spec-update.pdf
Signed-off-by: Jon Mason <jon.mason@intel.com>
Debugfs was setup in NTB to only have a single debugfs directory. This
resulted in the leaking of debugfs directories and files when multiple
NTB devices were present, due to each device stomping on the variables
containing the previous device's values (thus preventing them from being
freed on cleanup). Correct this by creating a secondary directory named
after the PCI BDF for each device present, and nesting the previously existing
information in those directories.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Fix issue with adding multiple ntb client devices to the ntb virtual
bus. Previously, multiple devices would be added with the same name,
resulting in crashes. To get around this issue, add a unique number to
the device when it is added.
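A hedged sketch of the naming scheme (counter and helper are invented):

  #include <linux/device.h>
  #include <linux/atomic.h>

  static atomic_t ntb_client_id = ATOMIC_INIT(-1);

  /* Give each client device a unique instance number within its name. */
  static int ntb_name_client_dev(struct device *dev, const char *base)
  {
      return dev_set_name(dev, "%s%d", base,
                          atomic_inc_return(&ntb_client_id));
  }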
Signed-off-by: Jon Mason <jon.mason@intel.com>