Fixes a bug on Tegra where we'd strip kind information from system memory
(i.e. all) buffers, resulting in misrendering.
Behaviour on dGPU should be unchanged.
Reported-by: Thierry Reding <treding@nvidia.com>
Fixes: d7722134b8 ("drm/nouveau: switch over to new memory and vmm interfaces")
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Tested-by: Thierry Reding <treding@nvidia.com>
While the Tegra (GK20A, GM20B, GP10B) MMUs support large pages in host
memory, we currently lack IOMMU support for merging system pages into
chunks large enough to be mapped as such by the GPU.
The core VMM code can automatically determine the best page size to map
with, which is intended for exactly these situations, but for various
complicated reasons the DRM currently forces the page size selection on
a per-BO basis.
This should fix breakage reported on Tegra GPUs in the meantime, until
one or both of the above issues are resolved properly.
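In the meantime, the workaround amounts to something like the sketch
below (the function and parameter names are assumptions for
illustration, not nouveau's actual API): pick the small page size
whenever the BO is backed by host memory on an integrated GPU.

	/* Hypothetical sketch, not the actual patch. */
	static u8
	bo_page_shift(bool vram, bool igpu, u8 lpg_shift, u8 spg_shift)
	{
		/* System memory on Tegra can't currently be merged into
		 * large pages by the IOMMU, so use small pages there. */
		if (!vram && igpu)
			return spg_shift;
		return lpg_shift;
	}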
Reported-by: Mikko Perttunen <cyndis@kapsi.fi>
Fixes: 7dc6a446da ("drm/nouveau: improve selection of GPU page size")
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Tested-by: Thierry Reding <treding@nvidia.com>
On my GP107, when nouveau is loaded again after being unloaded, for some
reason the GPU stops sending, or the CPU stops receiving, interrupts if
MSI is enabled.
Rearming MSI once, before any interrupts are expected, fixes this.
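The shape of the workaround, as a hypothetical sketch (none of these
names are nouveau's actual code):

	/* Hypothetical sketch, not the actual patch. */
	struct gpu_irq {
		bool msi;			/* MSI is in use */
		void (*msi_rearm)(struct gpu_irq *);
	};

	static void
	gpu_irq_init(struct gpu_irq *irq)
	{
		/* Rearm once before any interrupt is expected, so the
		 * first interrupt after a module reload isn't lost. */
		if (irq->msi)
			irq->msi_rearm(irq);
	}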
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
When the fbcon object is initialized but nouveau_fbcon_create is never
called, we run into a NULL pointer access within nouveau_fbcon_destroy
when unloading nouveau.
Since 4.14, the call to drm_fb_helper_funcs.fb_probe is deferred until a
display is actually present, which is why fbcon->helper.fb may still be
unset.
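A minimal sketch of the kind of guard this implies in the teardown path
(the function shown is illustrative, not the actual patch):

	/* Hypothetical sketch, not the actual patch. */
	static void
	fbcon_fini(struct drm_fb_helper *helper)
	{
		/* fb_probe may never have run, in which case there is no
		 * framebuffer to tear down yet. */
		if (!helper->fb)
			return;

		/* ... normal fbcon teardown ... */
	}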
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
In preparation for enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
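For reference, the marking is just a comment placed where a break would
otherwise be expected, so -Wimplicit-fallthrough (and Coverity) can tell
an intentional fall-through from a missing break; a generic example, not
the patched nouveau code:

	int
	classify(int n)
	{
		int score = 0;

		switch (n) {
		case 2:
			score++;
			/* fall through */
		case 1:
			score++;
			break;
		default:
			break;
		}
		return score;
	}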
Addresses-Coverity-ID: 1260018
Addresses-Coverity-ID: 1260019
Addresses-Coverity-ID: 1260022
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
In preparation for enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Addresses-Coverity-ID: 143119
Addresses-Coverity-ID: 143120
Addresses-Coverity-ID: 143121
Addresses-Coverity-ID: 143122
Addresses-Coverity-ID: 143123
Addresses-Coverity-ID: 143124
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Don't populate the arrays hwsq_signature and edid_sig on the stack;
instead make them static. This makes the object code smaller by over
190 bytes:
Before:
   text    data     bss     dec     hex  filename
  35676    3312      64   39052    988c  nouveau_bios.o

After:
   text    data     bss     dec     hex  filename
  35319    3472      64   38855    97c7  nouveau_bios.o
(gcc version 7.2.0 x86_64)
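The pattern in general terms, as a before/after illustration (the bytes
are placeholders, not the real hwsq_signature/edid_sig contents):

	#include <stdbool.h>
	#include <string.h>

	/* Before: the initializer is rebuilt on the stack on every call. */
	static bool
	has_sig_before(const unsigned char *p)
	{
		const unsigned char sig[] = { 0xde, 0xad, 0xbe, 0xef };
		return !memcmp(p, sig, sizeof(sig));
	}

	/* After: the data lives in static storage, so no per-call copy. */
	static bool
	has_sig_after(const unsigned char *p)
	{
		static const unsigned char sig[] = { 0xde, 0xad, 0xbe, 0xef };
		return !memcmp(p, sig, sizeof(sig));
	}

This also explains the size deltas above: data grows slightly because
the arrays now live there, while text shrinks by more, for a net saving.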
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
VMAs are about to stop taking references on the VMM they belong to,
which means more care is required when handling delayed unmapping.
Queuing the unmap on the client workqueue ensures all pending VMA unmaps
will have completed before the VMM is destroyed.
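Roughly, the ordering being relied on, as a hypothetical sketch using
the standard workqueue API (the client/VMM names are assumptions, not
nouveau's code):

	/* Hypothetical sketch, not the actual patch. */
	static void
	client_fini(struct my_client *client)
	{
		/* Delayed VMA unmaps were queued on client->wq, so flushing
		 * it here guarantees they have all completed before the VMM
		 * state they touch is freed. */
		flush_workqueue(client->wq);
		destroy_workqueue(client->wq);
		my_vmm_destroy(client->vmm);	/* hypothetical */
	}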
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is already handled in the top-level gem_new() ioctl in another
manner, but that handling will be removed in a future commit.
Ideally we'd not need to check up-front at all and could let the VMM
code handle error checking, but there are paths in the current BO
management code where this isn't possible: map() is not always called
during BO creation, and map() calls are not allowed to fail during
buffer migration.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
If the VMA is being deleted, we no longer need to explicitly unmap
first. The MMU code will automatically merge the operations into a
single page tree walk.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>