nvxx_graph_isr is already taking care of it. In some cases this
could've made you miss PGRAPH interrupts (e.g. when you were supposed
to get several IRQs of the same kind in a row).
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
No functional changes; this just simplifies some code paths a bit.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
In typical Apple fashion there's no standard information about what
encoders are present on this machine, so this patch adds a quirk to
provide it.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Fix destroy_context() so it runs correctly when create_context() has never
been called for the channel, and fill in the engine's tlb_flush() function
pointer.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Allows callers to install their own handlers for when a GPIO line
changes state (such as for hotplug detect).
This also fixes a bug where we weren't acknowledging the GPIO IRQ
until after the bottom half had run, causing a severe IRQ storm
in some cases.
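The hook pair ends up looking roughly like the following sketch; treat the
exact prototype as an assumption and check the header, the idea is simply
a per-GPIO-tag callback that gets handed the new line state:

  /* Register 'handler' to run when the GPIO line identified by 'tag'
   * (e.g. a hotplug detect pin from the DCB gpio table) changes state;
   * 'data' is handed back to the handler along with the new state. */
  int (*irq_register)(struct drm_device *dev, enum dcb_gpio_tag tag,
                      void (*handler)(void *data, int state), void *data);
  void (*irq_unregister)(struct drm_device *dev, enum dcb_gpio_tag tag,
                         void (*handler)(void *data, int state), void *data);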
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The point is to share more code between the PFB/PGRAPH tile region
hooks, and give the hardware specific functions a chance to allocate
per-region resources.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
nouveau_bo_move_m2mf() needs to lock the kernel channel, and it may be
called from the pushbuf IOCTL with a user channel already locked. Use
a separate subclass for the kernel channel mutex because this is
legitimate mutex nesting.
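For reference, a minimal sketch of the lockdep annotation this relies on;
the subclass names here are made up, but mutex_lock_nested() is the
standard primitive for marking legitimate nesting:

  #include <linux/mutex.h>

  enum {
      NOUVEAU_USER_CHANNEL_MUTEX,    /* subclass 0: user channels */
      NOUVEAU_KERNEL_CHANNEL_MUTEX,  /* subclass 1: the DRM's own channel */
  };

  /* The pushbuf IOCTL already holds a user channel's mutex when the bo
   * move ends up needing the kernel channel as well. */
  static void lock_kernel_channel(struct mutex *kchan_mutex)
  {
      /* Legitimate nesting: use a distinct subclass so lockdep doesn't
       * report a false possible deadlock. */
      mutex_lock_nested(kchan_mutex, NOUVEAU_KERNEL_CHANNEL_MUTEX);
  }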
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
In a multihead setup vblank interrupts may end up enabled in both
heads. In that case we want to ignore the vblank interrupts coming
from the wrong CRTC to avoid tearing and unbalanced calls to
drm_vblank_get/put (fdo bug 31074).
Reported-by: Felix Leimbach <felix.leimbach@gmx.net>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
nv0x-nv4x should be mostly fine; nv50 doesn't work yet.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The nouveau_fence_* functions are not type safe, which could lead to bugs.
Additionally, every use of nouveau_fence_unref had to cast its struct
nouveau_fence pointer to void **.
Fix this by renaming the old functions and creating static inline wrappers
with new, type-safe prototypes. We still need the old functions because we
pass function pointers to ttm.
As we are wrapping functions, drop the unused "void *arg" parameter where
possible.
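The wrapping pattern is roughly the following (names abridged, see the
patch for the full set):

  /* Old, type-unsafe entry point, kept because TTM stores it as a
   * function pointer with this signature. */
  void __nouveau_fence_unref(void **sync_obj);

  /* New, type-safe wrapper; callers no longer cast to void **. */
  static inline void nouveau_fence_unref(struct nouveau_fence **fence)
  {
      __nouveau_fence_unref((void **)fence);
  }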
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Only supported on NV50+ so far, and disabled by default currently. The
module parameter "msi=1" will enable it.
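A rough sketch of the enable path; the parameter name comes from the text
above, the msi_enabled flag is illustrative:

  static int nouveau_msi;                 /* "msi=1" turns this on */
  module_param_named(msi, nouveau_msi, int, 0400);

  /* NV50+ only for now, and strictly opt-in. pci_enable_msi() is the
   * standard PCI helper; pci_disable_msi() must be called on takedown. */
  if (nouveau_msi && dev_priv->card_type >= NV_50) {
      if (pci_enable_msi(dev->pdev) == 0)
          dev_priv->msi_enabled = true;   /* illustrative flag */
  }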
There's a kernel bug which will cause this to fail if the module (or the
NVIDIA binary driver) has ever been loaded before loading nouveau with
MSI enabled. As such, this is only safe to enable if you have nouveau
load on boot, and don't wish to ever reload it.
The workaround is to "echo 0 > /sys/bus/pci/devices/<device>/enable"
until the enable count reads 0. Then you should be able to load nouveau
with MSI enabled.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Not an issue right now, as we're forced to 64k size/alignment by the BO
allocator anyway. This won't be the case soon.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This really needs cleaning up somehow, and we should probably investigate
what's needed to do this on earlier generations; NVIDIA do something
similar there too.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We previously added all the available classes for the entire generation,
even though the objects wouldn't work on the hardware.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The structs themselves, as well as the non-sw object creation function,
are probably very misnamed now. That's a problem for later :)
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Without it there's a potential race with nouveau_fence_update().
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
It needs a "strong" channel reference because it actually writes to
the channel pushbuf, otherwise the corresponding FIFO context could
get kicked off in the middle of nouveau_fence_sync().
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Fences didn't increment the channel reference count, and the fenced
channel could go away at any time. Fixes a potential race in
nouveau_fence_update().
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
nouveau_channel_ref() takes a "weak" channel reference that doesn't
prevent the hardware channel resources from being released, it just
keeps the channel data structure alive.
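Conceptually it boils down to two counters with different lifetimes; the
field names below are illustrative rather than the exact struct layout:

  #include <linux/kref.h>

  struct nouveau_channel {
      struct kref ref;    /* "weak": keeps this struct alive */
      atomic_t users;     /* "strong": keeps the HW FIFO context alive */
  };

  /* nouveau_channel_ref() only pins the data structure, never the
   * hardware channel itself. */
  static struct nouveau_channel *
  channel_ref_sketch(struct nouveau_channel *chan)
  {
      kref_get(&chan->ref);
      return chan;
  }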
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
nouveau_channel_put() can be executed after the 'refcount == 0' check
in nouveau_channel_get() and before the channel reference count is
incremented. In that case CPU0 will take the context down while CPU1
thinks it owns the channel and 'refcount == 1'.
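The general shape of the fix, as a sketch (the 'users' counter matches the
illustration in the previous message rather than the exact nouveau field):

  static struct nouveau_channel *
  channel_get_sketch(struct nouveau_channel *chan)
  {
      /* Only hand out a reference if the count is still non-zero, so a
       * channel that nouveau_channel_put() is already tearing down can
       * never be resurrected by a concurrent get. */
      if (!atomic_inc_not_zero(&chan->users))
          return ERR_PTR(-EINVAL);        /* lost the race */
      return chan;
  }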
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The destroy_context() engine hooks call gpuobj management functions to
release the channel resources. These functions use HARDIRQ-unsafe locks,
whereas destroy_context() is called with the HARDIRQ-safe
context_switch_lock held; that's a lock ordering violation.
Push the engine-specific channel destruction logic into destroy_context()
and let the hardware-specific code lock and unlock when it's actually
needed. Change the engine destruction order to avoid a race in the small
gap between pgraph and pfifo context uninitialization.
Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The pushbuf ioctl syncs after validation; no need for this anymore.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
No other driver uses this, and userspace should be responsible for handling
locking between them if they share BOs.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This fixes a race condition between fbcon acceleration and TTM buffer
moves. To reproduce:
- start X
- switch to vt and "while (true); do dmesg; done"
- switch to another vt and "sleep 2 && cat /path/to/debugfs/dri/0/evict_vram"
- switch back to vt running dmesg
We don't make use of this on any other channel yet; they're currently
protected by drm_global_mutex. This will change in the near future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
A future commit will add locking to the DRM's channel, and there are
numerous problems that come up if we allow printk from an interrupt
context to be accelerated. It seems saner to just disallow it completely.
As a nice side-effect, all the "to accel or not to accel" logic gets moved
out of the chipset-specific code.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
* 'drm-radeon-fusion' of ../drm-radeon-next:
drm/radeon/kms: add Ontario APU ucode loading support
drm/radeon/kms: add Ontario Fusion APU pci ids
drm/radeon/kms: enable MSIs on fusion APUs
drm/radeon/kms: add power table parsing support for Ontario fusion APUs
drm/radeon/kms: refactor atombios power state fetching
drm/radeon/kms: add bo blit support for Ontario fusion APUs
drm/radeon/kms: add thermal sensor support for fusion APUs
drm/radeon/kms: fill in GPU init for AMD Ontario Fusion APUs
drm/radeon/kms: add radeon_asic struct for AMD Ontario fusion APUs
drm/radeon/kms: evergreen.c updates for fusion
drm/radeon/kms: MC setup changes for fusion APUs
drm/radeon/kms: move r7xx/evergreen to its own vram_gtt setup function
drm/radeon/kms: add support for ss overrides on Fusion APUs
drm/radeon/kms: Add support for external encoders on fusion APUs
drm/radeon/kms: atom changes for DCE4.1 devices
drm/radeon/kms: add new family id for AMD Ontario APUs
drm/radeon/kms: upstream power table updates
drm/radeon/kms: upstream atombios.h updates
drm/radeon/kms: upstream ObjectID.h updates
drm/radeon/kms: setup mc chremap properly on r7xx/evergreen
* 'drm-radeon-next' of ../drm-radeon-next:
drm/radeon/kms: improve pflip precision on r1xx-r4xx
drm/kms/radeon: Use high precision timestamps for pageflip completion events.
drm/kms/radeon: Reorder vblank and pageflip interrupt handling.
drm/radeon/kms: add pageflip ioctl support (v3)
drm/kms/radeon: Add support for precise vblank timestamping.
The update pending bit has a separate enable bit.
Cc: Mario Kleiner <mario.kleiner@tuebingen.mpg.de>
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The vbios power tables on my inagua board seem a bit funky...
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The function was getting too large. Rework it to share more state and
better handle new power table formats.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
- CONFIG_MEMSIZE is in bytes on fusion.
- FB_BASE and FB_TOP are finer grained.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
MC_VM_FB_LOCATION is at a different offset between r6xx and r7xx/evergreen.
The location is needed for vram setup on fusion chips.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
System specific spread spectrum overrides can be specified in
the integrated system info table for Fusion APUs. This adds
support for using those overrides.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Should improve performance slightly and possibly fix some
issues.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Rather than re-implementing this in the Radeon driver, use the
execbuf / cs / pushbuf utilities that come with TTM.
This comes with an even greater benefit now that many spinlocks have been
optimized away...
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch attempts to fix up shortcomings with the current calling
sequences.
1) There's a fastpath where no locking occurs and only io_mem_reserve is
   called to obtain the needed info for mapping. The fastpath is set per
   memory type manager.
2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be exactly
   balanced and not called recursively for the same struct ttm_mem_reg.
3) Optionally the driver can choose to enable a per memory type manager LRU
   eviction mechanism that, when io_mem_reserve returns -EAGAIN, will attempt
   to kill user-space mappings of memory in that manager to free up the
   needed resources (the driver-side hooks are sketched below).
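For orientation, a sketch of a driver hook pair under the new contract;
the mem->bus fields are TTM's existing bus placement, while the VRAM base
helper is hypothetical:

  /* reserve/free are balanced per struct ttm_mem_reg, so anything
   * grabbed here may be held until io_mem_free(). */
  static int my_io_mem_reserve(struct ttm_bo_device *bdev,
                               struct ttm_mem_reg *mem)
  {
      mem->bus.base = my_vram_bus_base(bdev);   /* hypothetical helper */
      mem->bus.offset = mem->start << PAGE_SHIFT;
      mem->bus.is_iomem = true;
      return 0;     /* or -EAGAIN to trigger the optional LRU eviction */
  }

  static void my_io_mem_free(struct ttm_bo_device *bdev,
                             struct ttm_mem_reg *mem)
  {
      /* balanced with the reserve above; release whatever it held */
  }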
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Rather than having the driver supply the validation sequence, leave that
responsibility to TTM. This saves some confusion and a function argument.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Drastically reduce the number of spin lock / unlock operations by performing
unreserving and fencing under global locks.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The bo lock was used only to protect the bo sync object members, and since
it is a per-bo lock, fencing a buffer list will see a lot of locks and
unlocks. Replace it with a per-device lock that protects the sync object
members on *all* bos. Reading and setting these members will always be very
quick, so the risk of heavy lock contention is microscopic. Note that
waiting for sync objects will always take place outside of this lock.
The bo device fence lock will eventually be replaced with a seqlock /
rcu mechanism so we can determine that a bo is idle under an rcu read
lock / read seqlock.
However this change will allow us to batch fencing and unreserving of
buffers with a minimal amount of locking.
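Fencing then ends up looking roughly like this: the quick pointer update
happens under the device-wide lock, and anything that may sleep stays
outside of it.

  static void fence_bo_sketch(struct ttm_bo_device *bdev,
                              struct ttm_buffer_object *bo,
                              struct ttm_bo_driver *driver, void *fence)
  {
      void *old_sync_obj;

      spin_lock(&bdev->fence_lock);       /* device-wide, not per-bo */
      old_sync_obj = bo->sync_obj;
      bo->sync_obj = driver->sync_obj_ref(fence);
      spin_unlock(&bdev->fence_lock);

      if (old_sync_obj)                   /* may sleep: outside the lock */
          driver->sync_obj_unref(&old_sync_obj);
  }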
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add an aid for the driver to detect deadlocks on multi-bo reservations.
Update documentation.
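Roughly, the aid amounts to the sequence-based reserve path noticing that
the caller's own validation sequence already holds the buffer and failing
instead of blocking on itself; the error code and field names below are
assumptions, see the patch for the real thing:

  /* Inside the reserve path: if this bo was already reserved with the
   * same validation sequence, the caller listed the same buffer twice
   * and would deadlock against itself. */
  if (use_sequence && bo->seq_valid && bo->val_seq == sequence)
      return -EDEADLK;        /* assumed error code */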
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>