Commit Graph

13 Commits

Author SHA1 Message Date
Maarten Lankhorst
f2d476a110 drm/ttm: use ttm_bo_reserve_slowpath_nolru in ttm_eu_reserve_buffers, v2
Using the slowpath requires re-using the seqno: instead of spinning
with a new seqno every time, we keep the current one but still drop
all the other reservations we hold. Only once we succeed do we try to
take back our other reservations.

Re-using the seqno this way should increase fairness slightly.

Changes since v1:
 - Increase val_seq before calling ttm_bo_reserve_slowpath_nolru and
   retrying to take all entries to prevent a race.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
2013-01-15 14:57:10 +01:00
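
A hedged sketch of the retry loop described above, assuming the era's
ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_sequence, seq)
signature and the bdev->val_seq counter; illustrative, not the verbatim
kernel code (the real code bumps val_seq under the lru spinlock):

    retry:
        list_for_each_entry(entry, list, head) {
            ret = ttm_bo_reserve_nolru(entry->bo, true, false,
                                       true, val_seq);
            if (ret == 0)
                continue;
            if (ret != -EAGAIN)
                return ret;
            /* Would deadlock: drop every reservation we hold, */
            ttm_eu_backoff_reservation(list);
            /* bump the sequence once (the v2 race fix), */
            val_seq = entry->bo->bdev->val_seq++;
            /* and sleep-wait on just the contended bo. */
            ret = ttm_bo_reserve_slowpath_nolru(entry->bo, true,
                                                val_seq);
            if (ret)
                return ret;
            /* Success: take the others back with this seqno. */
            goto retry;
        }
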
Maarten Lankhorst
7a1863084c drm/ttm: cleanup ttm_eu_reserve_buffers handling
With the lru lock no longer required for protecting reservations, we
can just do a ttm_bo_reserve_nolru on -EBUSY and handle all errors in
a single path.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
2013-01-15 14:56:48 +01:00
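
The shape of the resulting single error path, sketched (helper names
from this series; details hedged):

    list_for_each_entry(entry, list, head) {
        /* Blocking reserve directly; no lru-lock dance on -EBUSY. */
        ret = ttm_bo_reserve_nolru(entry->bo, true, false,
                                   true, val_seq);
        if (unlikely(ret != 0))
            goto err;   /* one exit for every failure */
    }
    return 0;

    err:
        ttm_eu_backoff_reservation(list);
        return ret;
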
Maarten Lankhorst
63d0a41955 drm/ttm: remove lru_lock around ttm_bo_reserve
There should no longer be any assumption that reserve will always
succeed with the lru lock held, so we can safely break the atomic
coupling between reserve and the lru lock. As a bonus this fixes most
lockdep annotations for reservations.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
2013-01-15 14:56:37 +01:00
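
Sketched before/after of the locking change (API names from the era;
hedged):

    /* Before: reserve only under the lru spinlock. */
    spin_lock(&glob->lru_lock);
    ret = ttm_bo_reserve_locked(bo, true, false, true, val_seq);
    spin_unlock(&glob->lru_lock);

    /* After: the reservation stands on its own, so lockdep can
     * model it like the sleeping lock it really is. */
    ret = ttm_bo_reserve(bo, true, false, true, val_seq);
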
Maarten Lankhorst
4154f051e7 drm/ttm: change fence_lock to inner lock
This requires changing the order in ttm_bo_cleanup_refs_or_queue to
take the reservation first, as there is otherwise no race-free way to
take the lru lock before the fence_lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2012-12-10 20:09:58 +10:00
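
A sketch of the resulting nesting in ttm_bo_cleanup_refs_or_queue
(member and helper names from the era's TTM; a hedged, incomplete
fragment):

    spin_lock(&glob->lru_lock);
    /* Reservation is taken first ... */
    ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
    /* ... so fence_lock can nest innermost, race-free. */
    spin_lock(&bdev->fence_lock);
    idle = (bo->sync_obj == NULL);
    spin_unlock(&bdev->fence_lock);
    spin_unlock(&glob->lru_lock);
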
Maarten Lankhorst
654aa79259 drm/ttm: alter cpu_writers to return -EBUSY in ttm_execbuf_util reservations
This is similar to other platforms that don't allow command submission
to buffers locked on the CPU.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2012-11-20 16:17:35 +10:00
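
Roughly the check this adds to the reservation loop (a sketch;
cpu_writers is the counter taken by ttm_bo_synccpu_write_grab()):

    if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
        /* Locked on the CPU side: refuse the submission. */
        ret = -EBUSY;
        goto err;
    }
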
Maarten Lankhorst
5fb4ef0e36 drm/ttm: remove sync_obj_arg member
vmwgfx was its only user and always set it to the same value.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2012-11-20 16:09:55 +10:00
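
The member in question, as struct ttm_validate_buffer looked before
this commit (sketched from the era's ttm_execbuf_util.h; field name
per that header):

    struct ttm_validate_buffer {
        struct list_head head;
        struct ttm_buffer_object *bo;
        void *new_sync_obj_arg;   /* removed by this commit */
    };
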
David Howells
760285e7e7 UAPI: (Scripted) Convert #include "..." to #include <path/...> in drivers/gpu/
Convert #include "..." to #include <path/...> in drivers/gpu/.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
2012-10-02 18:01:07 +01:00
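
For ttm_execbuf_util.c the scripted conversion amounts to (an
illustrative diff; the file's exact include list is hedged):

    -#include "ttm/ttm_execbuf_util.h"
    -#include "ttm/ttm_bo_driver.h"
    +#include <drm/ttm/ttm_execbuf_util.h>
    +#include <drm/ttm/ttm_bo_driver.h>
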
Thomas Hellstrom
6570596202 drm/ttm/vmwgfx: Have TTM manage the validation sequence.
Rather than having the driver supply the validation sequence, leave that
responsibility to TTM. This saves some confusion and a function argument.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-11-22 13:25:21 +10:00
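
Roughly the caller-visible change (pre-change signature hedged;
my_val_seq is a hypothetical driver-side counter):

    /* Before: the driver had to supply and manage the sequence. */
    ret = ttm_eu_reserve_buffers(&val_list, my_val_seq++);

    /* After: TTM derives it from a per-device counter internally. */
    ret = ttm_eu_reserve_buffers(&val_list);
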
Thomas Hellstrom
95762c2b34 drm/ttm: Improved fencing of buffer object lists
Drastically reduce the number of spin lock / unlock operations by
unreserving and fencing under the global locks.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-11-22 13:25:20 +10:00
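
A sketch of the fencing loop this enables, with one lock round-trip
for the whole list (helper names hedged; the real code also stashes
each old sync_obj and unrefs it outside the locks):

    spin_lock(&glob->lru_lock);
    spin_lock(&bdev->fence_lock);
    list_for_each_entry(entry, list, head) {
        struct ttm_buffer_object *bo = entry->bo;

        bo->sync_obj = driver->sync_obj_ref(sync_obj);
        ttm_bo_unreserve_locked(bo);
    }
    spin_unlock(&bdev->fence_lock);
    spin_unlock(&glob->lru_lock);
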
Thomas Hellstrom
702adba224 drm/ttm/radeon/nouveau: Kill the bo lock in favour of a bo device fence_lock
The bo lock was used only to protect the bo sync object members, and
since it is a per-bo lock, fencing a buffer list incurs a lot of lock
and unlock operations. Replace it with a per-device lock that protects
the sync object members on *all* bos. Reading and setting these members
will always be very quick, so the risk of heavy lock contention is
microscopic. Note that waiting for sync objects will always take place
outside of this lock.

The bo device fence lock will eventually be replaced with a seqlock /
RCU mechanism so that we can determine that a bo is idle under an RCU
read lock / read seqlock.

However this change will allow us to batch fencing and unreserving of
buffers with a minimal amount of locking.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-11-22 13:25:18 +10:00
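
The substitution, sketched (reading a sync object reference safely):

    /* Before: per-bo lock. */
    spin_lock(&bo->lock);
    sync_obj = driver->sync_obj_ref(bo->sync_obj);
    spin_unlock(&bo->lock);

    /* After: one per-device lock covers the same members on all bos. */
    spin_lock(&bdev->fence_lock);
    sync_obj = driver->sync_obj_ref(bo->sync_obj);
    spin_unlock(&bdev->fence_lock);
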
Thomas Hellstrom
68c4fa31aa drm/ttm: Optimize ttm_eu_backoff_reservation
Avoid the ttm_bo_unreserve() spinlocks by calling
ttm_eu_backoff_reservation_locked under the lru spinlock.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-11-22 13:25:15 +10:00
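
This is close to the whole optimization, sketched:

    void ttm_eu_backoff_reservation(struct list_head *list)
    {
        struct ttm_validate_buffer *entry;
        struct ttm_bo_global *glob;

        if (list_empty(list))
            return;
        entry = list_first_entry(list, struct ttm_validate_buffer,
                                 head);
        glob = entry->bo->glob;
        /* One spinlock round-trip for the whole list. */
        spin_lock(&glob->lru_lock);
        ttm_eu_backoff_reservation_locked(list);
        spin_unlock(&glob->lru_lock);
    }
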
Dave Airlie
d6ea88865d drm/ttm: Add a bo list reserve fastpath (v2)
Makes it possible to reserve a list of buffer objects with a single
spin lock / unlock if there is no contention. This should improve CPU
usage on SMP kernels.

v2: Initialize private list members on reserve and don't call
ttm_bo_list_ref_sub() with zero put_count.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-11-22 13:24:40 +10:00
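
The fastpath pattern, sketched (helper names from the era's TTM;
hedged):

    retry:
        spin_lock(&glob->lru_lock);
        list_for_each_entry(entry, list, head) {
            /* no_wait reserve: cheap while the spinlock is held. */
            ret = ttm_bo_reserve_locked(entry->bo, true, true,
                                        true, val_seq);
            if (likely(ret == 0))
                continue;
            /* Contended: back everything off, drop the spinlock,
             * sleep until the holder is done, then retry the list. */
            ttm_eu_backoff_reservation_locked(list);
            spin_unlock(&glob->lru_lock);
            if (ret == -EBUSY)
                ret = ttm_bo_wait_unreserved(entry->bo, true);
            if (ret)
                return ret;
            goto retry;
        }
        spin_unlock(&glob->lru_lock);
        return 0;
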
Thomas Hellstrom
c078aa2fc4 drm/ttm: Add TTM execbuf utilities.
Utilities to reserve, unreserve and fence a list of TTM
buffer objects in a deadlock-safe manner.

Used by the vmwgfx driver.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2009-12-07 15:22:05 +10:00
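
Typical driver usage of the trio, sketched against the list-only API
that 6570596202 above settles on (submit_commands() and fence_obj are
hypothetical driver-side stand-ins):

    struct list_head val_list;
    struct ttm_validate_buffer val_buf;
    int ret;

    INIT_LIST_HEAD(&val_list);
    val_buf.bo = bo;                /* an already-referenced bo */
    list_add(&val_buf.head, &val_list);

    ret = ttm_eu_reserve_buffers(&val_list);
    if (ret)
        return ret;

    ret = submit_commands();        /* hypothetical driver step */
    if (ret) {
        ttm_eu_backoff_reservation(&val_list);
        return ret;
    }
    /* Fence and unreserve everything in one call. */
    ttm_eu_fence_buffer_objects(&val_list, fence_obj);
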