Instead of reporting EIO upfront at the entrance of an ioctl that may or
may not attempt to use the GPU, defer the actual detection of an invalid
ioctl to the point where we issue a GPU instruction. This allows us to
continue to use bos in video memory (via pread/pwrite and mmap) after the
GPU has hung.
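The shape of the change, as a rough sketch (function and field names here
are illustrative, not the driver's exact ones):

/* Hypothetical sketch: fail with EIO only where we would actually
 * emit GPU instructions. */
static int ring_begin(struct intel_ring *ring, int num_dwords)
{
        if (atomic_read(&ring->dev_priv->mm.wedged))
                return -EIO;    /* detected at the point of use */
        return ring_wait_for_space(ring, num_dwords * sizeof(u32));
}

The blanket wedged check at the ioctl entry points goes away, so the pure
CPU paths keep working against a hung GPU.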
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We can only utilize the stolen portion of the GTT if we are in sole
charge of the hardware. This is only true if using GEM and KMS,
otherwise VESA continues to access stolen memory.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
There are I915_NUM_RINGS-1 inter-ring synchronisation counters, but we
were clearing I915_NUM_RINGS of them. Oops.
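The fix in sketch form (assuming the sync_seqno[] array introduced with
the semaphore work):

/* sync_seqno[] has one slot per *other* ring. */
for (i = 0; i < I915_NUM_RINGS - 1; i++)
        ring->sync_seqno[i] = 0;        /* previously i < I915_NUM_RINGS */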
Reported-by: Jiri Slaby <jirislaby@gmail.com>
Tested-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Rather than evicting an object at random, which is unlikely to alleviate
the memory pressure sufficiently to allow us to continue, zap the entire
aperture. That should give the system long enough to recover and reap
some pages from the evicted objects, forestalling the allocation error
for the new object.
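Sketched (i915_gem_evict_everything is the driver's wholesale-eviction
helper; the allocation call around it is paraphrased):

ret = alloc_object_pages(obj);          /* illustrative name */
if (ret == -ENOMEM) {
        /* Zapping the whole aperture gives the VM time to reap the
         * pages of the evicted objects before we retry. */
        ret = i915_gem_evict_everything(dev);
        if (ret == 0)
                ret = alloc_object_pages(obj);
}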
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order to retire active buffers whilst no client is active, we need to
insert our own flush requests onto the ring.
This is useful for servers that queue up some rendering and then go to
sleep, as it allows us to complete the processing of those requests,
potentially making that memory available again much earlier.
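Roughly (helper names approximate the driver's of this era):

/* From the retire work handler: the ring is quiescing but objects
 * remain active, so emit our own flush request to retire them. */
if (!list_empty(&ring->active_list) && list_empty(&ring->request_list)) {
        i915_gem_flush_ring(dev, ring, 0, I915_GEM_GPU_DOMAINS);
        i915_add_request(dev, NULL, request, ring);
}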
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We need to ensure that writes through the GTT land before any
modification to the MMIO registers and so must impose a mandatory write
barrier when flushing the GTT domain. This was revealed by relaxing the
write ordering by experimentally mapping the registers and the GATT as
write-combining.
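The fix is a single mandatory barrier, along these lines:

static void flush_gtt_write_domain(struct drm_gem_object *obj)
{
        /* Writes through the GTT must be globally visible before any
         * MMIO that depends on them, even when both the registers and
         * the GATT are mapped write-combining. */
        wmb();
        /* ... proceed with the MMIO flush / domain bookkeeping ... */
}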
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The relative-to-general state default is useless as it means having to
rewrite the streaming kernels for each batch. Relative-to-surface is
more useful, as that stream usually needs to be rewritten for each
batch. And absolute addressing mode, vital if you start streaming
state, is also only available by adjusting the register...
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order to enforce the correct memory barriers for irq get/put, we need
to perform the actual counting using atomic operations.
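The resulting pattern, sketched with kernel atomics (field names
illustrative):

void ring_get_irq(struct intel_ring *ring)
{
        /* atomic_inc_return() implies a full barrier, ordering the
         * enable against subsequent seqno reads. */
        if (atomic_inc_return(&ring->irq_refcount) == 1)
                ring->irq_enable(ring);
}

void ring_put_irq(struct intel_ring *ring)
{
        if (atomic_dec_and_test(&ring->irq_refcount))
                ring->irq_disable(ring);
}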
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order for bos to retire eventually, a request must be sent down the
ring. This is expected, for example, by occlusion queries, which mesa
will wait upon (whilst running glean) before issuing more batches, and so
the normal activity upon the ring is suspended and we need to emit a
request to clear the idle ring.
Reported-by: Jinjin, Wang <jinjin.wang@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30380
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
I'm still seeing tiling corruption of PutImage and CopyArea (I think)
under mutter on pnv, so obviously the pipelining logic is deeply flawed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The bulk of the change is to convert the growing list of rings into an
array so that the relationship between the rings and the semaphore sync
registers can be easily computed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
As the tracepoint is now decoupled from when the actual register is
assigned and was never complemented by detailing when the object lost
its fence, it has outlived its limited usefulness. Profiling the actual
stalls is a far more profitable venture anyway.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
As the userspace mappings are torn down on every GPU write, we prefer to
track when the buffer is activated (via a fresh i915_gem_fault). This
makes the LRU conceptually simpler. With coherent mappings, the
remaining use-case for set_domain_ioctl is GPU synchronisation.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With this change, every batchbuffer can use all available fences (save
pinned and scanout, of course) without ever stalling the gpu!
In theory. Currently the actual pipelined update of the register is
disabled due to some stability issues. However, just the deferred update
is a significant win.
Based on a series of patches by Daniel Vetter.
The premise is that before every access to a buffer through the GTT we
have to declare whether we need a register or not. If the access is by
the GPU, a pipelined update to the register is made via the ringbuffer,
and we track the last seqno of the batches that access it. If the
access is by the CPU, we wait for the last GPU access and update the
register (either to clear it or to set it for the current buffer).
One advantage of being able to pipeline changes is that we can defer the
actual updating of the fence register until we first need to access the
object through the GTT, i.e. we can eliminate the stall on set_tiling.
This is important as the userspace bo cache does not track the tiling
status of active buffers, which generates frequent stalls on gen3 when
enabling tiling for an already bound buffer.
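The declaration flow condensed into a hedged sketch (names are
approximate, not the final API):

int object_get_fence(struct drm_obj *obj, struct intel_ring *pipelined)
{
        if (pipelined) {
                /* GPU access: update the register via the ringbuffer
                 * and remember which batch last touched it. */
                emit_fence_update(pipelined, obj);
                obj->last_fenced_seqno = pipelined->outstanding_seqno;
        } else {
                /* CPU access: stall for the last GPU user, then write
                 * the register directly (set or clear). */
                wait_seqno(obj->last_fenced_ring, obj->last_fenced_seqno);
                write_fence_reg(obj);
        }
        return 0;
}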
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... so that upon first use after resume we will reacquire the fence reg.
Reported-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We don't track gpu flush requests in any special way. So even with
obj->write_domain == 0, a gpu flush might be outstanding but not
yet executed. Even worse, the latest request might use the object
only for reading. So an unconditional call to object_wait_rendering
is needed for !pipelined.
Hence revert that patch fully and untangle the flushing from the
synchronization again.
Reported-by: Keith Packard <keithp@keithp.com>
Tested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Besides the minimal improvement in reducing the execbuffer overhead, the
real benefit is clarifying a few routines.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
A number of dragons have been seen lurking within the execbuffer code.
The first step is then to isolate them from the rest and begin to
scrutinise them in depth. Suggested by Daniel Vetter.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Simply remove our accounting of objects inside the aperture, keeping
track only of what is in the aperture and its current usage. This
removes the over-complication of BUGs that were attempting to keep the
accounting correct and also removes the overhead of the accounting on
the hot-paths.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With KMS, we can simply relinquish the fence when we idle the GPU and
reassign it upon first use.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Avoid evicting buffers that will be used later in the batch in order to
make room for the initial buffers by pinning all bound buffers in a
single pass before binding (and evicting for) fresh buffers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This used to check the precondition that all fences were to be located
in a mappable area, redundant now as those two parameters are combined
into one.
After pinning, we assert that the buffer is bound into the desired
region.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The pipe control object is allocated by the device for the sole use of the
render ringbuffer. Move this detail from the general code to the render
ring buffer initialisation.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Combining map_and_fenceable revealed a bug in
i915_gem_object_gtt_size() in that it always computed the appropriate
fence size for the object regardless of tiling state, which caused us to
over-allocate linear buffers when binding to the GTT.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This is required to restore gtt mappings on resume when agp is gone.
The right way to do this would be to make struct drm_mm_node embeddable
and use the allocation list maintained by the drm memory manager. But
that's a bigger project. Getting rid of the per-bo agp_mem will save
more memory than this wastes, anyway.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Currently if we hit a pagefault when applying a user relocation for the
execbuffer, we bail and return EFAULT to the application. Instead, we
need to unwind, drop the dev->struct_mutex, copy all the relocation
entries to a vmalloc array (to avoid any potential circular deadlocks
when resolving the pagefault), retake the mutex and then apply the
relocations. Afterwards, we need to again drop the lock and copy the
vmalloc array back to userspace.
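The slow path has this shape (a sketch; the real helper juggles more
state):

ret = apply_relocs_fast(obj, user_relocs);      /* atomic user copies */
if (ret == -EFAULT) {
        mutex_unlock(&dev->struct_mutex);       /* faulting is now safe */
        relocs = copy_relocs_to_vmalloc(user_relocs, count);
        ret = mutex_lock_interruptible(&dev->struct_mutex);
        if (ret == 0)
                ret = apply_relocs_slow(obj, relocs);
        /* on success: drop the lock once more and copy the updated
         * presumed offsets back out to userspace */
}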
v2: Incorporate feedback from Daniel Vetter.
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Commit 2549d6c2 removed the vmalloc used for temporary storage of the
relocation lists used during execbuffer. However, our use of vmalloc was
being protected by an integer overflow check which we do want to
preserve!
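The guard in question is the usual multiplication-overflow check before
sizing the temporary array, something like:

if (exec[i].relocation_count >
    ULONG_MAX / sizeof(struct drm_i915_gem_relocation_entry))
        return -EINVAL;         /* count * sizeof() would overflow */
total += exec[i].relocation_count;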
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Linus Torvalds found that it was rather trivial to trigger a system
freeze:
In fact, with lockdep, I don't even need to do the sysrq-d thing: it
shows the bug as it happens. It's the X server taking the same lock
recursively.
Here's the problem:
=============================================
[ INFO: possible recursive locking detected ]
2.6.37-rc2-00012-gbdbd01a #7
---------------------------------------------
Xorg/2816 is trying to acquire lock:
(&dev->struct_mutex){+.+.+.}, at: [<ffffffff812c626c>] i915_gem_fault+0x50/0x17e
but task is already holding lock:
(&dev->struct_mutex){+.+.+.}, at: [<ffffffff812c403b>] i915_mutex_lock_interruptible+0x28/0x4a
other info that might help us debug this:
2 locks held by Xorg/2816:
#0: (&dev->struct_mutex){+.+.+.}, at: [<ffffffff812c403b>] i915_mutex_lock_interruptible+0x28/0x4a
#1: (&mm->mmap_sem){++++++}, at: [<ffffffff81022d4f>] page_fault+0x156/0x37b
This recursion was introduced by rearranging the locking to avoid the
double locking on the fast path (4f27b5d and fbd5a26d) and the
introduction of the prefault to encourage the fast paths (b5e4f2b). In
order to undo the problem, we rearrange the code to perform the access
validation upfront, attempt to prefault and then fight for control of the
mutex. In the best-case scenario, where the mutex is uncontended, the
prefaulting is not wasted.
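The reordered pread entry then looks roughly like this (a sketch using
the pagemap helpers of the era; pwrite uses fault_in_pages_readable):

if (!access_ok(VERIFY_WRITE, user_data, args->size))
        return -EFAULT;                         /* validate upfront */
ret = fault_in_pages_writeable(user_data, args->size);
if (ret)
        return ret;                             /* prefault unlocked */
ret = i915_mutex_lock_interruptible(dev);       /* only now contend */
if (ret)
        return ret;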
Reported-and-tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
g33/pineview doesn't have any alignment constraints for unfenced tiled
buffers, but older chips do. Fix this.
Problem introduced in a00b10c360.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
An old and oft reported bug, is that of the GPU hanging on a
MI_WAIT_FOR_EVENT following a mode switch. The cause is that the GPU is
waiting on a scanline counter on an inactive pipe, and so waits for a
very long time until eventually the user reboots his machine.
We can prevent this either by moving the WAIT into the kernel and
thereby incurring considerable cost on every swapbuffers, or by waiting
for the GPU to retire the last batch that accesses the framebuffer
before installing a new one. As mode switches are much rarer than swap
buffers, this looks like an easy choice.
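Sketched, the modeset path simply drains rendering to the outgoing
framebuffer first (names approximate):

/* Wait for the last batch touching the old scanout before the mode
 * switch disables its pipe; swapbuffers stays unaffected. */
if (old_fb) {
        obj = to_intel_framebuffer(old_fb)->obj;
        ret = i915_gem_object_wait_rendering(obj);
}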
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=28964
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=29252
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
... and so prevent a potential circular reference:
[ INFO: possible circular locking dependency detected ]
2.6.37-rc1-uwe1+ #4
-------------------------------------------------------
Xorg/1401 is trying to acquire lock:
(&mm->mmap_sem){++++++}, at: [<c01e4ddb>] might_fault+0x4b/0xa0
but task is already holding lock:
(&dev->struct_mutex){+.+.+.}, at: [<f869c3ac>]
i915_mutex_lock_interruptible+0x3c/0x60 [i915]
which lock already depends on the new lock.
When the locking around the pwrite ioctl was simplified, I did not spot
that the phys path never took any locks and so we introduced this
potential circular reference.
Reported-by: Uwe Helm <uwe.helm@googlemail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Instead of killing the process, just return no page found and reschedule
the process, giving the GPU some time to (hopefully) recover.
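Concretely, the fault handler translates the error instead of raising
SIGBUS (set_need_resched() was the idiom of the day):

switch (ret) {
case -EAGAIN:
        /* GPU hung: yield so the reset worker can run, then let the
         * fault be retried. */
        set_need_resched();
        /* fallthrough */
case 0:
case -ERESTARTSYS:
        return VM_FAULT_NOPAGE;
case -ENOMEM:
        return VM_FAULT_OOM;
default:
        return VM_FAULT_SIGBUS;
}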
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
a00b10c360 "Only enforce fence limits inside the GTT" also
added a fenceable/mappable disdinction when binding/pinning buffers.
This only complicates the code with no pratical gain:
- In execbuffer this matters on for g33/pineview, as this is the only
chip that needs fences and has an unmappable gtt area. But fences
are only possible in the mappable part of the gtt, so need_fence
implies need_mappable. And need_mappable is only set independantly
with relocations which implies (for sane userspace) that the buffer
is untiled.
- The overlay code is only really used on i8xx, which doesn't have
unmappable gtt. And it doesn't support tiled buffers, currently.
- For all other buffers it's a bug to pass in a tiled bo.
In short, this disdinction doesn't have any practical gain.
I've also reverted mapping the overlay and context pages as possibly
unmappable. It's not worth being overly clever here; all the big
gains from unmappable are for execbuf bos.
Also add a comment for a clever optimization that confused me
while reading the original patch by Chris Wilson.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When merging Daniel's full-gtt patches I had a set of tweaks which I
thought I had undone. I was half right...
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=31286
Reported-by: jinjin.wang@intel.com
Reported-by: Alexey Fisher <bug-track@fisher-privat.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Take two passes to evict everything whilst searching for sufficient free
space to bind the batchbuffer. After searching for sufficient free space
using LRU eviction, evict everything that is purgeable and try again.
Only then, if there is still insufficient free space (or the GTT is too
badly fragmented), evict everything from the aperture and try one last
time.
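In outline (helper names illustrative, except i915_gem_evict_everything):

ret = find_free_space(dev, size, alignment);    /* with LRU eviction */
if (ret == -ENOSPC) {
        evict_purgeable(dev);                   /* second pass */
        ret = find_free_space(dev, size, alignment);
}
if (ret == -ENOSPC) {
        i915_gem_evict_everything(dev);         /* last resort */
        ret = find_free_space(dev, size, alignment);
}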
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Accessing the uninitialised obj->pages instead of the local page led to
an oops.
Reported-by: Xavier Chantry <chantry.xavier@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
So long as we adhere to the fence register rules for alignment and no
overlaps (including with unfenced accesses to linear memory) and account
for the tiled access in our size allocation, we do not have to allocate
the full fenced region for the object. This allows us to fight the bloat
tiling imposed on pre-i965 chipsets and frees up RAM for real use. [Inside
the GTT we still suffer the additional alignment constraints, so it doesn't
magically allow us to render larger scenes without stalls -- we need the
expanded GTT and fence pipelining to overcome those...]
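For reference, the pre-i965 fence bloat being fought is the power-of-two
region size (a sketch; 1 MiB is the gen3 minimum, 512 KiB on gen2):

static u32 fence_size(u32 size)
{
        u32 fsize = 1024 * 1024;        /* minimum fence granularity */
        while (fsize < size)
                fsize <<= 1;            /* fences cover power-of-two */
        return fsize;
}

Only the alignment and no-overlap rules still apply to the allocation;
the object itself no longer pays for the full fsize.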
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
obj->pin_count is unsigned, so the BUG_ON(obj->pin_count < 0) will never
trigger. Also spotted by Dan Carpenter.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The error code is only expected during the actual pruning and not during
the first measurement (nr_to_scan == 0) pass.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
It is possible for the active list to contain only a read-only buffer, so
that the ring->gpu_write_list remains empty. This leads to an
inconsistency between i915_gpu_is_active() and i915_gpu_idle() causing
an infinite spin during the shrinker and an assertion failure that
i915_gpu_idle() does indeed flush all buffers from the active lists.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order to force a page-fault on a GTT mapping after we start using it
from the GPU and so enforce correct CPU/GPU synchronisation, we need to
invalidate the mapping.
Pointed out by Owain G. Ainsworth.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By using read_cache_page() for individual pages during pwrite/pread we
can eliminate an unnecessary large allocation (and immediate free) of
obj->pages. Also this eliminates any potential nesting of get/put pages,
simplifying the code and preparing the path for greater things.
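Per-page access then looks roughly like (read_cache_page_gfp() is the
page-cache API of the era; error handling trimmed):

struct page *page;
char *vaddr;

page = read_cache_page_gfp(mapping, offset >> PAGE_SHIFT, GFP_HIGHUSER);
if (IS_ERR(page))
        return PTR_ERR(page);
vaddr = kmap_atomic(page);
/* ... copy this page's worth to/from the user buffer ... */
kunmap_atomic(vaddr);
page_cache_release(page);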
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Since we rarely use the mmap_offset and it is easily computable from the
obj->map_list.hash, remove it.
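The offset is just the hash entry scaled to pages, i.e. (sketch):

static u64 gem_mmap_offset(struct drm_gem_object *obj)
{
        /* map_list.hash.key is stored in page units */
        return (u64)obj->map_list.hash.key << PAGE_SHIFT;
}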
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Eliminate the racy device unload by embedding a shrinker into each
device. Smaller, simpler code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
More precisely: For those that _need_ to be mappable. Also add two
BUG_ONs in fault and pin to check the consistency of the mappable
flag.
Changes in v2:
- Add tracking of gtt mappable space (to notice mappable/unmappable
balancing issues).
- Improve the mappable working set tracking by tracking fault and pin
separately.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This way we can make some more educated guesses as to why exactly
we can't use 2G apertures to their full potential ;)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
At least the part that's currently enabled by the BIOS.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In i915_gem_object_pin, obviously only unbind if mappable is true.
This is the last part to enable gtt_mappable_end != gtt_size, which
the next patch will do.
v2: Fences on g33/pineview only work in the mappable part of the
gtt.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Like before, add a parameter mappable (also to gem_object_pin) and
set it depending upon the context. Only bos that are brought into
the gtt due to an execbuffer call can be put into the unmappable
part of the gtt, everything else (especially pinned objects) need
to be put into the mappable part of the gtt.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Add a mappable parameter to i915_gem_evict_something to distinguish
the two cases (non-restricted vs. mappable gtt allocations). No
functional changes because the mappable limit is set to the end of
the gtt currently.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Preparing the ringbuffer for adding new commands can fail (a timeout
whilst waiting for the GPU to catch up and free some space). So check
for any potential error before overwriting HEAD with new commands, and
propagate that error back to the user where possible.
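Callers therefore check before emitting (a sketch, using the cleaned-up
signature):

ret = intel_ring_begin(ring, 2);        /* may time out waiting */
if (ret)
        return ret;                     /* HEAD left untouched */
intel_ring_emit(ring, MI_FLUSH);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);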
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The ringbuffer keeps a pointer to the parent device, so we can use that
instead of passing around the pointer on the stack.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
* 'drm-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (476 commits)
vmwgfx: Implement a proper GMR eviction mechanism
drm/radeon/kms: fix r6xx/7xx 1D tiling CS checker v2
drm/radeon/kms: properly compute group_size on 6xx/7xx
drm/radeon/kms: fix 2D tile height alignment in the r600 CS checker
drm/radeon/kms/evergreen: set the clear state to the blit state
drm/radeon/kms: don't poll dac load detect.
gpu: Add Intel GMA500(Poulsbo) Stub Driver
drm/radeon/kms: MC vram map needs to be >= pci aperture size
drm/radeon/kms: implement display watermark support for evergreen
drm/radeon/kms/evergreen: add some additional safe regs v2
drm/radeon/r600: fix tiling issues in CS checker.
drm/i915: Move gpu_write_list to per-ring
drm/i915: Invalidate the to-ring, flush the old-ring when updating domains
drm/i915/ringbuffer: Write the value passed in to the tail register
agp/intel: Restore valid PTE bit for Sandybridge after bdd3072
drm/i915: Fix flushing regression from 9af90d19f
drm/i915/sdvo: Remove unused encoding member
i915: enable AVI infoframe for intel_hdmi.c [v4]
drm/i915: Fix current fb blocking for page flip
drm/i915: IS_IRONLAKE is synonymous with gen == 5
...
Fix up conflicts in
- drivers/gpu/drm/i915/{i915_gem.c, i915/intel_overlay.c}: due to the
new simplified stack-based kmap_atomic() interface
- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c: added .llseek entry due to BKL
removal cleanups.
Keep the current interface but ignore the KM_type and use a stack-based
approach.
The advantage is that we get rid of crappy code like:
#define __KM_PTE \
(in_nmi() ? KM_NMI_PTE : \
in_irq() ? KM_IRQ_PTE : \
KM_PTE0)
and in general can stop worrying about what context we're in and what kmap
slots might be appropriate for that.
The downside is that FRV kmap_atomic() gets more expensive.
For now we use a CPP trick suggested by Andrew:
#define kmap_atomic(page, args...) __kmap_atomic(page)
to avoid having to touch all kmap_atomic() users in a single patch.
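With the stacked interface, nesting only has to unwind in LIFO order,
e.g.:

void *src = kmap_atomic(src_page);      /* no KM_type argument */
void *dst = kmap_atomic(dst_page);
memcpy(dst, src, PAGE_SIZE);
kunmap_atomic(dst);                     /* pop in reverse order */
kunmap_atomic(src);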
[ not compiled on:
- mn10300: the arch doesn't actually build with highmem to begin with ]
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
... to prevent flush processing of an idle (or even absent) ring.
This fixes a regression during suspend from 87acb0a5.
Reported-and-tested-by: Alexey Fisher <bug-track@fisher-privat.net>
Tested-by: Peter Clifton <pcjc2@cam.ac.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When the object has been written to by the gpu it remains on the ring
until its flush has been retired. However, when the object is moving to
a new ring and the associated cache needs to be invalidated, we need to
perform the flush on the target ring, not the one it came from (which is
NULL in the reported case and so the flush was entirely absent).
Reported-by: Peter Clifton <pcjc2@cam.ac.uk>
Reported-and-tested-by: Alexey Fisher <bug-track@fisher-privat.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Whilst moving the code around in 9af90d19f, I dropped the or'ing in of
new write domains, which would zero out the write domain for a render
target reused as a source later in the batch. This meant that
we might drop a required flush before reading from the render target.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=31043
Reported-by: xunx.fang@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Based on an original patch by Zhenyu Wang, this initializes the BLT ring for
SandyBridge and enables support for user execbuffers.
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
If the userspace driver is using a constant relocation array with a
static buffer, they will pass the same relocation array back to the
kernel. So we *do* need to update the presumed offset value in those
relocations to reflect the current object so that they remain correct
with future batchbuffers and we avoid the necessity of having to suspend
execution and perform redundant relocations.
Fixes the regression introduced by 12f889c for applications using
absolute addressing on trees of buffers (i.e. the current consumers of
libdrm_intel.so).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30996
Reported-by: Wang, Jinjin <jinjin.wang@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
To handle retirements, we need per-ring tracking of active objects.
To handle evictions, we need global tracking of active objects.
As we enable more rings, rebuilding the global list from the individual
per-ring lists quickly grows tiresome and overly complicated. Tracking the
active objects in two lists is the lesser of two evils.
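Activation then updates both lists together, roughly:

/* per-ring list, for retirement in seqno order */
list_move_tail(&obj->ring_list, &ring->active_list);
/* global list, for eviction across all rings */
list_move_tail(&obj->mm_list, &dev_priv->mm.active_list);
obj->ring = ring;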
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
... by always initialising the empty ringbuffer, it is then safe
to check whether it is active.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The most frequent relocation within a batchbuffer is a contiguous sequence
of vertex buffer relocations, for which we can virtually eliminate the
drm_gem_object_lookup() overhead by caching the last handle to object
translation.
In doing so we refactor the pin and relocate retry loop out of
do_execbuffer into its own helper function and so improve the error
paths.
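The cache is a single remembered handle/object pair (sketch):

if (target_handle != cache.handle) {
        cache.obj = drm_gem_object_lookup(dev, file, target_handle);
        cache.handle = target_handle;
}
target_obj = cache.obj;         /* repeated handles skip the lookup */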
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
One of the primary consumers of the i915 driver is X, a large
signal-driven application. Frequently when writing into the buffers,
there is a
pending signal which causes us not to take the interruptible lock but
then we need to take that same lock around the object unreference. By
rearranging the code to do the interruptible lock as the first check, we
can avoid the frequent additional locking around the unreference.
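I.e. take the interruptible lock first, so the unreference happens under
a lock we already hold (sketch):

ret = i915_mutex_lock_interruptible(dev);
if (ret)
        return ret;     /* pending signal: nothing yet to unreference */
obj = drm_gem_object_lookup(dev, file, args->handle);
if (obj == NULL) {
        mutex_unlock(&dev->struct_mutex);
        return -ENOENT;
}
/* ... write into obj, then unreference before unlocking ... */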
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
... to avoid reacquiring it to drop the object reference count on
exit. Note we have to make sure we now drop (and reacquire) the lock
around acquiring the mm semaphore on the slow paths.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
After allocating a handle for the fresh object, we know that we can
safely drop the refcnt without triggering a free so we do not need the
mutex. Strangely, this mutex acquisition is the one that appears on
driver profiles.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>