drm/i915: Remove leftover vma->obj->pages_pin_count on insert/remove

We now take the page pin upfront in vma_get_pages/vma_put_pages, so
that we do the allocations before we enter the vm->mutex. Our vma page
references are tracked in vma->pages_count, so the extra
obj->mm.pages_pin_count taken in i915_vma_insert and dropped in
i915_vma_remove is redundant, and worse throws off the shrinker's
logic on when it can free an object by unbinding it.

Reported-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reported-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015100155.10376-1-chris@chris-wilson.co.uk
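
For context, the shrinker decides whether it is worth unbinding an object by
comparing the object's page pins against its bindings: if anything other than
the bindings holds a pin, unbinding cannot actually release the pages. Below
is a paraphrased sketch of that test and of assert_bind_count(), written from
memory of the i915 code of this period, not the exact upstream source:

/*
 * Paraphrased sketch, cf. can_release_pages() in i915_gem_shrinker.c
 * of this era; simplified, not the exact upstream code.
 *
 * Unbinding only makes progress towards freeing physical pages if the
 * bindings are the sole holders of the page pins. The leftover pin
 * taken in i915_vma_insert() made pages_pin_count exceed bind_count
 * for every bound object, so this test wrongly reported that
 * unbinding would not help.
 */
static bool can_release_pages(struct drm_i915_gem_object *obj)
{
	return atomic_read(&obj->mm.pages_pin_count) <=
	       atomic_read(&obj->bind_count);
}

/* Every bound vma must hold a pin on the object's pages. */
static inline void assert_bind_count(const struct drm_i915_gem_object *obj)
{
	GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) <
		   atomic_read(&obj->bind_count));
}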
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -703,7 +703,6 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 	list_add_tail(&vma->vm_link, &vma->vm->bound_list);
 
 	if (vma->obj) {
-		atomic_inc(&vma->obj->mm.pages_pin_count);
 		atomic_inc(&vma->obj->bind_count);
 		assert_bind_count(vma->obj);
 	}
@@ -726,14 +725,12 @@ i915_vma_remove(struct i915_vma *vma)
 	if (vma->obj) {
 		struct drm_i915_gem_object *obj = vma->obj;
 
-		atomic_dec(&obj->bind_count);
-
 		/*
 		 * And finally now the object is completely decoupled from this
 		 * vma, we can drop its hold on the backing storage and allow
 		 * it to be reaped by the shrinker.
 		 */
-		i915_gem_object_unpin_pages(obj);
+		atomic_dec(&obj->bind_count);
 		assert_bind_count(obj);
 	}
 }
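
To see the arithmetic, here is a toy userspace model of the accounting.
Nothing below is kernel code; the field names merely mirror
obj->mm.pages_pin_count and obj->bind_count:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model: one object, one bound vma. */
struct toy_obj {
	atomic_int pages_pin_count;	/* mirrors obj->mm.pages_pin_count */
	atomic_int bind_count;		/* mirrors obj->bind_count */
};

/* Unbinding only helps the shrinker if the bindings hold all the pins. */
static bool can_release_pages(struct toy_obj *obj)
{
	return atomic_load(&obj->pages_pin_count) <=
	       atomic_load(&obj->bind_count);
}

int main(void)
{
	struct toy_obj obj = { 0 };

	/* vma_get_pages(): the single upfront pin for the vma's pages. */
	atomic_fetch_add(&obj.pages_pin_count, 1);

	/* i915_vma_insert() before this patch: a redundant second pin
	 * taken alongside the bind count. */
	atomic_fetch_add(&obj.pages_pin_count, 1);
	atomic_fetch_add(&obj.bind_count, 1);
	printf("before: can_release_pages() = %d\n", can_release_pages(&obj));

	/* With this patch the redundant pin is gone: one pin, one bind. */
	atomic_fetch_sub(&obj.pages_pin_count, 1);
	printf("after:  can_release_pages() = %d\n", can_release_pages(&obj));

	return 0;
}

Compiled with a C11 compiler this prints 0 then 1: with the leftover pin the
shrinker concluded it could not free the object by unbinding it; without the
leftover pin, it can.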