I've added a bit of logic such that running the hangman test on chips
without any hw reset support at all doesn't wedge the gpu because the
reset failed. This relied on checking for non-null stop_rings.
Unfortunately I've botched a rebase somewhere and stop_rings is still
cleared at the old place before the reset code.
Fix this up so that running the i-g-t tests on gen2/3 doesn't result
in a wedged gpu.
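The intended ordering is roughly the following (a sketch only;
intel_gpu_reset stands in for the per-gen reset code, which simply
fails on gen2/3):

  if (dev_priv->stop_rings) {
      /* i-g-t simulated this hang, never touch the hw ... */
      DRM_DEBUG("simulated gpu hang, skipping real hw reset\n");
      /* ... and clear the flag only here, after checking it */
      dev_priv->stop_rings = 0;
      return 0;
  }

  ret = intel_gpu_reset(dev); /* fails with -ENODEV on gen2/3 */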
v2: Actually remove the lines instead of adding them twice ... my git
license should be revoked immediately.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I'm seeing about a 5% FPS improvement across various benchmarks on my
IVB i3. Rumor has it that the higher end parts show even more benefit.
This derives from a patch originally given to me by Bernard. The docs
are confusing about the definition names (i.e. medium really seems like
max), but it would seem it gives more cache to the GT at the expense of
uncore. This configuration makes the split most in favor of the GT. I've
not tried the other IDICOS values.
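For reference, the write boils down to something like this in the ivb
clock gating setup (a sketch; define names as in i915_reg.h of this
era):

  u32 snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);

  snpcr &= ~GEN6_MBC_SNPCR_MASK;
  snpcr |= GEN6_MBC_SNPCR_MED; /* "medium" == most cache for the GT */
  I915_WRITE(GEN6_MBCUNIT_SNPCR, snpcr);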
Cc: "Kilarski, Bernard R" <bernard.r.kilarski@intel.com>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This got dropped as a result of the last round of comments. I didn't
test it on unsupported HW (which is likely the case where this matters).
Note that this prevents hw context from blowing up on any pre-gen6 hw.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51142
[danvet: Added note and buglink.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Somehow this went unnoticed in the past reviews, but the condition would
never time out properly.
This was initially introduced in v2 of the original SBI enabling patch.
Highly embarrassing.
Note that we now actually time out on the read, which resulted in gcc
complaining that we can now return uninitialized garbage if that
happens. There's not much we can do here because there's not much
point in threading -EIO all the way down through these functions. Hence
simply shut up the compiler.
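The fixed wait boils down to the driver's usual wait_for() pattern
(a sketch; SBI_CTL_STAT/SBI_BUSY per i915_reg.h):

  if (wait_for((I915_READ(SBI_CTL_STAT) & SBI_BUSY) == 0, 100)) {
      DRM_ERROR("timeout waiting for SBI to become ready\n");
      /* defined garbage shuts up gcc; threading -EIO isn't worth it */
      return 0;
  }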
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
[danvet: Added note and squashed uninitialized value shut-up into this
patch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
VGA hotplug detection "works" by measuring the resistance across
certain pins. A lot of kvm switches fumble this and wire up cheap
resistors with the wrong resistance or don't bother at all.
To accommodate these, also try to detect a connected monitor by
grabbing the edid. Contrary to !HAS_HOTPLUG platforms we don't bother
with an actual load-detection cycle when the output is live - that
would be actual work to implement because things moved around. This is
the big difference to Chris Wilson's original approach:
commit 9e612a008f
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu May 31 13:08:53 2012 +0100
drm/i915/crt: Do not rely upon the HPD presence pin
This blew up on Linus' machine because it erroneously detected a vga
screen (without an edid and hence only the default modes), leading to
its prompt removal:
commit 8f53369b75
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Fri Jun 8 14:53:06 2012 -0700
Revert "drm/i915/crt: Do not rely upon the HPD presence pin"
Some digging around in Bspec shows the reason why load detect doesn't work on
newer chips - the legacy VGA load detect bit isn't wired up any longer:
Public Snb Bspec, Vol3 Part1, 1.1.1 ST00 Input Status 0, bit4:
"RGB Comparator / Sense. This bit is here for compatibility and will
always return one. Monitor detection must be done through the
programming of registers in the MMIO space.
0 = Below threshold
1 = Above threshold"
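The detect path on HAS_HOTPLUG platforms hence becomes roughly the
following (a sketch; intel_crt_detect_ddc is the pre-existing edid
helper):

  if (I915_HAS_HOTPLUG(dev)) {
      if (intel_crt_detect_hotplug(connector))
          return connector_status_connected;
      /* kvm switches fumble the resistors, so also try the edid */
      if (intel_crt_detect_ddc(connector))
          return connector_status_connected;
      /* no load-detect fallback - the legacy bit is dead, see above */
      return connector_status_disconnected;
  }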
v2: Add a comment in the code that load detect on hotplug capable
machines is broken and pimp the commit message with a quote of Bspec
to show why.
Reported-and-tested-by: Matthieu LAVIE <boiteamadmax@hotmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=50501
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Use the rsvd1 field in execbuf2 to specify the context ID associated
with the workload. This will allow the driver to do the proper context
switch when/if needed.
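From userspace this looks roughly like the following (a sketch; ctx_id
comes from the context create ioctl added later in this series):

  struct drm_i915_gem_execbuffer2 execbuf;

  memset(&execbuf, 0, sizeof(execbuf));
  /* fill in buffers_ptr, buffer_count, flags, ... as before */
  execbuf.rsvd1 = ctx_id; /* 0 selects the default context */
  drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);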
v2: Add checks for context switches on rings not supporting contexts.
Before the code would silently ignore such requests.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Add the interfaces to allow user space to create and destroy contexts.
Contexts are destroyed automatically if the file descriptor for the dri
device is closed.
As usual, following the existing conventions here causes checkpatch
warnings.
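For reference, the new uapi from the userspace side (a sketch):

  struct drm_i915_gem_context_create create = { 0 };
  struct drm_i915_gem_context_destroy destroy = { 0 };

  if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE, &create) == 0) {
      /* create.ctx_id names the new context, unique per fd */
      destroy.ctx_id = create.ctx_id;
      drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_DESTROY, &destroy);
  }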
v2: with is_initialized, no longer need to init at create
drop the context switch on create (daniel)
v3: Use interruptible lock (Chris)
return -ENODEV in !GEM case (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
To keep things as sane as possible, switch to the default context before
idling. This should help free context objects, as well as put things in
a more well defined state before suspending.
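The addition to i915_gpu_idle() has roughly this shape (a sketch):

  for_each_ring(ring, dev_priv, i) {
      ret = i915_switch_context(ring, NULL, DEFAULT_CONTEXT_ID);
      if (ret)
          return ret; /* hard error instead of WARN + continue */
  }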
v2: remove seqno from context switch call (daniel)
return error on failed context switch instead of WARN+continue (daniel)
v3: move idling to i915_gpu idle (from i915_gem_idle) (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
With the code to do HW context switches in place, have the driver load the
default context for the render ring when the driver loads.
The default context will be an ever present context that is available to
switch to at any time for the given ring.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
From http://intellinuxgraphics.org/documentation/SNB/IHD_OS_Vol1_Part3.pdf
[DevSNB] If Flush TLB invalidation Mode is enabled it's the driver's
responsibility to invalidate the TLBs at least once after the previous
context switch after any GTT mappings changed (including new GTT
entries). This can be done by a pipelined PIPE_CONTROL with TLB inv bit
set immediately before MI_SET_CONTEXT.
On GEN7 the invalidation mode is explicitly set, but this appears to be
lacking for GEN6. Since I don't know the history on this, I've decided
to dynamically read the value at ring init time, and use that value
throughout.
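The resulting logic, roughly (a sketch; note that the
GFX_TLB_INVALIDATE_ALWAYS define of this era confusingly names the
explicit-invalidation mode):

  /* at render ring init: sample the mode the hw/BIOS left us in */
  if (IS_GEN6(dev))
      ring->itlb_before_ctx_switch =
          !!(I915_READ(GFX_MODE) & GFX_TLB_INVALIDATE_ALWAYS);

  /* in the context switch path, before MI_SET_CONTEXT: */
  if (ring->itlb_before_ctx_switch) {
      /* w/a: invalidate the TLBs before the context switch */
      ret = ring->flush(ring, I915_GEM_GPU_DOMAINS, 0);
      if (ret)
          return ret;
  }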
v2: better comment (daniel)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
This has shown up in several other patches. It's required for the next
context workaround.
I tested this one on its own and saw no differences in basic tests
(performance or otherwise). This patch is relatively likely to cause
regressions, which is why it's split out.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
The workaround itself applies to gen7 only (according to the docs) and
as Eric Anholt points out shouldn't be required since we don't use HW
scheduling features, and therefore arbitration. Though since it is a
small and simple addition, and we don't really understand the issue,
just do it.
FWIW, I eventually want to play with some of the arbitration stuff, and
I'd hate to forget about this.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
This way round we don't introduce any ugly layering violations and use
the interface as I planned to use it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Implement the context switch code as well as the interfaces to do the
context switch. This patch also doesn't match 1:1 with the RFC patches.
The main difference is that, following Daniel's responses, the last
context object is now stored instead of the last context. This allows
us to free the context data structure and context object independently.
There is room for optimization: this code will pin the context object
until the next context is active. The optimal way to do it is to
actually pin the object, move it to the active list, do the context
switch, and then unpin it. This allows the eviction code to actually
evict the context object if needed.
The context switch code is missing workarounds, they will be implemented
in future patches.
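The emission itself boils down to roughly this (a sketch; the exact
dword count and extra workaround flushes land in later patches):

  static int mi_set_context(struct intel_ring_buffer *ring,
                            struct i915_hw_context *new_context,
                            u32 hw_flags)
  {
      int ret = intel_ring_begin(ring, 4);
      if (ret)
          return ret;

      intel_ring_emit(ring, MI_NOOP);
      intel_ring_emit(ring, MI_SET_CONTEXT);
      intel_ring_emit(ring, new_context->obj->gtt_offset |
                      MI_MM_SPACE_GTT |
                      MI_SAVE_EXT_STATE_EN |
                      MI_RESTORE_EXT_STATE_EN |
                      hw_flags);
      intel_ring_emit(ring, MI_NOOP);
      intel_ring_advance(ring);

      return 0;
  }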
v2: actually do obj->dirty=1 in switch (daniel)
Modified comment around above
Remove flags to context switch (daniel)
Move mi_set_context code to i915_gem_context.c (daniel)
Remove seqno, use lazy request instead (daniel)
v3: use i915_gem_request_next_seqno instead of
outstanding_lazy_request (Daniel)
remove id's from trace events (Daniel)
Put the context BO in the instruction domain (Daniel)
Don't unref the BO if context switch fails (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Invent an abstraction for a hw context which is passed around through
the core functions. The main bit a hw context holds is the buffer object
which backs the context. The rest of the members are just helper
functions. Specifically the ring member, which could likely go away if
we decide to never implement whatever other hw context support exists.
Of note here is the introduction of the 64k alignment constraint for the
BO. If contexts become heavily used, we should consider tweaking this
down to 4k. Until the contexts are merged and tested a bit though, I
think 64k is a nice start (based on docs).
Since we don't yet switch contexts, there is really not much complexity
here. Creation/destruction works pretty much as one would expect. An idr
is used to generate the context id numbers which are unique per file
descriptor.
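The abstraction itself is tiny (a sketch of the struct as introduced
here):

  #define CONTEXT_ALIGN (64 << 10) /* the 64k constraint from the docs */

  struct i915_hw_context {
      int id;                          /* idr handle, unique per fd */
      struct drm_i915_file_private *file_priv;
      struct intel_ring_buffer *ring;  /* render ring only, for now */
      struct drm_i915_gem_object *obj; /* BO backing the hw state */
  };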
v2: add DRM_DEBUG_DRIVERS to distinguish ENOMEM failures (ben)
convert a BUG_ON to WARN_ON, default destruction is still fatal (ben)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Very basic code for context setup/destruction in the driver.
Adds the file i915_gem_context.c. This file implements HW context
support. On gen5+ a HW context consists of an opaque GPU object which is
referenced at times of context saves and restores. With RC6 enabled,
the context is also referenced as the GPU enters and exits RC6
(the GPU has its own internal power context, except on gen5). Though
something like a context does exist for the media ring, the code only
supports contexts for the render ring.
In software, there is a distinction between contexts created by the
user, and the default HW context. The default HW context is used by GPU
clients that do not request setup of their own hardware context. The
default context's state is never restored to help prevent programming
errors. This would happen if a client ran and piggy-backed off another
client's GPU state. The default context only exists to give the GPU
some offset to load as the current context in order to invoke a save
of the context we actually care about. In fact, the code could likely
be constructed,
albeit in a more complicated fashion, to never use the default context,
though that limits the driver's ability to swap out, and/or destroy
other contexts.
All other contexts are created as a request by the GPU client. These
contexts store GPU state, and thus allow GPU clients to not re-emit
state (and potentially query certain state) at any time. The kernel
driver makes certain that the appropriate commands are inserted.
There are 4 entry points into the context code: init, fini, open, close.
The names are self-explanatory except that init can be called during
reset, and also during pm thaw/resume. As we expect our context to be
preserved across these events, we do not reinitialize in this case.
As Adam Jackson pointed out, the cutoff of 1MB where a HW context is
considered too big is arbitrary. The reason for this is even though
context sizes are increasing with every generation, they have yet to
eclipse even 32k. If we somehow read back way more than that, it
probably means BIOS has done something strange, or we're running on a
platform that wasn't designed for this.
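The check itself is straightforward (a sketch; get_context_size is the
read-back helper from the patch quoted below):

  dev_priv->hw_context_size = round_up(get_context_size(dev), 4096);
  if (dev_priv->hw_context_size > (1 << 20)) {
      /* BIOS did something strange, or the platform wasn't designed
       * for this - run without hw contexts */
      dev_priv->hw_context_size = 0;
  }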
v2: rename load/unload to init/fini (daniel)
remove ILK support for get_size() (indirectly daniel)
add HAS_HW_CONTEXTS macro to clarify supported platforms (daniel)
added comments (Ben)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
The GPUs can have different default context layouts, and the sizes could
vary based on platform or BIOS. In order to back the context object with
a properly sized BO, we must read this register in order to find out a
sufficient size.
Thankfully (sarcasm!), the register moves and changes meanings
throughout generations.
CTX and CXT differences are intentional as that is how it is in the
documentation (prior to GEN6 it was CXT).
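The per-gen read then looks roughly like this (a sketch; define names
follow the documentation spelling discussed above):

  static u32 get_context_size(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;
      u32 reg, ret = 0;

      switch (INTEL_INFO(dev)->gen) {
      case 6:
          reg = I915_READ(CXT_SIZE);
          ret = GEN6_CXT_TOTAL_SIZE(reg) * 64;
          break;
      case 7:
          reg = I915_READ(GEN7_CXT_SIZE);
          ret = GEN7_CXT_TOTAL_SIZE(reg) * 64;
          break;
      default:
          BUG();
      }

      return ret;
  }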
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
The Lenovo Thinkpad T410 has the LVDS_PIPEB_SELECT bit set in the LVDS
register when booted with the lid closed, even though the LVDS hasn't
really been initialized. Ignore this bit so that the VBT value will be
used instead.
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we switch on/off the primary plane if it is completely obscured by an
overlapping video sprite, we also need to make sure that we update the
FBC configuration at the same time.
v2: Not all crtcs are intel_crtcs, as spotted by Daniel.
v3: Boot testing rules.
References: https://bugs.freedesktop.org/show_bug.cgi?id=50238
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Vesafb in particular likes to map everything as uc- (yikes), and if
that mapping still hangs around while we try to map the gtt as wc the
kernel will downgrade our request to uc-, resulting in abysmal
performance.
Unfortunately we can't do this as early as radeon does (i.e. as the
first thing we do when initializing the hw) because our fb/mmio space
region moves around on a per-gen basis. So I've had to move it below
the gtt initialization, but that seems to work, too. The important
thing is that we do this before we set up the gtt wc mapping.
Now an altogether different question is why people compile their
kernels with vesafb enabled, but I guess making things just work isn't
bad per se ...
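The kick-out itself looks roughly like this (a sketch; the gtt field
names are approximate for this era):

  static void i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
  {
      struct pci_dev *pdev = dev_priv->dev->pdev;
      struct apertures_struct *ap;
      bool primary;

      ap = alloc_apertures(1);
      if (!ap)
          return; /* don't oops if we can't allocate */

      ap->ranges[0].base = dev_priv->mm.gtt->gma_bus_addr;
      ap->ranges[0].size =
          dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
      primary = pdev->resource[PCI_ROM_RESOURCE].flags &
                IORESOURCE_ROM_SHADOW;

      remove_conflicting_framebuffers(ap, "inteldrmfb", primary);
      kfree(ap);
  }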
v2:
- s/radeondrmfb/inteldrmfb/
- fix up error handling
v3: Kill #ifdef X86, this is Intel after all. Noticed by Ben Widawsky.
v4: Jani Nikula complained about the pointless bool primary
initialization.
v5: Don't oops if we can't allocate, noticed by Chris Wilson.
v6: Resolve conflicts with agp rework and fixup whitespace.
Reported-and-tested-by: "Kilarski, Bernard R" <bernard.r.kilarski@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When drm/i915 is in control of the gtt, we need to call
the enable function at all the relevant places ourselves.
Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To be able to directly set up the intel-gtt code from drm/i915 and
avoid setting up the fake-agp driver we need to prepare a few things:
- pass both the bridge and gpu pci_dev to the probe function and add
code to handle the gpu pdev both being present (for drm/i915) and
not present (fake agp).
- add refcounting to the remove function so that unloading drm/i915
doesn't kill the fake agp driver
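The remove side of the refcounting looks roughly like this (a sketch;
intel_private is intel-gtt.c's driver-private struct, the refcount
field is new in v2):

  void intel_gmch_remove(void)
  {
      if (--intel_private.refcount)
          return; /* the other user (fake agp or drm/i915) remains */

      if (intel_private.pcidev)
          pci_dev_put(intel_private.pcidev);
      if (intel_private.bridge_dev)
          pci_dev_put(intel_private.bridge_dev);
  }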
v2: Fix up the cleanup and refcount, noticed by Jani Nikula.
Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For that to work we need to export the base address of the gtt
mmio window from intel-gtt. Also replace all other uses of
dev->agp by values we already have at hand.
Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Given the havoc the missing backlight pipe select code might have
caused, let's try to re-enable pipe A support for lvds on gen4 hw.
Just to see what all blows up ...
Note though that
commit 4add75c43f
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Sat Dec 4 17:49:46 2010 +0000
drm/i915: Allow LVDS to be on pipe A for Ironlake+
claims that this caused tons of spurious wakeups somehow.
More details can be found in the old revert:
commit 12e8ba25ef
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Sep 7 23:39:28 2010 +0100
Revert "drm/i915: Allow LVDS on pipe A on gen4+"
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=16307
Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On gen4+ we have a bitfield to specify from which pipe the backlight
controller should take its clock. For PCH split platforms we've
already set these up, but only at initialization time. And without
taking into account the 3rd pipe added with ivb.
For gen4, we've completely ignored these. Although we do restrict lvds
to the 2nd pipe, so this is only a problem on machines where we boot
up with the lvds on the first pipe.
So restructure the code to enable the backlight on the right pipe at
modeset time.
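On gen4 the enable path thus gains something like this (a sketch;
the BLM_ defines are from the register cleanup patch below, pipe is
the crtc's pipe):

  if (INTEL_INFO(dev)->gen >= 4) {
      u32 tmp = I915_READ(BLC_PWM_CTL2);

      /* no WARN if already enabled: dpms on lands here too (v3) */
      tmp &= ~BLM_PIPE_SELECT;
      tmp |= BLM_PIPE(pipe);
      I915_WRITE(BLC_PWM_CTL2, tmp | BLM_PWM_ENABLE);
  }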
v2: For odd reasons panel_enable_backlight gets called twice in a
modeset, so we can't WARN_ON in there if the backlight controller is
switched on already.
v3: backlight enable can also be called through dpms on, so the check
in there is legit. Update the comment to reflect that.
Tested-by: Kamal Mostafa <kamal@canonical.com>
Bugzilla: https://bugs.launchpad.net/bugs/954661
Cc: Carsten Emde <C.Emde@osadl.org>
Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
- Regroup definitions for BLC_PWM_CTL so that they're all together and
ordered according to the bitfields.
- Add all missing definitions for BLC_PWM_CTL2.
- Use the BLM_ (for backlight modulation) prefix consistently.
- Note that combination mode (i.e. also taking the legacy backlight
control value from pci config space into account) is gen4 only.
- Move the new registers for PCH-split machines up, they're an almost
match for the gen4 definitions. Prefix the special PCH-only bits with
BLM_PCH_. Also add the pipe C select bit for ivb.
- Rip out the second pair of PCH polarity definitions - they're only
valid on early (pre-production) ilk silicon.
- Adapt the existing code to use the new definitions. This has the
nice benefit of killing a magic (1 << 30) left behind by Jesse
Barnes.
No functional changes in this patch.
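For flavor, the gen4 end of the resulting layout (a representative
subset; bit positions per the public docs):

  #define BLM_PWM_ENABLE       (1 << 31)
  #define BLM_COMBINATION_MODE (1 << 30) /* gen4 only */
  #define BLM_PIPE_SELECT      (1 << 29)
  #define BLM_PIPE(pipe)       ((pipe) << 29)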
Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We already correctly ignore bit0 on gen < 4, now we also know why ;-)
I've decided that losing that single bit of precision isn't worth the
trouble to sprinkle IS_PINEVIEW checks all over the backlight control
code - that code is way too fragile imo.
Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This function is supposed to be used at mode set time, so guard
against future mistakes by adding a WARN().
Based on a patch by Paulo Zanoni, with the check extracted into a
little assert_hdmi_port_disabled helper added to make things self
documenting and move the assert stuff out of line.
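The helper is essentially a one-line WARN wrapped for readability
(a sketch):

  static void assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
  {
      struct drm_device *dev = intel_hdmi->base.base.dev;
      struct drm_i915_private *dev_priv = dev->dev_private;

      WARN(I915_READ(intel_hdmi->sdvox_reg) & SDVO_ENABLE,
           "HDMI port enabled, expecting disabled\n");
  }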
[fixed up spelling goof-up while applying.]
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Change the ns_timeout parameter of the wait ioctl to a signed value.
Doing this allows the kernel to provide an infinite wait when a timeout
of less than 0 is provided. This mimics select/poll.
Initially the parameter was meant to match up with the GL spec 1:1, but
after being made aware of how much 2^64 - 1 nanoseconds actually is, I
do not think anyone will ever notice the loss of 1 bit.
The infinite timeout on waiting is similar to the existing i915
userspace interface with the exception that struct_mutex is dropped
while doing the wait in this ioctl.
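From the caller's side (a sketch; the signed timeout field is the one
discussed above):

  struct drm_i915_gem_wait wait;

  memset(&wait, 0, sizeof(wait));
  wait.bo_handle = handle;
  wait.timeout_ns = -1; /* < 0 now means wait forever, like poll() */
  drmIoctl(fd, DRM_IOCTL_I915_GEM_WAIT, &wait);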
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bspec Vol 3, Part 3, Section 3.8.1.1, bit 30:
"[DevIBX] Writing to this bit only takes effect when port is enabled.
Due to hardware issue it is required that this bit be cleared when port
is disabled. To clear this bit software must temporarily enable this
port on transcoder A."
Unfortunately the public Bspec misses totally out on the same language
for HDMIB. Internal Bspec also mentions that one of the bad
side-effects is that DPx can fail to light up on transcoder A if HDMIx
is disabled but using transcoder B.
I've found this while reviewing Bspec. We already implement the same
workaround for the DP ports.
Also replace a magic 1 with PIPE_B I've found while looking through the
code.
v2: Implement suggestions from Chris Wilson:
- add pipe variable to cut down on code noise
- write the reg value twice to w/a hw issues (Bspec is unclear on
which bits actually require the write twice stuff, but better be
paranoid about it)
- untangle the if logic
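The disable dance hence mirrors the DP one, roughly (a sketch):

  u32 temp = I915_READ(intel_hdmi->sdvox_reg);
  enum pipe pipe = (temp & SDVO_PIPE_B_SELECT) ? PIPE_B : PIPE_A;

  if (HAS_PCH_IBX(dev) && pipe == PIPE_B) {
      /* w/a: route the port through transcoder A while still enabled
       * so that the pipe select bit actually sticks when cleared */
      temp &= ~SDVO_PIPE_B_SELECT;
      temp |= SDVO_ENABLE;
      /* paranoia: write twice, Bspec is unclear which bits need it */
      I915_WRITE(intel_hdmi->sdvox_reg, temp);
      POSTING_READ(intel_hdmi->sdvox_reg);
      I915_WRITE(intel_hdmi->sdvox_reg, temp);
      POSTING_READ(intel_hdmi->sdvox_reg);

      temp &= ~SDVO_ENABLE;
      I915_WRITE(intel_hdmi->sdvox_reg, temp);
      POSTING_READ(intel_hdmi->sdvox_reg);
  }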
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This makes for easier benchmarking and testing. One can set a fixed
frequency by setting min and max to the same value.
v2: fix whitespace & comment (Eugeni)
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Eugeni Dodonov <eugeni@dodonov.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We should only frob adjusted_mode. This is in preparation of
a massive patch by Laurent Pinchart to make the mode argument
const.
After the previous two prep patches the only thing left is to clean up
things a bit. I've opted to pass in an adjust_mode param to
dp_adjust_dithering because that way we can be sure to avoid
duplicating this logic between mode_valid and mode_fixup - which was
the cause behind a dp link bw calculation bug in the past.
Also mark the mode argument of pch_panel_fitting const.
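The shared helper then has roughly this shape (a sketch, assuming the
existing link-bw helpers in intel_dp.c are reused; the adjust_mode
parameter is the one discussed above):

  static bool
  intel_dp_adjust_dithering(struct intel_dp *intel_dp,
                            struct drm_display_mode *mode,
                            bool adjust_mode)
  {
      int max_link_clock =
          intel_dp_link_clock(intel_dp_max_link_bw(intel_dp));
      int max_lanes = intel_dp_max_lane_count(intel_dp);
      int max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
      int mode_rate = intel_dp_link_required(mode->clock, 24);

      if (mode_rate > max_rate) {
          /* try again with 6bpc dithering */
          mode_rate = intel_dp_link_required(mode->clock, 18);
          if (mode_rate > max_rate)
              return false;

          if (adjust_mode)
              mode->private_flags |= INTEL_MODE_DP_FORCE_6BPC;
      }

      return true;
  }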
v2: Split up the mode->clock => adjusted_mode->clock change,
as suggested by Chris Wilson.
Reported-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... instead of changing mode->clock, which we should leave as-is.
After the previous patch we only touch that if it's a panel, and then
mode->clock equals adjusted_mode->clock. Outside of
intel_dp.c we only use adjusted_mode->clock in the mode_set functions.
Within intel_dp.c we only use it to calculate the dp dithering
and link bw parameters, so that's the only thing we need to fix
up.
As a temporary ugliness (until the cleanup in the next patch) we
pass the adjusted_mode into dp_dither for both parameters (because
that one still looks at mode->clock).
Note that we do overwrite adjusted_mode->clock with the selected dp
link clock, but that only happens after we've calculated everything we
need based on the dotclock of the adjusted output configuration.
Outside of intel_dp.c only intel_display.c uses adjusted_mode->clock,
and that stays the same after this patch (still equals the selected dp
link clock). intel_display.c also needs the actual dotclock (as
target_clock), but that has been fixed up in the previous patch.
v2: Adjust the debug message to also use adjusted_mode->clock.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... instead of abusing mode->clock by storing it in there - we
shouldn't touch that one at all. This patch is the first prep step to
constify the mode argument of the intel_dp_mode_fixup function.
The next patch will stop us from modifying mode->clock.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Both busy_ioctl and the new wait_ioctl need to do the same dance (or at
least should). Some slight changes:
- busy_ioctl now unconditionally checks for olr. Previously, emitting a
required flush would have prevented the olr check and hence required a
second call to the busy ioctl to really emit the request.
- the timeout wait now also retires requests. Not really required for
abi reasons, but makes a notch more sense imo.
I've tested this by pimping the i-g-t test some more and also checking
the polling behaviour of the wait_rendering_timeout ioctl versus what
busy_ioctl returns.
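The shared dance then lives in one helper, roughly (a sketch; olr =
outstanding lazy request):

  static int
  i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
  {
      int ret;

      if (obj->active) {
          /* emit the outstanding lazy request, if any */
          ret = i915_gem_check_olr(obj->ring,
                                   obj->last_rendering_seqno);
          if (ret)
              return ret;

          i915_gem_retire_requests_ring(obj->ring);
      }

      return 0;
  }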
v2: Too many people complained about unplug, new color is
flush_active.
v3: Kill the comment about the unplug moniker.
v4: s/un-active/inactive/
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of checking for !CPT, check for IBX to make it clearer that
this is an IBX-specific workaround. No functional change because we
smash the PPT PCH into the HAS_PCH_CPT check and atm DP isn't enabled
on the haswell LPT PCH yet.
See Bspec Vol 3, Part 3, Section 4.[3-5].1 about DP[BCD], bit 30 for
details of this workaround.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need the latest dma-buf code from Dave Airlie so that we can pimp
the backing storage handling code in drm/i915 with Chris Wilson's
unbound tracking and stolen mem backed gem object code.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Positively checking for the required feature/gen is simpler than
building a cascade of negative "we need to bail" checks. And the
latter won't scale if we add more stuff that doesn't fit in nicely.
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This fixes an (albeit really hard to hit) race resulting in an oops:
- The parity work gets scheduled.
- We re-init the irq state and call INIT_WORK again.
- The workqueue code tries to run the work item and stumbles over a
work item that should be on its runlist.
Also initialize the work item unconditionally like all the others,
it's simpler.
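The fix is simply to do the init once at load time (a sketch; field
name approximate for this era of the driver):

  /* in intel_irq_init(), not in the irq postinstall hooks: */
  INIT_WORK(&dev_priv->l3_parity_error_work, ivybridge_parity_work);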
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>