The ring member of the object structure was always updated in step with the
last_read_seqno member. Thus, with the conversion to last_read_req, obj->ring is
now a direct copy of obj->last_read_req->ring. This makes it somewhat redundant
and potentially misleading (especially as there was no comment to explain its
purpose).
This checkin removes the redundant field. Many uses were simply testing for
non-null to see if the object is active on the GPU. Some of these have been
converted to check 'obj->active' instead. Others (where the last_read_req is
about to be used anyway) have been changed to check obj->last_read_req. The rest
simply pull the ring out from the request structure and proceed as before.
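For illustration, a minimal sketch of the replacement patterns (helper names here
are illustrative, and the request accessor is the one introduced elsewhere in this
series; this is not the literal diff):

static bool object_is_active_old(struct drm_i915_gem_object *obj)
{
	return obj->ring != NULL;	/* redundant copy of last_read_req->ring */
}

static bool object_is_active_new(struct drm_i915_gem_object *obj)
{
	/* simple activity test; or check obj->last_read_req directly when
	 * the request is about to be used anyway */
	return obj->active;
}

static struct intel_engine_cs *object_last_ring(struct drm_i915_gem_object *obj)
{
	/* callers that still need the ring pull it out of the request */
	return i915_gem_request_get_ring(obj->last_read_req);
}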
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Almost every caller of i915_seqno_passed() was really asking 'has the
given seqno popped out of the hardware yet?'. Thus it had to query the current
hardware seqno and then do a signed delta comparison (which copes with wrapping
around zero but not with seqno values more than 2GB apart, although the latter
is unlikely!).
Now that the majority of seqno instances have been replaced with request
structures, it is possible to convert this test to be request based as well.
There is now a 'i915_gem_request_completed()' function which takes a request and
returns true or false as appropriate. Note that this currently just wraps up the
original _passed() test but a later patch in the series will reduce this to
simply returning a cached internal value, i.e.:
'_completed(req) { return req->completed; }'
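For reference, a rough sketch of the interim wrapper (the lazy_coherency flag
mirrors ring->get_seqno() and is assumed here):

static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req,
					      bool lazy_coherency)
{
	u32 seqno = req->ring->get_seqno(req->ring, lazy_coherency);

	/* still just the original signed-delta seqno test underneath */
	return i915_seqno_passed(seqno, req->seqno);
}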
This checkin converts almost all _seqno_passed() calls. The only one left is in
the semaphore code which still requires seqnos not request structures.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Drop hunk touching the trace_irq code since I've dropped the
patch which converts that, and resolve resulting conflict.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It makes a lot more sense (and makes future seqno -> request conversion patches
simpler) to fill in the 'ring' field of the request structure at the point of
creation rather than submission. Given that the request structure is assigned by
ring specific code and thus is locked to a ring from the start, there really is
no reason to defer this assignment.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
More seqno value to request structure conversions. Note, this change temporarily
moves the 'get_seqno()' call inside ring_idle() but this will disappear again in
a later patch when i915_seqno_passed() itself is converted.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All the code above is now using requests not seqnos so it is possible to convert
the trace functions across. Note that rather than get into problematic reference
counting issues, the trace code only saves the seqno and ring values from the
request structure not the structure pointer itself.
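A minimal sketch of the tracepoint pattern this implies (event and field names
are illustrative, not the exact trace definitions):

TRACE_EVENT(i915_gem_request,
	TP_PROTO(struct drm_i915_gem_request *req),
	TP_ARGS(req),

	TP_STRUCT__entry(
		__field(u32, ring)
		__field(u32, seqno)
	),

	TP_fast_assign(
		/* copy plain values out of the request; never keep the pointer */
		__entry->ring = i915_gem_request_get_ring(req)->id;
		__entry->seqno = i915_gem_request_get_seqno(req);
	),

	TP_printk("ring=%u, seqno=%u", __entry->ring, __entry->seqno)
);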
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted the flip_queued_seqno value to be a request structure as part of the
ongoing seqno to request changes. This includes reference counting the request
being saved away to ensure it cannot be retired and freed while the flip code
is still waiting on it.
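A minimal sketch of the reference/unreference pattern, assuming the
flip_queued_req field this conversion introduces and hypothetical helper names
(the deferred unreference in the unpin worker follows the note further down):

static void intel_queue_flip_request(struct intel_unpin_work *work,
				     struct drm_i915_gem_request *req)
{
	/* keep the request alive for as long as the flip is outstanding */
	i915_gem_request_reference(req);
	work->flip_queued_req = req;
}

static void intel_retire_flip_request(struct intel_unpin_work *work)
{
	/* runs from the unpin worker, which already holds dev->struct_mutex */
	i915_gem_request_unreference(work->flip_queued_req);
	work->flip_queued_req = NULL;
}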
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Again get rid of the _irq request unref by simply moving that
into the unpin worker. Doesn't matter when we hang onto the request
for a bit longer, and in the unpin worker we already grab the
dev->struct_mutex anyway.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There is no longer any need to retrieve a seqno value from an i915_add_request()
call. The calling code already knows which request structure is being processed
(it can only be ring->OLR). And as the request itself is now used in preference
to the basic seqno value, the latter is now redundant in this situation.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that all code above is using request structures instead of seqno values, it
is possible to convert __wait_seqno() itself. Internally, it is still calling
i915_seqno_passed(); this will be updated later in the series. This step is just
changing the parameter list and function name.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted the mmio_flip 'seqno' value to be a request structure as part of the
ongoing seqno to request changes. This includes reference counting the request
being saved away to ensure it cannot be retired and freed while the flip code
is still waiting on it.
v2: Used the IRQ friendly request dereference call in the notify handler as that
code is called asynchronously without holding any useful mutex locks.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Drop the _irq variant and use the normal request unref,
wrapped in dev->struct_mutex per the discussion on the m-l.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With refcounting it looks like you can just drop that refcount, but
that's not really the case. So make sure no one forgets.
Motivated by the unlocked call in the mmio flip code.
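Concretely, the guard boils down to something like this sketch (exact placement
assumed):

static inline void
i915_gem_request_unreference(struct drm_i915_gem_request *req)
{
	/* dropping the last reference frees the request, so it must only
	 * happen under struct_mutex; shout if a caller forgets */
	WARN_ON(!mutex_is_locked(&req->ring->dev->struct_mutex));
	kref_put(&req->ref, i915_gem_request_free);
}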
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Updated i915_wait_seqno() to take a request structure instead of a seqno value
and renamed it accordingly. Internally, it just pulls the seqno out of the
request and calls on to __wait_seqno() as before. However, all the code further
up the stack is now simplified as it can just pass the request object straight
through without having to peek inside.
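A simplified sketch of the renamed entry point (the wedged check and the exact
__wait_seqno() argument list are abbreviated, so treat the details as
assumptions):

int i915_wait_request(struct drm_i915_gem_request *req)
{
	struct drm_i915_private *dev_priv = req->ring->dev->dev_private;
	int ret;

	/* flush the outstanding lazy request, if this is it */
	ret = i915_gem_check_olr(req);
	if (ret)
		return ret;

	/* internally still a seqno wait; converted later in the series */
	return __wait_seqno(req->ring, i915_gem_request_get_seqno(req),
			    atomic_read(&dev_priv->gpu_error.reset_counter),
			    dev_priv->mm.interruptible, NULL, NULL);
}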
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Squash in hunk from an earlier patch which was rebased
wrongly.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted 'last_flip_req' to be an actual request rather than a seqno value as
part of the ongoing seqno to request changes. This includes reference counting
the request being saved away to ensure it cannot be retired and freed while the
overlay code is still waiting on it.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the _check_olr() function to actually take a request object and compare
it to the OLR rather than extracting seqnos and comparing those.
Note that there is one use case where the request object being processed is no
longer available at that point in the call stack. Hence a temporary copy of the
original function is still present (but called _check_ols() instead). This will
be removed in a subsequent patch.
Also, downgraded a BUG_ON to a WARN_ON as apparently the former is frowned upon
for shipping code.
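A minimal sketch of the converted check (details assumed; the WARN_ON is the
downgraded BUG_ON mentioned above, and the i915_add_request() call is shown in
its final, seqno-free form):

static int i915_gem_check_olr(struct drm_i915_gem_request *req)
{
	int ret = 0;

	WARN_ON(!mutex_is_locked(&req->ring->dev->struct_mutex));

	/* compare the request itself, not extracted seqnos */
	if (req == req->ring->outstanding_lazy_request)
		ret = i915_add_request(req->ring);

	return ret;
}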
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The OLS value is now obsolete. Exactly the same value is guaranteed to be always
available as PLR->seqno. Thus it is safe to remove the OLS completely. And also
to rename the PLR to OLR to keep the 'outstanding lazy ...' naming convention
valid.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Added reference counting of the request structure around __wait_seqno() calls.
This is a precursor to updating the wait code itself to take the request rather
than a seqno. At that point, it would be a Bad Idea for a request object to be
retired and freed while the wait code is still using it.
v3:
Note that even though the mutex lock is held during a call to i915_wait_seqno(),
it is still necessary to explicitly bump the reference count. It appears that
the shrinker can asynchronously retire items even though the mutex is locked.
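The usage pattern is thus roughly the following sketch (the helper name and the
exact __wait_seqno() argument list are illustrative):

static int __wait_request_refcounted(struct drm_i915_gem_request *req,
				     unsigned reset_counter,
				     bool interruptible)
{
	int ret;

	/* pin the request so asynchronous retirement (e.g. via the
	 * shrinker) cannot free it while the wait is still looking at it */
	i915_gem_request_reference(req);
	ret = __wait_seqno(req->ring, i915_gem_request_get_seqno(req),
			   reset_counter, interruptible, NULL, NULL);
	i915_gem_request_unreference(req);

	return ret;
}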
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Remove wrongly squashed hunk which breaks the build.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Convert the throttle code to use the request structure rather than extracting a
ring/seqno pair from it and using those. This is in preparation for
__wait_seqno() becoming __wait_request().
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The object structure contains the last read, write and fenced seqno values for
use in synchronisation operations. These have now been replaced with their
request structure counterparts.
Note that to ensure that objects do not end up with dangling pointers, the
assignments of last_*_req include reference count updates. Thus a request cannot
be freed if an object is still hanging on to it for any reason.
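A minimal sketch of the kind of reference-counted assignment this implies (exact
helper name and shape may differ from the actual patch):

static inline void
i915_gem_request_assign(struct drm_i915_gem_request **pdst,
			struct drm_i915_gem_request *src)
{
	/* take the new reference before dropping the old one, so that
	 * pdst never points at a freed request */
	if (src)
		i915_gem_request_reference(src);

	if (*pdst)
		i915_gem_request_unreference(*pdst);

	*pdst = src;
}

Usage would then look like i915_gem_request_assign(&obj->last_read_req, req)
wherever the seqno used to be stored.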
v2: Corrected 'last_rendering_' to 'last_read_' in a number of comments that did
not get updated when 'last_rendering_seqno' became 'last_read|write_seqno'
several millennia ago.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Added helper functions for retrieving the ring and seqno entries from a request
structure. This allows the internal workings of the request structure to be
hidden from code that is using these. It also allows for useful
workarounds/debug code to be added as or when necessary.
Note that it is intended that the majority (if not all) uses of the seqno
accessor will disappear eventually as code is updated to use the request
structure itself rather than working with seqno values.
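For reference, a sketch of what such accessors look like (NULL-tolerant so
callers need not peek inside the request):

static inline struct intel_engine_cs *
i915_gem_request_get_ring(struct drm_i915_gem_request *req)
{
	return req ? req->ring : NULL;
}

static inline u32
i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
{
	return req ? req->seqno : 0;
}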
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The plan is to use request structures everywhere that seqno values were
previously used. This means saving pointers to structures in places that used to
be simple integers. In turn, that means that the target structure now needs much
more stringent lifetime tracking. That is, it must not be freed while some other
random object still holds a pointer to it.
To achieve this tracking, a reference count needs to be added. Whenever a
pointer to the structure is saved away, the count must be incremented and the
free must only occur when all references have been released.
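A minimal sketch of the scheme, assuming the usual kref pattern (member and
helper names may differ in detail):

struct drm_i915_gem_request {
	struct kref ref;	/* new: lifetime tracking */
	/* ... existing members: ring, seqno, list, ... */
};

static inline void
i915_gem_request_reference(struct drm_i915_gem_request *req)
{
	kref_get(&req->ref);
}

static inline void
i915_gem_request_unreference(struct drm_i915_gem_request *req)
{
	/* the free callback only runs once the last reference is dropped */
	kref_put(&req->ref, i915_gem_request_free);
}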
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The aim is to replace seqno values with request structures. A step along the way
is to switch to using the PLR in preference to the OLS. That requires the PLR
to be valid when, and only when, the OLS is also valid, i.e. the two must be
kept in lock step. Then, code which was using the OLS can be safely switched
over to using the PLR instead.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
That's the version actually taking the dev_priv->power_domains lock.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Before suspending, we wait upon the outstanding GPU requests and flush
our pending idle handlers. This should downclock the GPU to its lowest
power state. Add a WARN to check that the delayed tasks were run and did
their job properly.
Suggested-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The logical ring code was updating the software ring 'head' value
by reading the hardware 'HEAD' register. In LRC mode, this is not
valid as the hardware is not necessarily executing the same context
that is being processed by the software. Thus reading the h/w HEAD
could put an unrelated (undefined, effectively random) value into
the s/w 'head' -- A Bad Thing for the free space calculations.
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The request queue is per-engine, and may therefore contain requests
from several different contexts/ringbuffers. In determining which
request to wait for, this function should only consider requests
from the ringbuffer that it's checking for space, and ignore any
that it finds that belong to other contexts.
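In sketch form, the wait-for-space loop then skips foreign requests, assuming a
per-request ringbuffer back-pointer (names illustrative):

	list_for_each_entry(request, &ring->request_list, list) {
		/* ignore requests emitted to a different ringbuffer,
		 * i.e. a different context on this engine */
		if (request->ringbuf != ringbuf)
			continue;

		if (__intel_ring_space(request->tail, ringbuf->tail,
				       ringbuf->size) >= bytes) {
			seqno = request->seqno;
			break;
		}
	}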
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch is the last in the VLV/CHV PSR series; it finally enables PSR by
adding it to HAS_PSR and calling the proper enable and disable
functions in the right places.
It is still disabled by default, though.
v2: Rebase over intel_psr and merge Durgadoss's fixes.
v3: Fix typo.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add debugfs support for Valleyview and Cherryview, considering that
we have PSR per pipe and that we don't have any kind of
performance counter as we do on the other platforms that support PSR.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch introduces exit/activate functions for PSR
on VLV+. Since on VLV+ the HW cannot track frame updates and force PSR
exit, let's use the fully SW tracking that is available.
v2: Rebase over intel_psr.c;
Remove Single Frame update transitioning from state 3 to 5 directly;
Fake a software invalidation for sprites and cursor so we don't miss
any screen update;
v3: As pointed out by Durgadoss, msecs_to_jiffies used by wait_for only takes an
int, so let's use 1 instead. Although only 1/4 of this is needed for the
transition, let's use 1 for simplicity;
Also fix comments as suggested by Durgadoss
Cc: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The biggest difference from HSW/BDW PSR here is that the VLV enable_source
function enables PSR but leaves it in the Inactive state. So it might be called
at an early stage along with the setup and enable_sink ones.
v2: Rebase over intel_psr.c;
Remove docs from static functions;
Merge vlv_psr_active_on_pipe;
Timeout for psr transition is 250us;
Remove SRC_TRANSMITTER_STATE;
v3: Rebase after is_psr_enabled function got removed;
Get SRC_TRANSMITTER_STATE back to be on the safe side since
default for panels is to require link training on exit when
main link off;
As pointed out by Durgadoss, msecs_to_jiffies used by wait_for only takes an
int, so let's use 1 instead. Although only 1/4 of this is needed for the
transition, let's use 1 for simplicity;
Cc: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Baytrail (Valleyview) and Braswell (Cherryview) use a completely different
implementation of PSR from the one we currently support for
Haswell and Broadwell. So let's start by adding the register definitions.
I usually don't like commits that add just registers without using them,
but after I put it all in one commit I realized that no one would want
to take the AR to review it, so I decided to split it up in order to make
the reviewers' lives easier. Only the last commit in this series will actually
enable PSR on the Intel panel enable path.
But, as currently happens with HSW/BDW, the plan is to leave it
disabled by default (protected by a kernel parameter)
until we are able to fully validate it.
v2: Remove an unused bit definition that isn't used on vlv and is
reserved on chv, as pointed out by Durgadoss.
Cc: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This function was used to check whether the PSR feature got enabled.
However, on HSW and BDW we currently force PSR exit by disabling the
EDP_PSR_ENABLE bit in EDP_PSR_CTL(dev). So this function was actually
returning the active/inactive state, which is different from the
enabled/disabled meaning, and carried the risk of false negatives.
In any case this check was dangerous with DRRS, since DRRS could try to get
enabled before PSR gets there. So let's just remove it for now.
A proper synchronization mechanism must be implemented later probably
using pipe config.
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Single frame update is a PSR feature available on BDW that allows the
Source to send the Sink only one frame and get it updated. It is usually
useful when page flipping. However, with our frontbuffer tracking, where we
force PSR exit on flips, we don't need this feature.
Also, after it got added here, many workarounds were added to the documentation
to mask some bits when using single frame update. So the safest thing
is to just stop using it.
v2: Rebase after removing the 'skip aux' one and fixing a typo in the commit message.
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
OEMs can specify over VBT whether full_link might be always enabled, i.e.
only_standby.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Let's use VBT + 1 now that we parse it.
v2: fix subject
v3: rebase over intel_psr and without counting on previous fix
Cc: Arthur Runyan <arthur.j.runyan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The PSR (aka SRD) block is defined in the VBT and is currently being used,
mainly (or at least) to configure the number of idle_frames required to get
back to PSR entry.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Because the plane registers are different in Skylake we need to adapt
the MMIO code as well.
v2: Don't introduce yet another vfunc when the direction is to
consolidate the plane updates to use the same code path (Daniel)
v3:
- Use enum pipe instead of int (Ville)
- Also update PLANE_STRIDE when the tiling has changed (Ville)
- Put intel_mark_page_flip_active() in the shared code (Damien)
v4:
- Remove unused variable
v5:
- Fix whitespace Vs tabs (Ville)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Put the DPLL0 state readout in skylake_get_ddi_pll(), closer to
where the PLL assignment readout is done rather than in the frequency
readout function. (Daniel)
v3: Remove stray new line (Damien)
Add Paulo's r-b tag for v1
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> (v1)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On pre-HSW we have two encoders per digital port: one HDMI, one DP.
However they are the same physical port in hardware and we can't enable
both at the same time. Reject the modeset if the user attempts this.
So far we've been saved by the fact that we never see both HDMI and DP
connectors as connected. But if the user decides to force a mode anyway,
all kinds of funny stuff might happen.
Unfortunately we don't seem to have any way to inform userspace that
such configurations are invalid except by returning an error from
setcrtc. possible_clones only covers real cloning situations, and
looking at the connector names doesn't work either since we don't
always register both connectors for the same port. I suppose the
only way to fix that would be to expose only a single encoder per
digital port like we do on HSW+ but that would be a fairly large
undertaking for little gain.
kms_setmode hits this since it forces modes on non-connected VGA and
HDMI connectors. Previously it just resulted in weirdness such as
failed link training. With this patch it will now get an error back
from the kernel and will die with an assert since it thinks that the
configuration should be fine.
v2: Deal with INTEL_OUTPUT_UNKNOWN (Paulo)
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm, igt/gem_reset_stats can trigger the recently added WARN on
left-over PM_IIR bits in gen6_enable_rps_interrupts(). There are two
reasons for this:
1. we call intel_enable_gt_powersave() without a preceding
intel_disable_gt_powersave()
2. gen6_disable_rps_interrupts() doesn't mask interrupts in PM_IMR
1. means RPS interrupts will remain enabled and can be serviced during
the HW initialization after a GPU reset. 2. means even if we called
gen6_disable_rps_interrupts() any new RPS interrupt during RPS
initialization would still propagate to PM_IIR too early (though
wouldn't be serviced).
This patch solves the 2. issue by also masking interrupts in PM_IMR; the
following patch fixes the 1. issue, getting rid of the WARN. This also makes
intel_enable_gt_powersave() and intel_disable_gt_powersave() more
symmetric.
Since gen6_disable_rps_interrupts() is called during driver loading with
i915 interrupts disabled, add a new version of gen6_disable_pm_irq() that
doesn't WARN for this.
Also while at it, get the irq_lock around the whole PM_IMR/IER/IIR
programming sequence and make sure that any queued PM_IIR bit is also
cleared.
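A rough sketch of the resulting disable path (register writes condensed; the
non-WARNing IMR variant this patch adds is implied by the gen6_disable_pm_irq()
call):

static void gen6_disable_rps_interrupts_sketch(struct drm_i915_private *dev_priv)
{
	spin_lock_irq(&dev_priv->irq_lock);

	/* mask in PM_IMR and disable in PM_IER */
	gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events);
	I915_WRITE(GEN6_PMIER,
		   I915_READ(GEN6_PMIER) & ~dev_priv->pm_rps_events);

	/* and make sure nothing is left queued in PM_IIR either */
	I915_WRITE(GEN6_PMIIR, dev_priv->pm_rps_events);

	spin_unlock_irq(&dev_priv->irq_lock);
}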
The WARN was caught by PRTS after I sent my previous RPS sanitizing
patchset and I could easily reproduce it on HSW. To actually fix it we
also need the next patch.
Reported-by: He, Shuang <shuang.he@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We don't really synchronously turn them off from debugfs. We try to
avoid hitting them too badly by waiting one vblank, but apparently the
irq handler can still race through that gap.
Since this isn't really all that important for testcases, only for
debugging CRC issues, let's tune it down to a debug message.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82602
Cc: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Dynamic context pinning for LRCs introduced a leak in legacy mode.
Reinstate context unreference in i915_gem_free_request for legacy contexts.
The leak reported by i-g-t/drv_module_reload is fixed by this patch.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86507
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Update the forcewake ranges for the Render/Media/Common
power wells for Gen9.
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Zhe Wang <zhe1.wang@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Before testing if the panel VDD is enabled on eDP cancel any pending
disable worker. This makes sure the worker will be triggered with a
delay from the last time edp_panel_vdd_schedule_off() is called, not
the first time. This avoids unnecessary overhead.
https://bugs.freedesktop.org/show_bug.cgi?id=86201
v2: use cancel_delayed_work() instead of cancel_delayed_work_sync()
as the pps_mutexes will provide the required serialization with
edp_panel_vdd_work() while the sync variant may deadlock. Suggested
by Ville Syrjälä <ville.syrjala@linux.intel.com>.
Made commit message a bit clearer.
Signed-off-by: Egbert Eich <eich@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The crc code doesn't really handle anything that could drop the
register state (by design, so that we have less complexity), which
means userspace may only start crc capture once the pipe is fully set
up.
With an i-g-t patch this will be the case, but there's still the
problem that this results in obscure unclaimed register write
failures, which are a pain to debug.
So instead make sure we don't have the basic unclaimed register write
failure by grabbing runtime pm references. And reject completely
invalid requests with -EIO. This is still racy of course, but for a
test library we don't really care - if userspace shuts down the pipe
right afterwards the entire setup will be lost anyway.
v2: Put instead of get, spotted by Damien. Also explain the runtime pm
dance.
v3: There's really no need for rpm get/put since power_is_enabled only
checks software state (Damien).
References: https://bugs.freedesktop.org/show_bug.cgi?id=86092
Cc: Damien Lespiau <damien.lespiau@intel.com> (v2)
Tested-by: lu hua <huax.lu@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The GPU reset also resets the display on gen3/4. The g33 docs say we
should disable all planes before flipping the reset switch. Just
disable all the crtcs instead. That seems a nicer thing to do anyway.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On gen4 and earlier the GPU reset also resets the display, so we should
protect against concurrent modeset operations. Grab all the modeset locks
around the entire GPU reset dance, remembering first to dislodge any
pending page flip to make sure we don't deadlock. Any pageflip coming
in between these two steps should fail anyway due to reset_in_progress,
so this should be safe.
This fixes a lot of failed asserts in the modeset code when there's a
modeset racing with the reset. Naturally the asserts aren't happy when
the expected state has disappeared.
v2: Drop UMS checks, complete pending flips after the reset (Daniel)
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
g33 seems to sit somewhere between the 915/945/965 style and the
g4x style. The bits look like g4x, but we still need to do a full
reset including display.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
915/945 have the same reset registers as 965, so share the code.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On pre-ctg GPU reset also resets the display hardware. Force a mode
restore after the GPU reset, and also re-init clock gating.
v2: Use intel_modeset_init_hw() instead of intel_init_clock_gating()
in case more relevant stuff gets added there at some point
Restore interrupts after the reset as well
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On pre-ctg the reset bit directly controls the reset signal. We must
assert it for >=20usec and then deassert it. Bit 1 is a RO status bit
which should also go down when the reset is no longer asserted.
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>