Gen-12 display decompression operates on a Y-tiled compressed main
surface. The CCS is linear and carries 4 bits of metadata for each pair
of main surface cache lines, i.e. a CCS to main surface size ratio of
1:256. Gen-12 display decompression is incompatible with buffers
compressed by earlier GPUs, so make use of a new modifier to identify
gen-12 compression. Another notable change is that render decompression
is supported on all planes except the cursor, and on all pipes. Start by
adding render decompression support for the [A,X]BGR8888 pixel formats.
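For illustration, a minimal sketch of where the 1:256 ratio comes from
(the helper is hypothetical, not a driver function): 4 bits of CCS
describe one pair of 64 byte main surface cache lines, i.e. 4 bits per
1024 bits of main surface data.

  /* Hypothetical helper, for illustration only. */
  static unsigned int gen12_ccs_bytes(unsigned int main_surface_bytes)
  {
          /*
           * 4 bits (1/2 byte) of CCS per 2 * 64 byte main surface cache
           * lines; assumes main_surface_bytes is a multiple of 256.
           */
          return main_surface_bytes / 256;
  }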
v2: Fix checkpatch warnings (Lucas)
v3:
Rebase, disable color clear, styling changes and modify
intel_tile_width_bytes and intel_tile_height to handle linear CCS
v4:
- Use format block descriptors and the i915 specific func to get the
subsampling for each color plane.
- Use helpers to convert between CCS and main planes.
v5:
- Fix subsampling returned by intel_fb_plane_get_subsampling() for
the CCS plane of the first plane.
v6:
- Rebased on v2 of patch 4.
v7:
- Fix plane dimensions during FB check.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Nanley G Chery <nanley.g.chery@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com> (v6)
Link: https://patchwork.freedesktop.org/patch/msgid/20191221120543.22816-7-imre.deak@intel.com
Gen-12 has a new compression format, add a new modifier to indicate that.
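Roughly, the uAPI addition looks like the following (see drm_fourcc.h
for the authoritative definition and documentation):

  /*
   * Gen-12 render compression: Y-tiled main surface with a linear CCS
   * holding 4 bits of metadata per pair of main surface cache lines.
   */
  #define I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS fourcc_mod_code(INTEL, 6)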
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Nanley G Chery <nanley.g.chery@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Mika Kahola <mika.kahola@intel.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221120543.22816-6-imre.deak@intel.com
Using helpers instead of open coding the selection of a CCS plane for a
main plane makes the code cleaner and less error-prone, since the
location of the CCS plane differs based on the format (packed vs. YUV
semiplanar). The same applies to selecting an AUX plane, which can be
either a UV plane (for an uncompressed YUV semiplanar format) or a CCS
plane.
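For illustration, the mapping the helpers encapsulate looks roughly
like this (the function and predicate names are illustrative, not the
exact helpers being added):

  /* Illustrative only: main plane -> CCS plane index. */
  static int main_to_ccs_plane(const struct drm_framebuffer *fb, int main_plane)
  {
          /* YUV semiplanar: Y = 0, UV = 1, their CCS planes follow as 2 and 3 */
          if (is_yuv_semiplanar(fb))
                  return main_plane + 2;

          /* packed formats: main plane 0, CCS plane 1 */
          return main_plane + 1;
  }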
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Mika Kahola <mika.kahola@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221120543.22816-5-imre.deak@intel.com
intel_fill_fb_info() has grown quite large and wrapping the offset checks
into a separate function makes the loop a bit easier to follow.
v2: Skip the check for non-CCS planes. (Mika)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Mika Kahola <mika.kahola@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221120543.22816-4-imre.deak@intel.com
Easier to read if all the alignment changes are in one place and contained
within a function.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Mika Kahola <mika.kahola@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221120543.22816-3-imre.deak@intel.com
intel_tile_dims() computes the tile height from the tile size and
width, when there is already a function to do just that:
intel_tile_height().
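Roughly the resulting shape (a sketch, not the exact diff):

  static void intel_tile_dims(const struct drm_framebuffer *fb, int color_plane,
                              unsigned int *tile_width, unsigned int *tile_height)
  {
          *tile_width = intel_tile_width_bytes(fb, color_plane);
          *tile_height = intel_tile_height(fb, color_plane);
  }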
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Mika Kahola <mika.kahola@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221120543.22816-2-imre.deak@intel.com
The power domain covers VDSC for the DSI transcoders on ICL, and,
pedantically speaking, it is about the pipe, not the transcoder, on TGL.
Reported-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219133845.9333-1-jani.nikula@intel.com
The GT is becoming more and more a stand-alone subsystem within i915,
and it's fair to assign it its own debugfs directory.
The rc6, rps and llc debugfs files are GT related, so move them into
the gt debugfs directory.
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222144046.1674865-3-chris@chris-wilson.co.uk
Since intel_gt_resume() is always immediately preceded by init_hw, pull
that call into intel_gt_resume(), where we already have the rpm and fw
held.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222144046.1674865-1-chris@chris-wilson.co.uk
Begin pulling the GT setup underneath a single GT umbrella; let intel_gt
take ownership of its engines! As hinted, the complication is the
lifetime of the probed engine versus the active lifetime of the GT
backends. We need to detect the engine layout early and keep it until
the end so that we can sanitize state on takeover and release.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222120752.1368352-1-chris@chris-wilson.co.uk
As the GEM global context setup is now independent of the GT state
(although GT does currently still depend upon the global
i915->kernel_context), we can move its init earlier, leaving the gt init
ready to be extracted.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221200109.1202310-1-chris@chris-wilson.co.uk
Since we may retire timelines from secondary workers,
intel_gt_retire_requests() is not always a reliable indicator that all
pending retirements are complete. If we do detect that secondary
workers are in progress, have intel_gt_wait_for_idle() repeat the
retirement check.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221180204.1201217-1-chris@chris-wilson.co.uk
Allocate only an internal intel_context for the kernel_context, forgoing
a global GEM context for internal use as we only require a separate
address space (for our own protection).
Having now weaned the GT from requiring ce->gem_context, we can stop
referencing it entirely. This also means we no longer have to create
random and unnecessary GEM contexts for internal use.
GEM contexts are now entirely for tracking GEM clients, and
intel_context is the execution environment on the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221160324.1073045-1-chris@chris-wilson.co.uk
We have several places where we want to allocate a pristine
crtc state. Some of those currently call intel_crtc_state_reset()
to properly initialize all the non-zero defaults in the state, but
some places do not. Let's add intel_crtc_state_alloc() to do both
the alloc and the reset, and call that everywhere we need a fresh
crtc state.
v2: s/kzalloc/kmalloc/ since we memset() anyway (José)
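For reference, a minimal sketch of the helper (error handling and exact
details may differ):

  static struct intel_crtc_state *intel_crtc_state_alloc(struct intel_crtc *crtc)
  {
          struct intel_crtc_state *crtc_state;

          /* plain kmalloc: intel_crtc_state_reset() memsets the state anyway */
          crtc_state = kmalloc(sizeof(*crtc_state), GFP_KERNEL);
          if (crtc_state)
                  intel_crtc_state_reset(crtc_state, crtc);

          return crtc_state;
  }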
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219111430.17527-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Decide whether or not we need to disable arbitration within user batches
based on our intel_engine_has_preemption() flag.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213151331.1788371-1-chris@chris-wilson.co.uk
Instead of rummaging through the intel_context to peek at the GEM
context in the middle of request submission to decide whether to use
semaphores, store that information on the intel_context itself.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191220101230.256839-2-chris@chris-wilson.co.uk
Keep the intel_context as the primary state for i915_request, with the
GEM context as a backpointer from the low level state for the rarer
cases where we need client information. Our goal is to remove such
references to clients from the backend, leaving the HW submission
self-contained and agnostic to client interfaces.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191220101230.256839-1-chris@chris-wilson.co.uk
Since we added the context_alloc callback to intel_context_ops, we can
safely install a custom hook for the deferred virtual context allocation.
This means that all new contexts behave the same upon creation,
simplifying later code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219232932.189197-1-chris@chris-wilson.co.uk
Avoid adding the retire workers to the virtual engine so that we don't
end up in the unenviable situation of trying to free the virtual engine
while its worker remains active.
Fixes: dc93c9b693 ("drm/i915/gt: Schedule request retirement when signaler idles")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/867
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219221344.161523-1-chris@chris-wilson.co.uk
All the other display related tracepoints use intel_ instead
of i915_ as the prefix. Do the same for the pipe update
tracepoints so I don't always have to spend time looking for
them.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213133453.22152-6-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
I fumbled the conflict resolution a bit when applying the
fbc vblank wait w/a. Because of that we now call intel_fbc_pre_update()
twice. Remove the second redundant call.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213133453.22152-2-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
icl and tgl are still affected by the modulo 4 PLANE_OFFSET.y
underrun issue. Reject such configurations on all gen9+ platforms.
Can be reproduced easily with the following sequence of
hardware poking:
  while {
          write FBC_CTL.enable=1
          wait for vblank
          write PLANE_OFFSET .x=0 .y=32
          write PLANE_SURF
          wait for vblank
          # if PLANE_OFFSET.y is multiple of 4 the underrun won't happen
          write PLANE_OFFSET .x=0 .y=31
          write PLANE_SURF
          wait for vblank
          # extra vblank wait is required here presumably
          # to get FBC into the proper state
          wait for vblank
          write FBC_CTL.enable=0
          # underrun happens some time after FBC disable
          wait for vblank
  }
Both 8888 and 565 pixel formats and all tiling formats seem affected.
Reproduced on KBL/GLK/ICL/TGL. BDW is confirmed not affected.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/792
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213133453.22152-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
Currently the pointers 'to' and 'from' are not initialized and may
contain garbage values. This causes uninitialized pointer reads in the
call to intel_frontbuffer_track() and in the later checks for 'to' and
'from' being NULL. Fix this by ensuring 'to' and 'from' are initialized
to NULL.
Addresses-Coverity: ("Uninitialised pointer read")
Fixes: da42104f58 ("drm/i915: Hold reference to intel_frontbuffer as we track activity")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219190916.24693-1-colin.king@canonical.com
When we park RPS, we set the GPU to run at its minimum 'idle'
frequency. However, as the GPU is idle, we also disable the worker and
the RPS interrupts, so changing the RPS thresholds has no effect; it
just incurs extra register writes to restore them when we unpark. So on
parking, leave the thresholds set for the current power level, where we
expect them to still be valid when we restart.
References: https://gitlab.freedesktop.org/drm/intel/issues/848
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218210545.3975426-2-chris@chris-wilson.co.uk
Use non-forcewaked writes to queue RPS register changes that take
effect when the write buffer is flushed, rather than waking the mmio
device for immediate effect. This lets us avoid a slow forcewake dance
upon unparking and at our irregular updates.
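For illustration (the wrapper is hypothetical), the _fw accessors post
the write without grabbing forcewake first:

  /* Illustrative only: update the RPS thresholds without the forcewake dance. */
  static void set_thresholds(struct intel_uncore *uncore, u32 up, u32 down)
  {
          intel_uncore_write_fw(uncore, GEN6_RP_UP_THRESHOLD, up);
          intel_uncore_write_fw(uncore, GEN6_RP_DOWN_THRESHOLD, down);
  }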
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218210545.3975426-1-chris@chris-wilson.co.uk
Knowing the round trip time of an engine is useful for tracking the
health of the system as well as providing a metric for the baseline
responsiveness of the engine. We can use the latter metric for
automatically tuning our waits in selftests and when idling so we don't
confuse a slower system with a dead one.
Upon idling the engine, we send one last pulse to switch the context
away from precious user state to the volatile kernel context. We know
the engine is idle at this point, and the pulse is non-preemptible, so
this provides us with a good measurement of the round trip time. It also
provides us with faster engine parking for ringbuffer submission, which
is a welcome bonus (e.g. softer-rc6).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219105043.4169050-1-chris@chris-wilson.co.uk
Link: https://patchwork.freedesktop.org/patch/msgid/20191219124353.8607-2-chris@chris-wilson.co.uk
Very similar to commit 4f88f8747f ("drm/i915/gt: Schedule request
retirement when timeline idles"), but this time instead of coupling into
the execlists CS event interrupt, we couple into the breadcrumb
interrupt and queue a timeline's retirement when the last signaler is
completed. This should allow us to more rapidly park ringbuffer
submission, and so help reduce power consumption on older systems.
v2: Fixup intel_engine_add_retire() to handle concurrent callers
References: 4f88f8747f ("drm/i915/gt: Schedule request retirement when timeline idles")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219124353.8607-1-chris@chris-wilson.co.uk
Fix several issues with DSC power domains that did not take DSI
transcoders into account:
- On TGL+ we need to use PW2 for DSC on pipe A, not transcoder A. There
is no longer an eDP transcoder, but there are two DSI transcoders
which may be connected to pipe A.
- On TGL+ we need to use the pipe, not transcoder, power domains for DSC
on pipes other than A. Again, there are DSI transcoders.
- On ICL we need to use PW2 for DSC also for DSI transcoders, not just
for the eDP transcoder.
Using is_pipe_dsc() also adds the warning about ICL pipe A DSC, which
does not exist.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212134728.18432-1-jani.nikula@intel.com
The check for cpu_transcoder != TRANSCODER_A is more magic than
necessary, and potentially misleading. Before TGL, DSC is supported on
pipe A if, and only if, it's used with eDP or DSI transcoders. No
functional changes.
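A sketch of the intended check (illustrative name, not the exact code):
before TGL only the eDP and DSI transcoders have a DSC engine usable
together with pipe A.

  /* Sketch only: does this state use a transcoder-specific DSC engine? */
  static bool uses_edp_dsi_dsc(const struct intel_crtc_state *crtc_state)
  {
          enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;

          return cpu_transcoder == TRANSCODER_EDP ||
                 cpu_transcoder == TRANSCODER_DSI_0 ||
                 cpu_transcoder == TRANSCODER_DSI_1;
  }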
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/f00e9d55ce20b256177222588780c660aa587cc3.1576081155.git.jani.nikula@intel.com
ICL eDP and DSI transcoders have a DSC engine separate from the
pipe. Abstract the register selection and fix it for ICL.
Add a warning for pipe A DSC on ICL; it does not exist.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/01bcddcdf397b1c8eb859ed18ebe023fb64383d9.1576081155.git.jani.nikula@intel.com
When doing our global park, we like to be a good citizen and shrink our
slab caches (of which we have quite a few now), but each
kmem_cache_shrink() incurs a stop_machine() and so ends up being quite
expensive, causing machine-wide stalls. While ideally we would like to
throw away unused pages in our slab caches whenever it appears that we
are idling, doing so will require a much cheaper mechanism. In the
meantime, use a delayed worker to impose a rate limit, so that we must
have been idle for more than 2 seconds before we start shrinking.
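Illustrative shape of the rate limit (names hypothetical, not the
driver code):

  static struct kmem_cache *slab_requests;        /* one example cache */

  static void shrink_caches(struct work_struct *wrk)
  {
          kmem_cache_shrink(slab_requests);       /* repeat for each cache we own */
  }

  static DECLARE_DELAYED_WORK(shrink_work, shrink_caches);

  static void on_park(void)
  {
          /* only shrink if we stay parked for a couple of seconds */
          mod_delayed_work(system_wq, &shrink_work,
                           round_jiffies_up_relative(2 * HZ));
  }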
References: https://gitlab.freedesktop.org/drm/intel/issues/848
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218094057.3510459-1-chris@chris-wilson.co.uk
Only signal the breadcrumbs from inside the irq_work, simplifying our
interface and calling conventions. The micro-optimisation here is that
by always using the irq_work interface, we know we are always inside an
irq-off critical section for the breadcrumb signaling and can elide the
save/restore of the irq flags.
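A generic illustration of that point (not the driver code): irq_work
callbacks run in hard irq context with interrupts already disabled, so
a plain spin_lock() suffices where a process-context caller would need
spin_lock_irqsave().

  #include <linux/irq_work.h>
  #include <linux/kernel.h>
  #include <linux/spinlock.h>

  struct signaler {
          spinlock_t lock;
          struct irq_work work;
  };

  static void signal_irq_work(struct irq_work *work)
  {
          struct signaler *s = container_of(work, struct signaler, work);

          spin_lock(&s->lock);    /* irqs already off, no irqsave needed */
          /* ... walk and signal the completed breadcrumbs ... */
          spin_unlock(&s->lock);
  }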
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217095642.3124521-7-chris@chris-wilson.co.uk
Avoid rc6 counter going backward in close to 0% RC6 scenarios like:
15.005477996 114,246,613 ns i915/rc6-residency/
16.005876662 667,657 ns i915/rc6-residency/
17.006131417 7,286 ns i915/rc6-residency/
18.006615031 18,446,744,073,708,914,688 ns i915/rc6-residency/
19.007158361 18,446,744,073,709,447,168 ns i915/rc6-residency/
20.007806498 0 ns i915/rc6-residency/
21.008227495 1,440,403 ns i915/rc6-residency/
There are two aspects to this fix.
First is not assuming an rc6 value of zero means the GT is asleep,
since it can also mean the GPU is fully busy, and we do not want to
enter the estimation path in that case.
Second is ensuring monotonicity on the estimation path itself. I
suspect what is happening is that with extremely rapid park/unpark
cycles we get no updates on the real rc6 and therefore have to be
careful not to unconditionally trust the last known real rc6 when
creating a new estimate.
v2:
* Simplify the logic by tracking the last reported value instead of the
  estimate.
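A sketch of the monotonicity guarantee (field and function names
illustrative only):

  struct rc6_estimate {
          u64 last_reported;
  };

  static u64 report_rc6(struct rc6_estimate *est, u64 val)
  {
          /* never let the reported residency go backwards */
          if (val < est->last_reported)
                  val = est->last_reported;
          est->last_reported = val;

          return val;
  }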
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: 16ffe73c18 ("drm/i915/pmu: Use GT parked for estimating RC6 while asleep")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> # v1
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217142057.1000-1-tvrtko.ursulin@linux.intel.com
Move all of haswell_crtc_disable() into the encoder
.post_disable() hooks. Now we're left with just
calling the .disable() and .post_disable() hooks
back to back.
I chose to move the code into the .post_disable() hook instead
of the .disable() hook as most of the sequence is currently
implemented in the .post_disable() hook.
We should collapse it all down to just one hook and then the
encoders can drive the modeset sequence fully. But that may
need some further refactoring as we currently call the
ddi .post_disable() hook from mst code and we can't just
replace that with a call to the ddi .disable() hook.
Should also follow up with similar treatment for the enable
sequence but let's start here where it's easier.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-5-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
To make life easier in the future let's pass the old crtc state
to intel_crtc_vblank_off() just like we already do for its
counterpart intel_crtc_vblank_on().
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-4-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
To make life easier in the future let's pass the old crtc state
to skylake_scaler_disable(), just like we already do for its ancestor
ironlake_pfit_disable().
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-3-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
HSW+ platforms call encoder .post_disable() and .post_pll_disable()
back to back. And since we don't even disable the PLL in between
let's just move everything into .post_disable().
intel_dp_mst does forward the .post_disable() call to intel_ddi at
the very end of its own .post_disable() hook, so this time I shouldn't
even break MST by accident.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-2-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Remove the pointless vfunc detour for hsw_fdi_link_train()
and just call it directly. Also pass the encoder in so we
can nuke the silly encoder loop within.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
For the sake of symmetry with the crtc stuff let's add
a helper to reset the plane state to sane default values.
For the moment this only gets called from the plane init.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-5-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
We have a few places where we want to reset a crtc state to its
default values. Let's add a helper for that. We'll need the new
__drm_atomic_helper_crtc_state_reset() helper for this to allow
us to just reset the state itself without clobbering the
crtc->state pointer.
And while at it, let's zero out the whole thing, except for a few
choice members which we'll mark as "invalid". And thanks to this we can
now nuke intel_crtc_init_scalers().
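A minimal sketch of the resulting helper (the exact set of "invalid"
members shown is illustrative):

  static void intel_crtc_state_reset(struct intel_crtc_state *crtc_state,
                                     struct intel_crtc *crtc)
  {
          memset(crtc_state, 0, sizeof(*crtc_state));

          /* re-init the base state without clobbering crtc->state */
          __drm_atomic_helper_crtc_state_reset(&crtc_state->base, &crtc->base);

          /* a few choice members start out "invalid" rather than zero */
          crtc_state->cpu_transcoder = INVALID_TRANSCODER;
          crtc_state->scaler_state.scaler_id = -1;
  }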
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-4-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
We already have alloc/free helpers for planes, add the same for
crtcs. The main benefit is we get to move all the annoying state
initialization out of the main crtc_init() flow.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-3-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>