Forgo the struct_mutex serialisation for i915_active, and interpose its
own mutex handling for active/retire.
This is a multi-layered sleight-of-hand. First, we had to ensure that no
active/retire callbacks accidentally inverted the mutex ordering rules,
nor assumed that they were themselves serialised by struct_mutex. More
challenging, though, are the rules for updating elements of the active
rbtree. Instead of the whole i915_active being serialised by
struct_mutex, allocations/rotations of the tree are serialised by the
i915_active.mutex and individual nodes are serialised by the caller
using the i915_timeline.mutex (we need to use nested spinlocks to
interact with the dma_fence callback lists).
The pain point here is that instead of a single mutex around execbuf, we
now have to take a mutex for each active tracker (one for each vma, context,
etc) and a couple of spinlocks for each fence update. The improvement in
fine-grained locking, allowing for multiple concurrent clients
(eventually!) should be worth it in typical loads.
v2: Add some comments that barely elucidate anything :(
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
As we need to use a mutex to serialise i915_active activation
(because we want to allow the callback to sleep), we need to push the
i915_active.retire into a worker callback in case we need to retire
from an atomic context.
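In rough outline (the field and helper names here are placeholders, not
the exact i915 API), the retire step becomes a queued work item that is
free to take the mutex:

  static void active_retire_work(struct work_struct *wrk)
  {
          struct i915_active *ref = container_of(wrk, typeof(*ref), work);

          mutex_lock(&ref->mutex);
          /* run the retire callbacks, which are now allowed to sleep */
          mutex_unlock(&ref->mutex);
  }

  /* from atomic context, e.g. a dma_fence callback: */
  queue_work(system_unbound_wq, &ref->work);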
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-5-chris@chris-wilson.co.uk
Replace the struct_mutex requirement for pinning the i915_vma with the
local vm->mutex instead. Note that the vm->mutex is tainted by the
shrinker (we require unbinding from inside fs-reclaim) and so we cannot
allocate while holding that mutex. Instead we have to preallocate
workers to do the allocations and apply the PTE updates after we have
reserved their slot in the drm_mm (using fences to order the PTE writes
with the GPU work and with later unbind).
In adding the asynchronous vma binding, one subtle requirement is to
avoid coupling the binding fence into the backing object->resv. That is,
the asynchronous binding only applies to the vma timeline itself and not
to the pages as that is a more global timeline (the binding of one vma
does not need to be ordered with another vma, nor does the implicit GEM
fencing depend on a vma, only on writes to the backing store). Keeping
the vma binding distinct from the backing store timelines is verified by
a number of async gem_exec_fence and gem_exec_schedule tests. The way we
do this is quite simple: we keep the fence for the vma binding separate
and only wait on it as required, and never add it to the obj->resv
itself.
Another consequence of reducing the locking around the vma is that the
destruction of the vma is no longer globally serialised by struct_mutex.
A natural solution would be to add a kref to i915_vma, but that requires
decoupling the reference cycles, possibly by introducing a new
i915_mm_pages object that is owned by both obj->mm and vma->pages.
However, we have not taken that route due to the overshadowing lmem/ttm
discussions, and instead play a series of complicated games with
trylocks to (hopefully) ensure that only one destruction path is called!
v2: Add some commentary, and some helpers to reduce patch churn.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
Since we cannot allocate underneath the vm->mutex (it is used in the
direct-reclaim paths), we need to shift the allocations off into a
mutexless worker with fence recursion prevention. To know when we need
this protection, we mark up the address spaces that do allocate before
insertion. In the future, we may wish to extend the async bind scheme to
more than just allocations.
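As a minimal sketch (the flag name is taken from the v2 note below; the
helper itself is illustrative), the decision then reduces to:

  static bool vma_bind_is_async(const struct i915_vma *vma, u32 bind_flags)
  {
          /* vm->bind_async_flags is set once at vm creation for address
           * spaces whose insertion path needs to allocate, e.g.
           *      ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND;
           */
          return bind_flags & vma->vm->bind_async_flags;
  }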
v2: s/vm->bind_alloc/vm->bind_async_flags/
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-3-chris@chris-wilson.co.uk
The premise here is simply to avoid having to acquire the vm->mutex
inside vma create/destroy to update the vm->unbound_lists, heading off
some nasty lock recursions later.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-2-chris@chris-wilson.co.uk
The L3 cache remapping is stored as u32 elements, and we should ensure
that the user only supplies complete slice information (whole u32 elements).
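A minimal sketch of the intended check, assuming args->size is the byte
length supplied by userspace:

  if (!IS_ALIGNED(args->size, sizeof(u32)))
          return -EINVAL;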
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004105958.1741-1-chris@chris-wilson.co.uk
On platforms with gen10+ display, the driver must set the enable bit of
the AUDIO_PIN_BUF_CTL register before transactions with the HDA controller
can proceed. Add setting this bit to the audio power up sequence.
Failing to do this resulted in errors during display audio codec probe,
and failures during resume from suspend.
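The added step is essentially a read-modify-write of that register
during the audio power up sequence; roughly (the macro names below are
illustrative):

  u32 val = I915_READ(AUDIO_PIN_BUF_CTL);
  I915_WRITE(AUDIO_PIN_BUF_CTL, val | AUDIO_PIN_BUF_ENABLE);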
Note: We may also need to disable the bit afterwards, but there are
still unresolved issues with that.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111214
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003085531.30990-1-kai.vehmanen@linux.intel.com
Add aux_busy_last_status to intel_dp. Don't bother with initializing to
all ones; the only difference is potentially missing logging for one
error case if the readout is all zeros.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002144138.7917-1-jani.nikula@intel.com
The Thunderbolt PLL divider values on TGL differ from the ICL ones;
update the PLL parameter calculation function accordingly.
Bspec: 49204
v2:
- Remove unused refclk config. (José)
Cc: Jose Souza <jose.souza@intel.com>
Cc: Clinton A Taylor <clinton.a.taylor@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Mika Westerberg <mika.westerberg@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jose Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002204108.32242-1-imre.deak@intel.com
If the execlists lite-restore is based on the common GEM context tag
rather than the per-intel_context LRCA, then a context switch between
two intel_contexts on the same engine derived from the same GEM context
will perform a lite-restore instead of a full context switch. We can
exploit this by poisoning the ringbuffer of the first context and trying
to trick the HW into performing a simple RING_TAIL update (i.e. a
lite-restore).
v2: Also check what happens if we preempt ce[0] with ce[1] (both instances
on the same engine from the same parent context) [Tvrtko]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002183459.26614-1-chris@chris-wilson.co.uk
All the MG registers are based on the tc_port, not the port, so
MG_PHY_PORT_LN() was subtracting PORT_C from the port, which is very
fragile.
So replace port with tc_port in all the MG register macros and users,
like we have for DKL.
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001193729.123736-1-jose.souza@intel.com
For selftests, we desire repeatability and so prefer using a prng with
known seed over true randomness. Extract random_offset() as a selftest
utility that can take the prng state.
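A rough shape for such a helper, using the kernel's seedable PRNG state
so results are reproducible from a fixed seed (the signature and
rounding here are illustrative, not the final utility):

  static u64 random_offset(struct rnd_state *prng,
                           u64 start, u64 end, u64 len, u64 align)
  {
          u64 range = (end - start - len) / align;

          /* a reproducible, aligned offset within [start, end - len] */
          return start + (prandom_u32_state(prng) % (range + 1)) * align;
  }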
Suggested-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002122430.23205-1-chris@chris-wilson.co.uk
I forgot to update the g4x sprite scaling stride check when GTT
remapping was introduced. The stride of the original framebuffer
is irrelevant when remapping is used and instead we want to check
the stride of the remapped view.
Also drop the duplicate width_bytes check. We already check that
a few lines earlier.
Fixes: df79cf4419 ("drm/i915: Store the final plane stride in plane_state")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190930183045.662-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Drop the tv_mode NULL check since intel_tv_mode_find() never
actually returns NULL, and flip the condition around so that
the MODE_OK case is at the end, which is customary for all
the other .mode_valid() implementations.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001154629.11063-2-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
When adding the max plane size checks to the .mode_valid() hooks
I naturally forgot about MST. Take care of that one as well.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Sean Paul <sean@poorly.run>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: 2d20411e25 ("drm/i915: Don't advertise modes that exceed the max plane size")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001154629.11063-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Split out the code related to the vga client and vgaarb, currently
scattered all over the place, into new intel_vga.[ch]. No functional
changes.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001152506.7854-1-jani.nikula@intel.com
Unwedging the GPU requires a successful GPU reset before we restore the
default submission, or else we may see residual context switch events
that we were not expecting.
v2: Pull in the special-case reset_clobbers_display, and explain why it
should be safe in the context of unwedging.
v3: Just forget all about resets before unwedging if it will clobber the
display; risk it all.
Reported-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190927160335.10622-1-chris@chris-wilson.co.uk
We currently test context switching on each engine as a basic stress
test (just verifying that nothing explodes if we execute 2 requests from
different contexts sequentially). What we have not tested is what
happens if we try to do so on all available engines simultaneously,
putting our SW and the HW under maximal stress.
v2: Clone the set of engines from the first context into the secondary
contexts.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190930144919.27992-1-chris@chris-wilson.co.uk
On systems that have no runtime-pm, we mark the wakeref as being -1. We
therefore cannot use that value for the mock-gt indicator, so opt for
-ENODEV instead. The wakeref should never be an error value -- one
hopes!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-2-chris@chris-wilson.co.uk
As we execute GPU resets on a per-gt basis, and use the intel_gt as the
primary for all other reset functions, also use it for the has-reset?
predicates. Gradually simplifying the churn of pointers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-1-chris@chris-wilson.co.uk
Now that TC support has been added, initialize the DDIs.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Acked-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-4-jose.souza@intel.com
Link training is failing when running the link at 2.7GHz and 1.62GHz
while following the BSpec PLL algorithm.
Comparing the calculated values against the ones from the reference
table, it looks like MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO should not
always be set to 5. For DP ports the ICL MG PLL algorithm sets it to 10
or 5 based on the div2 value, which matches the DKL hardcoded table.
So implement it this way, as it has proven to work on HW, and leave a
comment so we know why it does not match BSpec.
v4:
Using the same is_dp check as ICL; needs testing on HDMI over a TC port
Issue reported on BSpec 49204.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-3-jose.souza@intel.com
Added DKL PHY sequences and helper functions to program voltage
swing, clock gating and DP mode.
It is not written in the DP enabling sequence, but "PHY Clockgating
programming" states that clock gating should be enabled after the
link training; however, doing so causes all the following trainings
to fail, so it is not enabled for now.
v2:
Setting the right HIP_INDEX_REG bits (José)
v3:
Adding the meaning of each column of tgl_dkl_phy_ddi_translations
Adding an if gen >= 12 check in intel_ddi_hdmi_level() and
intel_ddi_pre_enable_hdmi() instead of reusing part of the gen >= 11 if
v4:
Moved the DP_MODE lane programming to another patch as ICL also
needed it
Sharing icl_phy_set_clock_gating() and icl_program_mg_dp_mode() with
TGL, as the bits and programming are now almost identical to ICL
BSpec: 49292
BSpec: 49190
Cc: Imre Deak <imre.deak@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-2-jose.souza@intel.com
BSpec was updated (r146548) with a new MG_DP_MODE programming table,
now taking into consideration the pin assignment and allowing us to
optimize power by shutting down available but not needed lanes.
It was tested on ICL and TGL, with adaptors that use pin assignments
C and B, reversing the connector and going through different modes to
test the shutdown of the lanes that are not needed.
v5:
Using crtc_state->lane_count instead of dp.lane_count
BSpec: 21735
BSpec: 49292
Cc: Imre Deak <imre.deak@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-1-jose.souza@intel.com
We have a new version of DMC for ICL - v1.09.
This version adds the Half Refresh Rate capability
into DMC.
Cc: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925201250.18136-1-daniele.ceraolospurio@intel.com
The HuC FW has silently switched to encoding the version the same way as
the GuC FW does, i.e. major.minor.patch instead of just major.minor. All
the current blobs follow the new scheme, but since minor and patch are
both zero there is no difference in the end results and we happily load
them. New binaries, however, will have non-zero values in there, so we
need to make sure to parse them correctly.
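Conceptually the parser just needs to pick up the third field as well;
a hedged sketch (the bit layout shown is an assumption, not the
documented CSS header layout):

  major = FIELD_GET(GENMASK(23, 16), css->sw_version);
  minor = FIELD_GET(GENMASK(15, 8), css->sw_version);
  patch = FIELD_GET(GENMASK(7, 0), css->sw_version);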
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Acked-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925222121.4000-1-daniele.ceraolospurio@intel.com
We can use it in i915 for updating parts of unmasked registers from
within a batch. We're also adding Gen8+ versions of CS_GPR registers
(aka MI_MATH_REG in the coprocessor).
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926100635.9416-4-michal.winiarski@intel.com
Insert structure member names into their descriptions to follow
kernel-doc format.
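For reference, the kernel-doc form being moved to looks like this (a
generic example, not one of the structs touched here):

  /**
   * struct example_state - short description of the struct
   * @enabled: the member name now leads its description
   * @count: number of items currently tracked
   */
  struct example_state {
          bool enabled;
          unsigned int count;
  };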
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Anna Karas <anna.karas@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926122158.13028-1-anna.karas@intel.com
The function intel_engine_breadcrumbs_irq() is always invoked from an interrupt
handler and for that reason it invokes (as an optimisation) only spin_lock()
for locking assuming that the interrupts are already disabled. The
function intel_engine_signal_breadcrumbs() is provided to disable
interrupts while the former function is invoked so that assumption is
also true for callers from preemptible context.
On PREEMPT_RT, local_irq_disable() really disables interrupts, and this
forbids invoking spin_lock(), which becomes a sleeping spinlock.
This is also problematic with `threadirqs' in conjunction with
irq_work. With force threading the interrupt handler, the handler is
invoked with disabled BH but with interrupts enabled. This is okay and
the lock itself is never acquired in IRQ context. This changes with
irq_work (signal_irq_work()) which _still_ invokes
intel_engine_breadcrumbs_irq() from IRQ context. Lockdep should see this
and complain.
Acquire the locks in intel_engine_breadcrumbs_irq() with the _irqsave()
suffix and let all callers invoke intel_engine_breadcrumbs_irq()
directly instead of using intel_engine_signal_breadcrumbs().
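The change amounts to using the irqsave/irqrestore variants inside the
function itself; in outline (the struct/field names are abbreviated
here and should be treated as illustrative):

  static void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
  {
          struct intel_breadcrumbs *b = &engine->breadcrumbs;
          unsigned long flags;

          spin_lock_irqsave(&b->irq_lock, flags);
          /* walk and signal the completed breadcrumbs */
          spin_unlock_irqrestore(&b->irq_lock, flags);
  }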
Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926105644.16703-2-bigeasy@linutronix.de
The lockdep_assert_irqs_disabled() check is needless. The previous
lockdep_assert_held() check ensures that the lock is held, and while
the lock is held lockdep also prints a warning if interrupts are not
disabled when they have to be.
These IRQ-off asserts trigger on PREEMPT_RT because the locks become
sleeping locks and do not really disable interrupts.
Remove lockdep_assert_irqs_disabled().
Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926105644.16703-3-bigeasy@linutronix.de
Default length value of MI_LOAD_REGISTER_REG is 1.
Also move it out of cmd-parser-only registers since we're going to use
it in i915.
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-3-chris@chris-wilson.co.uk
Some of our commands (MI_FLUSH_DW / PIPE_CONTROL) require a post-sync write
operation to be performed. Currently we're using dedicated VMA for
PIPE_CONTROL and global HWSP for MI_FLUSH_DW.
On execlists platforms, each of our contexts has an area that can be
used as scratch space. Let's use that instead.
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-2-chris@chris-wilson.co.uk
We're currently using scratch presence as a way of identifying that we
entered wedged state at driver initialization time.
Let's use a separate flag rather than rely on scratch.
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-1-chris@chris-wilson.co.uk
According to the bspec, GLK/CNL have a smaller small joiner RAM buffer
than ICL+. This feels like something that could easily change again on
future platforms, so let's just add a function to return the proper
per-platform buffer size. That may also slightly simplify the upcoming
bigjoiner enabling.
Since we have to change intel_dp_dsc_get_output_bpp()'s signature to
pass the dev_priv down for the platform check, let's take the
opportunity to also make that function static since it isn't used
outside the intel_dp file.
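The per-platform helper then boils down to something like the following
sketch (take the exact sizes from the Bspec pages cited below; the
values here are indicative only):

  static int small_joiner_ram_size_bits(struct drm_i915_private *i915)
  {
          if (INTEL_GEN(i915) >= 11)
                  return 7680 * 8;        /* ICL+ */
          else
                  return 6144 * 8;        /* GLK/CNL */
  }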
v2: Minor rebase on top of Maarten's changes.
Bspec: 20388
Bspec: 49259
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925234542.24289-1-matthew.d.roper@intel.com
The memory type values have changed in TGL, so we need to translate them
differently than ICL. While we're moving it, fix up the ICL translation
for LPDDR4.
BSpec: 53998
v2: Fix up ICL LPDDR4 entry (Ville); Drop unused values from TGL (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: James Ausmus <james.ausmus@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924222829.13142-1-james.ausmus@intel.com
TGL added 2 more TC ports that currently are not being handled by
icl_pll_to_ddi_clk_sel(), so add those.
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reported-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924210040.142075-6-jose.souza@intel.com
Extending ICL mg calculations to also support dkl calculations.
v3:
Fixing iref_trim calculation for 38400 refclock
BSpec: 49204
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924210040.142075-5-jose.souza@intel.com
The final save operation into pll_state of the calculations done will
be different for DKL PHY. Prepare for that by reindenting code so it's
easier to check for correctness. This one has no change in behavior.
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924210040.142075-4-jose.souza@intel.com
Add a new function to write to dkl phy pll registers. As per the
bspec, all these registers are read-modify-write.
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924210040.142075-2-jose.souza@intel.com
The disable function can be the same as for MG phy since the same
registers are used. The others are different as the registers changed;
also add an empty dkl_pll_write() to be implemented later.
v2:
Setting the right HIP_INDEX_REG bits (José)
v3:
Masking non-computed registers of mg_pll_tdc_coldst_bias
when getting hardware state
Sharing mg_pll_enable() with TGL
Reviewed-by: Imre Deak <imre.deak@intel.com>
Acked-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924210040.142075-1-jose.souza@intel.com
Having decided that we only care about the promotion predicate, we can
simplify gen12_csb_parse to simply check whether we need to jump to a
new queue.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925130845.17952-1-chris@chris-wilson.co.uk
We cannot switch between HQ and normal mode on GLK+, so only
add planes on platforms where it makes sense.
We could probably restrict it even more to only add planes when the
number of scaler users toggles between 1 and 2, but let's just leave
it for now.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920114235.22411-9-maarten.lankhorst@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
We had this as an optimization to not do a plane update, but we killed
it off because there are so many reasons we may have to do a plane
update or fastset that it's best to just assume everything changed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920114235.22411-6-maarten.lankhorst@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
There was an integer wraparound when mode_clock became too high,
and we didn't correct for the FEC overhead factor when dividing,
with the calculations breaking at HBR3.
As a result our calculated bpp was way too high, and the link width
limitation never came into effect.
Print out the resulting bpp calculations as a sanity check, just
in case we ever have to debug it later on again.
We also used the wrong factor for FEC. While bspec mentions 2.4%,
all the calculations use 1/0.972261, and the same ratio should be
applied to data M/N as well, so use it there when FEC is enabled.
This fixes the FIFO underrun we are seeing with FEC enabled.
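In other words, with FEC enabled the clock has to be scaled up by
1/0.972261; as an integer-math sketch (the helper name is illustrative):

  static u32 fec_adjusted_clock(u32 mode_clock)
  {
          /* scale by 1/0.972261 without floating point or wraparound */
          return div_u64(mul_u32_u32(mode_clock, 1000000), 972261);
  }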
Changes since v2:
- Handle fec_enable in intel_link_compute_m_n, so only data M/N is adjusted. (Ville)
- Fix initial hardware readout for FEC. (Ville)
Changes since v3:
- Remove bogus fec_to_mode_clock. (Ville)
Changes since v4:
- Use the correct register for icl. (Ville)
- Split hw readout to a separate patch.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: d9218c8f6c ("drm/i915/dp: Add helpers for Compressed BPP and Slice Count for DSC")
Cc: <stable@vger.kernel.org> # v5.0+
Cc: Manasi Navare <manasi.d.navare@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925082110.17439-1-maarten.lankhorst@linux.intel.com
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
If we disable rps, it appears Tigerlake is stable enough to run
multiple engines simultaneously in CI. As disabling rps should only
cause the execution to be slow, whereas many features depend on the
different engines, we would prefer to have the engines enabled while the
machine hangs are being debugged.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111714
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924173501.21956-1-chris@chris-wilson.co.uk
Currently the offset for the PIPE D cursor control register is missing
from i915_reg.h, due to which the cursor plane cannot be enabled for
Pipe D.
This also causes a kernel warning when a user requests to enable the
cursor plane on PIPE D on Gen 12 platforms.
This patch adds the CURSOR_CTL_D register to i915_reg.h.
v2: Rebase
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111640
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
[Lucas: remove extra blank line]
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1569310312-12313-1-git-send-email-ankit.k.nautiyal@intel.com
Before we submit the first context to HW, we need to construct a valid
image of the register state. This layout is defined by the HW and should
match the layout generated by HW when it saves the context image.
Asserting that this should be equivalent should help avoid any undefined
behaviour and verify that we haven't missed anything important!
Of course, having insisted that the initial register state within the
LRC should match that returned by HW, we need to ensure that it does.
v2: Drop the RELATIVE_MMIO flag from gen11, we ignore it for
constructing the lrc image.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924145950.3011-1-chris@chris-wilson.co.uk
Added the bandwidth calculation algorithm and checks, in a similar
way to what was done for ICL; some constants were corrected according
to BSpec 53998.
v2: Start using same icl_get_bw_info function to avoid
code duplication. Moved mpagesize to memory info
related structure as it is now dependent on memory type.
Fixed qi.t_bl field assignment.
v3: Removed mpagesize as unused. Duplicate code and redundant blankline
fixed.
v4: Changed ordering of IS_GEN checks as agreed. Minor commit
message fixes.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111600
Reviewed-by: James Ausmus <james.ausmus@intel.com>
Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920083754.5920-1-stanislav.lisovskiy@intel.com
Since dropping the set-to-gtt-domain in commit a679f58d05 ("drm/i915:
Flush pages on acquisition"), we no longer mark the contents as dirty on
a write fault. This has the issue of us then not marking the pages as
dirty on releasing the buffer, which means the contents are not written
out to the swap device (should we ever pick that buffer as a victim).
Notably, this is visible in the dumb buffer interface used for cursors.
Having updated the cursor contents via mmap, and swapped away, if the
shrinker should evict the old cursor, upon next reuse, the cursor would
be invisible.
E.g. echo 80 > /proc/sys/kernel/sysrq ; echo f > /proc/sysrq-trigger
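The essence of the fix is to restore that bookkeeping in the write-fault
path; schematically (the field name reflects i915 at the time and
should be treated as an assumption):

  /* in the GTT fault handler, once the pages are pinned for a write */
  if (write)
          obj->mm.dirty = true;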
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111541
Fixes: a679f58d05 ("drm/i915: Flush pages on acquisition")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: <stable@vger.kernel.org> # v5.2+
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920121821.7223-1-chris@chris-wilson.co.uk
Force bonded requests to run on distinct engines so that they cannot be
shuffled onto the same engine where timeslicing will reverse the order.
A bonded request will often wait on a semaphore signaled by its master,
creating an implicit dependency -- if we ignore that implicit dependency
and allow the bonded request to run on the same engine and before its
master, we will cause a GPU hang. [Whether it will hang the GPU is
debatable, we should keep on timeslicing and each timeslice should be
"accidentally" counted as forward progress, in which case it should run
but at one-half to one-third speed.]
We can prevent this inversion by restricting which engines we allow
ourselves to jump to upon preemption, i.e. baking in the arrangement
established at first execution. (We should also consider capturing the
implicit dependency using i915_sched_add_dependency(), but first we need
to think about the constraints that requires on the execution/retirement
ordering.)
Fixes: 8ee36e048c ("drm/i915/execlists: Minimalistic timeslicing")
References: ee1136908e ("drm/i915/execlists: Virtual engine bonding")
Testcase: igt/gem_exec_balancer/bonded-slice
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-3-chris@chris-wilson.co.uk
Due to the nature of preempt-to-busy the execlists active tracking and
the schedule queue may become temporarily desync'ed (between resubmission
to HW and its ack from HW). This means that we may have unwound a
request and passed it back to the virtual engine, but it is still
inflight on the HW and may even result in a GPU hang. If we detect that
GPU hang and try to reset, the hanging request->engine will no longer
match the current engine, which means that the request is not on the
execlists active list and we should not try to find an older incomplete
request. Given that we have deduced this must be a request on a virtual
engine, it is the single active request in the context and so must be
guilty (as the context is still inflight, it is prevented from being
executed on another engine as we process the reset).
Fixes: 22b7a426bb ("drm/i915/execlists: Preempt-to-busy")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-2-chris@chris-wilson.co.uk
As preempt-to-busy leaves the request on the HW as the resubmission is
processed, that request may complete in the background and even cause a
second virtual request to enter the queue. This second virtual request
breaks our "single request in the virtual pipeline" assumptions.
Furthermore, as the virtual request may be completed and retired, we
lose the reference the virtual engine assumes is held. Normally, just
removing the request from the scheduler queue removes it from the
engine, but the virtual engine keeps track of its singleton request via
its ve->request. This pointer needs protecting with a reference.
v2: Drop unnecessary motion of rq->engine = owner
Fixes: 22b7a426bb ("drm/i915/execlists: Preempt-to-busy")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-1-chris@chris-wilson.co.uk
Adding an enable parameter allows us to share most of the code between
the enable and disable functions.
v3:
Renamed icl_phy_clock_gating() to icl_phy_set_clock_gating()
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920205810.211048-6-jose.souza@intel.com
These are the registers needed to program the Dekel PHY. Some register
definitions will be reused from the MG PHY definitions, so add a
comment on those.
Bspec: 49295
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920205810.211048-5-jose.souza@intel.com
Commit 24a7bfe0c2 ("drm/i915: Keep the TypeC port mode fixed when the
port is active") added this new hook while in parallel TGL upstream was
happening and this was missed.
Without this driver will crash when TC DDI is added and driver is
preparing to do a full modeset.
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920205810.211048-4-jose.souza@intel.com
If the platform supports modular FIA and it is enabled, the register
bits also change. For example, when reading TC3 registers with modular
FIA enabled, the driver should read from FIA2 but with the TC1 bit
offsets.
This is described in BSpec 50231 for DFLEXDPSP; the other registers
don't have the BSpec description, but testing on real hardware has
proven that the move applies to all the other registers too.
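Concretely, with two TC ports per FIA instance, the cached lookup
amounts to something like this (a sketch built around the
tc_phy_fia/tc_phy_fia_idx fields mentioned in the changelog below;
FIA1 stands for the first FIA instance):

  if (modular_fia) {
          dig_port->tc_phy_fia = FIA1 + tc_port / 2;
          dig_port->tc_phy_fia_idx = tc_port % 2;
  } else {
          dig_port->tc_phy_fia = FIA1;
          dig_port->tc_phy_fia_idx = tc_port;
  }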
v2:
- Caching the index in tc_phy_fia_idx, instead of calculating it each time
v3:
- Setting tc_phy_fia and tc_phy_fia_idx in the same function
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920205810.211048-3-jose.souza@intel.com
Step 4.b was completely missed because it is only required for TC and TBT.
Bspec: 49190
Reviewed-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920205810.211048-2-jose.souza@intel.com
Pull setting -EIO on the hung requests into its own utility function.
Having allowed ourselves to short-circuit submission of completed
requests, we can now do the mark_eio() prior to submission and avoid
some redundant operations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-4-chris@chris-wilson.co.uk
If we are asked to submit a completed request, just move it onto the
active-list without modifying its payload. If we try to emit the
modified payload of a completed request, we risk racing with the
ring->head update during retirement which may advance the head past our
breadcrumb and so we generate a warning for the emission being behind
the RING_HEAD.
v2: Commentary for the sneaky, shared responsibility between functions.
v3: Spelling mistakes and bonus assertion
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-3-chris@chris-wilson.co.uk
Since amalgamating the queued and active lists in commit 422d7df4f0
("drm/i915: Replace engine->timeline with a plain list"), performing a
i915_request_submit() will remove the request from the execlists
priority queue.
References: 422d7df4f0 ("drm/i915: Replace engine->timeline with a plain list")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-2-chris@chris-wilson.co.uk
A gpu hang can occur at any time, given a sufficiently angry gpu. An
example is when it forgets to perform a context-switch at the end of a
request, leaving us with a hanging GPU on a completed request. Here, we
may retire the request, only leaving its context alive via the active
barrier. When we reset the GPU on a completed request, we do not modify
its context image (just updating the ring state) and can safely defer
the assertion that we have the image pinned and ready to modify.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111639
Fixes: dffa8feb30 ("drm/i915/perf: Assert locking for i915_init_oa_perf_state()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-1-chris@chris-wilson.co.uk
The i915 specific mode config init code is too specific and detailed to
have open-coded in a high level function. Abstract away. No functional
changes.
v2: nest drm_mode_config_init() in the function too (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920185421.17822-5-jani.nikula@intel.com
The code is too specific and detailed to have open-coded in a high
level function. Abstract away. As a drive-by improvement switch to
using enableddisabled() in logging and get rid of a redundant !!. No
functional changes.
v2: drop the !! while at it too (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920185421.17822-4-jani.nikula@intel.com
In general, prefer struct drm_i915_private * over struct drm_device *
when either will do. Rename the local variable to i915. Also propagate
to intel_hpd_poll_fini(). No functional changes.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920185421.17822-3-jani.nikula@intel.com
When audio power domain is suspended, the display driver must
save state of AUD_FREQ_CNTRL on Tiger Lake and Ice Lake
systems. The initial value of the register is set by BIOS and
is read by driver during the audio component init sequence.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920083918.27057-1-kai.vehmanen@linux.intel.com
Added docbook info regarding the Display State Buffer (DSB), which is
added from gen12 onwards to batch-submit display HW programming.
v1: Initial version as RFC.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-11-animesh.manna@intel.com
Gamma LUT programming can be done using the DSB, where bulk register
programming is done using an indexed register write which takes the
number of data words and the mmio offset to be written.
Currently this is enabled for the 12-bit gamma LUT, which is enabled by
default; 8-bit/10-bit will be enabled in the future based on need.
v1: Initial version.
v2: Directly call dsb-api at callsites. (Jani)
v3:
- modified the code as per single dsb instance per crtc. (Shashank)
- Added dsb get/put call in platform specific load_lut hook. (Jani)
- removed dsb pointer from dev_priv. (Jani)
v4: simplified code by dropping ref-count implementation. (Shashank)
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-9-animesh.manna@intel.com
The batch buffer will be built through the dsb-reg-write function,
which can serve a single or multiple requests based on the use case;
once the buffer is ready, the commit function will trigger the
execution of the batch buffer. All the registers will be updated
simultaneously.
v1: Initial version.
v2: Optimized code few places. (Chris)
v3: USed DRM_ERROR for dsb head/tail programming failure. (Shashank)
v4: reset ins_start_offset after commit. (Jani)
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-8-animesh.manna@intel.com
DSB will be used as a performance improvement in some special
scenarios. The DSB engine will be enabled based on need and disabled
after it completes its work. An API is added for the enable/disable
operation using the DSB_CTRL register.
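In outline the API is just a write of the DSB_CTRL enable bit followed
by a posting read (the register macro arguments and bit name below are
illustrative):

  /* enable */
  I915_WRITE(DSB_CTRL(pipe, dsb_id), DSB_ENABLE);
  POSTING_READ(DSB_CTRL(pipe, dsb_id));

  /* disable once the work has completed */
  I915_WRITE(DSB_CTRL(pipe, dsb_id), 0);
  POSTING_READ(DSB_CTRL(pipe, dsb_id));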
v1: Initial version.
v2: POSTING_READ added after writing control register. (Shashank)
v3: cosmetic changes done. (Shashank)
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-7-animesh.manna@intel.com
As per the bspec, check the DSB status before programming any of its
registers. An inline function is added to check the DSB status.
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-6-animesh.manna@intel.com
DSB can program a large set of data through an indexed register write
(opcode 0x9) in one shot. The DSB feature can be used for bulk register
programming, e.g. gamma LUT programming and HDR metadata programming.
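An indexed-write entry in the batch buffer then takes roughly this form
(the opcode/field packing shown is only illustrative of the idea, not
the exact bit layout):

  buf[pos++] = (DSB_OPCODE_INDEXED_WRITE << DSB_OPCODE_SHIFT) | count;
  buf[pos++] = i915_mmio_reg_offset(reg);
  for (i = 0; i < count; i++)
          buf[pos++] = data[i];   /* e.g. one dword per gamma LUT entry */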
v1: initial version.
v2: simplified code by using ALIGN(). (Chris)
v3: ascii table added as code comment. (Shashank)
v4: cosmetic changes done. (Shashank)
v5: reset ins_start_offset. (Jani)
v6: update ins_start_offset in intel_dsb_reg_write.
Cc: Shashank Sharma <shashank.sharma@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-5-animesh.manna@intel.com
DSB supports single register writes through opcode 0x1. A generic API
is created which accumulates all single register writes in a batch
buffer; once the DSB is triggered, it will program all the registers
at the same time.
v1: Initial version.
v2: Unused macro removed and cosmetic changes done. (Shashank)
v3: set free_pos to zero in dsb-put() instead of dsb-get() and
a cosmetic change. (Shashank)
v4: macro of indexed-write is moved. (Shashank)
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-4-animesh.manna@intel.com
This patch adds a function which will internally get the GEM buffer
for the DSB engine. The GEM buffer is from the global GTT, is mapped
into the CPU domain, and contains the data + opcodes to be fed to the
DSB engine.
v1: Initial version.
v2:
- removed some unwanted code. (Chris)
- Used i915_gem_object_create_internal instead of _shmem. (Chris)
- cmd_buf_tail removed and can be derived through vma object. (Chris)
v3: vma released if i915_gem_object_pin_map() failed. (Shashank)
v4: for simplification and based on current usage added single dsb
object in intel_crtc. (Shashank)
v5: setting cmd_buf to NULL moved outside of the mutex in dsb-put(). (Shashank)
v6:
- refcount mechanism added.
- Used atomic_add_return and atomic_dec_and_test instead of
atomic_inc and atomic_dec. (Jani)
Cc: Imre Deak <imre.deak@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
[Jani: added #include <linux/types.h> while pushing]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-3-animesh.manna@intel.com
Display State Buffer(DSB) is a new hardware capability, introduced
in GEN12 display. DSB allows a driver to batch-program display HW
registers.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Reviewed-by: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-2-animesh.manna@intel.com