With the rework of how __string() handles dynamic strings, where it
saves off the source string in a field in the helper structure[1], the
assignment of that value to the trace event field is taken from the
helper structure and does not need to be passed in again.
This means that with:
__string(field, mystring)
which used to be assigned with __assign_str(field, mystring), no longer
needs the second parameter, as it is unused. With this, __assign_str()
now takes only a single parameter.
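For illustration, a minimal before/after sketch (the field and source
names here are made up):

    /* Before */
    __string(name, dev->name)
    ...
    __assign_str(name, dev->name);

    /* After */
    __string(name, dev->name)
    ...
    __assign_str(name);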
There are over 700 users of __assign_str(), and because Coccinelle does
not handle the TRACE_EVENT() macro, I ended up using the following sed
script:
git grep -l __assign_str | while read a ; do
sed -e 's/\(__assign_str([^,]*[^ ,]\) *,[^;]*/\1)/' $a > /tmp/test-file;
mv /tmp/test-file $a;
done
I then searched for __assign_str() calls that did not end with ';', as
those were multi-line assignments that the sed script above would fail
to catch.
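For example, a call split across lines like

    __assign_str(name,
                 dev->name);

is not rewritten by the single-line sed expression above and had to be
fixed up by hand.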
Note, the same updates will need to be done for:
__assign_str_len()
__assign_rel_str()
__assign_rel_str_len()
I tested this with both an allmodconfig and an allyesconfig (build only for both).
[1] https://lore.kernel.org/linux-trace-kernel/20240222211442.634192653@goodmis.org/
Link: https://lore.kernel.org/linux-trace-kernel/20240516133454.681ba6a0@rorschach.local.home
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Christian König <christian.koenig@amd.com> for the amdgpu parts.
Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #for
Acked-by: Rafael J. Wysocki <rafael@kernel.org> # for thermal
Acked-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Darrick J. Wong <djwong@kernel.org> # xfs
Tested-by: Guenter Roeck <linux@roeck-us.net>
Merge v6.8-rc6 into drm-next
Thomas Zimmermann asked to backmerge -rc6 for the drm-misc branches;
there are a few same-area-changed conflicts (xe and amdgpu mostly) that
are getting a bit too annoying.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rather than looping over entities until one with a ready job is found,
re-queue the run-job worker when drm_sched_entity_pop_job() returns NULL.
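A hedged sketch of the resulting control flow (the worker body is
illustrative, not the exact upstream code):

    static void drm_sched_run_job_work(struct work_struct *w)
    {
        struct drm_gpu_scheduler *sched =
            container_of(w, struct drm_gpu_scheduler, work_run_job);
        struct drm_sched_entity *entity;
        struct drm_sched_job *job;

        entity = drm_sched_select_entity(sched);
        if (!entity)
            return;

        job = drm_sched_entity_pop_job(entity);
        if (!job) {
            /* No ready job on this entity: rather than looping to
             * the next entity here, re-queue this worker and let it
             * pick another entity on the next pass. */
            complete_all(&entity->entity_idle);
            drm_sched_run_job_queue(sched);
            return;
        }
        /* ... submit the job ... */
    }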
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Fixes: 66dbd9004a ("drm/sched: Drain all entities in DRM sched run job worker")
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240130030413.2031009-1-matthew.brost@intel.com
In one error-handling case, the drm_sched_init() function called
kfree() even though the passed data structure member contained a null
pointer.
This issue was detected by using the Coccinelle software.
Thus, adjust a jump target.
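The general shape of such a fix, as a hedged sketch with invented names
(not the actual drm_sched_init() code):

    int init_thing(struct thing *t)
    {
        int ret = -ENOMEM;

        t->buf = kmalloc(SZ, GFP_KERNEL);
        if (!t->buf)
            goto out;        /* was: goto free_buf; */

        ret = setup(t);
        if (ret)
            goto free_buf;
        return 0;

    free_buf:
        kfree(t->buf);
    out:
        return ret;
    }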
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: https://patchwork.freedesktop.org/patch/msgid/85066512-983d-480c-a44d-32405ab1b80e@web.de
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Reverse the run-queue priority enumeration such that the highest
priority is now 0, and the priority diminishes with each consecutive
integer.
Run-queues correspond to priorities. To an external observer a scheduler
created with a single run-queue, and another created with
DRM_SCHED_PRIORITY_COUNT number of run-queues, should always schedule
sched->sched_rq[0] with the same "priority", as that index run-queue exists in
both schedulers, i.e. a scheduler with one run-queue or many. This patch makes
it so.
In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for
any scheduler created with any allowable number of run-queues (priorities), 0
to DRM_SCHED_PRIORITY_COUNT.
Cc: Rob Clark <robdclark@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231124052752.6915-6-ltuikov89@gmail.com
Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW.
This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler priorities
in ascending order,
DRM_SCHED_PRIORITY_LOW,
DRM_SCHED_PRIORITY_NORMAL,
DRM_SCHED_PRIORITY_HIGH,
DRM_SCHED_PRIORITY_KERNEL.
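As a sketch, the resulting enumeration (modulo the exact upstream
formatting and the placement of the count value):

    enum drm_sched_priority {
        DRM_SCHED_PRIORITY_LOW,
        DRM_SCHED_PRIORITY_NORMAL,
        DRM_SCHED_PRIORITY_HIGH,
        DRM_SCHED_PRIORITY_KERNEL,

        DRM_SCHED_PRIORITY_COUNT
    };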
Cc: Rob Clark <robdclark@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231124052752.6915-5-ltuikov89@gmail.com
Currently, job flow control is implemented simply by limiting the number
of jobs in flight. Therefore, a scheduler is initialized with a credit
limit that corresponds to the number of jobs which can be sent to the
hardware.
This implies that for each job, drivers need to account for the maximum
job size possible in order to not overflow the ring buffer.
However, there are drivers, such as Nouveau, where the job size has a
rather large range. For such drivers it can easily happen that job
submissions not even filling the ring by 1% can block subsequent
submissions, which, in the worst case, can lead to the ring running dry.
In order to overcome this issue, allow for tracking the actual job size
instead of the number of jobs. Therefore, add a field to track a job's
credit count, which represents the number of credits a job contributes
to the scheduler's credit limit.
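A minimal sketch of the check this enables (the helper name here is
invented; the actual helpers differ):

    /* Run a job only if its credit count still fits within the
     * scheduler's credit limit. */
    static bool sched_has_credits(struct drm_gpu_scheduler *sched,
                                  struct drm_sched_job *job)
    {
        return atomic_read(&sched->credit_count) + job->credits <=
               sched->credit_limit;
    }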
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231110001638.71750-1-dakr@redhat.com
Don't "wake up" the GPU scheduler unless the entity is ready, as well as we
can queue to the scheduler, i.e. there is no point in waking up the scheduler
for the entity unless the entity is ready.
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Fixes: bc8d6a9df9 ("drm/sched: Don't disturb the entity when in RR-mode scheduling")
Reviewed-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231110000123.72565-2-ltuikov89@gmail.com
Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact,
rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let
it do just that, schedule the work item for execution.
The problem is that drm_sched_run_job_queue() calls drm_sched_select_entity()
to determine if the scheduler has an entity ready in one of its run-queues,
and in the case of the Round-Robin (RR) scheduling, the function
drm_sched_rq_select_entity_rr() does just that, selects the _next_ entity
which is ready, sets up the run-queue and completion and returns that
entity. The FIFO scheduling algorithm is unaffected.
Now, since drm_sched_run_job_work() also calls drm_sched_select_entity(),
in the case of RR scheduling this would result in drm_sched_select_entity()
being called twice, which may skip a ready entity if more than one entity
is ready. This commit fixes this by eliminating the call to
drm_sched_select_entity() from drm_sched_run_job_queue(), leaving it only
in drm_sched_run_job_work().
v2: Rebased on top of Tvrtko's renames series of patches. (Luben)
Add fixes-tag. (Tvrtko)
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Fixes: f7fe64ad0f ("drm/sched: Split free_job into own work item")
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231107041020.10035-2-ltuikov89@gmail.com
Because a) the helper is exported to other parts of the scheduler and
b) there isn't a plain drm_sched_wakeup to begin with, I think we can
drop the suffix and by doing so separate the intimate knowledge
between the scheduler components a bit better.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Luben Tuikov <ltuikov89@gmail.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231102105538.391648-6-tvrtko.ursulin@linux.intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
"If ready" is not immediately clear what it means - is the scheduler
ready or something else? Drop the suffix, clarify kerneldoc, and employ
the same naming scheme as in drm_sched_run_free_queue:
- drm_sched_run_job_queue - enqueues if there is something to enqueue
*and* scheduler is ready (can queue)
- __drm_sched_run_job_queue - low-level helper to simply queue the job
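A hedged sketch of the resulting split (the bodies are illustrative;
the "something to enqueue" check is elided here):

    /* Low-level helper: simply queue the work item. */
    static void __drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
    {
        queue_work(sched->submit_wq, &sched->work_run_job);
    }

    /* Enqueue only if the scheduler is ready (can queue). */
    static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
    {
        if (drm_sched_can_queue(sched))
            __drm_sched_run_job_queue(sched);
    }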
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Luben Tuikov <ltuikov89@gmail.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231102105538.391648-5-tvrtko.ursulin@linux.intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
The current name makes it sound like the helper will free a queue, while
what it actually does is enqueue the free-job worker.
Rename it to drm_sched_run_free_queue to align with the existing
drm_sched_run_job_queue.
Despite creating the illusion that there are two queues, while in
reality there is only one, this at least gives the two enqueuing
helpers consistent naming.
At the same time, simplify the "if done" helper by dropping the suffix
and adding a double-underscore prefix to the one which just enqueues.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Luben Tuikov <ltuikov89@gmail.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231102105538.391648-4-tvrtko.ursulin@linux.intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Whether or not there are more jobs to clean up does not depend on the
existence of the current job, given both drm_sched_get_finished_job and
drm_sched_free_job_queue_if_done take and drop the job list lock.
Therefore it is confusing to make it read like there is a dependency.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Luben Tuikov <ltuikov89@gmail.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231102105538.391648-3-tvrtko.ursulin@linux.intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
"Get cleanup job" makes it sound like helper is returning a job which will
execute some cleanup, or something, while the kerneldoc itself accurately
says "fetch the next _finished_ job". So lets rename the helper to be self
documenting.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Luben Tuikov <ltuikov89@gmail.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231102105538.391648-2-tvrtko.ursulin@linux.intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Rather than calling free_job and run_job in the same work item, have a
dedicated work item for each. This aligns with the design and intended
use of work queues.
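Schematically, assuming the work_run_job/work_free_job member names
referenced later in this series (initialization details illustrative):

    struct drm_gpu_scheduler {
        /* ... */
        struct work_struct work_run_job;   /* submits the next job */
        struct work_struct work_free_job;  /* cleans up finished jobs */
    };

    /* in drm_sched_init(): */
    INIT_WORK(&sched->work_run_job, drm_sched_run_job_work);
    INIT_WORK(&sched->work_free_job, drm_sched_free_job_work);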
v2:
- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting
timestamp in free_job() work item (Danilo)
v3:
- Drop forward dec of drm_sched_select_entity (Boris)
- Return in drm_sched_run_job_work if entity NULL (Boris)
v4:
- Replace dequeue with peek and invert logic (Luben)
- Wrap to 100 lines (Luben)
- Update comments for *_queue / *_queue_if_ready functions (Luben)
v5:
- Drop peek argument, blindly reinit idle (Luben)
- s/drm_sched_free_job_queue_if_ready/drm_sched_free_job_queue_if_done (Luben)
- Update work_run_job & work_free_job kernel doc (Luben)
v6:
- Do not move drm_sched_select_entity in file (Luben)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-4-matthew.brost@intel.com
Reviewed-by: Luben Tuikov <ltuikov89@gmail.com>
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
In Xe, the new Intel GPU driver, a choice has been made to have a 1-to-1
mapping between a drm_gpu_scheduler and drm_sched_entity. At first this
seems a bit odd, but let us explain the reasoning below.
1. In Xe, the submission order from multiple drm_sched_entity is not
guaranteed to match the completion order, even if targeting the same
hardware engine. This is because in Xe we have a firmware scheduler, the
GuC, which is allowed to reorder, timeslice, and preempt submissions. If
a drm_gpu_scheduler is shared across multiple drm_sched_entity, the TDR
falls apart, as the TDR expects submission order == completion order.
Using a dedicated drm_gpu_scheduler per drm_sched_entity solves this
problem.
2. In Xe, submissions are done by programming a ring buffer (circular
buffer), and a drm_gpu_scheduler provides a limit on the number of jobs;
if the job limit is set to RING_SIZE / MAX_SIZE_PER_JOB, we get flow
control on the ring for free.
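As a worked example (numbers purely illustrative): with a 16 KiB ring
and a maximum job size of 1 KiB, setting the job limit to
RING_SIZE / MAX_SIZE_PER_JOB = 16384 / 1024 = 16 guarantees the ring
can never overflow, at the cost of reserving worst-case space for every
job.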
A problem with this design is that currently a drm_gpu_scheduler uses a
kthread for submission / job cleanup. This doesn't scale if a large
number of drm_gpu_scheduler are used. To work around the scaling issue,
use a worker rather than kthread for submission / job cleanup.
v2:
- (Rob Clark) Fix msm build
- Pass in run work queue
v3:
- (Boris) don't have loop in worker
v4:
- (Tvrtko) break out submit ready, stop, start helpers into own patch
v5:
- (Boris) default to ordered work queue
v6:
- (Luben / checkpatch) fix alignment in msm_ringbuffer.c
- (Luben) s/drm_sched_submit_queue/drm_sched_wqueue_enqueue
- (Luben) Update comment for drm_sched_wqueue_enqueue
- (Luben) Positive check for submit_wq in drm_sched_init
- (Luben) s/alloc_submit_wq/own_submit_wq
v7:
- (Luben) s/drm_sched_wqueue_enqueue/drm_sched_run_job_queue
v8:
- (Luben) Adjust var names / comments
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-3-matthew.brost@intel.com
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
Add scheduler wqueue ready, stop, and start helpers to hide the
implementation details of the scheduler from the drivers.
v2:
- s/sched_wqueue/sched_wqueue (Luben)
- Remove the extra white line after the return-statement (Luben)
- update drm_sched_wqueue_ready comment (Luben)
Cc: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20231031032439.1558703-2-matthew.brost@intel.com
Signed-off-by: Luben Tuikov <ltuikov89@gmail.com>
When a fence signals, there is a very small race window where the
timestamp isn't updated yet. sync_file solves this by busy waiting for
the timestamp to appear, but other occasions didn't handle this
correctly.
Provide a dma_fence_timestamp() helper function for this and use it in
all appropriate cases.
Another alternative would be to grab the spinlock when that happens.
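A hedged sketch of such a helper, modeled on the sync_file busy-wait
described above (the exact upstream implementation may differ):

    static inline ktime_t dma_fence_timestamp(struct dma_fence *fence)
    {
        if (WARN_ON(!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)))
            return ktime_get();

        /* Wait for the signaling side to publish the timestamp. */
        while (!test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags))
            cpu_relax();

        return fence->timestamp;
    }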
v2 by teddy: add a wait parameter to wait for the timestamp to show up, in case
the accurate timestamp is needed and/or the timestamp is not based on
ktime (e.g. hw timestamp)
v3 chk: drop the parameter again for unified handling
Signed-off-by: Yunxiang Li <Yunxiang.Li@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Fixes: 1774baa64f ("drm/scheduler: Change scheduled fence track v2")
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
CC: stable@vger.kernel.org
Link: https://patchwork.freedesktop.org/patch/msgid/20230929104725.2358-1-christian.koenig@amd.com
The GPU scheduler now has a variable number of run-queues, which are set up at
drm_sched_init() time. This way, each driver announces how many run-queues it
requires (supports) for each GPU scheduler it creates. Note that run-queues
correspond to scheduler "priorities", thus if the number of run-queues is set
to 1 at drm_sched_init(), then that scheduler supports a single run-queue,
i.e. single "priority". If a driver further sets a single entity per
run-queue, then this creates a 1-to-1 correspondence between a scheduler and
a scheduled entity.
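For example, a driver wanting a single run-queue might initialize its
scheduler roughly as follows (the surrounding parameters are only
sketched from the drm_sched_init() signature of this era and may
differ; num_rqs is the point here):

    ret = drm_sched_init(&sched, &my_sched_ops, submit_wq,
                         1,  /* num_rqs: a single run-queue/"priority" */
                         hw_submission, hang_limit, timeout,
                         timeout_wq, NULL, "my-sched", dev);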
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Qiang Yu <yuq825@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Emma Anholt <emma@anholt.net>
Cc: etnaviv@lists.freedesktop.org
Cc: lima@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20231023032251.164775-1-luben.tuikov@amd.com
Drivers that can delegate waits to the firmware/GPU pass the scheduled
fence to drm_sched_job_add_dependency(), and issue wait commands to
the firmware/GPU at job submission time. For this to be possible, they
need all their 'native' dependencies to have a valid parent, since this
is where the actual HW fence information is encoded.
In drm_sched_main(), we currently call drm_sched_fence_set_parent()
after drm_sched_fence_scheduled(), leaving a short period of time
during which the job depending on this fence can be submitted.
Since setting parent and signaling the fence are two things that are
kinda related (you can't have a parent if the job hasn't been scheduled),
it probably makes sense to pass the parent fence to
drm_sched_fence_scheduled() and let it call drm_sched_fence_set_parent()
before it signals the scheduled fence.
Here is a detailed description of the race we are fixing here:
Thread A                                 Thread B

- calls drm_sched_fence_scheduled()
- signals s_fence->scheduled which
  wakes up thread B

                                         - entity dep signaled, checking
                                           the next dep
                                         - no more deps waiting
                                         - entity is picked for job
                                           submission by drm_gpu_scheduler
                                         - run_job() is called
                                         - run_job() tries to collect
                                           native fence info from
                                           s_fence->parent, but it's NULL
                                           => BOOM, we can't do our
                                           native wait

- calls drm_sched_fence_set_parent()
v2:
* Fix commit message
v3:
* Add a detailed description of the race to the commit message
* Add Luben's R-b
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Frank Binns <frank.binns@imgtec.com>
Cc: Sarah Walker <sarah.walker@imgtec.com>
Cc: Donald Robson <donald.robson@imgtec.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230623075204.382350-1-boris.brezillon@collabora.com
The drm_sched_entity_kill_jobs_cb() logic omits the last fence popped
from the dependency array that was waited upon before
drm_sched_entity_kill() was called (drm_sched_entity::dependency field),
so we're basically waiting for all dependencies except one.
In theory, this wait shouldn't be needed because resources should have
their users registered to the dma_resv object, thus guaranteeing that
future jobs wanting to access these resources wait on all the previous
users (depending on the access type, of course). But we want to keep
these explicit waits in the kill entity path just in case.
Let's make sure we keep all dependencies in the array in
drm_sched_job_dependency(), so we can iterate over the array and wait
in drm_sched_entity_kill_jobs_cb().
We also make sure we wait on drm_sched_fence::finished if we were
originally asked to wait on drm_sched_fence::scheduled. In that case,
we assume the intent was to delegate the wait to the firmware/GPU or
rely on the pipelining done at the entity/scheduler level, but when
killing jobs, we really want to wait for completion not just scheduling.
v2:
- Don't evict deps in drm_sched_job_dependency()
v3:
- Always wait for drm_sched_fence::finished fences in
drm_sched_entity_kill_jobs_cb() when we see a sched_fence
v4:
- Fix commit message
- Fix a use-after-free bug
v5:
- Flag deps on which we should only wait for the scheduled event
at insertion time
v6:
- Back to v4 implementation
- Add Christian's R-b
Cc: Frank Binns <frank.binns@imgtec.com>
Cc: Sarah Walker <sarah.walker@imgtec.com>
Cc: Donald Robson <donald.robson@imgtec.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: "Christian König" <christian.koenig@amd.com>
Reviewed-by: "Christian König" <christian.koenig@amd.com>
Acked-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230619071921.3465992-1-boris.brezillon@collabora.com
Backmerge tag 'v6.4-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into drm-next
Linux 6.4-rc7
Need this to pull in the msm work.
Signed-off-by: Dave Airlie <airlied@redhat.com>
[Why]
drm_sched_entity_add_dependency_cb() ignores the scheduled fence and
returns false. If an entity's dependency is a scheduler error fence and
drm_sched_stop() is called due to TDR, drm_sched_entity_pop_job() will
wait for the dependency indefinitely.
[How]
Do not wait on or ignore the scheduled error fence; add a
drm_sched_entity_wakeup() callback for the dependency with the scheduled
error fence.
Signed-off-by: ZhenGuo Yin <zhenguo.yin@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Rename drm_sched_wakeup() to drm_sched_wakeup_if_canqueue() since the former
is misleading, as it wakes up the GPU scheduler _only if_ more jobs can be
queued to the underlying hardware.
This distinction is important to make, since the wake conditional in the GPU
scheduler thread wakes up when other conditions are also true, e.g. when there
are jobs to be cleaned. For instance, a user might want to wake up the
scheduler only because there are more jobs to clean, but whether we can queue
more jobs is irrelevant.
v2: Separate "canqueue" to "can_queue". (Alex D.)
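With the v2 spelling, a sketch of the renamed helper (the body is
illustrative of the kthread-era scheduler):

    void drm_sched_wakeup_if_can_queue(struct drm_gpu_scheduler *sched)
    {
        if (drm_sched_can_queue(sched))
            wake_up_interruptible(&sched->wake_up_worker);
    }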
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <Alexander.Deucher@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Link: https://lore.kernel.org/r/20230517233550.377847-2-luben.tuikov@amd.com
Reviewed-by: Alex Deucher <Alexander.Deucher@amd.com>
Rename drm_sched_ready() to drm_sched_can_queue(). "ready" can mean many
things and is thus meaningless in this context. Instead, rename to a name
which precisely conveys what is being checked.
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <Alexander.Deucher@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Reviewed-by: Alex Deucher <Alexander.Deucher@amd.com>
Link: https://lore.kernel.org/r/20230517233550.377847-1-luben.tuikov@amd.com
The rq pointer points inside the drm_gpu_scheduler structure. Thus
it can't be NULL.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: c61cdbdbff ("drm/scheduler: Fix hang when sched_entity released")
Signed-off-by: Vladislav Efanov <VEfanov@ispras.ru>
Link: https://lore.kernel.org/r/20230517125247.434103-1-VEfanov@ispras.ru
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
During an IGT GPU reset test we again see an oops despite commit
0c8c901aaaebc9 (drm/sched: Check scheduler ready before calling
timeout handling).
That commit uses the ready condition to decide whether to call
drm_sched_fault(), which unwinds the TDR and leads to a GPU reset.
However, it looks like the ready condition is overloaded with other
meanings; for example, the following stack is related to GPU reset:
0 gfx_v9_0_cp_gfx_start
1 gfx_v9_0_cp_gfx_resume
2 gfx_v9_0_cp_resume
3 gfx_v9_0_hw_init
4 gfx_v9_0_resume
5 amdgpu_device_ip_resume_phase2
does the following:
/* start the ring */
gfx_v9_0_cp_gfx_start(adev);
ring->sched.ready = true;
The same approach is used for other ASICs as well:
gfx_v8_0_cp_gfx_resume
gfx_v10_0_kiq_resume, etc...
As a result, our GPU reset test causes a GPU fault, which
unconditionally calls gfx_v9_0_fault and then drm_sched_fault. Whether
an oops occurs now depends on timing: if the interrupt service routine
drm_sched_fault executes after gfx_v9_0_cp_gfx_start has completed
(which sets the ready field of the scheduler to true even for
uninitialized schedulers), we oops; if drm_sched_fault completes before
gfx_v9_0_cp_gfx_start, there is no fault and no NULL pointer
dereference.
Use the field timeout_wq to prevent oops for uninitialized schedulers.
The field could be initialized by the work queue of resetting the domain.
v1: Corrections to commit message (Luben)
Fixes: 11b3b9f461 ("drm/sched: Check scheduler ready before calling timeout handling")
Signed-off-by: Vitaly Prosyak <vitaly.prosyak@amd.com>
Link: https://lore.kernel.org/r/20230510135111.58631-1-vitaly.prosyak@amd.com
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Merge tag 'drm-next-2023-04-24' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"There is a new Qualcomm accel driver for their QAIC, dma-fence got a
deadline feature added, lots of refactoring around fbdev emulation,
and the usual pre-release hw enablements from AMD and Intel and fixes
everywhere.
New drivers:
- add QAIC acceleration driver
dma-buf:
- constify kobj_type structs
- Reject prime DMA-Buf attachment if get_sg_table is missing.
fbdev:
- cmdline parser fixes
- implement fbdev emulation for GEM DMA drivers
- always use shadow buffer in fbdev emulation helpers
dma-fence:
- add deadline hint to fences
- signal private stub fence
core:
- improve DisplayID 2.0 and EDID parsing
- add gem eviction function + callback
- prep to convert shmem helper to GEM resv lock
- move suballocator from radeon/amdgpu to core for Xe
- HPD polling fixes
- Documentation improvements
- Add atomic enable_plane callback
- use tgid instead of pid for client tracking
- DP: Add SDP Error Detection Configuration Register
- Add prime import/export to vram-helper
- use pci aperture helpers in more drivers
panel:
- Radxa 8/10HD support
- Samsung AMD495QA01 support
- Elida KD50T048A
- Sony TD4353
- Novatek NT36523
- STARRY 2081101QFH032011-53G
- B133UAN01.0
- AUO NE135FBM-N41
i915:
- More MTL enabling
- fix s/r problems with MEI/PXP
- Implement fb_dirty for PSR,FBC,DRRS fixes
- Fix eDP+DSI dual panel systems
- Fix issue #6333: "list_add corruption" and full system lockup from
performance monitoring
- Don't use stolen memory or BAR for ring buffers on LLC platforms
- Make sure DSM size has correct 1MiB granularity on Gen12+
- Whitelist COMMON_SLICE_CHICKEN3 for UMD access on Gen12+
- Add engine TLB invalidation for Meteorlake
- Fix GSC races on driver load/unload on Meteorlake+
- Make kobj_type structures constant
- Move fd_install after last use of fence
- wm/vblank refactoring
- display code refactoring
- Create GSC submission targeting HDCP and PXP usages on MTL+
- Enable HDCP2.x via GSC CS
- Fix context runtime accounting on sysfs fdinfo for heavy workloads
- Use i915 instead of dev_priv inside the file_priv structure
- Replace fake flex-array with flexible-array member
amdgpu:
- Make kobj structures const
- Generalize dmabuf import to work with KFD
- Add capped/uncapped workload handling for supported APUs
- Expose additional memory stats via fdinfo
- Register vga_switcheroo for apple-gmux
- Initial NBIO7.9, GC 9.4.3, GFXHUB 1.2, MMHUB 1.8 support
- Initial DC FAM infrastructure
- Link DC backlight to connector device rather than PCI device
- Add sysfs nodes for secondary VCN clocks
amdkfd:
- Make kobj structures const
- Support for exporting buffers via dmabuf
- Multi-VMA page migration fixes
- initial GC 9.4.3 support
radeon:
- iMac fix
- convert to client based fbdev emulation
habanalabs:
- Add opcodes to the CS ioctl to allow user to stall/resume specific
engines inside Gaudi2.
- INFO ioctl the amount of device memory that the driver and f/w
reserve for themselves.
- INFO ioctl a bit-mask of the available rotator engines
- INFO ioctl the register's address of the f/w that should be used to
trigger interrupts
- INFO ioctl two new opcodes to fetch information on h/w and f/w
events
- Enable graceful reset mechanism for compute-reset.
- Align to the latest firmware specs.
- Enforce the release order of the compute device and dma-buf.
msm:
- UBWC decoder programming rework
- SM8550, SM8450 bindings update
- uapi C++ fix
- a3xx and a4xx devfreq support
- GPU and GEM updates to avoid allocations which could trigger
reclaim (shrinker) in fence signaling path
- dma-fence deadline hint support and wait-boost
- a640/650 speed bin support
cirrus:
- convert to regular atomic helpers
- add damage clipping
mediatek:
- 10-bit overlay support
- mt8195 support
- Only trigger DRM HPD events if bridge is attached
- Change the aux retries times when receiving AUX_DEFER
rockchip:
- add 4K support
vc4:
- use drm_gem_objects
virtio:
- allow KMS support to be disabled
- add damage clipping
vmwgfx:
- buffer object lifetime fixes
exynos:
- move MIPI DSI driver to drm bridge for iMX sharing
- use kernel fbdev emulation
panfrost:
- add support for mali MT81xx devices
- add speed binning support
lima:
- add usage stats
tegra:
- fbdev client conversion
vkms:
- Add primary plane positioning support"
* tag 'drm-next-2023-04-24' of git://anongit.freedesktop.org/drm/drm: (1495 commits)
drm/i915/dp_mst: Fix active port PLL selection for secondary MST streams
drm/exynos: Implement fbdev emulation as in-kernel client
drm/exynos: Initialize fbdev DRM client
drm/exynos: Remove fb_helper from struct exynos_drm_private
drm/exynos: Remove struct exynos_drm_fbdev
drm/exynos: Remove exynos_gem from struct exynos_drm_fbdev
drm/i915: Fix memory leaks in i915 selftests
drm/i915: Make intel_get_crtc_new_encoder() less oopsy
drm/i915/gt: Avoid out-of-bounds access when loading HuC
drm/amdgpu: add some basic elements for multiple XCD case
drm/amdgpu: move vmhub out of amdgpu_ring_funcs (v4)
Revert "drm/amdgpu: enable ras for mp0 v13_0_10 on SRIOV"
drm/amdgpu: add common ip block for GC 9.4.3
drm/amd/display: Add logging when DP link training Clock recovery is Successful
drm/amdgpu: add common early init support for GC 9.4.3
drm/amdgpu: switch to v9_4_3 gfx_funcs callbacks for GC 9.4.3
drm/amd/display: Add logging when setting DP sink power state fails
drm/amdkfd: Add gfx_target_version for GC 9.4.3
drm/amdkfd: Enable HW_UPDATE_RPTR on GC 9.4.3
drm/amdgpu: reserve the old gc_11_0_*_mes.bin
...
It has already happened a few times that patches slipped through which
implemented access to an entity through a job that was already removed
from the entity's queue. Since jobs and entities might have different
lifecycles, this can potentially cause UAF bugs.
In order to make it obvious that a job's entity pointer shouldn't be
accessed after drm_sched_entity_pop_job() was called successfully, set
the job's entity pointer to NULL once the job is removed from the
entity's queue.
Moreover, debugging a potential NULL pointer dereference is way easier
than debugging potentially corrupted memory caused by a UAF.
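Schematically (a hedged sketch of the pop path; the real function also
handles dependencies and locking):

    struct drm_sched_job *
    drm_sched_entity_pop_job(struct drm_sched_entity *entity)
    {
        struct drm_sched_job *sched_job;

        sched_job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue));
        if (!sched_job)
            return NULL;

        /* Jobs and entities have different lifecycles; once the job
         * leaves the queue, its entity pointer must not be
         * dereferenced anymore. */
        sched_job->entity = NULL;

        return sched_job;
    }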
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://lore.kernel.org/r/20230418100453.4433-1-dakr@redhat.com
Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>