Run shader-db on CI with the lima drm_shim as done for other drivers.
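
A minimal sketch of the kind of job this adds, assuming a shared
shader-db job template, a lima noop drm-shim library, and shader-db's
run tool taking a thread-count flag; the real job name, template, shim
path, and invocation in .gitlab-ci.yml may differ:

    # hypothetical illustration, not the actual .gitlab-ci.yml hunk
    lima-shader-db:
      extends: .shader-db-test          # assumed shared template
      script:
        # run against the lima noop drm-shim instead of real hardware
        - LD_PRELOAD=install/lib/liblima_noop_drm_shim.so ./run -j4 shaders/ > lima-shader-db.txt
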
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Acked-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24357>
Original author of r300 patch: Pavel Ondračka <pavel.ondracka@gmail.com>
Co-authored-by: Pavel Ondračka <pavel.ondracka@gmail.com>
Reviewed-by: Emma Anholt <emma@anholt.net>
Signed-off-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/19692>
This reverts commit 0464117ad9. Now that
the shim back-channel communicates to nouveau that the "GPU" is always
idle, we can get the nouveau compiler back into the CI path.
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/16150>
This is needed because we're about to reinstate the fencing mechanism on
screen destruction, which will cause hangs on exit with the shim until we
figure out another way to handle it.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: mesa-stable # 21.0
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/8867>
This should cover the whole pipeline, including the nouveau codegen
compiler, across the "interesting" chips, which should generate
sufficiently different code.
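
As a hypothetical illustration of running across several chips (the
environment variable, chipset ids, template name, and invocation below
are assumptions, not the actual CI script):

    nouveau-shader-db:
      extends: .shader-db-test          # assumed shared template
      script:
        - |
          # one chipset per major compiler generation the shim emulates
          for chipset in 40 a3 c0 e4; do
            NOUVEAU_CHIPSET=$chipset ./run -j4 shaders/ \
              > nv$chipset-shader-db.txt
          done
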
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Eric Anholt <eric@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/8432>
This reverts commit c9df92bf79.
It turns out that gitlab-runner uses kubernetes all wrong, spawning Pods
and SSHing into them to run the script instead of creating Jobs that
contain the script to run. This means that when anything goes wrong
with the Pod
(autoscale, preemption, VM maintenance, cluster reconfiguration), the job
fails and only sometimes gets handled as a runner system failure. Even
worse, due to bugs in either the runner or k8s itself, some classes of
timeout-related failure end up not being reported as failures, and the job
will incorrectly report success!
Disable using the "autoscale" cluster until we can do something else
(docker-machine instead of k8s, or the custom third-party k8s-native
runner).
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
Acked-by: Daniel Stone <daniels@collabora.com>
The GKE pool we're using is 1-3 32-core VMs, preemptible (to keep
costs down), with 8 concurrent jobs per system. We have plenty of
memory (4G/core), so we run make -j8 to try to keep the cores busy even
when one job is in a single-threaded step (docker image download, git
clone, artifacts processing, etc.). When all jobs are generating work
for all the cores, they'll be scheduled fairly.
The nodes in the pool have 300GB boot disks (over-provisioned in space
to provide enough IOPS and throughput) mounted at /ccache, with
CACHE_DIR pointing there. This means that once a new autoscaled-up node
has run some jobs, it should have a hot ccache from then on (instead of
having to rely on the docker container cache having our ccache lying
around and not getting wiped out by some other fd.o job). Local SSDs
would provide higher performance, but unfortunately are not supported
with the cluster autoscaler.
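
Roughly, the job-side half of that looks like the sketch below; the
per-VM job concurrency and the /ccache host-path mount live in the
runner's own configuration rather than in .gitlab-ci.yml, and the
CCACHE_DIR layout here is an assumption:

    .build:
      variables:
        CACHE_DIR: /ccache              # boot disk mounted by the runner
        CCACHE_DIR: "${CACHE_DIR}/${CI_PROJECT_NAME}"   # assumed layout
      script:
        - ccache --show-stats           # confirm the cache is warm
        - make -j8                      # up to 8 jobs x -j8 sharing 32 cores
        - ccache --show-stats
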
For now, the softpipe/llvmpipe test runs are still on the shared
runners, until I can get them ported onto Bas's runner so they can be
parallelized in a single job.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
If we don't set DESTDIR, then the DEFAULT_DRIVER_DIR built into the
libraries is correct and we don't need to use LIBGL_DRIVERS_PATH and
friends for CI usage. Incidentally, this moves our installed paths
from /builds/anholt/mesa/install/usr/local/lib (for example) to
/builds/anholt/mesa/install/lib for simplicity.
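
As a hypothetical before/after sketch of the install step (the build
directory name and exact script lines are assumptions):

    .build:
      script:
        # before: a DESTDIR install left drivers under install/usr/local/lib,
        # so test jobs had to point LIBGL_DRIVERS_PATH at them:
        #   - DESTDIR=$PWD/install ninja -C _build install
        # after: bake the final location in at configure time so the
        # DEFAULT_DRIVER_DIR compiled into the libraries is already right
        - meson _build --prefix=$PWD/install
        - ninja -C _build install
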
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Now that helgrind is less upset and I've completed many successful
full shader-db runs, we should be able to enable freedreno shader-db
runs for Mesa checkins on the tiny public shader-db.
Reviewed-by: Rob Clark <robdclark@gmail.com>
This provides significant compiler coverage during CI at a fairly low
cost in CPU time (~17s per thread for 4 threads on
gst-gitlab-htz-runner3).
I'm leaving wget in the docker image, as once this is in master I'm
planning on having an automatic shader-db comparison between master
and the branch, included in the artifacts. I also haven't done
freedreno yet, because it has some races when run in multithreaded
mode that I'm still tracking down.
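
A minimal sketch of the kind of job this adds (the job name, the make
build step, and the -j thread-count flag for shader-db's run tool are
assumptions):

    shader-db:
      stage: test
      script:
        - git clone --depth 1 https://gitlab.freedesktop.org/mesa/shader-db.git
        - make -C shader-db             # build the run tool
        - cd shader-db && ./run -j4 shaders/ > ../shader-db-results.txt
      artifacts:
        paths:
          - shader-db-results.txt
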
Reviewed-by: Eric Engestrom <eric@engestrom.ch>