v2:
* Use LLVM 8 from buster-backports
v3:
* Use LLVM 7 again for armhf, llvmpipe is still broken there with LLVM 8
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
This allows running the regression tests.
One downside is that we can't easily build the Vulkan overlay layer,
because only x86 binaries of the glslang validator are available. If
that's important, we could either use those binaries via qemu, or build
it from source.
v2:
* Add :amd64 suffix to existing debian-9/10 job names (Eric Engestrom)
Acked-by: Eric Engestrom <eric.engestrom@intel.com> # v1
Apparently, a needs: list in a job definition overwrites any inherited one. So
.deqp-test effectively didn't declare needs: for debian-10, which means
any jobs based on .deqp-test could spuriously run after the debian-10
job failed or was cancelled.
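For illustration, the pattern looks roughly like this (job names here are
placeholders, not the exact definitions in .gitlab-ci.yml):

    .use-debian-10:
      needs:
        - debian-10            # wait for the container build

    .deqp-test:
      extends: .use-debian-10
      needs:
        - meson-main           # this list replaces the inherited one entirely,
                               # so debian-10 has to be listed here again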
The one Debian provides is broken in buster+, so I've just written my
own. This allows meson to find the installed zlib and prevents it from
falling back to wraps.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
While at it, rename to singular "container" for consistency.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
It simplifies the definitions of jobs using the Debian 10 image.
The needs: was previously missing from the llvmpipe/softpipe test jobs,
so they could spuriously run if the debian-10 job failed or was
cancelled.
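A sketch of the shape of the shared template (image path and names are
illustrative, not the exact contents of .gitlab-ci.yml):

    .use-debian-10:
      image: $CI_REGISTRY_IMAGE/debian-10:$TAG   # made-up path/tag variable
      needs:
        - debian-10

Jobs using the Debian 10 image (the meson builds, the llvmpipe/softpipe
tests, ...) then just extends: it and pick up both image: and needs: in
one place.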
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
In preparation for testing drivers other than Panfrost in LAVA labs.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Include Panfrost's gitlab-ci.yml file from Mesa's main .gitlab-ci.yml so
we test on devices with Panfrost.
This uses LAVA to schedule jobs on the devices and will be the base for
testing Etnaviv, Lima, etc.
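In .gitlab-ci.yml terms this is just an include: of the driver's own CI
file, something like the following (the path shown is an assumption):

    include:
      - local: '/src/gallium/drivers/panfrost/ci/gitlab-ci.yml'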
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Instead of a single cache shared between all jobs. Also, reduce the
maximum cache size to 1.5G (from 5G).
Rationale for smaller cache:
Pulling & pushing a 5G cache could take a long time. Consider
https://gitlab.freedesktop.org/mesa/mesa/-/jobs/684010 (click the "Show
complete raw" button to see timestamps): Pulling the cache took
1569927241-1569927194 = 47 seconds, pushing it 1569927671-1569927519
= 152 seconds, for a total of 199 seconds. The actual build took a comparable
1569927518-1569927243 = 275 seconds, despite no cache hits from ccache.
In other words, the cache transfers almost doubled the job duration,
and they would have negated any build time benefits from ccache even
with a high cache hit rate.
Also, the smaller caches avoid blowing up storage requirements for them
too much.
Rationale for per-job caches:
Making a single cache significantly smaller might result in cached
build products from one job getting evicted by another job, reducing
the likelihood of cache hits from previous pipelines.
v2:
* Move up "ccache --max-size=1500M" call (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Without this, it was theoretically possible for the jobs to run before
the docker image was ready.
v2:
* Use - list syntax instead of [] (Eric Engestrom)
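i.e. something along these lines (job names are just examples):

    meson-main:
      needs:
        - debian-10        # "- item" list style rather than needs: [debian-10]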
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows most build jobs to run before the stretch or arm64 docker
images are ready.
v2:
* Use - list syntax instead of [] (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows the *-old-llvm jobs to run before the buster docker images
are ready.
v2:
* Use - list syntax instead of [] (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Pros:
* Less fragile due to not mixing packages from stretch and buster
* No longer need to use third-party LLVM packages
* The buster image now uses GCC 8 for C++ as well (previously 6 for C++,
8 for C), allowing us to drop some hacks
Cons:
* The stretch image now only uses GCC 6 for C as well as C++
* Need separate jobs for testing old LLVM versions
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Yes, some tests fail, but we can turn those into XFAILs at meson time.
Better to keep the things that work working than not cover them at all.
Unfortunately XPASS results will not cause the build to fail until we
update CI to meson 0.51 or newer.
Reviewed-by: Daniel Stone <daniels@collabora.com>
This might allow the arm64 tests to start running earlier.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
v2:
* Preserve setting NIR_VALIDATE=0 for all arm64_* jobs
* Preserve setting DEQP_SKIPS=deqp-default-skips.txt for
arm64_a306_gles2 jobs
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com> # v1
Reviewed-by: Eric Anholt <eric@anholt.net>
Support for multiple inheritance was added to GitLab recently.
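i.e. extends: can now take a list of templates, roughly (names are
illustrative):

    arm64_a306_gles2:
      extends:
        - .deqp-test
        - .use-arm64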
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows the arm64_a306_gles2 jobs to run as soon as the meson-arm64
job has finished.
Fixes: 6f0dc087b7 ("freedreno: Introduce gitlab-based CI.")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Since freedreno's kernel and GPU reset seem to be totally solid, we
don't need to have the complexity of the LAVA setup that panfrost has.
Instead, we can register some boards as shared gitlab runners and have
the jobs run out of a docker container just like we do for llvmpipe.
Just make sure that the DRI device node is passed through to the
containers in the gitlab config ('devices = ["/dev/dri"]' under
runners.docker).
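On the gitlab-ci side, the jobs themselves are then ordinary docker jobs
pinned to those runners via tags (the job and tag names below are made
up); the /dev/dri passthrough lives in the runner's config.toml as
quoted above:

    arm64_a630_gles2:
      extends: .deqp-test
      tags:
        - mesa-cheza        # hypothetical tag registered by the cheza runners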
If a runner fails (networking dies, kernel panic, etc.) it'll take out
one build but the rest can keep going since gitlab-runner is what
pulls jobs. Since the runner pulls jobs, it also means that they can
live behind firewalls instead of needing some public address to be
accessed by gitlab.fd.o.
For now, enable it just on db410c (A307) and cheza (A630), as that is
the hardware I have plenty of. A307 is only testing GLES2 since
running all of GLES3 takes too long for the number of boards I've
brought up.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
[ Michel Dänzer: Dropped jessie line from debian-install.sh again ]
This way, the test jobs can start running before all build+test jobs
have finished, once the meson-main job has.
Idea suggested by Daniel Stone on IRC.
See https://docs.gitlab.com/ce/ci/directed_acyclic_graph/ and
https://docs.gitlab.com/ce/ci/yaml/README.html#needs for details.
v2:
* Improve commit log (Daniel Stone, Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
The GLES2 CTS takes about 8 minutes of total runtime (at parallelism 4,
that's ~2 minutes in the test stage if runners are free), while GLES3 takes
about 25. Since the GLES3 run is pretty expensive, just do a cheap
touch test of 1 out of every 10 tests in the test list on MRs, until
we can get the runtime down.
v2: Drop the full run for now until we can bring runtime down or bring
up a dedicated mesa runner.
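A hedged sketch of how the touch test could be wired up — the fraction
variable is hypothetical and stands in for whatever knob the dEQP
wrapper script actually consumes:

    arm64_a630_gles3:
      extends: .deqp-test
      variables:
        DEQP_FRACTION: 10      # hypothetical: run every 10th test in the caselist
      only:
        - merge_requests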
Reviewed-by: Eric Engestrom <eric@engestrom.ch> (v1)
Reviewed-By: Gert Wollny <gert.wollny@collabora.com> (v1)
This is the start of doing CTS tests on merges to Mesa master. We use
the surfaceless platform so that we don't need to bother bringing up
weston or X11. The surface size is kept low to reduce runtime, but
this comes at the cost of many rendering tests skipping due to
too-small render targets (as we see the impact of Mesa on the shared
runner pool, we can reevaluate this and what set of CTS tests we want
to run).
We split the job up across 4 runners (each at 4 llvmpipe threads), so
that the job can load-balance across our shared runners and finish
sooner (since dEQP is very single-thread-performance bound).
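The relevant knobs, roughly (the real job goes through a wrapper script,
so this is only a sketch and the job/script names are made up):

    test-llvmpipe-gles2:
      parallel: 4                  # GitLab runs 4 copies of the job, exposing
                                   # CI_NODE_INDEX / CI_NODE_TOTAL for splitting
      variables:
        LP_NUM_THREADS: 4          # llvmpipe threads per job
        EGL_PLATFORM: surfaceless  # no weston/X11 needed
      script:
        - ./deqp-runner.sh         # passes --deqp-surface-width/height to dEQP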
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Now that we're running the drivers we build, building with
optimization is important for keeping our runtime down. It shaves about
4 minutes of runtime off the GLES2 CTS run on llvmpipe at 64x64.
v2: Only switch meson-main until we enable CTS for other builds
on request by Michel.
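Concretely this just means passing an optimized buildtype to meson for
that job instead of meson's default (unoptimized) debug build; how the
build script forwards it is simplified away here:

    meson-main:
      script:
        - meson _build --buildtype=debugoptimized
        - ninja -C _build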
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
I want to enable CI of llvmpipe out of the meson-main build. So, kick
classic swrast/osmesa to meson-i386, then promote llvmpipe to
meson-main (along with nine, now that classic osmesa isn't keeping it
out of there).
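In meson-option terms the split looks roughly like this (option
spellings from that era of the meson build; how the CI build script
actually passes them is simplified away):

    meson-main:     # gallium llvmpipe, plus nine
      script:
        - meson _build -D gallium-drivers=swrast -D gallium-nine=true

    meson-i386:     # classic swrast and classic OSMesa
      script:
        - meson _build -D dri-drivers=swrast -D osmesa=classic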
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
These could've been deleted a long time ago, but apparently we forgot.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
We also need to update wayland-protocols and libXrandr (and randrproto),
as they are too old for gdk3 (which gtk3 depends on).
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This line was mistakenly added even though there is already a `-D tools=all`
a few lines below.
Fixes: f60defa72d ("gitlab-ci: Add a shader-db run using v3d on drm-shim.")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This provides significant compiler coverage during CI at a fairly low
cost in CPU time (~17s per thread for 4 threads on
gst-gitlab-htz-runner3).
I'm leaving wget in the docker image, as once this is in master I'm
planning on having an automatic shader-db comparison between master
and the branch included in the artifacts. I also haven't done
freedreno yet, because it has some races when run in multithreaded
mode that I'm still tracking down.
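A sketch of the job shape (the shim library name, the paths and the
shader-db invocation are assumptions):

    shader-db:
      needs:
        - meson-main
      script:
        # drm-shim stubs out the v3d kernel interface, so no GPU is needed
        - export LD_PRELOAD=$PWD/install/lib/libv3d_noop_drm_shim.so
        - cd shader-db && ./run -j 4 shaders/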
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
On a build failure, we were tarring up the whole ccache directory,
build.ninja, build products, etc. This was over 400MB compressed on a
recent early meson-main build failure, which fd.o then has to hang on
to for 4 weeks. The build logs are probably the interesting part, are
potentially useful regardless ("how did CI's build flags differ from
mine?"), and are <500k uncompressed on my personal meson build.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
I introduced libdir for cross-builds so we could point at the
resulting drivers without per-arch dependencies, but I'd rather not
have to type x86_64-linux-whatever for non-cross-builds either.
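i.e. force a flat libdir so the driver path doesn't carry the Debian
multiarch triplet, e.g.:

    meson-main:
      script:
        - meson _build -D libdir=lib
        # drivers then land in $PREFIX/lib/dri/ rather than
        # $PREFIX/lib/x86_64-linux-gnu/dri/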
Reviewed-by: Eric Engestrom <eric@engestrom.ch>