# mesa/.gitlab-ci.yml
variables:
FDO_UPSTREAM_REPO: mesa/mesa
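  # The runners eval $CI_PRE_CLONE_SCRIPT before running "git fetch": it
  # downloads a pre-packed git cache tarball from the MinIO server on packet
  # and unpacks it over the possibly out-of-date cache on the runner, so only
  # the delta against the upstream tree has to be pulled over the network.
  # The tarball itself is refreshed by the scheduled "make git archive" job
  # below.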
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
/usr/bin/wget -q -O- ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh | sh -
set +o xtrace
include:
- project: 'freedesktop/ci-templates'
ref: &ci-templates-sha 52dd4a94044449c8481d18dcdc221a3c636366d2
file: '/templates/debian.yml'
- project: 'freedesktop/ci-templates'
ref: *ci-templates-sha
file: '/templates/alpine.yml'
- local: '.gitlab-ci/lava-gitlab-ci.yml'
- local: '.gitlab-ci/test-source-dep.yml'
stages:
- container+docs
- container-2
- git-archive
- deploy
- meson-x86_64
- scons
- meson-misc
- llvmpipe
- softpipe
- freedreno
- panfrost
- radv
- lima
- virgl
- radeonsi
- success
# Generic rule to not run the job during scheduled pipelines
# ----------------------------------------------------------
.scheduled_pipelines-rules:
rules: &ignore_scheduled_pipelines
if: '$CI_PIPELINE_SOURCE == "schedule"'
when: never
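# Jobs opt out of scheduled pipelines by listing this anchor first in their
# own rules (see e.g. "pages" or ".ci-run-policy" below). A minimal sketch,
# with a hypothetical job name:
#
#   some-job:
#     rules:
#       - *ignore_scheduled_pipelines
#       - when: on_success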
.docs-base:
extends: .ci-run-policy
image: alpine
script:
- apk --no-cache add py3-pip graphviz
- pip3 install sphinx sphinx_rtd_theme
- sphinx-build -b html docs public
pages:
extends: .docs-base
stage: deploy
artifacts:
paths:
- public
rules:
- *ignore_scheduled_pipelines
- if: '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_REF_NAME == "master"'
changes: &docs-or-ci
- docs/**/*
- .gitlab-ci.yml
when: always
# Other cases default to never
test-docs:
extends: .docs-base
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
stage: container+docs
rules:
- *ignore_scheduled_pipelines
- if: '$CI_PROJECT_NAMESPACE == "mesa"'
when: never
- if: '$GITLAB_USER_LOGIN == "marge-bot" && $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == $CI_COMMIT_REF_NAME'
changes: *docs-or-ci
when: on_success
- changes: *docs-or-ci
when: manual
# Other cases default to never
# When to automatically run the CI
.ci-run-policy:
rules:
- *ignore_scheduled_pipelines
# If any files affecting the pipeline are changed, build/test jobs run
# automatically once all dependency jobs have passed
- changes: &all_paths
- VERSION
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/meson_get_version.py
- bin/symbols-check.py
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
# Meson
- meson*
- build-support/**/*
- subprojects/**/*
# SCons
- SConstruct
- scons/**/*
- common.py
# Source code
- include/**/*
- src/**/*
when: on_success
# Otherwise, build/test jobs won't run
- when: never
retry:
max: 2
when:
- runner_system_failure
success:
stage: success
image: debian:stable-slim
rules:
- *ignore_scheduled_pipelines
- if: '$CI_PROJECT_NAMESPACE == "mesa"'
when: never
- if: '$GITLAB_USER_LOGIN == "marge-bot"'
changes: *docs-or-ci
when: never
- changes: *all_paths
when: never
- when: on_success
variables:
GIT_STRATEGY: none
script:
- echo "Dummy job to make sure every merge request pipeline runs at least one job"
.ci-deqp-artifacts:
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
# Build the CI docker images.
#
# FDO_DISTRIBUTION_TAG is the tag of the docker image used by later stage jobs. If the
# image doesn't exist yet, the container stage job generates it.
#
# In order to generate a new image, one should generally change the tag.
# While removing the image from the registry would also work, that's not
# recommended except for ephemeral images during development: Replacing
# an image after a significant amount of time might pull in newer
# versions of gcc/clang or other packages, which might break the build
# with older commits using the same tag.
#
# After merging a change resulting in generating a new image to the
# main repository, it's recommended to remove the image from the source
# repository's container registry, so that the image from the main
# repository's registry will be used there as well.
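# A sketch of the usual workflow (the tag value is only an example): after
# editing .gitlab-ci/container/x86_build.sh, bump the tag of the corresponding
# container job so the image is rebuilt and every consumer picks it up via the
# shared anchor:
#
#   x86_build:
#     variables:
#       FDO_DISTRIBUTION_TAG: &x86_build "2020-08-20-my-change"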
.container:
stage: container+docs
extends:
- .ci-run-policy
rules:
- *ignore_scheduled_pipelines
# Run pipeline by default in the main project if any CI pipeline
# configuration files were changed, to ensure docker images are up to date
- if: '$CI_PROJECT_PATH == "mesa/mesa"'
changes:
- .gitlab-ci.yml
- .gitlab-ci/**/*
when: on_success
# Run pipeline by default if it was triggered by Marge Bot, is for a
# merge request, and any files affecting the pipeline were changed
- if: '$GITLAB_USER_LOGIN == "marge-bot" && $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == $CI_COMMIT_REF_NAME'
changes:
*all_paths
when: on_success
# Run pipeline by default in the main project if it was not triggered by
# Marge Bot, and any files affecting the pipeline were changed
- if: '$GITLAB_USER_LOGIN != "marge-bot" && $CI_PROJECT_PATH == "mesa/mesa"'
changes:
*all_paths
when: on_success
# Allow triggering jobs manually in other cases if any files affecting the
# pipeline were changed
- changes:
*all_paths
when: manual
# Otherwise, container jobs won't run
- when: never
variables:
FDO_DISTRIBUTION_VERSION: buster-slim
FDO_REPO_SUFFIX: "debian/$CI_JOB_NAME"
FDO_DISTRIBUTION_EXEC: 'env FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
# no need to pull the whole repo to build the container image
GIT_STRATEGY: none
# Debian 10 based x86 build image base
x86_build-base:
extends:
- .fdo.container-build@debian
- .container
variables:
FDO_DISTRIBUTION_TAG: &x86_build-base "2020-07-28-x86-2"
.use-x86_build-base:
extends:
- x86_build-base
- .ci-run-policy
stage: container-2
variables:
BASE_TAG: *x86_build-base
FDO_BASE_IMAGE: "$CI_REGISTRY_IMAGE/debian/x86_build-base:$BASE_TAG"
needs:
- x86_build-base
# Debian 10 based x86 main build image
x86_build:
extends:
- .use-x86_build-base
variables:
FDO_DISTRIBUTION_TAG: &x86_build "2020-08-08-glvnd"
.use-x86_build:
variables:
TAG: *x86_build
image: "$CI_REGISTRY_IMAGE/debian/x86_build:$TAG"
needs:
- x86_build
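# The same pattern repeats for each image below: the container job defines
# FDO_DISTRIBUTION_TAG through a YAML anchor, the matching .use-* helper
# dereferences that anchor for its image path, and "needs:" orders consumer
# jobs after the container job that (re)builds the image.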
# Debian 10 based i386 cross-build image
i386_build:
extends:
- .use-x86_build-base
variables:
FDO_DISTRIBUTION_TAG: &i386_build "2020-07-28-x86-2"
.use-i386_build:
variables:
TAG: *i386_build
image: "$CI_REGISTRY_IMAGE/debian/i386_build:$TAG"
needs:
- i386_build
# Debian 10 based ppc64el cross-build image
ppc64el_build:
extends:
- .use-x86_build-base
variables:
FDO_DISTRIBUTION_TAG: &ppc64el_build "2020-07-28-x86-2"
.use-ppc64el_build:
variables:
TAG: *ppc64el_build
image: "$CI_REGISTRY_IMAGE/debian/ppc64el_build:$TAG"
needs:
- ppc64el_build
# Debian 10 based s390x cross-build image
s390x_build:
extends:
- .use-x86_build-base
variables:
FDO_DISTRIBUTION_TAG: &s390x_build "2020-07-28-x86-2"
.use-s390x_build:
variables:
TAG: *s390x_build
image: "$CI_REGISTRY_IMAGE/debian/s390x_build:$TAG"
needs:
- s390x_build
# Debian 10 based x86 test image base
x86_test-base:
extends: x86_build-base
variables:
FDO_DISTRIBUTION_TAG: &x86_test-base "2020-07-28-x86-2"
.use-x86_test-base:
extends:
- x86_build-base
- .ci-run-policy
stage: container-2
variables:
BASE_TAG: *x86_test-base
FDO_BASE_IMAGE: "$CI_REGISTRY_IMAGE/debian/x86_test-base:$BASE_TAG"
needs:
- x86_test-base
# Debian 10 based x86 test image for GL
x86_test-gl:
extends: .use-x86_test-base
variables:
FDO_DISTRIBUTION_TAG: &x86_test-gl "2020-07-28-x86-2"
# Debian 10 based x86 test image for VK
x86_test-vk:
extends: .use-x86_test-base
variables:
FDO_DISTRIBUTION_TAG: &x86_test-vk "2020-07-28-x86-2"
# Debian 9 based x86 build image (old LLVM)
x86_build_old:
extends: x86_build-base
variables:
FDO_DISTRIBUTION_TAG: &x86_build_old "2020-07-28-x86-2"
FDO_DISTRIBUTION_VERSION: stretch-slim
.use-x86_build_old:
variables:
TAG: *x86_build_old
image: "$CI_REGISTRY_IMAGE/debian/x86_build_old:$TAG"
needs:
- x86_build_old
# Debian 10 based ARM build image
arm_build:
extends:
- .fdo.container-build@debian@arm64v8
- .container
variables:
FDO_DISTRIBUTION_TAG: &arm_build "2020-08-04-nfs-2"
.use-arm_build:
variables:
TAG: *arm_build
image: "$CI_REGISTRY_IMAGE/debian/arm_build:$TAG"
needs:
- arm_build
# Debian 10 based x86 baremetal image base
arm_test-base:
extends:
- .fdo.container-build@debian
- .container
variables:
FDO_DISTRIBUTION_TAG: &arm_test-base "2020-07-28-libdrm"
.use-arm_test-base:
extends:
- arm_test-base
- .ci-run-policy
stage: container-2
variables:
BASE_TAG: *arm_test-base
FDO_BASE_IMAGE: "$CI_REGISTRY_IMAGE/debian/arm_test-base:$BASE_TAG"
needs:
- arm_test-base
# x86 image with ARM64 rootfs for baremetal testing.
arm64_test:
extends:
- .use-arm_test-base
variables:
FDO_DISTRIBUTION_TAG: &arm64_test "2020-08-04-nfs-2"
.use-arm64_test:
variables:
TAG: *arm64_test
image: "$CI_REGISTRY_IMAGE/debian/arm64_test:$TAG"
needs:
- arm64_test
# Native Windows docker builds
#
# Unlike the Linux-based builds above (including the MinGW/SCons builds that
# cross-compile for Windows), which use the freedesktop ci-templates, we
# cannot use the same scheme here: Windows lacks support for
# Docker-in-Docker, and Podman does not run natively on Windows, so we have
# to open-code much of the same logic ourselves.
#
# This is achieved by first running in a native Windows shell instance
# (host PowerShell) in the container stage to build and push the image,
# then in the build stage by executing inside Docker.
.windows-docker-vs2019:
variables:
WINDOWS_TAG: "2020-05-05-llvm"
WINDOWS_IMAGE: "$CI_REGISTRY_IMAGE/windows/x64_build:$WINDOWS_TAG"
WINDOWS_UPSTREAM_IMAGE: "$CI_REGISTRY/$FDO_UPSTREAM_REPO/windows/x64_build:$WINDOWS_TAG"
.windows_build_vs2019:
extends:
- .container
- .windows-docker-vs2019
stage: container+docs
variables:
GIT_STRATEGY: fetch # we do actually need the full repository though
timeout: 4h # LLVM takes ages
tags:
- windows
- shell
- "1809"
- mesa
script:
- .\.gitlab-ci\windows\mesa_container.ps1 $CI_REGISTRY $CI_REGISTRY_USER $CI_REGISTRY_PASSWORD $WINDOWS_IMAGE $WINDOWS_UPSTREAM_IMAGE
.use-windows_build_vs2019:
extends: .windows-docker-vs2019
image: "$WINDOWS_IMAGE"
needs:
- windows_build_vs2019
git_archive:
extends: .fdo.container-build@alpine
stage: container+docs
rules:
- if: '$CI_PIPELINE_SOURCE == "schedule"'
when: always
variables:
FDO_REPO_SUFFIX: &git-archive-suffix "alpine/git_archive"
FDO_DISTRIBUTION_EXEC: 'pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@6f5af7e5574509726c79109e3c147cee95e81366'
# no need to pull the whole repo to build the container image
GIT_STRATEGY: none
FDO_DISTRIBUTION_TAG: &git-archive-tag "2020-07-07"
FDO_DISTRIBUTION_PACKAGES: git py3-pip
# Git archive
make git archive:
stage: git-archive
extends: .fdo.suffixed-image@alpine
rules:
- if: '$CI_PIPELINE_SOURCE == "schedule"'
when: on_success
# ensure we are running on packet
tags:
- packet.net
variables:
FDO_DISTRIBUTION_TAG: *git-archive-tag
FDO_REPO_SUFFIX: *git-archive-suffix
needs:
- git_archive
script:
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
# login with the JWT token
- ci-fairy minio login $CI_JOB_JWT
- ci-fairy minio cp ../$CI_PROJECT_NAME.tar.gz minio://minio-packet.freedesktop.org/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
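    # The download counterpart lives in .gitlab-ci/download-git-cache.sh (run
    # through CI_PRE_CLONE_SCRIPT above). As a rough sketch only, it boils
    # down to fetching the tarball uploaded here and unpacking it in place:
    #
    #   wget -O- https://minio-packet.freedesktop.org/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz | tar -xz
    #
    # so that the subsequent "git fetch" only has to transfer the delta.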
# BUILD
# Shared between windows and Linux
.build-common:
extends: .ci-run-policy
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
paths:
- _build/meson-logs/*.txt
# scons:
- build/*/config.log
- shader-db
# Just Linux
.build-linux:
extends: .build-common
variables:
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- export PATH="/usr/lib/ccache:$PATH"
- export CCACHE_BASEDIR="$PWD"
- ccache --show-stats
after_script:
- ccache --show-stats
.build-windows:
extends: .build-common
tags:
- windows
- docker
- "1809"
- mesa
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.meson-build:
extends:
- .build-linux
- .use-x86_build
stage: meson-x86_64
variables:
LLVM_VERSION: 9
script:
- .gitlab-ci/meson-build.sh
.scons-build:
extends:
- .build-linux
- .use-x86_build
stage: scons
script:
- env SCONSFLAGS="-j${FDO_CI_CONCURRENT:-4}" .gitlab-ci/scons-build.sh
meson-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11
GALLIUM_ST: >
-D dri3=enabled
GALLIUM_DRIVERS: "swrast,virgl,radeonsi"
VULKAN_DRIVERS: amd
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D werror=true
UPLOAD_FOR_LAVA: 1
DEBIAN_ARCH: amd64
script:
- .gitlab-ci/meson-build.sh
- .gitlab-ci/prepare-artifacts.sh
meson-gallium:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-opencl=disabled
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swr,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink"
EXTRA_OPTION: >
-D osmesa=gallium
-D tools=all
-D werror=true
script:
- .gitlab-ci/meson-build.sh
- .gitlab-ci/run-shader-db.sh
- src/freedreno/.gitlab-ci/run-fdtools.sh
meson-classic:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland,drm,surfaceless
DRI_DRIVERS: "auto"
EXTRA_OPTION: >
-D osmesa=classic
-D tools=all
-D werror=true
meson-android:
extends: .meson-build
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D platforms=android
GALLIUM_DRIVERS: freedreno
VULKAN_DRIVERS: freedreno,intel,amd
EXTRA_OPTION: >
-D android-stub=true
-D werror=true
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
.meson-cross:
extends:
- .meson-build
stage: meson-misc
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D platforms=[]
-D osmesa=none
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
LLVM_VERSION: "8"
.meson-arm:
extends:
- .meson-cross
- .use-arm_build
variables:
VULKAN_DRIVERS: freedreno
GALLIUM_DRIVERS: "etnaviv,freedreno,kmsro,lima,nouveau,panfrost,swrast,tegra,v3d,vc4"
BUILDTYPE: "debugoptimized"
tags:
- aarch64
meson-armhf:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
CROSS: armhf
LLVM_VERSION: "7"
EXTRA_OPTION: >
-D llvm=disabled
UPLOAD_FOR_LAVA: 1
DEBIAN_ARCH: armhf
script:
- .gitlab-ci/meson-build.sh
- .gitlab-ci/prepare-artifacts.sh
meson-arm64:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "freedreno"
EXTRA_OPTION: >
-D llvm=disabled
UPLOAD_FOR_LAVA: 1
DEBIAN_ARCH: arm64
script:
- .gitlab-ci/meson-build.sh
- .gitlab-ci/prepare-artifacts.sh
meson-arm64-build-test:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "amd"
EXTRA_OPTION: >
-Dtools=panfrost
script:
- .gitlab-ci/meson-build.sh
meson-clang:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glvnd=true
DRI_DRIVERS: "auto"
GALLIUM_DRIVERS: "auto"
VULKAN_DRIVERS: intel,amd,freedreno
CC: "ccache clang-9"
CXX: "ccache clang++-9"
.meson-windows-vs2019:
extends:
- .build-windows
- .use-windows_build_vs2019
stage: meson-misc
script:
- . .\.gitlab-ci\windows\mesa_build.ps1
scons-win64:
extends: .scons-build
variables:
SCONS_TARGET: platform=windows machine=x86_64 debug=1
SCONS_CHECK_COMMAND: "true"
allow_failure: true
meson-clover:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
GALLIUM_DRIVERS: "r600,radeonsi"
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=icd
script:
- .gitlab-ci/meson-build.sh
- LLVM_VERSION=8 .gitlab-ci/meson-build.sh
meson-clover-old-llvm:
extends:
- meson-clover
- .use-x86_build_old
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
-D platforms=[]
GALLIUM_DRIVERS: "i915,r600"
script:
- LLVM_VERSION=3.9 .gitlab-ci/meson-build.sh
- LLVM_VERSION=4.0 .gitlab-ci/meson-build.sh
- LLVM_VERSION=5.0 .gitlab-ci/meson-build.sh
- LLVM_VERSION=6.0 .gitlab-ci/meson-build.sh
- LLVM_VERSION=7 .gitlab-ci/meson-build.sh
meson-vulkan:
extends: .meson-build
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D platforms=x11,wayland
-D osmesa=none
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
UBSAN_OPTIONS: "print_stacktrace=1"
VULKAN_DRIVERS: intel,amd,freedreno
EXTRA_OPTION: >
-D vulkan-overlay-layer=true
-D build-aco-tests=true
-D werror=true
meson-i386:
extends:
- .meson-cross
- .use-i386_build
variables:
CROSS: i386
VULKAN_DRIVERS: intel,amd
GALLIUM_DRIVERS: "iris,r300,radeonsi,swrast,virgl"
EXTRA_OPTION: >
-D vulkan-overlay-layer=true
-D werror=true
meson-s390x:
extends:
- .meson-cross
- .use-s390x_build
tags:
- kvm
variables:
CROSS: s390x
EXTRA_OPTION: >
-D werror=true
GALLIUM_DRIVERS: "swrast"
meson-ppc64el:
extends:
- meson-s390x
- .use-ppc64el_build
variables:
CROSS: ppc64el
EXTRA_OPTION: ""
GALLIUM_DRIVERS: "nouveau,radeonsi,swrast,virgl"
VULKAN_DRIVERS: "amd"
meson-mingw32-x86_64:
extends: .meson-build
stage: meson-misc
variables:
UNWIND: "disabled"
DRI_DRIVERS: ""
GALLIUM_DRIVERS: "swrast"
EXTRA_OPTION: >
-Dllvm=disabled
-Dosmesa=gallium
--cross-file=.gitlab-ci/x86_64-w64-mingw32
.test:
extends:
- .ci-run-policy
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
variables:
GIT_STRATEGY: none # testing doesn't build anything from source
before_script:
# Note: Build dir (and thus install) may be dirty due to GIT_STRATEGY
- rm -rf install
- tar -xf artifacts/install.tar
- LD_LIBRARY_PATH=install/lib find install/lib -name "*.so" -print -exec ldd {} \;
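    # artifacts/install.tar is produced by the build jobs that extend
    # .ci-deqp-artifacts (e.g. meson-testing), presumably packed by
    # .gitlab-ci/prepare-artifacts.sh, and reaches the test jobs through
    # their "needs:" on the build job.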
artifacts:
when: always
name: "mesa_${CI_JOB_NAME}"
paths:
- results/
.test-gl:
extends:
- .test
variables:
TAG: *x86_test-gl
image: "$CI_REGISTRY_IMAGE/debian/x86_test-gl:$TAG"
needs:
- meson-testing
- x86_test-gl
.test-vk:
extends:
- .test
variables:
TAG: *x86_test-vk
image: "$CI_REGISTRY_IMAGE/debian/x86_test-vk:$TAG"
needs:
- meson-testing
- x86_test-vk
.piglit-test:
extends:
- .test-gl
- .llvmpipe-rules
artifacts:
when: on_failure
name: "mesa_${CI_JOB_NAME}"
paths:
- summary/
variables:
LIBGL_ALWAYS_SOFTWARE: 1
PIGLIT_NO_WINDOW: 1
script:
- install/piglit/run.sh
piglit-quick_gl:
extends: .piglit-test
variables:
LP_NUM_THREADS: 0
NIR_VALIDATE: 0
PIGLIT_OPTIONS: >
--process-isolation false
-x egl_ext_device_
-x egl_ext_platform_device
-x ext_timer_query@time-elapsed
-x glx-multithread-clearbuffer
-x glx-multithread-shader-compile
-x max-texture-size
-x maxsize
PIGLIT_PROFILES: quick_gl
piglit-glslparser:
extends: .piglit-test
variables:
LP_NUM_THREADS: 0
NIR_VALIDATE: 0
PIGLIT_PROFILES: glslparser
piglit-quick_shader:
extends: .piglit-test
variables:
LP_NUM_THREADS: 1
NIR_VALIDATE: 0
PIGLIT_PROFILES: quick_shader
.deqp-test:
variables:
DEQP_SKIPS: deqp-default-skips.txt
script:
- ./install/deqp-runner.sh
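    # deqp-runner.sh drives the cts_runner (a small C++ runner by Bas): it
    # spawns worker threads (count presumably set by DEQP_PARALLEL) that grab
    # shrinking chunks of a consistently shuffled caselist, restarts dEQP
    # after crashes, and applies the DEQP_SKIPS / DEQP_EXPECTED_FAILS lists.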
.deqp-test-gl:
extends:
- .test-gl
- .deqp-test
.deqp-test-vk:
extends:
- .test-vk
- .deqp-test
variables:
DEQP_VER: vk
.fossilize-test:
extends: .test-vk
script:
- ./install/fossilize-runner.sh
artifacts:
when: on_failure
name: "mesa_${CI_JOB_NAME}"
paths:
- results/
llvmpipe-gles2:
variables:
DEQP_VER: gles2
NIR_VALIDATE: 0
# Don't use threads inside llvmpipe, we've already got all cores
# busy at the deqp-runner level.
LP_NUM_THREADS: 0
DEQP_EXPECTED_FAILS: deqp-llvmpipe-fails.txt
LIBGL_ALWAYS_SOFTWARE: "true"
DEQP_EXPECTED_RENDERER: llvmpipe
extends:
- .deqp-test-gl
- .llvmpipe-rules
softpipe-gles2:
extends:
- llvmpipe-gles2
- .softpipe-rules
variables:
DEQP_EXPECTED_FAILS: deqp-softpipe-fails.txt
DEQP_SKIPS: deqp-softpipe-skips.txt
GALLIUM_DRIVER: "softpipe"
DEQP_EXPECTED_RENDERER: softpipe
softpipe-gles3:
variables:
DEQP_VER: gles3
extends: softpipe-gles2
softpipe-gles31:
parallel: 2
variables:
DEQP_VER: gles31
extends: softpipe-gles2
virgl-gles2-on-gl:
variables:
DEQP_VER: gles2
NIR_VALIDATE: 0
DEQP_NO_SAVE_RESULTS: 1
# Don't use threads inside llvmpipe, we've already got all cores
# busy at the deqp-runner level.
LP_NUM_THREADS: 0
DEQP_EXPECTED_FAILS: deqp-virgl-gl-fails.txt
DEQP_OPTIONS: "--deqp-log-images=disable"
LIBGL_ALWAYS_SOFTWARE: "true"
GALLIUM_DRIVER: "virpipe"
DEQP_EXPECTED_RENDERER: virgl
extends:
- .deqp-test-gl
- .virgl-rules
virgl-gles3-on-gl:
variables:
DEQP_VER: gles3
DEQP_RUNNER_OPTIONS: "--timeout 180"
extends: virgl-gles2-on-gl
virgl-gles31-on-gl:
parallel: 2
variables:
DEQP_VER: gles31
MESA_GLES_VERSION_OVERRIDE: "3.1"
MESA_GLSL_VERSION_OVERRIDE: "310"
MESA_EXTENSION_OVERRIDE: "-GL_OES_tessellation_shader"
extends: virgl-gles3-on-gl
virgl-gl30-on-gl:
variables:
DEQP_VER: gl30
extends: virgl-gles2-on-gl
virgl-gl31-on-gl:
variables:
DEQP_VER: gl31
extends: virgl-gles2-on-gl
virgl-gl32-on-gl:
variables:
DEQP_VER: gl32
extends: virgl-gles2-on-gl
# Rules for tests that should not be present in MRs or the main
# project's pipeline (don't block marge or report red on
# mesa/mesa master) but should be present on pipelines in personal
# branches (so you can opt in to running the flaky test when you want
# to).
.test-manual:
rules:
- *ignore_scheduled_pipelines
- if: '$CI_PROJECT_PATH != "mesa/mesa" && $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME != $CI_COMMIT_REF_NAME'
changes:
*all_paths
when: manual
- when: never
virgl-gles2-on-gles:
variables:
VIRGL_HOST_API: GLES
DEQP_EXPECTED_FAILS: deqp-virgl-gles-fails.txt
extends:
- virgl-gles2-on-gl
- .test-manual
virgl-gles3-on-gles:
variables:
VIRGL_HOST_API: GLES
DEQP_EXPECTED_FAILS: deqp-virgl-gles-fails.txt
extends:
- virgl-gles3-on-gl
- .test-manual
virgl-gles31-on-gles:
variables:
VIRGL_HOST_API: GLES
DEQP_EXPECTED_FAILS: deqp-virgl-gles-fails.txt
extends:
- virgl-gles31-on-gl
- .test-manual
arm64_a630_gles2:
extends:
- arm64_a306_gles2
variables:
BM_KERNEL: /lava-files/cheza-kernel
BM_CMDLINE: "ip=dhcp console=ttyMSM0,115200n8 root=/dev/nfs rw nfsrootdebug nfsroot=,tcp,nfsvers=4.2 init=/init"
DEQP_EXPECTED_FAILS: deqp-freedreno-a630-fails.txt
DEQP_SKIPS: deqp-freedreno-a630-skips.txt
GIT_STRATEGY: none
DEQP_EXPECTED_RENDERER: FD630
DEQP_NO_SAVE_RESULTS: ""
tags:
- google-freedreno-cheza
script:
- ./install/bare-metal/cros-servo.sh
arm64_a630_gles31:
extends: arm64_a630_gles2
variables:
DEQP_VER: gles31
# gles31 is about 12 minutes with validation enabled.
NIR_VALIDATE: 0
arm64_a630_gles3:
extends: arm64_a630_gles2
variables:
DEQP_VER: gles3
# gles3 is about 15 minutes with validation enabled.
NIR_VALIDATE: 0
# We almost always manage to lower UBOs back to constant uploads in
# the test suite, so get a little testing for it here.
arm64_a630_noubo:
extends: arm64_a630_gles31
variables:
DEQP_VER: gles31
IR3_SHADER_DEBUG: nouboopt
DEQP_CASELIST_FILTER: "functional.*ubo"
# The driver does some guessing as to whether to render using gmem
# or bypass, and some GLES3.1 features interact with either one.
# Do a little testing with gmem and bypass forced.
arm64_a630_bypass:
extends: arm64_a630_gles31
variables:
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 5
FD_MESA_DEBUG: nogmem
DEQP_EXPECTED_FAILS: deqp-freedreno-a630-bypass-fails.txt
arm64_a630_traces:
extends:
- arm64_a630_gles2
variables:
BARE_METAL_TEST_SCRIPT: "/install/tracie-runner-gl.sh"
DEVICE_NAME: "freedreno-a630"
TRACIE_NO_UNIT_TESTS: 1
TRACIE_UPLOAD_TO_MINIO: 1
# This lets us run several more traces which don't use any features we're
# missing.
MESA_GLSL_VERSION_OVERRIDE: "460"
MESA_GL_VERSION_OVERRIDE: "4.6"
# Along with checking gmem path, check that we don't get obvious nir
# validation failures (though it's too expensive to have it on for the
# full CTS)
arm64_a630_gmem:
extends: arm64_a630_gles31
variables:
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 5
FD_MESA_DEBUG: nobypass
NIR_VALIDATE: 1
arm64_a630_vk:
extends: arm64_a630_gles2
variables:
DEQP_VER: vk
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 50
VK_DRIVER: freedreno
# Force binning in the main run, which makes sure we render at
# least 2 bins. This is the path that impacts the most different
# features. However, we end up with flaky results in
# dEQP-VK.binding_model.*.geometry and dEQP-VK.glsl.*_vertex.
TU_DEBUG: forcebin
# Do a separate sysmem pass over the testcases that really affect sysmem
# rendering. This is currently very flaky, leave it as an option for devs
# to click play on in their branches.
arm64_a630_vk_sysmem:
extends:
- arm64_a630_vk
variables:
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 10
DEQP_CASELIST_FILTER: "dEQP-VK.renderpass.*"
DEQP_EXPECTED_FAILS: deqp-freedreno-a630-bypass-fails.txt
TU_DEBUG: sysmem
.baremetal-test:
extends:
- .ci-run-policy
- .test
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
stage: test
artifacts:
when: always
name: "mesa_${CI_JOB_NAME}"
paths:
- results/
- serial*.txt
arm64_a306_gles2:
extends:
- .baremetal-test
- .use-arm64_test
- .freedreno-rules
variables:
BM_KERNEL: /lava-files/Image.gz
BM_DTB: /lava-files/apq8016-sbc.dtb
BM_ROOTFS: /lava-files/rootfs-arm64
BM_CMDLINE: "ip=dhcp console=ttyMSM0,115200n8"
FLAKES_CHANNEL: "#freedreno-ci"
BARE_METAL_TEST_SCRIPT: "/install/deqp-runner.sh"
DEQP_EXPECTED_FAILS: deqp-freedreno-a307-fails.txt
DEQP_SKIPS: deqp-freedreno-a307-skips.txt
DEQP_VER: gles2
DEQP_PARALLEL: 4
DEQP_EXPECTED_RENDERER: FD307
# Since we can't get artifacts back yet, skip making them.
DEQP_NO_SAVE_RESULTS: 1
# NIR_VALIDATE is intentionally left enabled (no NIR_VALIDATE=0 here), as a3xx runs its small testsuite fast enough.
script:
- ./install/bare-metal/fastboot.sh
needs:
- arm64_test
- meson-arm64
tags:
- google-freedreno-db410c
# Fractional run, single threaded, due to flaky results
arm64_a306_gles3:
extends:
- arm64_a306_gles2
variables:
DEQP_VER: gles3
DEQP_PARALLEL: 1
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 25
NIR_VALIDATE: 0
# Fractional runs with debug options. Note that since we're not
# hitting the iommu faults, we can run in parallel (derive from gles2, not gles3).
arm64_a306_gles3_options:
extends: arm64_a306_gles2
variables:
DEQP_VER: gles3
script:
# Check that the non-constbuf UBO case works.
- DEQP_RUN_SUFFIX=-nouboopt IR3_SHADER_DEBUG=nouboopt DEQP_CASELIST_FILTER="functional.*ubo" ./install/bare-metal/fastboot.sh
arm64_a530_gles2:
extends:
- arm64_a306_gles2
variables:
BM_KERNEL: /lava-files/db820c-kernel
BM_DTB: /lava-files/db820c.dtb
# Disable SMP because only CPU 0 is at a freq higher than 19 MHz on
# current upstream kernel.
BM_CMDLINE: "ip=dhcp console=ttyMSM0,115200n8 nosmp"
DEQP_EXPECTED_FAILS: deqp-freedreno-a530-fails.txt
DEQP_SKIPS: deqp-freedreno-a530-skips.txt
DEQP_EXPECTED_RENDERER: FD530
NIR_VALIDATE: 0
tags:
- google-freedreno-db820c
arm64_a530_gles3:
extends:
- arm64_a530_gles2
variables:
DEQP_VER: gles3
DEQP_PARALLEL: 1
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 40
arm64_a530_gles31:
extends:
- arm64_a530_gles3
variables:
DEQP_VER: gles31
CI_NODE_INDEX: 1
CI_NODE_TOTAL: 10
# RADV CI
.test-radv:
extends: .radv-rules
stage: radv
variables:
VK_DRIVER: radeon
ACO_DEBUG: validateir,validatera
# Can only be triggered manually on personal branches because RADV is the only
# driver that does Vulkan testing at the moment.
radv_polaris10_vkcts:
extends:
- .deqp-test-vk
- .test-radv
- .test-manual
variables:
DEQP_SKIPS: deqp-radv-polaris10-skips.txt
tags:
- polaris10
radv-fossils:
extends:
- .fossilize-test
- .test-radv
script:
# Pitcairn (GFX6)
- export RADV_FORCE_FAMILY="pitcairn"
- ./install/fossilize-runner.sh
# Bonaire (GFX7)
- export RADV_FORCE_FAMILY="bonaire"
- ./install/fossilize-runner.sh
# Polaris10 (GFX8)
- export RADV_FORCE_FAMILY="polaris10"
- ./install/fossilize-runner.sh
# Vega10 (GFX9)
- export RADV_FORCE_FAMILY="gfx900"
- ./install/fossilize-runner.sh
# Navi10 (GFX10)
- export RADV_FORCE_FAMILY="gfx1010"
- ./install/fossilize-runner.sh
# Traces CI
.traces-test:
cache:
key: ${CI_JOB_NAME}
paths:
- traces-db/
variables:
TRACIE_UPLOAD_TO_MINIO: 1
.traces-test-gl:
extends:
- .test-gl
- .traces-test
script:
- ./install/tracie-runner-gl.sh
.traces-test-vk:
extends:
- .test-vk
- .traces-test
script:
- ./install/tracie-runner-vk.sh
llvmpipe-traces:
extends:
- .traces-test-gl
- .llvmpipe-rules
variables:
LIBGL_ALWAYS_SOFTWARE: "true"
GALLIUM_DRIVER: "llvmpipe"
DEVICE_NAME: "gl-vmware-llvmpipe"
radv-polaris10-traces:
extends:
- .traces-test-vk
- .test-radv
- .test-manual
variables:
DEVICE_NAME: "vk-amd-polaris10"
tags:
- polaris10
radv-raven-traces:
extends:
- .traces-test-vk
- .test-radv
- .test-manual
variables:
DEVICE_NAME: "vk-amd-raven"
tags:
- raven
virgl-traces:
extends:
- .traces-test-gl
- .virgl-rules
variables:
LIBGL_ALWAYS_SOFTWARE: "true"
GALLIUM_DRIVER: "virpipe"
DEVICE_NAME: "gl-virgl"
MESA_GLES_VERSION_OVERRIDE: "3.1"
MESA_GLSL_VERSION_OVERRIDE: "310"