variables:
  FDO_UPSTREAM_REPO: mesa/mesa
  MESA_TEMPLATES_COMMIT: &ci-templates-commit 290b79e0e78eab67a83766f4e9691be554fc4afd
  CI_PRE_CLONE_SCRIPT: |-
          set -o xtrace
          wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
          bash download-git-cache.sh
          rm download-git-cache.sh
          set +o xtrace
  CI_JOB_JWT_FILE: /minio_jwt
  MINIO_HOST: minio-packet.freedesktop.org
  # per-pipeline artifact storage on MinIO
  PIPELINE_ARTIFACTS_BASE: ${MINIO_HOST}/artifacts/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
  # per-job artifact storage on MinIO
  JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
  # reference images stored for traces
  PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${MINIO_HOST}/mesa-tracie-results/$FDO_UPSTREAM_REPO"
  # Individual CI farm status, set to "offline" to disable jobs
  # running on a particular CI farm (ie. for outages, etc):
  FD_FARM: "online"
  COLLABORA_FARM: "online"
  MICROSOFT_FARM: "online"
  LIMA_FARM: "online"
  IGALIA_FARM: "online"
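# The CI_PRE_CLONE_SCRIPT above seeds each runner with a tarball of the
# repository before "git fetch" runs, so only the delta is transferred.
# A minimal local simulation of that pack/unpack flow, with invented
# paths (this is not the real download-git-cache.sh):

```shell
#!/bin/sh
# Local simulation of the git-cache flow: pack a tree into a tarball
# (as the "make git archive" job does), then unpack it into a fresh
# build directory the way the pre-clone script seeds the runner.
# All paths here are illustrative.
set -eu

workdir=$(mktemp -d)
mkdir -p "$workdir/repo/.git"
echo "ref: refs/heads/main" > "$workdir/repo/.git/HEAD"

# "Upload" side: compress the current folder.
tar -C "$workdir/repo" -czf "$workdir/cache.tar.gz" .

# "Pre-clone" side: unpack the cache into the job's build directory;
# a later "git fetch" would then only pull the missing commits.
mkdir -p "$workdir/build"
tar -C "$workdir/build" -xzf "$workdir/cache.tar.gz"

cat "$workdir/build/.git/HEAD"
```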

default:
  before_script:
    - echo -e "\e[0Ksection_start:$(date +%s):unset_env_vars_section[collapsed=true]\r\e[0KUnsetting vulnerable environment variables"
    - echo -n "${CI_JOB_JWT}" > "${CI_JOB_JWT_FILE}"
    - unset CI_JOB_JWT
    - echo -e "\e[0Ksection_end:$(date +%s):unset_env_vars_section\r\e[0K"

  after_script:
    - >
      set +x

      test -e "${CI_JOB_JWT_FILE}" &&
      export CI_JOB_JWT="$(<${CI_JOB_JWT_FILE})" &&
      rm "${CI_JOB_JWT_FILE}"

  # Retry build or test jobs up to twice when the gitlab-runner itself fails somehow.
  retry:
    max: 2
    when:
      - runner_system_failure
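# The before_script/after_script pair above hides CI_JOB_JWT from the
# job's environment while keeping it available on disk. A self-contained
# sketch of the same dance (token value and file path are made up for
# illustration, not real credentials):

```shell
#!/bin/sh
# Stash a secret in a file, unset the variable so spawned processes
# cannot leak it, then restore it afterwards.
set -eu

CI_JOB_JWT="example-token"
CI_JOB_JWT_FILE=$(mktemp)

# before_script: write the token to a file, drop it from the environment
printf '%s' "$CI_JOB_JWT" > "$CI_JOB_JWT_FILE"
unset CI_JOB_JWT

# ... the job script runs here without the token in its environment ...

# after_script: read the token back and remove the file
test -e "$CI_JOB_JWT_FILE" &&
  CI_JOB_JWT=$(cat "$CI_JOB_JWT_FILE") &&
  rm "$CI_JOB_JWT_FILE"

echo "$CI_JOB_JWT"
```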

include:
  - project: 'freedesktop/ci-templates'
    ref: 34f4ade99434043f88e164933f570301fd18b125
    file:
      - '/templates/ci-fairy.yml'
  - project: 'freedesktop/ci-templates'
    ref: *ci-templates-commit
    file:
      - '/templates/debian.yml'
      - '/templates/fedora.yml'
  - local: '.gitlab-ci/image-tags.yml'
  - local: '.gitlab-ci/lava/lava-gitlab-ci.yml'
  - local: '.gitlab-ci/container/gitlab-ci.yml'
  - local: '.gitlab-ci/build/gitlab-ci.yml'
  - local: '.gitlab-ci/test/gitlab-ci.yml'
  - local: '.gitlab-ci/test-source-dep.yml'
  - local: 'src/amd/ci/gitlab-ci.yml'
  - local: 'src/broadcom/ci/gitlab-ci.yml'
  - local: 'src/etnaviv/ci/gitlab-ci.yml'
  - local: 'src/freedreno/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/crocus/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/d3d12/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/i915/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/lima/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/llvmpipe/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/nouveau/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/radeonsi/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/softpipe/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/virgl/ci/gitlab-ci.yml'
  - local: 'src/gallium/drivers/zink/ci/gitlab-ci.yml'
  - local: 'src/gallium/frontends/lavapipe/ci/gitlab-ci.yml'
  - local: 'src/intel/ci/gitlab-ci.yml'
  - local: 'src/microsoft/ci/gitlab-ci.yml'
  - local: 'src/panfrost/ci/gitlab-ci.yml'

stages:
  - sanity
  - container
  - git-archive
  - build-x86_64
  - build-misc
  - amd
  - intel
  - nouveau
  - arm
  - broadcom
  - freedreno
  - etnaviv
  - software-renderer
  - layered-backends
  - deploy

# YAML anchors for rule conditions
# --------------------------------
.rules-anchors:
  rules:
    # Pipeline for forked project branch
    - if: &is-forked-branch '$CI_COMMIT_BRANCH && $CI_PROJECT_NAMESPACE != "mesa"'
      when: manual
    # Forked project branch / pre-merge pipeline not for Marge bot
    - if: &is-forked-branch-or-pre-merge-not-for-marge '$CI_PROJECT_NAMESPACE != "mesa" || ($GITLAB_USER_LOGIN != "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event")'
      when: manual
    # Pipeline runs for the main branch of the upstream Mesa project
    - if: &is-mesa-main '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH && $CI_COMMIT_BRANCH'
      when: always
    # Post-merge pipeline
    - if: &is-post-merge '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_BRANCH'
      when: on_success
    # Post-merge pipeline, not for Marge Bot
    - if: &is-post-merge-not-for-marge '$CI_PROJECT_NAMESPACE == "mesa" && $GITLAB_USER_LOGIN != "marge-bot" && $CI_COMMIT_BRANCH'
      when: on_success
    # Pre-merge pipeline
    - if: &is-pre-merge '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success
    # Pre-merge pipeline for Marge Bot
    - if: &is-pre-merge-for-marge '$GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success

.docs-base:
  extends:
    - .fdo.ci-fairy
    - .build-rules
  script:
    - apk --no-cache add graphviz doxygen
    - pip3 install sphinx breathe mako sphinx_rtd_theme
    - docs/doxygen-wrapper.py --out-dir=docs/doxygen_xml
    - sphinx-build -W -b html docs public

pages:
  extends: .docs-base
  stage: deploy
  artifacts:
    paths:
      - public
  needs: []
  rules:
    - !reference [.no_scheduled_pipelines-rules, rules]
    - if: *is-mesa-main
      changes: &docs-or-ci
        - docs/**/*
        - .gitlab-ci.yml
      when: always
    # Other cases default to never

test-docs:
  extends: .docs-base
  # Cancel job if a newer commit is pushed to the same branch
  interruptible: true
  stage: deploy
  needs: []
  rules:
    - !reference [.no_scheduled_pipelines-rules, rules]
    - if: *is-forked-branch
      changes: *docs-or-ci
      when: manual
    # Other cases default to never

test-docs-mr:
  extends:
    - test-docs
  needs:
    - sanity
  artifacts:
    expose_as: 'Documentation preview'
    paths:
      - public/
  rules:
    - if: *is-pre-merge
      changes: *docs-or-ci
      when: on_success
    # Other cases default to never

# When to automatically run the CI for build jobs
.build-rules:
  rules:
    - !reference [.no_scheduled_pipelines-rules, rules]
    # If any files affecting the pipeline are changed, build/test jobs run
    # automatically once all dependency jobs have passed
    - changes: &all_paths
        - VERSION
        - bin/git_sha1_gen.py
        - bin/install_megadrivers.py
        - bin/meson_get_version.py
        - bin/symbols-check.py
        # GitLab CI
        - .gitlab-ci.yml
        - .gitlab-ci/**/*
        # Meson
        - meson*
        - build-support/**/*
        - subprojects/**/*
        # Source code
        - include/**/*
        - src/**/*
      when: on_success
    # Otherwise, build/test jobs won't run because no rule matched.

.ci-deqp-artifacts:
  artifacts:
    name: "mesa_${CI_JOB_NAME}"
    when: always
    untracked: false
    paths:
      # Watch out! Artifacts are relative to the build dir.
      # https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
      - artifacts
      - _build/meson-logs/*.txt
      - _build/meson-logs/strace

.container-rules:
  rules:
    - !reference [.no_scheduled_pipelines-rules, rules]
    # Run pipeline by default in the main project if any CI pipeline
    # configuration files were changed, to ensure docker images are up to date
    - if: *is-post-merge
      changes:
        - .gitlab-ci.yml
        - .gitlab-ci/**/*
      when: on_success
    # Run pipeline by default if it was triggered by Marge Bot, is for a
    # merge request, and any files affecting the pipeline were changed
    - if: *is-pre-merge-for-marge
      changes:
        *all_paths
      when: on_success
    # Run pipeline by default in the main project if it was not triggered by
    # Marge Bot, and any files affecting the pipeline were changed
    - if: *is-post-merge-not-for-marge
      changes:
        *all_paths
      when: on_success
    # Allow triggering jobs manually in other cases if any files affecting the
    # pipeline were changed
    - changes:
        *all_paths
      when: manual
    # Otherwise, container jobs won't run because no rule matched.

# Git archive

make git archive:
  extends:
    - .fdo.ci-fairy
  stage: git-archive
  rules:
    - !reference [.scheduled_pipeline-rules, rules]
  # ensure we are running on packet
  tags:
    - packet.net
  script:
    # Compactify the .git directory
    - git gc --aggressive
    # compress the current folder
    - tar -cvzf ../$CI_PROJECT_NAME.tar.gz .

    # login with the JWT token file
    - ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}"
    - ci-fairy minio cp ../$CI_PROJECT_NAME.tar.gz minio://$MINIO_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
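# Because each runner starts from the unpacked cache, the subsequent
# "git fetch" only transfers commits added since the archive was made.
# A small demonstration with throwaway local repositories (repository
# names are invented for the example; requires git):

```shell
#!/bin/sh
# Show that a clone seeded from an older copy only needs to fetch the
# delta: "upstream" gains one commit after the cache snapshot, and the
# seeded "runner" checkout fetches just that missing commit.
set -eu

tmp=$(mktemp -d)
cd "$tmp"

git init -q upstream
git -C upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m base

# Take the "cache" snapshot, then let upstream move ahead.
cp -r upstream cache
git -C upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m newer

# Seed the runner from the cache; the fetch only pulls the new commit.
cp -r cache runner
git -C runner fetch -q ../upstream
git -C runner rev-list --count FETCH_HEAD
```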

# Sanity checks of MR settings and commit logs
sanity:
  extends:
    - .fdo.ci-fairy
  stage: sanity
  rules:
    - if: *is-pre-merge
      when: on_success
    # Other cases default to never
  variables:
    GIT_STRATEGY: none
  script:
    # ci-fairy check-commits --junit-xml=check-commits.xml
    - ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
  artifacts:
    when: on_failure
    reports:
      junit: check-*.xml

# Rules for tests that should not block merging, but should be available to
# optionally run with the "play" button in the UI in pre-merge non-marge
# pipelines. This should appear in "extends:" after any includes of
# test-source-dep.yml rules, so that these rules replace those.
.test-manual-mr:
  rules:
    - !reference [.no_scheduled_pipelines-rules, rules]
    - if: *is-forked-branch-or-pre-merge-not-for-marge
      changes:
        *all_paths
      when: manual
  variables:
    JOB_TIMEOUT: 80