Just as the S3 bucket where the kernel image is stored has been identified and
labeled, the other buckets in use can also be identified and labeled.
cc: mesa-stable
Signed-off-by: Sergi Blanch Torne <sergi.blanch.torne@collabora.com>
Co-developed-by: Guilherme Gallo <guilherme.gallo@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28979>
Retrying failed downloads instead of failing outright makes the jobs more reliable.
Options:
-L, follow redirects
--retry, number of retries
--retry-all-errors, retry on all errors, not only transient ones; curl only
treats HTTP server errors as errors when -f is given, which is why it is there
-f, fail fast with no output at all on server errors
--retry-delay, make curl sleep this amount of time before each retry
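Taken together, a download in a CI script could look like this sketch (the
URL, output name and retry values are illustrative, not taken from the
actual scripts):

  # Hypothetical invocation; URL, file name and numbers are placeholders.
  curl -L -f --retry 4 --retry-delay 60 --retry-all-errors \
    "https://example.com/artifact.tar.gz" -o artifact.tar.gz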
Signed-off-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/20788>
Over the last 7 days, git pulls represented a total of 1.7 TB.
Of those 1.7 TB, we can see:
- ~300 GB for the CI farm on Hetzner
- ~730 GB for the CI farm on packet.net
- ~680 GB for the rest of the world
We cannot really change the rest of the world*, but we can
certainly reduce the egress costs towards our CI farms.
Right now, the GitLab runners are not doing a good job at
caching the git trees for the various jobs we run, and
we end up with a lot of cache misses. A typical pipeline
ends up with a good 2.8 GB of git pull data (a compressed
archive of the mesa folder accounts for 280 MB).
In this patch, we implement what was suggested in
https://gitlab.com/gitlab-org/gitlab/-/issues/215591#note_334642576
- we host a brand new MinIO server on Packet
- jobs can upload files to 2 locations:
* git-cache/<namespace>/<project>/<branch-name>.tar.gz
* artifacts/<namespace>/<project>/<pipeline-id>/
- the authorization is handled by GitLab with short-lived tokens
valid only while the job is running
- whenever a job runs, the runners are configured to execute
(eval) $CI_PRE_CLONE_SCRIPT; see the sketch after this list
- this variable is set globally to download the current cache
from the MinIO server on Packet, unpack it, and replace the
possibly out-of-date cache found on the runner
- then git fetch is run by the runner, and only the delta
between the upstream tree and the local tree gets pulled.
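As a sketch, $CI_PRE_CLONE_SCRIPT could boil down to something like the
following (the host name, bucket layout and archive layout are assumptions,
not the exact script):

  # Hypothetical pre-clone script; host, paths and branch are placeholders.
  url="https://minio.example.net/git-cache/mesa/mesa/main.tar.gz"
  if curl -sfL "$url" -o /tmp/git-cache.tar.gz; then
    rm -rf "$CI_PROJECT_DIR"
    mkdir -p "$CI_PROJECT_DIR"
    # Assumes the archive has a single top-level directory (the mesa folder).
    tar -xzf /tmp/git-cache.tar.gz -C "$CI_PROJECT_DIR" --strip-components=1
    rm -f /tmp/git-cache.tar.gz
  fi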
We can rebuild the git cache in a scheduled job (once a day
seems sufficient), and then we can stop the cache misses
entirely, as sketched below.
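The scheduled rebuild could be as simple as this sketch (the mc alias
"cache" and the branch name are assumptions):

  # Hypothetical scheduled job; "cache" is an assumed mc alias for the
  # MinIO server, and the branch name is a placeholder.
  git clone https://gitlab.freedesktop.org/mesa/mesa.git mesa
  tar -czf main.tar.gz mesa
  mc cp main.tar.gz cache/git-cache/mesa/mesa/main.tar.gz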
First results showed that instead of pulling 280 MB of data
in my fork, I got a pull of only 250 KB. That should help us.
* arguably, there are other farms in the rest of the world, so
hopefully we can change those too.
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5428>