I’ve got a few CI/CD jobs running on GitLab that produce long logs, which in turn get truncated with a message like this:

Job's log exceeded limit of 4194304 bytes.

There’s a fundamental problem with this: the most interesting stuff (like errors!) normally happens towards the end of a job, so if something breaks it will inevitably do so after the log has been truncated and I won’t be able to see what actually went wrong.
Fortunately this is easily fixed, provided that you have access to the configuration for your GitLab Runner. At Fathom Data we’ve got a dedicated EC2 instance running GitLab Runner in Docker.
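For context, a Dockerised runner like this is typically launched along the lines of the official GitLab documentation, with the configuration directory mounted from the host. This is just a sketch (the host path is the conventional one from the docs and the container name is an assumption); the exact command on our instance may differ.

# Typical way to launch GitLab Runner in Docker, mounting the
# configuration directory from the host (paths are assumptions).
docker run -d --name gitlab-runner --restart always \
    -v /srv/gitlab-runner/config:/etc/gitlab-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gitlab/gitlab-runner:latest

The detail that matters here is the volume for /etc/gitlab-runner, which is where config.toml lives. You can check that the runner container is up with docker ps.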
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
CONTAINER ID   IMAGE                         NAMES
6e14e22024a9   gitlab/gitlab-runner:latest   gitlab-runner
First we’ll need to fire up Bash in that container.
docker exec -it gitlab-runner /bin/bash
Now edit the GitLab Runner configuration in config.toml.
vim /etc/gitlab-runner/config.toml
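Alternatively, if the configuration directory is volume-mounted from the host (as in the launch sketch above), you could skip the container entirely and edit the file on the host. The path below assumes the conventional mount point from the GitLab documentation; yours may well differ.

# Edit the same file via the host-side mount (path is an assumption).
sudo vim /srv/gitlab-runner/config/config.toml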
Either way, find the section for the runner in question and insert an output_limit entry (units are kilobytes, with a default of 4096, which corresponds to the 4194304 bytes in the message above). I bumped mine up to 16384 (16 MB) and this proved to be sufficient. YMMV.
[[runners]]
  name = "Verbose"
  url = "https://gitlab.com/"
  executor = "docker"
  output_limit = 16384
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
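GitLab Runner should pick up changes to config.toml on its own, so the new limit ought to apply to the next job without any further action. If you’d rather not rely on that, restarting the container is a cheap way to be certain. This is a precaution rather than something that was strictly necessary in my case.

# Leave the container shell...
exit
# ...then, back on the host, bounce the runner container.
docker restart gitlab-runner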