“500 Internal Server Error” when uploading GitLab job artifacts to MinIO
I'm running gitlab-ce on-prem with MinIO as a local S3-compatible service. CI/CD caching works, and basic connectivity to MinIO is good. (Versions: gitlab-ce:13.9.2-ce.0, gitlab-runner:v13.9.0, and minio/minio:latest, currently c253244b6fb0.)
Is there additional configuration needed to distinguish job artifacts from pipeline artifacts when storing them in on-prem S3-compatible object storage?
In my test repo, the "build" stage builds a sparse R package. When I was using local in-GitLab job artifacts, it succeeded and moved on to the "test" and "deploy" stages with no problems. (That also works with S3-stored cache, though that configuration lives solely within gitlab-runner.) Now that I've configured MinIO as local S3-compatible object storage for artifacts, though, it fails:
...
Created cache
Uploading artifacts for successful job
Uploading artifacts...
/builds/git/mygroup/citest/ci/build/*.tar.gz: found 1 matching files and directories
/builds/git/mygroup/citest/ci/lib: found 67 matching files and directories
WARNING: Uploading artifacts as "archive" to coordinator... failed id=397 responseStatus=500 Internal Server Error status=500 token=q42snHs9
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "archive" to coordinator... failed id=397 responseStatus=500 Internal Server Error status=500 token=q42snHs9
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "archive" to coordinator... failed id=397 responseStatus=500 Internal Server Error status=500 token=q42snHs9
FATAL: invalid argument
Cleaning up file based variables
ERROR: Job failed: exit code 1
The only failure is the artifact-upload step; with in-GitLab artifacts, all of this succeeded. I can find no reference to "invalid argument" in the MinIO logs (which say nothing during this window), so I'm not certain it is a MinIO problem.
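(For completeness, the 500 originates on the GitLab side, so the Omnibus logs are the other place to watch while re-running the job; a quick sketch, assuming standard Omnibus log locations:)

sudo gitlab-ctl tail gitlab-rails      # Rails API log, where the 500 is raised
sudo gitlab-ctl tail gitlab-workhorse  # the upload proxy between runner and object storage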
The relevant portion of the gitlab config:
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['proxy_download'] = false
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'host' => "minio.mydomain.com",
  # 'region' => '',
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>',
  'path_style' => true
}
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts-storage'
gitlab_rails['object_store']['objects']['external_diffs']['enabled'] = false
gitlab_rails['object_store']['objects']['lfs']['enabled'] = false
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads-storage'
gitlab_rails['object_store']['objects']['packages']['enabled'] = false
gitlab_rails['object_store']['objects']['dependency_proxy']['enabled'] = false
gitlab_rails['object_store']['objects']['terraform_state']['enabled'] = false
gitlab_rails['object_store']['objects']['pages']['enabled'] = false
That configuration is adapted from https://docs.gitlab.com/ee/administration/object_storage.html, deselecting storage of components I don't think I need. I added 'path_style' => true because without it, the default virtual-host form bucket.minio.mydomain.com did not resolve correctly (so this setup is mostly S3-compatible, not perfectly AWS). I have also tried this with 'proxy_download' set to true; no change.
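For reference, the two S3 addressing modes differ only in where the bucket name appears (illustrative URLs based on the hostnames above):

# virtual-host style (the default): bucket as a subdomain
#   https://gitlab-artifacts-storage.minio.mydomain.com/<object-key>
# path style ('path_style' => true): bucket in the path
#   https://minio.mydomain.com/gitlab-artifacts-storage/<object-key>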
I'm logged into the MinIO console, and the gitlab-artifacts-storage bucket exists. In fact, after these failed "build" tests, I can see newly created job artifacts stored in this bucket (as job.log), so I know that basic connectivity (i.e., the access key and secret) works.
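(A quick way to double-check the credentials and bucket from outside GitLab is the MinIO client; a sketch assuming mc is installed — the alias name "local" is arbitrary, and older mc releases spell the first command mc config host add:)

mc alias set local https://minio.mydomain.com "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"
mc ls local/gitlab-artifacts-storage   # lists the bucket, e.g. the stray job.log objects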
In my research, I've seen similar errors linked to nginx reverse-proxy issues, which suggests that something in the traefik configuration (or just its presence) might be a factor. Here traefik is merely passing traffic; it does no path translation or stripping. Regardless, all of the reverse-proxy discussion I found was about nginx, often attempting path-munging of some sort, and was resolved with some seemingly unrelated change to the nginx configuration. I haven't found anything that maps onto the traefik domain.
traefik access.log
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44742","ClientHost":"172.19.0.1","ClientPort":"44742","ClientUsername":"-","DownstreamContentSize":329,"DownstreamStatus":200,"DownstreamStatusLine":"200 OK","Duration":3472517,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":329,"OriginDuration":3422850,"OriginStatus":200,"OriginStatusLine":"200 OK","Overhead":49667,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131470,"RequestHost":"minio.mydomain.com","RequestLine":"POST /gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6?uploads HTTP/1.1","RequestMethod":"POST","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6?uploads","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:53.890788923Z","StartUTC":"2021-03-18T20:37:53.890788923Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"329","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Content-Type":"application/xml","downstream_Date":"Thu, 18 Mar 2021 20:37:53 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A45D8DDC4B3","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"329","origin_Content-Security-Policy":"block-all-mixed-content","origin_Content-Type":"application/xml","origin_Date":"Thu, 18 Mar 2021 20:37:53 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A45D8DDC4B3","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203753Z","time":"2021-03-18T20:37:53Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44748","ClientHost":"172.19.0.1","ClientPort":"44748","ClientUsername":"-","DownstreamContentSize":467,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":1571234,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":467,"OriginDuration":1525159,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":46075,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131472,"RequestHost":"minio.mydomain.com","RequestLine":"GET /gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6?X-Amz-Expires=15300\u0026X-Amz-Date=20210318T203753Z\u0026X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026X-Amz-Credential=MyS3AccessKey%2F20210318%2F%2Fs3%2Faws4_request\u0026X-Amz-SignedHeaders=host\u0026X-Amz-Signature=SomeSignature HTTP/1.1","RequestMethod":"GET","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6?X-Amz-Expires=15300\u0026X-Amz-Date=20210318T203753Z\u0026X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026X-Amz-Credential=MyS3AccessKey%2F20210318%2F%2Fs3%2Faws4_request\u0026X-Amz-SignedHeaders=host\u0026X-Amz-Signature=SomeSignature","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:54.090024071Z","StartUTC":"2021-03-18T20:37:54.090024071Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"467","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Content-Type":"application/xml","downstream_Date":"Thu, 18 Mar 2021 20:37:54 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A45E4BD6115","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"467","origin_Content-Security-Policy":"block-all-mixed-content","origin_Content-Type":"application/xml","origin_Date":"Thu, 18 Mar 2021 20:37:54 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A45E4BD6115","origin_X-Xss-Protection":"1; mode=block","request_User-Agent":"Go-http-client/1.1","time":"2021-03-18T20:37:54Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44754","ClientHost":"172.19.0.1","ClientPort":"44754","ClientUsername":"-","DownstreamContentSize":0,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":1205368,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":0,"OriginDuration":1145885,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":59483,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131473,"RequestHost":"minio.mydomain.com","RequestLine":"HEAD /gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6 HTTP/1.1","RequestMethod":"HEAD","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:54.162829022Z","StartUTC":"2021-03-18T20:37:54.162829022Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"0","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Date":"Thu, 18 Mar 2021 20:37:54 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A45E9114D33","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"0","origin_Content-Security-Policy":"block-all-mixed-content","origin_Date":"Thu, 18 Mar 2021 20:37:54 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A45E9114D33","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203754Z","time":"2021-03-18T20:37:54Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44758","ClientHost":"172.19.0.1","ClientPort":"44758","ClientUsername":"-","DownstreamContentSize":0,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":1087332,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":0,"OriginDuration":1031618,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":55714,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131474,"RequestHost":"minio.mydomain.com","RequestLine":"HEAD /gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6 HTTP/1.1","RequestMethod":"HEAD","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:54.206750001Z","StartUTC":"2021-03-18T20:37:54.206750001Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"0","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Date":"Thu, 18 Mar 2021 20:37:54 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A45EBAE7A4E","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"0","origin_Content-Security-Policy":"block-all-mixed-content","origin_Date":"Thu, 18 Mar 2021 20:37:54 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A45EBAE7A4E","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203754Z","time":"2021-03-18T20:37:54Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44762","ClientHost":"172.19.0.1","ClientPort":"44762","ClientUsername":"-","DownstreamContentSize":0,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":1126408,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":0,"OriginDuration":1068170,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":58238,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131475,"RequestHost":"minio.mydomain.com","RequestLine":"HEAD /gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6 HTTP/1.1","RequestMethod":"HEAD","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:54.243629562Z","StartUTC":"2021-03-18T20:37:54.243629562Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"0","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Date":"Thu, 18 Mar 2021 20:37:54 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A45EDE0062E","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"0","origin_Content-Security-Policy":"block-all-mixed-content","origin_Date":"Thu, 18 Mar 2021 20:37:54 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A45EDE0062E","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203754Z","time":"2021-03-18T20:37:54Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44766","ClientHost":"172.19.0.1","ClientPort":"44766","ClientUsername":"-","DownstreamContentSize":0,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":1279861,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":0,"OriginDuration":1227773,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":52088,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131476,"RequestHost":"minio.mydomain.com","RequestLine":"HEAD /gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6 HTTP/1.1","RequestMethod":"HEAD","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099873-1316-0002-0683-2dfe06e7a451447ff0b4a5518c8e19c6","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:54.277572935Z","StartUTC":"2021-03-18T20:37:54.277572935Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"0","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Date":"Thu, 18 Mar 2021 20:37:54 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A45EFE57F90","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"0","origin_Content-Security-Policy":"block-all-mixed-content","origin_Date":"Thu, 18 Mar 2021 20:37:54 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A45EFE57F90","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203754Z","time":"2021-03-18T20:37:54Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44774","ClientHost":"172.19.0.1","ClientPort":"44774","ClientUsername":"-","DownstreamContentSize":329,"DownstreamStatus":200,"DownstreamStatusLine":"200 OK","Duration":4317260,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":329,"OriginDuration":4261951,"OriginStatus":200,"OriginStatusLine":"200 OK","Overhead":55309,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131480,"RequestHost":"minio.mydomain.com","RequestLine":"POST /gitlab-artifacts-storage/tmp/uploads/1616099875-1422-0002-1921-4427532294f444d4582d12ec3b75ba3a?uploads HTTP/1.1","RequestMethod":"POST","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099875-1422-0002-1921-4427532294f444d4582d12ec3b75ba3a?uploads","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:55.503920248Z","StartUTC":"2021-03-18T20:37:55.503920248Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"329","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Content-Type":"application/xml","downstream_Date":"Thu, 18 Mar 2021 20:37:55 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A4639004631","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"329","origin_Content-Security-Policy":"block-all-mixed-content","origin_Content-Type":"application/xml","origin_Date":"Thu, 18 Mar 2021 20:37:55 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A4639004631","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203755Z","time":"2021-03-18T20:37:55Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44778","ClientHost":"172.19.0.1","ClientPort":"44778","ClientUsername":"-","DownstreamContentSize":467,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":1879630,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":467,"OriginDuration":1830988,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":48642,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131482,"RequestHost":"minio.mydomain.com","RequestLine":"GET /gitlab-artifacts-storage/tmp/uploads/1616099875-1422-0002-1921-4427532294f444d4582d12ec3b75ba3a?X-Amz-Expires=15300\u0026X-Amz-Date=20210318T203755Z\u0026X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026X-Amz-Credential=MyS3AccessKey%2F20210318%2F%2Fs3%2Faws4_request\u0026X-Amz-SignedHeaders=host\u0026X-Amz-Signature=SomeSignature HTTP/1.1","RequestMethod":"GET","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099875-1422-0002-1921-4427532294f444d4582d12ec3b75ba3a?X-Amz-Expires=15300\u0026X-Amz-Date=20210318T203755Z\u0026X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026X-Amz-Credential=MyS3AccessKey%2F20210318%2F%2Fs3%2Faws4_request\u0026X-Amz-SignedHeaders=host\u0026X-Amz-Signature=SomeSignature","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:55.703174362Z","StartUTC":"2021-03-18T20:37:55.703174362Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"467","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Content-Type":"application/xml","downstream_Date":"Thu, 18 Mar 2021 20:37:55 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A4644E482E7","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"467","origin_Content-Security-Policy":"block-all-mixed-content","origin_Content-Type":"application/xml","origin_Date":"Thu, 18 Mar 2021 20:37:55 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A4644E482E7","origin_X-Xss-Protection":"1; mode=block","request_User-Agent":"Go-http-client/1.1","time":"2021-03-18T20:37:55Z"}
{"BackendAddr":"172.19.0.2:9000","BackendName":"backend-minio-myswarm","BackendURL":{"Scheme":"http","Opaque":"","User":null,"Host":"172.19.0.2:9000","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"ClientAddr":"172.19.0.1:44782","ClientHost":"172.19.0.1","ClientPort":"44782","ClientUsername":"-","DownstreamContentSize":0,"DownstreamStatus":404,"DownstreamStatusLine":"404 Not Found","Duration":2076600,"FrontendName":"Host-minio-mydomain-com-2","OriginContentSize":0,"OriginDuration":2009920,"OriginStatus":404,"OriginStatusLine":"404 Not Found","Overhead":66680,"RequestAddr":"minio.mydomain.com","RequestContentSize":0,"RequestCount":131484,"RequestHost":"minio.mydomain.com","RequestLine":"HEAD /gitlab-artifacts-storage/tmp/uploads/1616099875-1422-0002-1921-4427532294f444d4582d12ec3b75ba3a HTTP/1.1","RequestMethod":"HEAD","RequestPath":"/gitlab-artifacts-storage/tmp/uploads/1616099875-1422-0002-1921-4427532294f444d4582d12ec3b75ba3a","RequestPort":"-","RequestProtocol":"HTTP/1.1","RetryAttempts":0,"StartLocal":"2021-03-18T20:37:56.652353505Z","StartUTC":"2021-03-18T20:37:56.652353505Z","downstream_Accept-Ranges":"bytes","downstream_Content-Length":"0","downstream_Content-Security-Policy":"block-all-mixed-content","downstream_Date":"Thu, 18 Mar 2021 20:37:56 GMT","downstream_Referrer-Policy":"same-origin","downstream_Server":"MinIO","downstream_Strict-Transport-Security":"max-age=315360000","downstream_Vary":"Origin","downstream_X-Amz-Request-Id":"166D8A467D7A9332","downstream_X-Xss-Protection":"1; mode=block","level":"info","msg":"","origin_Accept-Ranges":"bytes","origin_Content-Length":"0","origin_Content-Security-Policy":"block-all-mixed-content","origin_Date":"Thu, 18 Mar 2021 20:37:56 GMT","origin_Referrer-Policy":"same-origin","origin_Server":"MinIO","origin_Strict-Transport-Security":"max-age=315360000","origin_Vary":"Origin","origin_X-Amz-Request-Id":"166D8A467D7A9332","origin_X-Xss-Protection":"1; mode=block","request_Authorization":"AWS4-HMAC-SHA256 Credential=MyS3AccessKey/20210318//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=SomeSignature","request_Content-Length":"0","request_User-Agent":"fog-core/2.1.0","request_X-Amz-Content-Sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","request_X-Amz-Date":"20210318T203756Z","time":"2021-03-18T20:37:56Z"}
Notably, in the traefik log above, the multipart-upload initiation (the POST ending in ?uploads) returns 200, but the follow-up GET and HEAD probes of the same tmp object return 404. I understand there is a difference between job artifacts and pipeline artifacts, so I suspect what I'm missing is a configuration distinction between "job" and "pipeline".
.gitlab-ci.yml snippet
variables:
  GIT_DEPTH: 10
  R_LIBS_USER: "$CI_PROJECT_DIR/ci/lib"
  BUILD_DIR: "$CI_PROJECT_DIR/ci/build"
  CHECK_DIR: "$CI_PROJECT_DIR/ci/logs"
  BUILD_LOGS_DIR: "$CI_PROJECT_DIR/ci/logs/$CI_PROJECT_NAME.Rcheck"

default:
  image: rocker/shiny-verse:4.0.3
  interruptible: true

build-package:
  stage: build
  script:
    - mkdir -p "$R_LIBS_USER" "$BUILD_DIR"
    - R -e '
        devtools::install_deps(dependencies = TRUE, lib = Sys.getenv("R_LIBS_USER")) ;
        devtools::build(path = Sys.getenv("BUILD_DIR")) ;'
  artifacts:
    paths:
      - $BUILD_DIR/*.tar.gz
      - $R_LIBS_USER
  cache:
    key: "${CI_COMMIT_REF_SLUG}__cilib"
    paths:
      - $R_LIBS_USER
(Yes, $R_LIBS_USER is both cached and an artifact. This is me exercising the CI subsystem, not a configuration I intend to maintain.)
All of this runs in a Docker swarm behind a traefik reverse proxy, which also terminates SSL.
The answer is to get past the empty-string region check: the underlying protocol (AWS Signature v4) does not support a region-less configuration, nor is there a configuration option for one. (The symptom is visible in the traefik log above: the credential scope reads Credential=MyS3AccessKey/20210318//s3/aws4_request, with an empty region between the doubled slashes.) The trick works because specifying 'endpoint' causes 'region' to be ignored for addressing, so setting the region to any non-empty value while forcing the endpoint lets the upload succeed:
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'host' => "minio.mydomain.com",
  'region' => 'us-east-1', # this must be non-empty, but is ignored ...
  'endpoint' => 'https://minio.mydomain.com', # ... because of 'endpoint'
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>',
  'path_style' => true
}
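(As with any change to /etc/gitlab/gitlab.rb on an Omnibus install, apply it with a reconfigure and then re-run the failing job:)

sudo gitlab-ctl reconfigure   # re-renders the Rails configuration from gitlab.rb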
(I owe discovery of this to Florian, in gitlab-org/gitlab#297227.)