gitlab CI: x509: certificate signed by unknown authority while accessing private docker registry - gitlab-ci

Can't log in to my private Docker registry from gitlab-ci.
Scenario:
GitLab CE omnibus installation; the registry is the one built into GitLab.
gitlab-runner with the Docker executor, running as a container in a Docker Swarm cluster.
gitlab-runner has a ca.crt in /etc/gitlab-runner/certs/.
The ca.crt contains the server, intermediate, and root certificates in the correct order.
It's not a self-signed certificate; it's a wildcard certificate (*.domain.com).
Inside the gitlab-runner container I can run curl https://registry.domain.com without error.
What I have tried:
Adding the registry as insecure (in daemon.json and in .gitlab-ci.yml)
Adding the certificate to the runner as registry.domain.com.crt
.gitlab-ci.yml:
build_image:
  image: docker:19.03.8
  services:
    - name: docker:19.03.12-dind
      command: ["--insecure-registry=registry.domain.com:443"]
      alias: docker
  stage: build
  ...
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.domain.com
Note: I have already seen this, without success.

I still don't know what caused this issue, but the solution was to mount the Docker socket into the gitlab-runner container:
gitlab-runner register <other_options> --docker-volumes /var/run/docker.sock:/var/run/docker.sock
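For reference, that flag just writes a volumes entry into the runner's config.toml; a minimal sketch of the resulting section (every value except the volume is a placeholder, not from the question):

# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  name = "swarm-runner"                # placeholder
  url = "https://gitlab.domain.com/"   # placeholder
  executor = "docker"
  [runners.docker]
    image = "docker:19.03.8"
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]

With the socket mounted, jobs talk to the host's Docker daemon, which already trusted the registry certificate, instead of the docker:dind service, which presumably did not.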

Related

Configuring Container Registry in gitlab over http

I'm trying to configure the Container Registry in GitLab, installed on my Ubuntu machine.
I have Docker configured over HTTP, added as insecure, and it works.
GitLab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
To make Docker listen on the port, I created a systemd drop-in file:
sudo mkdir -p /etc/systemd/system/docker.service.d/
Here are its contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
But when this line runs from the .gitlab-ci.yml file:
docker push ${MY_REGISTRY_PROJECT}:latest
I get this error:
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why is https specified here if I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, which causes the problem.
You need to tell the Docker engine your GitLab Runner uses to treat the registry as insecure:
On the server where the GitLab Runner is running, add the following option to your Docker launch arguments (I added it to DOCKER_OPTS in /etc/default/docker and restarted the Docker engine): --insecure-registry 172.30.100.15:5050, replacing the address with that of your own insecure registry.
Source
Also, you may want to read more about it in this interesting discussion
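On installs where Docker is managed by systemd and /etc/default/docker is ignored, the same setting can go in /etc/docker/daemon.json instead; a minimal sketch, using the registry address from the question:

{
  "insecure-registries": ["5.121.32.5:5005"]
}

Restart the engine afterwards with sudo systemctl restart docker.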

Cannot connect via SSH from GitHub Actions workflow

I want to connect to the newly created Droplet via SSH from a GitHub Actions runner.
My steps:
ssh-keygen -t rsa -f ~/.ssh/KEY_NAME -P ""
doctl compute ssh-key create KEY --public-key "CONTENT OF KEY_NAME.pub"
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-1gb --region fra1 DROPLET_NAME --ssh-keys FINGERPRINT --wait
ssh -vvv -i ~/.ssh/KEY_NAME root@DROPLET_IP
✔️ Tested on a Windows local machine using doctl.exe run from cmd - works!
✔️ Tested on Docker (installed on Windows) based on Linux image using doctl script - works!
⚠️ Tested on Github Actions runner based on ubuntu-latest using digitalocean/action-doctl script - doesn't work!
The message received is: connect to host ADDRESS_IP port 22: Connection refused.
The steps themselves are correct, so why does this not work on GitHub Actions?
If you are using the GitHub Action digitalocean/action-doctl, check issue 14 first:
In order to SSH into a Droplet, doctl needs access to the private half of the SSH key pair whose public half is on the Droplet.
Currently the doctl Action is based on a Docker container.
If you were using the Docker container directly, you could invoke it with:
docker run --rm --interactive --tty \
  --env=DIGITALOCEAN_ACCESS_TOKEN=<YOUR-DO-API-TOKEN> \
  -v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa \
  digitalocean/doctl compute ssh <DROPLET-ID>
in order to mount the SSH key from outside the container.
You might be better off using doctl just to grab the Droplet's IP address and then using this Action, which is more focused on SSH-related use cases and provides a lot of additional functionality: marketplace/actions/ssh-remote-commands.
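For the second suggestion, a sketch of what those workflow steps could look like, assuming the private key is stored in a repository secret named SSH_PRIVATE_KEY and that the marketplace page above resolves to appleboy/ssh-action (both names are assumptions, not from the question):

# Grab the Droplet IP with doctl, then run a command over SSH
- name: Get Droplet IP
  run: echo "DROPLET_IP=$(doctl compute droplet get DROPLET_NAME --format PublicIPv4 --no-header)" >> $GITHUB_ENV
- name: Run remote command
  uses: appleboy/ssh-action@master
  with:
    host: ${{ env.DROPLET_IP }}
    username: root
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    script: uptime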

gitlab runner - x509: certificate signed by unknown authority

I am trying to run gitlab-runner on my PC and connect it to our GitLab instance on the server.
I am getting
ERROR: Registering runner... failed runner=XXXXXX status=couldn't execute POST against https://XXXXXXXXXX/api/v4/runners: Post https://XXXXXXXXXX/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems
I have gone through various pieces of advice, but nothing really changed.
My current setup is a self-signed certificate generated by
wget "https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt" -O "/Users/admin/gitlab-runner-certs/fs-tul-letsencrypt.pem"
(I also tried https://futurestud.io/tutorials/how-to-run-gitlab-with-self-signed-ssl-certificate),
script for gitlab-runner registration
#!/usr/bin/env bash
# tried also without sudo
sudo gitlab-runner register \
  --non-interactive \
  --registration-token OUR_GITLAB_TOKEN \
  --url OUR_GITLAB_HOST_URL \
  --tls-ca-file /Users/admin/gitlab-runner-certs/fs-tul-letsencrypt.pem \
  --executor docker
And I am still getting that error. Any idea?
I also did not change anything on the server side. Shouldn't I do something there? (I did not find any mention of it, but I'm still asking.)
PS: gitlab-runner x509: certificate signed by unknown authority did not fix my problem
There was a problem on the server side where GitLab was running: the path to the full-chain certificate was wrong.
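Two checks that would have surfaced this: dump the chain the server actually serves, and verify the nginx paths in /etc/gitlab/gitlab.rb on an Omnibus install (a sketch; the hostname and paths are examples, not from the question):

# Show every certificate the GitLab server presents
openssl s_client -connect OUR_GITLAB_HOST:443 -showcerts </dev/null

# In /etc/gitlab/gitlab.rb, nginx must point at the full chain:
#   nginx['ssl_certificate']     = "/etc/gitlab/ssl/fullchain.pem"
#   nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/privkey.pem"
# then apply the change:
sudo gitlab-ctl reconfigure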

How can I pass an external environment variable to the Drone docker runner?

The situation: I want to run docker build & push inside the Drone docker runner, and the Docker registry and the Drone runner are on the same server. So I want to pass the host IP into the Drone pipeline container as a variable, letting me push the Docker image without a remote registry server. But it seems that only Drone's own environment variables can be used in '${}'. I tried exporting EXTERNALIP on the host machine and reading ${EXTERNALIP}, but got nothing.
So, is there some way to get the external IP for communicating with the host, or another way to achieve this?
You should be able to push to localhost if it's on the same host. That said, I was not able to do this using the packages plugin, but was able to replicate it using docker directly:
steps:
  - name: docker-${DRONE_EVENT}
    image: docker:19.03
    when:
      event: [ push, pull_request ]
      status: [ success ]
    environment:
      DOCKER_PASSWORD:
        from_secret: docker_password
    commands:
      - echo $DOCKER_PASSWORD | docker login --username user_name --password-stdin localhost
      - docker build -t localhost/demo-web:latest .
      - if [ "${DRONE_EVENT}" == "push" ]; then docker push localhost/demo-web:latest; fi;
    volumes:
      - name: docker-socket
        path: /var/run/docker.sock

volumes:
  - name: docker-socket
    host:
      path: /var/run/docker.sock
A couple of caveats: you will need trusted access in the repo configuration, or --trusted if using local exec. Enjoy!
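If you want to test the pipeline locally before pushing, the Drone CLI's exec command accepts the matching flag (a sketch, assuming drone-cli is installed):

# Run the pipeline locally, allowing host volumes such as the Docker socket
drone exec --trusted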

Cannot validate certificate for ip because it doesn't contain any IP SANs

I have installed OpenShift 3 with Docker and Kubernetes using the Ansible installer.
After the installation I want to create my Docker registry on the master, but I get the following error (I read it has something to do with SSL, but I can't find a solution):
commands (from the sample):
[root@ip-10-0-0-x centos]# export CURL_CA_BUNDLE=`pwd`/openshift.local.config/master/ca.crt
[root@ip-10-0-0-x centos]# sudo chmod a+rwX openshift.local.config/master/admin.kubeconfig
[root@ip-10-0-0-x centos]# sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
[root@ip-10-0-0-x centos]# oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
error:
error: error getting client: couldn't read version from server: Get https://10.0.0.x:8443/api: x509: cannot validate certificate for 10.0.0.x because it doesn't contain any IP SANs
additional info
[root@ip-10-0-0-x centos]# kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
[root@ip-10-0-0-191 centos]# oc get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   172.30.0.1   <none>        443/TCP   <none>     1d
[root@ip-10-0-0-x centos]# kubernetes apiserver
F0924 12:15:13.674745 75545 server.go:223] No --service-cluster-ip-range specified
The Ansible installer should generate certs for you that have the right IPs in them. Your local kubeconfig file (which oadm is using to connect to the server) should have been generated by the Ansible installer - can you verify that is the case? The file is in ~/.kube/config - does it point to the system that the Ansible installer used? Are you using an IaaS for OpenShift, deploying to local machines, or Vagrant?
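Independent of the installer question, you can confirm the error's diagnosis by inspecting the SANs of the certificate the API server presents; a sketch, substituting the master's IP:

# Dump the served certificate and look for IP entries in the SAN extension
openssl s_client -connect 10.0.0.x:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'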