Deploy a build to another server with gitlab-ci

I'm trying to do a deploy to another internal server that happens after each commit. I tried the steps mentioned here but they aren't working for me: Deploy every build to a server using Gitlab CI
I get the following error message:
gitlab-ci-multi-runner 1.2.0 (3a4fcd4)
Using SSH executor...
ERROR: Build failed: dial tcp: lookup ${MY_INTERNAL_PROJECTS_SERVER} on [::1]:53: read udp [::1]:52735->[::1]:53: read: connection refused
/etc/gitlab-runner/config.toml:
concurrent = 2

[[runners]]
  name = "deploy_on_projects_server"
  limit = 1
  url = "https://${MY_INTERNAL_GITLAB_SERVER}/ci"
  token = "701299841717071d3abfb45d91a56a"
  executor = "ssh"
  builds_dir = "/home/${USER}/test"
  [runners.ssh]
    user = "${USER}"
    password = "${PASSWORD}"
    host = "${MY_INTERNAL_PROJECTS_SERVER}"
    port = "22"
    identity_file = "/home/gitlab-runner/.ssh/id_rsa"
  [runners.cache]
    Insecure = false
.gitlab-ci.yml:
build:
  tags: ["deploy_on_projects_server"]
  script:
    - echo "build"
On my GitLab server I switched to the gitlab-runner user and successfully used ssh to log in to my projects server, so that part is working fine.
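For reference, a quick way to verify both name resolution and the exact login the SSH executor will attempt is something like the following, run on the runner host (same placeholders as in the config above):
# can the runner host resolve the target at all?
getent hosts ${MY_INTERNAL_PROJECTS_SERVER}
# does the login the SSH executor is configured to use actually work?
sudo -u gitlab-runner ssh -i /home/gitlab-runner/.ssh/id_rsa ${USER}@${MY_INTERNAL_PROJECTS_SERVER} 'echo ok'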
Does anyone know what I am doing wrong, or how else I can achieve this?

Related

Configuring Container Registry in gitlab over http

I'm trying to configure the Container Registry in GitLab installed on my Ubuntu machine.
I have Docker configured over http and it works; I added it as an insecure registry.
Gitlab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
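(For completeness: since gitlab.rb is being edited, this looks like an Omnibus install, and a change there only takes effect after a reconfigure.)
sudo gitlab-ctl reconfigure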
To make it listen on the port, I created a systemd drop-in for the Docker service:
sudo mkdir -p /etc/systemd/system/docker.service.d/
Here are its contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
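A drop-in like this only takes effect after systemd reloads its unit files and Docker is restarted:
sudo systemctl daemon-reload
sudo systemctl restart docker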
But when the following command runs from the gitlab-ci.yaml file:
docker push ${MY_REGISTRY_PROJECT}:latest
I get this error:
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why is https specified here if I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, which causes the problem.
You need to tell your GitLab Runner to use insecure registry:
On the server on which the GitLab Runner is running, add the following option to your docker launch arguments (for me I added it to the DOCKER_OPTS in /etc/default/docker and restarted the docker engine): --insecure-registry 172.30.100.15:5050, replacing the IP with your own insecure registry.
Source
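For the registry in this question that would look roughly like the following (a sketch; /etc/default/docker is the path mentioned in the answer above, and newer Docker versions would use the insecure-registries key in /etc/docker/daemon.json instead):
# /etc/default/docker
DOCKER_OPTS="--insecure-registry 5.121.32.5:5005"
Restart the Docker engine afterwards so the option is picked up.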
Also, you may want to read more about it in this interesting discussion

connect bitbucket pipeline to cpanel with API keys

How do I use SSH keys (created from cPanel) to connect to the server, and eventually pull a fresh copy and run composer updates and database migrations (a Symfony script)?
I get permission denied errors, so my ssh example.net.au ls -l /staging.example.net.au is reaching the server; I'm just unsure how to use the keys made in cPanel to authenticate.
bitbucket-pipelines.yml
# This is an example Starter pipeline configuration
# Use a skeleton to build, test and deploy using manual and parallel steps
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2

pipelines:
  default:
    - parallel:
        - step:
            name: 'Build and Test'
            script:
              - echo "Your build and test goes here..."
        - step:
            name: 'Lint'
            script:
              - echo "Your linting goes here..."
        - step:
            name: 'Security scan'
            script:
              - echo "Your security scan goes here..."
    # The following deployment steps will be executed for each pipeline run. To configure your steps and conditionally deploy see https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/
    - step:
        name: 'Deployment to Staging'
        deployment: staging
        script:
          - echo "Your deployment to staging script goes here..."
          - echo $TESTVAR
          - ssh example.net.au ls -l /staging.example.net.au
    - step:
        name: 'Deployment to Production'
        deployment: production
        trigger: 'manual'
        script:
          - echo "Your deployment to production script goes here..."
I think your SSH set-up may be incorrect. Please try the following to ensure both servers trust each other:
==Part 1==
Step 1. SSH into cPanel server (use PuTTY or your preferred SSH client), and run the following commands to generate a new key:
ssh-keygen
eval $(ssh-agent)
ssh-add
cat ~/.ssh/id_rsa.pub
Step 2. Copy the resulting key from the 'cat' command above, into: Bitbucket -> your repo -> Settings -> Access keys
==Part 2==
Step 3. In Bitbucket, go to your repo -> settings -> SSH keys -> Generate key
Step 4. Back on your cPanel server's SSH connection, copy the key from Step 3 above into the authorized keys file. Save when you are done:
nano ~/.ssh/authorized_keys
Right click to paste (usually)
CTRL+O to save
CTRL+X to exit
Step 5. In the same Bitbucket screen from Step 3, fetch and add the host's fingerprint. You will need to enter the URL or IP address of your cPanel server here. Some cPanel servers use non-default ports. If port 22 is not the correct port, be sure to specify it like so:
example.com:2200
(Port 443 is usually reserved for HTTPS and is unlikely to be the correct port for an SSH connection. If in doubt, try the default port 22 and the common alternative 2200 first.)
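With the keys and host fingerprint in place, the staging step from the question can then do the actual deployment. A minimal sketch (the remote path and commands are assumptions based on the question; adjust them for your cPanel account and Symfony project):
- step:
    name: 'Deployment to Staging'
    deployment: staging
    script:
      # hypothetical deploy commands, run over the SSH connection configured above
      - ssh example.net.au "cd /staging.example.net.au && git pull && composer install --no-dev && php bin/console doctrine:migrations:migrate --no-interaction"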
Let me know if you have any questions and I am happy to assist you further.

Icinga2: Run check on remote host instead of master

I just updated to Icinga 2.8, which requires "the new" way of checking remote hosts, so I'm trying to get that to work.
On the master I added a folder in zones.d with the hostname of the remote host. I added a few checks but they all seem to be executed from the master instead of the remote.
For example: I need to monitor Redis. My redis.conf in /etc/icinga2/zones.d/remotehostname/redis.conf:
apply Service "Redis" {
  import "generic-service"
  check_command = "Redis"
  vars.notification["pushover"] = {
    groups = [ "ADMINS" ]
  }
  assign where host.name == "remotehostname"
}
A new service pops up in IcingaWeb but it errors out with:
execvpe(/usr/lib/nagios/nagios-plugins/check_redis_publish_subscribe.pl) failed: No such file or directory
Which is correct because on the master that file does not exist. It does exist on the remote host however.
How do I get Icinga to execute this on the remote host and have that host return the output to the master?
You can add this to the service:
command_endpoint = host.name
Or you can try creating a zone and adding the zone to the host.
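Applied to the redis.conf from the question, that would look roughly like this (a sketch; it assumes remotehostname is set up as an agent endpoint/zone so the master can forward the check to it):
apply Service "Redis" {
  import "generic-service"
  check_command = "Redis"

  # execute the check on the monitored host instead of the master
  command_endpoint = host.name

  vars.notification["pushover"] = {
    groups = [ "ADMINS" ]
  }
  assign where host.name == "remotehostname"
}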
Maybe this could help you:
NetWays Blog

Docker login error with Get Started tutorial

I'm trying to follow the beginner tutorial on Docker's website and I'm running into an error on login.
The OS is Ubuntu 14.04, I'm not using VirtualBox, I'm not behind any proxy, and I want to push to the "regular" Docker repository (not a private one).
All the threads I've found mention proxies and private repositories, but that isn't my case; I'm just trying to do the simple beginner tutorial.
Here is my attempt:
$ sudo docker login
[sudo] password for myuname:
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: myDHuname
Password:
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My docker info:
Containers: 5
 Running: 0
 Paused: 0
 Stopped: 5
Images: 5
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 28
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.19.0-58-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.686 GiB
Name: myuname-ThinkPad-T420
ID: 6RW3:X3FC:T62N:CWKI:JQW5:YIPY:RAHO:ZHF4:DFZ6:ZL7X:JPOD:V7EC
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Epilogue
Now docker login is passing. I have not touched anything since yesterday when it was broken...
I can't reproduce the behavior anymore.
I encountered this issue when I first used Docker. I had a shadowsocks proxy on, configured in PAC mode. When I tried to run docker run hello-world, I got this timeout error. When I set the proxy mode to global, the error was also there.
But when I disabled the proxy, Docker ran fine and pulled the remote image successfully.
Docker for Windows
Note: Some users reported problems connecting to Docker Hub on the Docker for Windows stable version. This would manifest as an error when trying to run docker commands that pull images from Docker Hub that are not already downloaded, such as a first time run of docker run hello-world. If you encounter this, reset the DNS server to use the Google DNS fixed address: 8.8.8.8. For more information, see Networking issues in Troubleshooting.
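On a Linux host like the Ubuntu 14.04 machine in the question, the equivalent would be pointing the Docker daemon itself at that DNS server; a sketch, assuming the init-script based setup that reads /etc/default/docker:
# /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8"
Then restart the engine, e.g. sudo service docker restart.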
The error Client.Timeout exceeded while awaiting headers indicates:
The GET request to the registry https://registry-1.docker.io/v2/ timed out
The library responsible (most likely libcurl) timed out before a response was heard
The connection never formed (proxy/firewall gobbled it up)
If you see the result below, you can rule out timeouts and network connectivity problems:
$ curl https://registry-1.docker.io/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
If you get the above response, the next step would be to check whether your user environment has some proxy configuration:
env | grep "proxy"
Note: The Docker daemon runs as root. Maybe you have http_proxy in your env. Most likely I am wrong; anyhow, see what happens with the curl GET request.
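A couple of ways to check that (the daemon process name here is an assumption for this Docker 1.11 install):
# proxy variables in your own shell
env | grep -i proxy
# proxy variables in the running daemon's environment (it runs as root)
sudo cat /proc/$(pidof -s docker)/environ | tr '\0' '\n' | grep -i proxy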
Change the proxy settings in Firefox. Maybe you are in an access-restricted mode. Just add the server address in Firefox under Settings -> Preferences -> Advanced -> Network -> Configuration (Settings), and add the server IP to the "No proxy for" field so the issue can be resolved.

Can't reload Puppet configuration -> inability to connect to Puppet Server

I'm using two Vagrant VMs to test some things with Puppet, but when I go to request a cert, I get a cryptic error message that I can't find any information about.
I should note that, in line with good Linux server administration, I'm using /var/ and /opt/ for storing sensitive cert info, but otherwise it's a standard Puppet setup.
# Client node details
IP: 192.168.250.10
Hostname: client.example.com
Puppet version: 4.3.2
OS: CentOS Linux release 7.0.1406 (on Vagrant)
# Puppet server details
IP: 192.168.250.6
Hostname: puppet-server.example.com
Puppet version: 4.3.2
OS: CentOS Linux release 7.0.1406 (on Vagrant)
# client's and server's /etc/hosts files are identical
192.168.250.5 puppetmaster.example.com
192.168.250.6 puppet.example.com puppet-server.example.com
192.168.250.7 dashserver.example.com dashboard.example.com
192.168.250.10 client.example.com
192.168.250.20 webserver.example.com
# /etc/puppetlabs/puppet/puppet.conf on both client and server
[main]
logdest = syslog
[user]
bucketdir = $clientbucketdir
vardir = /var/opt/puppetlabs/server
ssldir = $vardir/ssl
[agent]
server = puppet.example.com
[master]
certname = puppet.example.com
vardir = /var/opt/puppetlabs/puppetserver
ssldir = $vardir/ssl
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
trusted_server_facts = true
reports = store
cacert = /var/opt/puppetlabs/puppetserver/ssl/certs/ca.pem
cacrl = /var/opt/puppetlabs/puppetserver/ssl/crl.pem
hostcert = /var/opt/puppetlabs/puppetserver/ssl/certs/{puppet, client}.example.com.pem # respectively, obviously
hostprivkey = /var/opt/puppetlabs/puppetserver/ssl/private_keys/{puppet, client}.example.com.pem # respectively, obviously
Finally, the error I get:
$ sudo puppet resource service puppet ensure=stopped enable=false
Notice: /Service[puppet]/ensure: ensure changed 'running' to 'stopped'
service { 'puppet':
ensure => 'stopped',
enable => 'false',
}
$ sudo puppet resource service puppet ensure=running enable=true
Notice: /Service[puppet]/ensure: ensure changed 'stopped' to 'running'
service { 'puppet':
ensure => 'running',
enable => 'true',
}
$ puppet agent --test --server=puppet.example.com
Error: Could not request certificate: Permission denied # dir_initialize - /etc/puppetlabs/puppet/ssl/private_keys
Exiting; failed to retrieve certificate and waitforcert is disabled
First of all, with this setup Puppet should not be using /etc/puppetlabs/puppet/ssl/private_keys. It's not using my configuration file correctly:
$ puppet config print ssldir
/etc/puppetlabs/puppet/ssl
Next, I went through and regenerated the keys on BOTH the server and the client nodes as prescribed in the Puppet docs; however, I still got the same error, and both the client AND server still think my $ssldir is /etc/puppetlabs/puppet/ssl when it should be /var/opt/puppetlabs/puppetserver/ssl.
Any thoughts?
You need to specify the ssldir and vardir config in the [agent] section as well as [master].
The [user] section is only applicable to the puppet apply command, etc.
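In other words, a sketch of what the client's puppet.conf could look like (the paths mirror the [user] section from the question and are assumptions about where you want the agent's files to live):
[agent]
  server = puppet.example.com
  vardir = /var/opt/puppetlabs/server
  ssldir = $vardir/ssl
You can then confirm what each run mode actually resolves with puppet config print ssldir --section agent on the client and puppet config print ssldir --section master on the server.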