What's the use of the [runners.docker] section in config.toml when using the docker+machine executor? - gitlab-ci

Reading the documentation on autoscaling, I can't figure out the role of the [runners.docker] section when using docker+machine as the executor:
[runners.docker]
image = "ruby:2.1" # The default image used for builds is 'ruby:2.1'
The executors documentation says:
docker+machine : like docker, but uses auto-scaled docker machines -
this requires the presence of [runners.docker] and [runners.machine]
I get that I have to define the [runners.docker] section in order to use the [runners.machine] section, but what is the purpose of [runners.docker]?
I can't work out how to configure it because I don't understand what it is for.
Our gitlab-runner runs on a vSphere VM and is configured to scale using the docker+machine executor with MachineDriver set to vmwarevsphere. Everything works nicely, but I would like to fully understand the configuration file.
Here is our "censored with stars" config.toml file with the [runners.docker] section I can't understand (note that the person who wrote it has left the company, so I can't ask him):
[[runners]]
  name = "gitlab-runner"
  limit = 6
  output_limit = 102400
  url = "http://gitlab.**************.lan"
  token = "*******************"
  executor = "docker+machine"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    dns = ["*.*.*.*"]
    privileged = true
    disable_cache = false
    volumes = ["/etc/localtime:/etc/localtime:ro", "/var/run/docker.sock:/var/run/docker.sock", "/etc/docker/certs.d:/etc/docker/certs.d", "/cache:/cache", "/builds:/builds"]
    cache_dir = "cache"
    shm_size = 0
  [runners.cache]
    Type = "s3"
    ServerAddress = "*.*.*.*"
    AccessKey = "*****************"
    SecretKey = "*****************"
    BucketName = "runner"
    Insecure = true
  [runners.machine]
    IdleCount = 4
    MaxBuilds = 10
    IdleTime = 3600
    MachineDriver = "vmwarevsphere"
    MachineName = "gitlab-runner-pool-1-%s"
    MachineOptions = ["vmwarevsphere-username=************", "vmwarevsphere-password=*****************", "vmwarevsphere-vcenter=*.*.*.*", "vmwarevsphere-datastore=*********", "vmwarevsphere-memory-size=3096", "vmwarevsphere-disk-size=40960", "vmwarevsphere-cpu-count=3", "vmwarevsphere-network=*****************", "vmwarevsphere-datacenter=**************", "vmwarevsphere-hostsystem=*******************", "engine-storage-driver=overlay2", "engine-insecure-registry=**************", "engine-insecure-registry=*******************"]
    OffPeakPeriods = ["* * 0-8,21-23 * * mon-fri *", "* * * * * sat,sun *"]
    OffPeakTimezone = "Local"
    OffPeakIdleCount = 1
    OffPeakIdleTime = 600

The [runners.machine] section defines how to start and provision your runner machines; the [runners.docker] section then defines how to configure the runner on those machines.
Docker-machine on its own only does the following (as you can read here):
"Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands."
So this has nothing to do with the GitLab runner itself; you still need to configure the runner after that, and that's where the [runners.docker] section comes into play: the runner needs to know which default image to use, which volumes to mount, and so on.
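To make the division of labour concrete, here is a minimal sketch (the names and values are illustrative assumptions, not taken from your setup) of how the two sections split the work:

[[runners]]
  executor = "docker+machine"
  [runners.docker]
    # how each job runs INSIDE a provisioned machine
    image = "ruby:2.1"        # default image when a job does not specify one
    privileged = false
    volumes = ["/cache"]
  [runners.machine]
    # how the machines themselves are created and scaled
    IdleCount = 2
    MachineDriver = "vmwarevsphere"
    MachineName = "runner-%s"

In other words, every job dispatched to an auto-scaled machine still executes in a Docker container, and [runners.docker] describes that container (default image, volumes, privileged mode, DNS, ...), while [runners.machine] only describes the VM fleet.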

Related

404 when executing docker push to gitlab-container-registry

I have installed gitlab-ce 13.2.0 on my server and the container-registry was immediately available.
From another server (or my local machine) I can log in, but when pushing an image to the container registry I get a 404 error: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE html>\n<html>\n<head>...
In my gitlab.rb I have:
external_url 'https://git.xxxxxxxx.com'
nginx['enable'] = true
nginx['client_max_body_size'] = '250m'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/trusted-certs/xxxxxxxx.com.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/trusted-certs/xxxxxxxx.com.key"
nginx['ssl_protocols'] = "TLSv1.1 TLSv1.2"
registry_external_url 'https://git.xxxxxxxx.com'
What is confusing is that the registry_external_url is the same as the external_url. There are also these lines in gitlab.rb:
### Settings used by GitLab application
# gitlab_rails['registry_enabled'] = true
# gitlab_rails['registry_host'] = "git.xxxxxxxx.com"
# gitlab_rails['registry_port'] = "5005"
# gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
But when I uncomment these lines, I can no longer log in.
What could be the problem here?
This is actually because you are using the HTTPS port without proxying the registry through nginx.
Fix these lines in gitlab.rb as follows:
registry_nginx['enable'] = true
registry_nginx['listen_https'] = true
registry_nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.YOUR_DOMAIN.gtld'
You don't need to touch the nginx['ssl_*'] parameters when you are using Let's Encrypt, since Chef will take care of them.
How is your image named? The image name must exactly match not only the registry URL but also the project path.
You can't just build "myimage:latest" and push it; it must be something like git.xxxxxxxx.com/mygroup/myproject:latest. You can obtain the correct name from the $CI_REGISTRY_IMAGE predefined variable.
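As a rough sketch of a CI job doing the push (these are GitLab's predefined CI variables; the latest tag is just an example):

docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"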

Splunk: How to enable Splunk SSO

I have Splunk and I'm trying to enable Splunk SSO instead of normal authentication. My configuration is as follows:
In /opt/splunk/etc/system/local/server.conf
[general]
trustedIP = 192.168.1.208
serverName = Splunk_Core_02
pass4SymmKey = $7$RRvdYDdIlj4P2geQdtHluTRb7OfvZhTFTZGJ7z5JiZAkJ6Q1at6j0Q==
sessionTimeout = 30s
[sslConfig]
sslPassword = $7$m6pB5a0PWFg64VlNZGgunhGElO3qLiAc6NrhfLO+tpX2jR7WC7qm1Q==
[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial
[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder
[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free
[license]
active_group = Enterprise
[diskUsage]
minFreeSpace = 1024
[lmpool:test_splunk]
quota = MAX
slaves = *
stack_id = enterprise
In /opt/splunk/etc/system/local/web.conf
[settings]
#SSO
SSOMode = permissive
trustedIP = 192.168.1.208,192.168.2.15,127.0.0.1
remoteUser = REMOTE-USER
#tools.proxy.on = False
root_endpoint = /splunk
#SSL
enableSplunkWebSSL = 0
httpport = 8000
mgmtHostPort = 127.0.0.1:8089
appServerPorts = 8065
splunkdConnectionTimeout = 30
enableSplunkWebClientNetloc = False
# SSL certificate files.
privKeyPath = $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem
serverCert = $SPLUNK_HOME/etc/auth/splunkweb/cert.pem
...
When I open the http://192.168.1.208:8000/debug/sso page, I see that SSO is not enabled. What's wrong with my configuration?
Several pieces of documentation say that the trustedIP in server.conf should be 127.0.0.1, but none of them stress that only 127.0.0.1 will actually enable/activate SSO. So do not configure any other IP address there; use 127.0.0.1.
In server.conf (/opt/splunk/etc/system/local/) you can only configure one trustedIP, and it has to be 127.0.0.1.
https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/ConfigureSplunkSSO
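For reference, a minimal sketch of the relevant parts of both files (the web.conf trustedIP list reuses the proxy addresses from your question and is an assumption about which hosts you trust):

In /opt/splunk/etc/system/local/server.conf:

[general]
trustedIP = 127.0.0.1

In /opt/splunk/etc/system/local/web.conf:

[settings]
SSOMode = permissive
trustedIP = 127.0.0.1,192.168.1.208,192.168.2.15
remoteUser = REMOTE_USER
tools.proxy.on = true   # may or may not be needed, depending on your proxy setup

Restart Splunk after changing either file.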
Have you restarted Splunk after making these changes?
In /opt/splunk/etc/system/local/web.conf,
remoteUser = REMOTE-USER is more likely meant to be REMOTE_USER.
You have to use SAML; I am using SAML for SSO. You need to contact your IT team, who will provide you with an IdP metadata file; upload it and share your Splunk SP metadata file with them (you can download it from the same window). Go to Users -> Authentication Method -> SAML. Once everything is in place, create the groups in AD and map the same groups in Splunk under the SAML configuration. Let me know if you need more details.
https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/HowSAMLSSOworks

Where does Nomad put the downloaded S3 files?

I have the following Nomad job:
job "aws_s3_copy_rev2" {
datacenters = ["dc1"]
type = "system"
group "aws_s3_copy_rev2" {
count = 1
task "aws_s3_copy_rev2" {
driver = "raw_exec"
artifact {
source = "s3::https://my-data-files/123/"
}
resources {
cpu = 500 # 500 MHz
memory = 256 # 256MB
network {
port "http" {}
}
}
}
}
}
I submitted the job using nomad run aws_s3_copy_rev2.nomad, but I do not know where the file is downloaded to. Where does Nomad put the downloaded S3 files?
This is my configuration file for starting the Nomad agent.
# Increase log verbosity
log_level = "DEBUG"
# Setup data dir
data_dir = "/tmp/client1"
# Give the agent a unique name. Defaults to hostname
name = "client1"
# Enable the client
client {
  enabled = true
  # For demo assume we are talking to server1. For production,
  # this should be like "nomad.service.consul:4647" and a system
  # like Consul used for service discovery.
  servers = ["xxx:4647"]
  options {
    "driver.raw_exec.enable" = "1"
  }
}
# Modify our port to avoid a collision with server1
ports {
  http = 5657
}
Usually artifacts are stored in the allocation folder of your Nomad allocation, which in the default case would be /etc/nomad.d/alloc/<alloc_id>/<task>/local/<your_file.ext> on Linux machines. I'm not sure where things land on other OSes.
In this case, your data_dir is set to /tmp/client1, so I would expect the files would be somewhere like /tmp/client1/alloc/<alloc_id>/<task>/local/<your_file.ext>.
It is important to note that these artifacts are generated on the Nomad 'client' running an allocation of your job, not the machine you are starting the job from.
Also, you might want to be careful rooting your Nomad data directory in the /tmp folder, as it may get periodically deleted, which might explain why you cannot find those files.
You can reference this directory in the Nomad environment as ${NOMAD_TASK_DIR} and access or execute the file using that path:
artifact {
  source      = "s3::https://some-bucket/code/archive-logs.sh"
  destination = "/local/"
}
driver       = "raw_exec"
kill_timeout = "120s"
config {
  command = "/bin/bash"
  args    = ["${NOMAD_TASK_DIR}/archive-logs.sh", "7"]
}
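To actually locate the downloaded file on the client node, something along these lines should work (the allocation ID is a placeholder; nomad job status lists the allocations of the job, and nomad alloc fs browses an allocation's filesystem):

nomad job status aws_s3_copy_rev2        # note one of the allocation IDs
nomad alloc fs <alloc-id> aws_s3_copy_rev2/local/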

Rabbit MQ declarative clustering

I have a RabbitMQ node running on a Windows 2012 server (rabbit@my-server-1).
I am creating a second node (rabbit@my-server-2) on a separate server (also Windows 2012) and would like to cluster it with the existing node. The second node is deployed via Octopus Deploy and, to make life easier, I would like the clustering to happen automatically on startup of the node.
Reading the documentation (https://www.rabbitmq.com/clustering.html and https://www.rabbitmq.com/configure.html) leads me to believe I just need to add the following to the rabbitmq.conf file:
cluster_nodes.disc.1 = rabbit@my-server-1
However doing so causes the node to not start. The erl.exe process starts using 100% cpu and I see the following message in the erl_crash.dump file:
Slogan: init terminating in do_boot (generate_config_file)
I believe this is symptomatic of an invalid config file, and indeed removing these config entries allows me to start the node fine.
I am able to cluster to the existing node manually via the relevant rabbitmqctl commands, but would prefer the declarative solution if possible.
I'm running RabbitMQ v3.7.4 and Erlang v20.3
So, what am I doing wrong? I've done some googling but haven't found anything that helps.
EDIT
Config file in full is:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_nodes.disc.1 = rabbit@my-server-1
How about adding the clustering information as described in the docs under "Config File Peer Discovery Backend"?
That would leave you with a config file like this:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@my-server-1
cluster_formation.classic_config.nodes.2 = rabbit@my-server-2
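Two caveats worth keeping in mind: both nodes need the same .erlang.cookie for clustering to work at all, and classic-config peer discovery only runs when a node boots with a blank database, so a node that has already been started standalone keeps its old state until it is reset. The manual equivalent (which you already have working via rabbitmqctl, or rabbitmqctl.bat on Windows) is roughly:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@my-server-1
rabbitmqctl start_app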

Traefik blue green deployment. Initialize web provider

I'm starting to use Traefik for blue/green deployment. I would like to use the REST API, so I have put my configuration in the [web] section:
[web]
  address = ":8080"
  readOnly = false

[backends]
  [backends.back]
    [backends.back.loadbalancer.stickiness]
      cookieName = "backend"
    [backends.back.servers.S000]
      url = "http://HOST_IP_ADDRESS:30000"
      weight = 1
    [backends.back.servers.S001]
      url = "http://HOST_IP_ADDRESS:30001"
      weight = 1

[frontends]
  [frontends.front]
    backend = "back"
    passHostHeader = true
But it's not initialized with those values. However, if I use PUT to http://localhost:8091/api/providers/web I can see the web provider OK. And if I use this same configuration under [file] it works fine (but then I'm unable to update it via the API).
Is there any way to initialize the [web] backends/frontends?
The web section is deprecated.
Try this instead:
# Enable API and dashboard
[api]
  # Name of the related entry point
  entryPoint = "traefik"
  # Enabled Dashboard
  dashboard = true
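Note that [api] only exposes the dashboard and the read-only API. To keep pushing dynamic configuration the way you did with PUT to /api/providers/web, my understanding is that on Traefik 1.x you also enable the [rest] provider and PUT the same JSON to its endpoint instead (the port and entry point below assume the defaults; dynamic_conf.json is a placeholder file name):

[rest]
  entryPoint = "traefik"

curl -X PUT -d @dynamic_conf.json http://localhost:8080/api/providers/rest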