GitLab CI problems pushing to the private registry with HTTPS

I'm trying to push an image to my registry with GitLab CI. I can log in without any problems (the before_script), but I get the following error on the push command:
error parsing HTTP 400 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body>\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
This is the config.toml of the gitlab-runner in use:
[[runners]]
  name = "e736f9d48a40"
  url = "https://gitlab.domain.com/"
  token = "token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
This is the relevant part of the .gitlab-ci.yml:
image: docker
services:
  - docker:dind
variables:
  BACKEND_PROJECT: "test"
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
containerize:
  stage: containerize
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  only:
    - master
  script:
    - "cd backend/"
    - "docker build -t $CI_REGISTRY_IMAGE/api:latest ."
    - "docker push $CI_REGISTRY_IMAGE/api:latest"
The GitLab Omnibus registry configuration:
registry_external_url 'https://gitlab.domain.com:5050'
registry_nginx['enable'] = true
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.domain.com/privkey.pem"
registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.domain.com/fullchain.pem"
registry_nginx['port'] = 443
registry_nginx['redirect_http_to_https'] = true
### Settings used by Registry application
registry['enable'] = true
registry_nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "http",
  "X-Forwarded-Ssl" => "on"
}
Can someone help me with this problem?

Okay, the solution was quite simple. I only had to change the
"X-Forwarded-Proto" => "http",
to
"X-Forwarded-Proto" => "https",

Related

Cannot send stream messages to RabbitMQ running in Docker

I have a RabbitMQ container in Docker and another service that sends stream-type messages to it. It only works when the service runs outside Docker; if I build the service as a container, run it in Docker, and send stream messages, it always fails with "System.Net.Sockets.SocketException (111): Connection refused". Sending a message of the classic type succeeds, though.
rabbitmq:
  container_name: rabbitmq
  image: rabbitmq:3-management
  environment:
    RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_stream advertised_host localhost
    RABBITMQ_DEFAULT_USER: "admin"
    RABBITMQ_DEFAULT_PASS: "admin"
    RABBITMQ_DEFAULT_VHOST: "application"
  ports:
    - 5672:5672
    - 5552:5552
    - 15672:15672
  volumes:
    - ./conf/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./conf/rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins
  healthcheck:
    test: rabbitmq-diagnostics -q ping
    interval: 15s
    timeout: 15s
    retries: 5
  env_file:
    - ./.local.env
./conf/rabbitmq/rabbitmq.conf
stream.listeners.tcp.1 = 5552
stream.tcp_listen_options.backlog = 4096
stream.tcp_listen_options.recbuf = 131072
stream.tcp_listen_options.sndbuf = 131072
stream.tcp_listen_options.keepalive = true
stream.tcp_listen_options.nodelay = true
stream.tcp_listen_options.exit_on_close = true
stream.tcp_listen_options.send_timeout = 120
./conf/rabbitmq/enabled_plugins
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_stream,rabbitmq_stream_management].
The other service's configuration (running in Docker):
# RabbitMQ
Host = "host.docker.internal",
VirtualHost = "application",
Port= 5672,
StreamPort = 5552,
User= "admin",
Password = "admin",
UseSSL = false
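No accepted fix is shown here, but a hedged guess based on how RabbitMQ streams work: after the initial connection the broker hands advertised_host/advertised_port back to stream clients, and they reconnect to that address (classic AMQP connections don't do this second hop, which would explain why classic messages succeed). With advertised_host localhost, a client running in another container tries to reach localhost inside its own container and gets connection refused. A sketch of the compose change, assuming both services sit on the same compose network and the client then uses the service name rabbitmq as its host:
rabbitmq:
  container_name: rabbitmq
  image: rabbitmq:3-management
  environment:
    # advertise a name the client container can resolve, instead of localhost
    RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: "-rabbitmq_stream advertised_host rabbitmq"
    RABBITMQ_DEFAULT_USER: "admin"
    RABBITMQ_DEFAULT_PASS: "admin"
    RABBITMQ_DEFAULT_VHOST: "application"
In the client configuration that would mean Host = "rabbitmq" rather than "host.docker.internal".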

Why does my proxied docker network request work locally but not in production?

I'm working on a project to build a front end for a private/secure docker registry. The way I'm doing this is to use docker-compose to create a network between the front end and the registry. My idea is to use express to serve my site and forward requests from the client to the registry via the docker network.
Locally, everything works perfectly....
However, in production the client doesn't get a response back from the registry. I can log in to the registry and access its API via Postman (for example the catalog) at https://myregistry.net:5000/v2/_catalog. But the client just errors out.
When I go into the Express server container and try to curl the endpoint I created to proxy requests, I get this:
curl -vvv http://localhost:3000/api/images
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3000 (#0)
> GET /api/images HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
The error that's returned includes a _currentUrl of https://username:password#registry:5000/v2/_catalog
My docker-compose file looks like this:
version: '3'
services:
  registry:
    image: registry:2
    container_name: registry
    ports:
      # forward requests to registry.ucdev.net:5000 to 127.0.0.1:443 on the container
      - "5000:443"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_ADDR: 0.0.0.0:443
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
    volumes:
      - /etc/letsencrypt/live/registry.ucdev.net/fullchain.pem:/certs/fullchain.pem
      - /etc/letsencrypt/live/registry.ucdev.net/privkey.pem:/certs/privkey.pem
      - ./auth:/auth
    restart: always
  server:
    image: uc/express
    container_name: registry-server
    ports:
      - "3000:3000"
    volumes:
      - ./:/project
    environment:
      NODE_ENV: production
    restart: always
    entrypoint: ["npm", "run", "production"]
An example of my front-end request looks like this:
axios.get('http://localhost:3000/api/images')
  .then((response) => {
    const { data: { registry, repositories } } = response;
    this.setState((state, props) => {
      return { registry, repositories }
    })
  })
  .catch((err) => {
    console.log(`Axios error -> ${err}`)
    console.error(err)
  })
That request is sent to the Express server and then on to the registry like this:
app.get('/api/images', async (req, res) => {
  // scheme is either http or https depending on NODE_ENV
  // registry is the name of the container on the docker network
  await axios.get(`${scheme}://registry:5000/v2/_catalog`)
    .then((response) => {
      const { data } = response;
      data.registry = registry;
      res.json(data);
    })
    .catch((err) => {
      console.log('Axios error -> images ', err);
      return err;
    })
})
Any help you could offer would be great! Thanks!
In this particular case it was an issue related to the firewall the server was behind: requests coming from the Docker containers were being blocked. To solve this problem we had to explicitly set network_mode to bridge, which allowed requests from within the containers to behave correctly. The final docker-compose file looks like this:
version: '3'
services:
  registry:
    image: registry:2
    container_name: registry
    # setting network_mode here and on the server helps the express api calls work correctly on the myregistry.net server.
    # otherwise, the calls fail with 'network unreachable' due to the firewall.
    network_mode: bridge
    ports:
      # forward requests to myregistry.net:5000 to 127.0.0.1:443 on the container
      - "5000:443"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_ADDR: 0.0.0.0:443
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
    volumes:
      - /etc/letsencrypt/live/myregistry.net/fullchain.pem:/certs/fullchain.pem
      - /etc/letsencrypt/live/myregistry.net/privkey.pem:/certs/privkey.pem
      - ./auth:/auth
    restart: always
  server:
    image: uc/express
    container_name: registry-server
    network_mode: bridge
    ports:
      - "3000:3000"
    volumes:
      - ./:/project
    environment:
      NODE_ENV: production
    restart: always
    entrypoint: ["npm", "run", "production"]
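Unrelated to the firewall issue, one small thing worth noting about the proxy handler shown in the question: the .catch branch only logs and returns the error, so Express never sends a response when the registry call fails and the client request is left hanging. A hedged tweak (the 502 status is just a suggestion):
    .catch((err) => {
      console.log('Axios error -> images ', err);
      // send an error response instead of returning the error,
      // so the client request does not hang when the registry call fails
      res.status(502).json({ error: err.message });
    })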

Vagrant provision multiple playbooks with multiple ssh users

I'm trying to provision a VM using Vagrant's Ansible provisioner, but I have two playbooks and they need to use different SSH users. My use case is this: I have a pre-provisioning script that runs under the vagrant SSH user that is set up by default. The pre-provisioning script then adds a different SSH user, provisioner, that is set up to SSH onto the VM with its own key. The actual provisioning script has a task that deletes the insecure vagrant user on the system, so it has to run as that other SSH user, provisioner, the user that the pre-provisioner creates.
I cannot figure out how to change the SSH user in the Vagrantfile. The example below is how far I've gotten. Despite changing config.ssh.username, Vagrant always sets the SSH user to the last value, in this case provisioner, and that doesn't authenticate when running the pre-provisioning script because the user hasn't been created yet.
Can I override the ssh user somehow? Maybe with an ansible variable itself inside the do |ansible| block (below)?
Is what I'm trying to achieve possible? It seems so straightforward I'm shocked I'm having this much trouble with it.
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "base_box"
  config.vm.box_url = "s3://bucket/base-box/base.box"
  config.vm.network "private_network", ip: "10.0.3.10"
  config.ssh.keep_alive = true
  config.vm.define "vagrant_image"

  config.vm.provision "ansible" do |ansible_pre|
    config.ssh.username = "vagrant"
    ansible_pre.playbook = "provisioning/pre_provisioning.yml"
    ansible_pre.host_vars = {
      "vagrant_image" => {
        "ansible_host" => "127.0.0.1",
      }
    }
    ansible_pre.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
  end

  config.vm.provision "ansible" do |ansible|
    config.ssh.username = "provisioner"
    ansible.playbook = "provisioning/provisioning.yml"
    ansible.host_vars = {
      "vagrant_image" => {
        "ansible_host" => "127.0.0.1",
      }
    }
    ansible.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
  end
end
(In case you were wondering, the s3 box URL only works because I've installed the vagrant-s3auth (1.3.2) plugin.)
You can set it in several places. In the Vagrantfile (but not via config.ssh.username, which will be overridden), pass it through Ansible extra vars:
config.vm.provision "ansible" do |ansible_pre|
  ansible_pre.playbook = "provisioning/pre_provisioning.yml"
  ansible_pre.host_vars = {
    "vagrant_image" => {
      "ansible_host" => "127.0.0.1",
    }
  }
  ansible_pre.extra_vars = {
    ansible_user: "vagrant"
  }
  ansible_pre.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
end

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provisioning/provisioning.yml"
  ansible.host_vars = {
    "vagrant_image" => {
      "ansible_host" => "127.0.0.1",
    }
  }
  ansible.extra_vars = {
    ansible_user: "provisioner"
  }
  ansible.raw_ssh_args = "-i /path/to/private/key/id_rsa"
  ansible.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
end
But you can also write a single playbook and switch users inside. See ansible_user and meta: reset_connection.
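A minimal sketch of that single-playbook approach, under the assumption that the pre-provisioning tasks create the provisioner user; the task names and placeholder tasks here are illustrative, not the asker's actual playbooks:
# single playbook: connect as vagrant, then continue as provisioner
- hosts: vagrant_image
  vars:
    ansible_user: vagrant            # initial connection uses the default Vagrant user
  tasks:
    - name: Run the pre-provisioning steps (create the provisioner user and its key)
      debug:
        msg: "pre_provisioning tasks go here"

    - name: Use the provisioner user for everything that follows
      set_fact:
        ansible_user: provisioner

    - name: Drop the cached SSH connection so the new user takes effect
      meta: reset_connection

    - name: Remove the insecure vagrant user
      become: yes
      user:
        name: vagrant
        state: absent
        remove: yes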

gitlab-runner "Checking for jobs... failed": Error decoding json payload unexpected EOF

I installed a gitlab-runner on Windows 10.
When GitLab CI starts to execute a job that the gitlab-runner is supposed to work on, sometimes the runner will emit the following log:
time="2017-12-26T16:39:49+08:00" level=warning msg="Checking for jobs... failed" runner=96856a1d status="Error decoding json payload unexpected EOF"
It is really annoying. I have to restart the gitlab-runner before it works again.
The following is the content of config.toml:
concurrent = 1
check_interval = 30

[[runners]]
  name = "windows docker runner"
  url = "http://my-gitlab.internal.example.com:9090/"
  token = "abcdefg1c39f10e869625c2118e"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
  [runners.cache]
    Insecure = false
Try running in debug mode (stop the service first) to get more info about the error.
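Assuming the runner was registered as a Windows service, a sketch of doing that from an elevated prompt in the directory that holds config.toml (gitlab-runner --debug run keeps the runner in the foreground with debug-level logging):
gitlab-runner stop
gitlab-runner --debug run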

Terraform cannot connect Chef provisioner with ssh

I cannot get Terraform's SSH connection to work via a private AWS key pair for Chef provisioning; the error looks to just be a timeout:
aws_instance.app (chef): Connecting to remote host via SSH...
aws_instance.app (chef): Host: 96.175.120.236:32:
aws_instance.app (chef): User: ubuntu
aws_instance.app (chef): Password: false
aws_instance.app (chef): Private key: true
aws_instance.app (chef): SSH Agent: true
aws_instance.app: Still creating... (5m30s elapsed)
Error applying plan:
1 error(s) occurred:
* dial tcp 96.175.120.236:32: i/o timeout
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Here is my Terraform plan; note the SSH settings. The key_name setting is set to my AWS key pair name, and ssh_for_chef.pem is the private key.
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
provider "aws" {
region = "us-east-1"
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
}
resource "aws_instance" "app" {
ami = "ami-88aa1ce0"
count = "1"
instance_type = "t1.micro"
key_name = "ssh_for_chef"
security_groups = ["sg-c43490e1"]
subnet_id = "subnet-75dd96e2"
associate_public_ip_address = true
provisioner "chef" {
server_url = "https://api.chef.io/organizations/xxxxxxx"
validation_client_name = "xxxxxxx-validator"
validation_key = "/home/user01/Documents/Devel/chef-repo/.chef/xxxxxxxx-validator.pem"
node_name = "dubba_u_7"
run_list = [ "motd_rhel" ]
user_name = "user01"
user_key = "/home/user01/Documents/Devel/chef-repo/.chef/user01.pem"
ssl_verify_mode = "false"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}
}
Any ideas?
I'm not sure if we had the same problem, since you didn't specify whether you were able to SSH to the instance.
In my case, I was running Terraform from within the VPC, and the connection was allowed by a security group, which can't be used with a public IP.
The solution is simple (but you will have to use the new conditional interpolations of Terraform v0.8.0):
Define this variable: variable use_public_ip { default = true }
Then, inside the connection section of the Chef provisioner, add the following line:
host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
If you wish to use the public IP, set the variable as true, otherwise, set it to false.
I use this for AWS:
connection {
  user = "ubuntu"
  host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
}
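Adapted to the resource in the question, a sketch of what the full connection block could look like; note that inside the aws_instance.app resource itself Terraform exposes the instance as self, so self.public_ip / self.private_ip are used instead of the aws_instance.instance references above (the variable name and key path are taken from this thread):
variable "use_public_ip" {
  default = true
}

resource "aws_instance" "app" {
  # ... ami, instance_type, key_name, security_groups, etc. as in the question ...

  connection {
    type        = "ssh"
    user        = "ubuntu"
    # pick the public or private address depending on where Terraform runs
    host        = "${var.use_public_ip ? self.public_ip : self.private_ip}"
    private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
  }
}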