Can't start Docker Traefik container with SSL

I'm trying to run Traefik with SSL using a self-signed certificate.
This is my docker-compose.yml file:
traefik:
  image: traefik
  restart: unless-stopped
  command: -c /dev/null --web --docker --logLevel=INFO --defaultEntryPoints='https' --entryPoints="Name:https Address::443 TLS:/certs/cert.pem,/certs/key.pem" --entryPoints="Name:http Address::80 Redirect.EntryPoint:https"
  ports:
    - '80:80'
    - '443:443'
    - '8080:8080'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./certs:/certs/
When running docker-compose up, I'm getting this error in the log:
level=error msg="Error creating TLS config: bad TLS Certificate KeyFile format, expected a path"
After that:
level=fatal msg="Error preparing server: bad TLS Certificate KeyFile format, expected a path"
And then:
traefik exited with code 1
I'm running Docker version 17.06.0 on a Mac.
Any clue what the issue could be here?
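One thing worth ruling out is how that long command string is tokenized before it reaches Traefik. Purely as a sketch (an untested assumption, not a confirmed fix), the same flags can be written in Compose list form so that each flag is passed through as a single argument:
command:
  - -c
  - /dev/null
  - --web
  - --docker
  - --logLevel=INFO
  - --defaultEntryPoints=https
  - "--entryPoints=Name:https Address::443 TLS:/certs/cert.pem,/certs/key.pem"
  - "--entryPoints=Name:http Address::80 Redirect.EntryPoint:https"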

Related

Traefik serving SSL certificate as invalid

Traefik is set up, redirecting to HTTPS, and seems to be configured correctly. However, when I try to access my project in the browser, the certificate is untrusted and I get a NET::ERR_CERT_INVALID error.
I can shell into the container and cat the certificate files, and it looks like Docker is mounting the files and carrying over permissions as expected.
Locally, I've generated my certificate:
openssl req -x509 -newkey rsa:4096 -keyout infrastructure/certs/mysite-dev.com.key -out infrastructure/certs/mysite-dev.com.crt -days 10000 -nodes -subj "/C=US/ST=State/L=City/O=cicd/CN=mysite-dev.com"
Adjusted permissions using:
chmod 644 infrastructure/certs/*.crt
chmod 600 infrastructure/certs/*.key
traefik-conf.yml
tls:
  certificates:
    - certFile: /certs/mysite-dev.com.crt
      keyFile: /certs/mysite-dev.com.key
      stores:
        - default
  stores:
    default: { }
Here's my relevant compose configuration:
services:
  web:
    build:
      context: .
      dockerfile: infrastructure/web/Dockerfile
    image: registry.gitlab.com/my-org/my-project:web
    env_file: .env
    volumes:
      - ./:/var/www/html
      - ./infrastructure/web:/etc/nginx/conf.d
    depends_on:
      - redis
      - db
    labels:
      traefik.enable: true
      traefik.http.routers.mysite-web.entrypoints: web,websecure
      traefik.http.middlewares.mysite-web.redirectscheme.scheme: https
      traefik.http.middlewares.mysite-web.redirectscheme.permanent: true
      traefik.http.routers.mysite-web.tls: true
      traefik.http.routers.mysite-web.rule: Host(`mysite-dev.com`)
      traefik.http.services.mysite-web.loadbalancer.server.port: 80
  traefik:
    command:
      - --api.dashboard=true
      - --api.insecure=true
      - --accesslog=true
      - --providers.docker.exposedbydefault=false
      - --providers.docker=true
      - --entryPoints.web.address=:80
      - --entryPoints.websecure.address=:443
      - --providers.file.filename=/conf/dynamic.yml
    image: traefik:2.7
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./infrastructure/certs:/certs:ro
      - ./infrastructure/traefik-conf.yml:/conf/dynamic.yml:ro
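For reference, one way to check which certificate Traefik is actually serving on port 443 (a diagnostic sketch; it assumes mysite-dev.com resolves to the Traefik container, e.g. via an /etc/hosts entry):
# print the subject and issuer of the certificate presented for the mysite-dev.com SNI name
openssl s_client -connect mysite-dev.com:443 -servername mysite-dev.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer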
While I wasn't able to figure this one out, I ended up resolving the issue by using LetsEncrypt to provide an SSL certificate instead. Here's my new traefik service:
traefik:
  command:
    - --api.dashboard=true
    - --api.insecure=true
    # - --accesslog=true
    - --log.level=INFO
    - --providers.docker.exposedbydefault=false
    - --providers.docker=true
    - --entryPoints.web.address=:80
    - --entryPoints.websecure.address=:443
    - --certificatesresolvers.myresolver.acme.dnschallenge=true
    - --certificatesresolvers.myresolver.acme.dnschallenge.provider=route53
    - --certificatesresolvers.myresolver.acme.email=me#mysite.com
    - --certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json
  environment:
    AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    AWS_HOSTED_ZONE_ID: ${AWS_HOSTED_ZONE_ID}
  image: traefik:2.7
  ports:
    - "80:80"
    - "443:443"
    # The Web UI (enabled by --api.insecure=true)
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./infrastructure:/letsencrypt # this is an empty directory, to store generated json
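Note that the mysite-web router also needs to point at that resolver so Traefik requests a certificate for it; a minimal sketch of the extra label, assuming the web service labels above otherwise stay the same:
labels:
  # ask the ACME resolver defined in the traefik command above for this router's certificate
  traefik.http.routers.mysite-web.tls.certresolver: myresolver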

Confluent REST proxy API SSL handshake fails

I have a Kafka cluster on Docker using Confluent images. I am using docker-compose to build the containers.
When I try to run the container, it starts but can't communicate with any broker due to an SSL handshake failure. I don't know if I'm missing some configuration:
[kafka-admin-client-thread | adminclient-1] ERROR org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -3 (/XXX:19092) failed authentication due to: SSL handshake failed
My Kafka brokers are configured as follows:
kafka1:
  image: confluentinc/cp-kafka:5.2.2
  container_name: kafka1
  ports:
    - "19092:19092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_ADVERTISED_LISTENERS: SSL://XXXX:19092
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker1.keystore.jks
    KAFKA_SSL_KEYSTORE_CREDENTIALS: broker1_keystore_creds
    KAFKA_SSL_KEY_CREDENTIALS: broker1_sslkey_creds
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker1.truststore.jks
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker1_truststore_creds
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
    KAFKA_SECURITY_PROTOCOL: SSL
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I am trying to bring up a Confluent REST Proxy API in another container using this configuration:
kafka-rest-proxy:
  image: confluentinc/cp-kafka-rest:5.2.2
  hostname: kafka-rest-proxy
  ports:
    - "18082:18082"
  environment:
    KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
    KAFKA_REST_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_REST_HOST_NAME: kafka-rest-proxy
    KAFKA_REST_BOOTSTRAP_SERVERS: SSL://XXX:19092,SSL://XXX:19092,SSL://XXX:19092
    KAFKA_REST_CLIENT_SECURITY_PROTOCOL: SSL
    KAFKA_REST_CLIENT_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.keystore.jks
    KAFKA_REST_CLIENT_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.truststore.jks
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_KEY_PASSWORD: XXX
    KAFKA_REST_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.keystore.jks
    KAFKA_REST_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.truststore.jks
    KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: XXX
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I configured the SSL connection only with the truststore (I removed the keystore config completely) and I used the OPTS environment variable:
docker run -d \
  --name krp \
  -p 8082:8082 \
  ...
  -v /home/ubuntu/kafka-keys:/kafka-keys \
  -e KAFKA_REST_CLIENT_OPTS="-Dssl.keystore.location=/kafka-keys/kafka.client.keystore.jks -Dssl.keystore.password=changeit -Dssl.truststore.location=/kafka-keys/kafka.client.truststore.jks" \
  confluentinc/cp-kafka-rest:5.3.1
And the connection worked.
In my case (Kubernetes with Helm) I had to change
"listeners":"http://0.0.0.0:8082" to "listeners":"https://0.0.0.0:8082"
I see the same mistake in your configuration:
KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
After that you will see at the end of the startup logs that it tries to load the keystore file.
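Applied to the compose file above, that change would look something like this (a sketch only; the existing KAFKA_REST_SSL_* keystore and truststore settings are what the HTTPS listener would then use):
kafka-rest-proxy:
  environment:
    # serve the REST endpoint itself over TLS so the SSL settings are actually loaded
    KAFKA_REST_LISTENERS: "https://0.0.0.0:18082"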

502 Proxy Error (docker + traefik + apache)

I'm trying to set up Traefik for SSL termination on my local development instance. Following this guide, I have the following configuration.
docker-compose.yml
version: '2.1'
services:
  mariadb:
    image: wodby/mariadb:10.2-3.0.2
    healthcheck:
      test: "/usr/bin/mysql --user=dummyuser --password=dummypasswd --execute \"SHOW DATABASES;\" | grep database"
      interval: 3s
      timeout: 1s
      retries: 5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: dummy
      MYSQL_DATABASE: database
    volumes:
      - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
      - mysql:/var/lib/mysql # I want to manage volumes manually.
  php:
    depends_on:
      mariadb:
        condition: service_healthy
    ports:
      - "25:25"
      - "587:587"
    environment:
      PHP_FPM_CLEAR_ENV: "no"
      DB_HOST: mariadb
      #DB_USER: dummy
      DB_PASSWORD: dummypasswd
      DB_NAME: database
      DB_DRIVER: mysql
      PHP_POST_MAX_SIZE: "256M"
      PHP_UPLOAD_MAX_FILESIZE: "256M"
      PHP_MAX_EXECUTION_TIME: 300
    volumes:
      - codebase:/var/www/html/
      - private:/var/www/html/private
  solr:
    image: mxr576/apachesolr-4.x-drupal-docker
    ports:
      - "8983:8983"
    labels:
      - 'traefik.backend=solr'
      - 'traefik.port=8983'
      # - 'traefik.frontend.rule=Host:192.168.33.10'
    volumes:
      - solr:/opt/solr/example/solr/collection1/data
    restart: always
  portainer:
    image: portainer/portainer
    command: --no-auth -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - 'traefik.backend=portainer'
      - 'traefik.port=9000'
    restart: always
  apache:
    image: wodby/php-apache:2.4-2.0.2
    # ports:
    #   - "80:80"
    depends_on:
      - php
    environment:
      APACHE_LOG_LEVEL: warn
      APACHE_BACKEND_HOST: php
      APACHE_SERVER_ROOT: /var/www/html/drupal
    volumes:
      - codebase:/var/www/html/
      - private:/var/www/html/private
    labels:
      - 'traefik.backend=apache'
      - 'traefik.docker.network=proxy'
      - "traefik.frontend.rule=Host:127.0.0.1"
      - "traefik.enable=true"
      - "traefik.port=80"
      - "traefik.default.protocol=http"
    restart: always
    networks:
      - proxy
  traefik:
    image: traefik
    command: -c /traefik.toml --web --docker --logLevel=INFO
    ports:
      - '80:80'
      - '443:443'
      - '8888:8080' # Dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /codebase/traefik.toml:/traefik.toml
      - /codebase/certs/cert.crt:/cert.crt
      - /codebase/certs/cert.key:/cert.key

volumes:
  solr:
    external: true
  mysql:
    external: true
  codebase:
    external: true
  private:
    external: true

networks:
  proxy:
    external: true
traefik.toml
logLevel = "DEBUG" # <---
defaultEntryPoints = ["https", "http"] # <---
[accessLog]
[traefikLog]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "/cert.crt"
keyFile = "/cert.key"
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedbydefault = false
When trying to verify the instance, I get a 502 Bad Gateway
curl -i -k https://127.0.0.1
HTTP/1.1 502 Bad Gateway
Content-Length: 392
Content-Type: text/html; charset=iso-8859-1
Date: Fri, 14 Sep 2018 16:34:36 GMT
Server: Apache/2.4.29 (Unix) LibreSSL/2.5.5
X-Content-Type-Options: nosniff
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>502 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
<p>The proxy server received an invalid
response from an upstream server.<br />
The proxy server could not handle the request <em>GET /index.php</em>.<p>
Reason: <strong>DNS lookup failure for: php</strong></p></p>
</body></html>
Resetting docker-compose and the Docker network didn't help.
I've checked the issue on their repo and it seems like nobody has found a definitive solution. Does anybody have an idea how to solve this?
Edit: updated with the full docker-compose file.
You are trying to connect to the php container from the apache service using service discovery, but the php container is not attached to the proxy network because you haven't declared a network for it. The same is true for mariadb. So when apache/traefik look for the host php, which is not attached to the proxy network, they throw a 502 error.
Unless you specify the external network explicitly, Docker containers will not be connected to it.
Hence, you have to specify the network as follows for all the services in order to make Docker service discovery work properly:
networks:
- proxy
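For example, a sketch of what that looks like for the php and mariadb services from the compose file above (all other keys stay as they are):
php:
  networks:
    - proxy
mariadb:
  networks:
    - proxy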
Bonus:
Since you have done port mapping, you can also use the public IP of your host machine followed by the port to connect to services, both from other Docker containers and from outside.
Example:
Let us assume your host IP is 192.168.0.123; then you can connect to php from any service in a Docker container, and even from outside Docker, at 192.168.0.123:25 and 192.168.0.123:587. This is because you have exposed ports 25 and 587 by mapping them to the same host ports.
Some references:
Docker networking
Networking using the host network
Connect a container to a user-defined bridge
Networking with standalone containers
Service discovery
Networking in Compose (check "Specify custom networks" section)

Accessing container on port 3000 thru traefik

Okay, so I've got a Node.js app I'd like to access through Traefik.
The Node.js app runs on port 3000.
I've got Traefik running after following the test-it instructions from the getting started page.
docker-compose.yml
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    environment:
      - NODE_ENV=development
      - NODE_PORT=3000
    volumes:
      - ./app:/app
    expose:
      - "3000"
    networks:
      - web
    labels:
      - "traefik.backend=microservice"
      - "traefik.backend.port=3000"
      - "traefik.port=3000"
      - "traefik.frontend.rule=Host:microservice.docker.localhost"

networks:
  web:
    external:
      name: traefik_webgateway
Trying to connect:
curl -H Host:microservice.docker.localhost http://localhost/
Bad Gateway
curl -H Host:microservice.docker.localhost http://localhost:3000/
curl: (52) Empty reply from server
But curl -H Host:whoami.docker.localhost http://localhost/ works as intended.
The problem was that my microservice was bound to localhost:3000. I changed it to listen on 0.0.0.0:3000 instead, and it worked like a charm.
removed - "traefik.backend.port=3000" from the docker-compose.yml
added 127.0.0.1 microservice.docker.localhost to /etc/hosts
which rendered me able to:
curl http://microservice.docker.localhost/ and get the response I was expecting
I'm a microservice!

jwilder/nginx-proxy: Not able to integrate ssl with Nginx

We are working on hosting multiple websites on a single port with jwilder/nginx-proxy over SSL. We are able to deploy the solution without SSL and it works fine, but when we try to enable SSL it fails on HTTPS calls.
Our docker-compose file is as below:
docker-compose.yml
site1:
  build: site1
  environment:
    VIRTUAL_HOST: site1.domainlocal.com
    VIRTUAL_PROTO: https
  restart: always
site2:
  build: site2
  environment:
    VIRTUAL_HOST: site2.domainlocal.com
    VIRTUAL_PROTO: https
  restart: always
site3:
  build: site3
  environment:
    VIRTUAL_HOST: site3.domainlocal.com
    VIRTUAL_PROTO: https
  restart: always
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - certs:/etc/nginx/certs:ro
  restart: always
  privileged: true
PS: the "certs" folder is kept in the same folder as the docker-compose file.
We are using a self-signed certificate generated with openssl.
The folder structure looks like this:
Main_folder-|
            |- docker-compose.yml
            |
            |- certs/ (.csr and .key files)
            |
            |- site1/Dockerfile + Nodejs
            |- site2/Dockerfile + Nodejs
            |- site3/Dockerfile + Nodejs
Please suggest the possible cause of the issue and a solution.
Output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c71b52c3e6bd compose_site3 "/bin/sh -c 'node ..." 3 days ago Up 3 days 80/tcp compose_site3_1
41ffb9ec3983 jwilder/nginx-proxy "/app/docker-entry..." 3 days ago Up 3 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp compose_nginx-proxy_1
a154257c62ec compose_site1 "/bin/sh -c 'node ..." 3 days ago Up 3 days 80/tcp compose_site1_1
3ed556e9287e compose_site2 "/bin/sh -c 'node ..." 3 days ago Up 3 days 80/tcp compose_site2_1
After spending so much time on it, I was finally able to solve the issue. For SSL integration with jwilder/nginx-proxy, there is no requirement to name the certificate and key after the domain; they can have any name, as long as you reference the certificate name in the docker-compose file (I found this approach by trial and error).
So your docker-compose file should look like:
site1:
  build: site1
  environment:
    VIRTUAL_HOST: site1.domainlocal.com
    CERT_NAME: mycertificate
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs:ro
  restart: always
site2:
  build: site2
  environment:
    VIRTUAL_HOST: site2.domainlocal.com
    CERT_NAME: mycertificate
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs:ro
  restart: always
site3:
  build: site3
  environment:
    VIRTUAL_HOST: site3.domainlocal.com
    CERT_NAME: mycertificate
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs:ro
  restart: always
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - "80:80"
    - "443:443"
  environment:
    DEFAULT_HOST: domainlocal.com # default host
    CERT_NAME: mycertificate # wildcard certificate name without extension
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /etc/ssl/certs:/etc/nginx/certs # certificate path in the docker container
  restart: always
  privileged: true
Then just build and run the stack using "docker-compose up --build", and you are now serving over a secured layer.
Your certificate should end with a '.crt' extension, not '.csr'. Also make sure it is named appropriately for the domain, matching the VIRTUAL_HOST variable. According to the documentation:
The certificate and keys should be named after the virtual host with a .crt and .key extension. For example, a container with VIRTUAL_HOST=foo.bar.com should have a foo.bar.com.crt and foo.bar.com.key file in the certs directory.
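For example, adapting the openssl invocation used earlier in this thread, a certificate named after the first virtual host could be generated like this (a sketch; site1.domainlocal.com is taken from the compose file above):
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout certs/site1.domainlocal.com.key \
  -out certs/site1.domainlocal.com.crt \
  -subj "/CN=site1.domainlocal.com"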