We are setting up hosting for multiple websites on a single port with jwilder/nginx-proxy over SSL. We are able to deploy the solution without SSL and it works fine, but as soon as we enable SSL, every HTTPS call fails.
Our docker-compose file is as below:
docker-compose.yml
site1:
  build: site1
  environment:
    VIRTUAL_HOST: site1.domainlocal.com
    VIRTUAL_PROTO: https
  restart: always
site2:
  build: site2
  environment:
    VIRTUAL_HOST: site2.domainlocal.com
    VIRTUAL_PROTO: https
  restart: always
site3:
  build: site3
  environment:
    VIRTUAL_HOST: site3.domainlocal.com
    VIRTUAL_PROTO: https
  restart: always
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - certs:/etc/nginx/certs:ro
  restart: always
  privileged: true
PS: the "certs" folder is kept in the same folder as the docker-compose file. The certificates are self-signed and were generated with openssl.
Folder structure is like:
Main_folder
|- docker-compose.yml
|- certs/    (.csr and .key files)
|- site1/    (Dockerfile + Node.js)
|- site2/    (Dockerfile + Node.js)
|- site3/    (Dockerfile + Node.js)
Please suggest the possible cause of the issue and a solution for it.
Output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c71b52c3e6bd compose_site3 "/bin/sh -c 'node ..." 3 days ago Up 3 days 80/tcp compose_site3_1
41ffb9ec3983 jwilder/nginx-proxy "/app/docker-entry..." 3 days ago Up 3 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp compose_nginx-proxy_1
a154257c62ec compose_site1 "/bin/sh -c 'node ..." 3 days ago Up 3 days 80/tcp compose_site1_1
3ed556e9287e compose_site2 "/bin/sh -c 'node ..." 3 days ago Up 3 days 80/tcp compose_site2_1
So after spending a lot of time on it, I was finally able to solve the issue. For SSL integration with jwilder/nginx-proxy, the certificate and key do not have to be named after the domain; they can have any name, as long as you reference that name through CERT_NAME in the docker-compose file (I found this approach by trial and error).
So your docker-compose file should look like:
site1:
  build: site1
  environment:
    VIRTUAL_HOST: site1.domainlocal.com
    CERT_NAME: mycertificate
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs:ro
  restart: always
site2:
  build: site2
  environment:
    VIRTUAL_HOST: site2.domainlocal.com
    CERT_NAME: mycertificate
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs:ro
  restart: always
site3:
  build: site3
  environment:
    VIRTUAL_HOST: site3.domainlocal.com
    CERT_NAME: mycertificate
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs:ro
  restart: always
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - "80:80"
    - "443:443"
  environment:
    DEFAULT_HOST: domainlocal.com  # default host
    CERT_NAME: mycertificate       # wildcard certificate name, without extension
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /etc/ssl/certs:/etc/nginx/certs  # certificate path in the proxy container
  restart: always
  privileged: true
Then just build and run the stack with "docker-compose up --build", and congratulations, you are now serving over a secured layer.
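As a quick smoke test (hostnames taken from the compose file above; the -k flag is needed because curl cannot verify a self-signed certificate against a CA):

# -k / --insecure: skip CA verification for the self-signed certificate
curl -vk https://site1.domainlocal.com/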
Your certificate should end with a '.crt' extension, not '.csr'. Also make sure it is named appropriately for the domain, matching the VIRTUAL_HOST variable. According to the documentation:
The certificate and keys should be named after the virtual host with a .crt and .key extension. For example, a container with VIRTUAL_HOST=foo.bar.com should have a foo.bar.com.crt and foo.bar.com.key file in the certs directory.
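For reference, a certificate/key pair named after the virtual host can be generated with openssl roughly like this (a sketch; the key size and validity period here are arbitrary choices):

# self-signed certificate and key named after the VIRTUAL_HOST value
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout certs/site1.domainlocal.com.key \
  -out certs/site1.domainlocal.com.crt \
  -subj "/CN=site1.domainlocal.com"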
Related
For the company I work at, I set up a Docker environment using docker-compose and multiple containers, so we can all benefit from having the same environment. I created a subdomain DNS record (dev.company.com) pointing to 127.0.0.1, which works fine for reaching projects from the browser through the appropriate Apache vhosts. The problem, however, is that we cannot resolve this domain within the PHP container, because the DNS record points to 127.0.0.1. How can I add a custom entry to the PHP container so that *.dev.company.com resolves to the Apache container?
Adding the entries to /etc/hosts is not really an option either, because we run 50+ projects.
I found some solutions online that just said to put PHP in the same container, but that rather defeats the purpose of having separate containers per service. I've added my docker-compose file below for reference.
Note: I'm the only one using Linux in the office; my colleagues use Docker on Windows or Mac, so a Linux-only solution won't cut it :)
version: "3.7"
services:
  php:
    build: php
    env_file:
      - ./conf/php.config.env
    volumes:
      - ./htdocs:/htdocs
    expose:
      - "9000"
    links:
      - mysql
      - mssql
      - mail
    restart: always
    init: true
  apache:
    build: apache
    volumes:
      - ./htdocs:/htdocs:ro
    ports:
      - "80:80"
      - "443:443"
    links:
      - php
    restart: always
    init: true
  mysql:
    build: mysql
    env_file:
      - ./conf/mysql.config.env
    volumes:
      - ./mysql/data:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: always
  mssql:
    image: microsoft/mssql-server-linux
    env_file:
      - ./conf/mssql.config.env
    volumes:
      - ./mssql/data:/var/opt/mssql/data
    ports:
      - "1433:1433"
    restart: always
  mail:
    image: schickling/mailcatcher
    ports:
      - "1080:1080"
    restart: always
    init: true
  redis:
    image: redis
    expose:
      - "6379"
    links:
      - php
    restart: always
    init: true
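One cross-platform way to approach this (a sketch, not a verified answer for this exact setup): give the apache service network aliases for each project hostname, so Docker's embedded DNS resolves them to the Apache container from inside the php container. Docker's DNS has no wildcard support, so each hostname has to be listed explicitly, but the alias list could be generated from your project list:

# sketch: the hostnames are examples; add one alias per project
apache:
  build: apache
  networks:
    default:
      aliases:
        - project1.dev.company.com
        - project2.dev.company.com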
I asked a question here and got part of my problem solved, but I was advised to create another question because it started to get a bit lengthy in the comments.
I'm trying to use Docker to run multiple PHP, MySQL and Apache based apps on my Mac, all of which use different docker-compose.yml files (more details in the post I linked). I have quite a few repositories, some of which communicate with one another, and not all of them use the same PHP version. Because of this, I don't think it's wise to cram 20+ separate repositories into one single docker-compose.yml file. I'd like to have a separate docker-compose.yml file for each repository, and I want to be able to use an /etc/hosts entry for each app so that I don't have to specify the port. For example, I would access two different repositories as http://dockertest.com and http://dockertest2.com (via /etc/hosts entries), rather than having to specify ports as in http://dockertest.com:8080 and http://dockertest.com:8081.
Using the accepted answer from my other post I was able to get one app running at a time (one docker-compose.yml file), but if I try to launch another with docker-compose up -d, it fails because port 80 is already taken. How can I run multiple Docker apps at the same time, each with its own docker-compose.yml file, without having to specify the port in the URL?
Here's a docker-compose.yml file for the app I made. In my /etc/hosts I have 127.0.0.1 dockertest.com
version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    networks:
      - backend
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:
I would suggest extracting nginx-proxy into a separate docker-compose.yml and creating a repository for the "reverse proxy" configuration, with the following:
A file with extra contents to add to /etc/hosts
127.0.0.1 dockertest.com
127.0.0.1 anothertest.com
127.0.0.1 third-domain.net
And a docker-compose.yml which will have only the reverse proxy
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
Next, as you already mentioned, create a docker-compose.yml for each of your repositories that acts as a web endpoint. You will need to add the VIRTUAL_HOST env var to the services that serve your applications (e.g. Apache).
The nginx-proxy container can run in "permanent mode", as it has a small footprint. This way, whenever you start a new container with a VIRTUAL_HOST env var, the nginx-proxy configuration is automatically updated to include the new local domain. (You will still have to update /etc/hosts with the new entry.)
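The day-to-day workflow then looks roughly like this (assuming the nginx-proxy/ and service1/ folder names used in the snippets below):

# start the shared reverse proxy once; it keeps running in the background
cd nginx-proxy && docker-compose up -d

# bring up any project; nginx-proxy picks up its VIRTUAL_HOST automatically
cd ../service1 && docker-compose up -d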
If you decide to use networks, your web endpoint containers will have to be in the same network as nginx-proxy, so your docker-compose files will have to be modified similar to this:
# nginx-proxy/docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    networks:
      - reverse-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  reverse-proxy:
# service1/docker-compose.yml
version: "3.3"
services:
  php1:
    ...
    networks:
      - backend1
  apache1:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend1
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql1:
    ...
    networks:
      - backend1
networks:
  backend1:
  nginx-proxy_reverse-proxy:
    external: true
# service2/docker-compose.yml
version: "3.3"
services:
  php2:
    ...
    networks:
      - backend2
  apache2:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend2
    environment:
      - VIRTUAL_HOST=anothertest.com
  mysql2:
    ...
    networks:
      - backend2
networks:
  backend2:
  nginx-proxy_reverse-proxy:
    external: true
The reverse-proxy network created in nginx-proxy/docker-compose.yml is referred to as nginx-proxy_reverse-proxy in the other docker-compose files, because whenever you define a network, its final name is {{folder name}}_{{network name}}.
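If the folder-name prefix bothers you and you can use Compose file format 3.5 or newer, the network name can be pinned explicitly with the name key, for example:

# nginx-proxy/docker-compose.yml (requires file format version 3.5+)
networks:
  reverse-proxy:
    name: reverse-proxy  # fixed name, no folder-name prefix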
If you want to have a look at a solution that relies on browser proxy extension instead of /etc/hosts, check out mitm-proxy-nginx-companion
In this project I have an Apache Docker container (called loadbalancer) which routes requests to one of two other Apache Docker containers: if the path matches "/support*", the request goes to the support container; otherwise it goes to webapp. Currently, to achieve this, I have hard-coded my docker-compose network's subnet and each container's IPv4 address, and an Apache conf file points at those hard-coded IPs. This works great for local development environments.
However, it doesn't work for staging servers, which need to host multiple instances of the project: I can't spin up more than one instance of this docker-compose network because of the hard-coded subnet and IPv4 addresses. How can I achieve this load-balancer setup without hard-coding the subnet, so that I can run multiple instances? Or is there a better way to host many copies on a single server, such as many vhosts in the Apache container? What would you suggest? I have no clue what best practice is here.
loadbalancer.conf
<VirtualHost *:80>
    TimeOut -1
    ProxyPass "/support" "http://172.20.0.5/support"
    ProxyPassReverse "/support" "http://172.20.0.5/support"
    ProxyPass "/" "http://172.20.0.2/"
    ProxyPassReverse "/" "http://172.20.0.2/"
    ProxyPreserveHost On
    TimeOut -1
</VirtualHost>
docker-compose.yml
version: '3.7'
networks:
  pi-net:
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  cli:
    container_name: cli
    build: ./docker/cli
    networks:
      pi-net:
        ipv4_address: 172.20.0.3
    volumes:
      - type: bind
        source: .
        target: /srv/www
      - type: bind
        source: $HOME/.gitconfig
        target: /home/developer/.gitconfig
    extra_hosts:
      - "pi.docker:172.20.0.2"
    user: developer
    stdin_open: true
    tty: true
    environment:
      GIT_PAGER: cat
  webapp:
    container_name: webapp
    build:
      context: ./docker/web-server
      args:
        - vhostsFileName=webapp.conf
    networks:
      pi-net:
        ipv4_address: 172.20.0.2
    ports:
      - 80
    volumes:
      - type: bind
        source: .
        target: /srv/www
    # depends on cli because the cli entrypoint.sh creates var/ files needed by httpd
    depends_on:
      - "cli"
  support:
    container_name: support
    build:
      context: ./docker/web-server
      args:
        - vhostsFileName=support.conf
    networks:
      pi-net:
        ipv4_address: 172.20.0.5
    ports:
      - 80
    volumes:
      - type: bind
        source: .
        target: /srv/www
    # depends on cli because the cli entrypoint.sh creates var/ files needed by httpd
    depends_on:
      - "cli"
  loadbalancer:
    container_name: loadbalancer
    build:
      context: ./docker/web-server
      args:
        - vhostsFileName=loadbalancer.conf
    networks:
      pi-net:
        ipv4_address: 172.20.0.6
    ports:
      - 80:80
  db:
    container_name: db
    build: ./docker/mysql
    networks:
      pi-net:
        ipv4_address: 172.20.0.4
    ports:
      - 3306:3306
    volumes:
      - db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: pi
      MYSQL_USER: root
      MYSQL_PASSWORD: root
    restart: always
volumes:
  db:
    driver: local
Docker provides an internal DNS service to resolve container names as host names, and Docker Compose provides a network for you. You should make two changes:
In your Apache configuration, replace the explicit IP addresses with the name of the corresponding service block in the docker-compose.yml: http://support/support, for example.
Delete all of the networks: and container_name: settings in the docker-compose.yml, since they're redundant and limit reuse of the file. (Docker will assign IP addresses for you and Docker Compose will pick container names, but there's nothing wrong with these defaults.)
(Many questions of this form also use the outdated links: functionality; it's safe to delete all of the links: blocks too.)
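Applied to the loadbalancer.conf above, the change would look roughly like this (a sketch, using the webapp and support service names from the docker-compose.yml):

<VirtualHost *:80>
    TimeOut -1
    ProxyPreserveHost On
    # service names resolve through Docker's internal DNS
    ProxyPass        "/support" "http://support/support"
    ProxyPassReverse "/support" "http://support/support"
    ProxyPass        "/" "http://webapp/"
    ProxyPassReverse "/" "http://webapp/"
</VirtualHost>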
I'm trying to run traefik with SSL, using a self-signed certificate. This is my docker-compose.yml file:
traefik:
  image: traefik
  restart: unless-stopped
  command: -c /dev/null --web --docker --logLevel=INFO --defaultEntryPoints='https' --entryPoints="Name:https Address::443 TLS:/certs/cert.pem,/certs/key.pem" --entryPoints="Name:http Address::80 Redirect.EntryPoint:https"
  ports:
    - '80:80'
    - '443:443'
    - '8080:8080'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./certs:/certs/
When running docker-compose up, I'm getting this error in the log:
level=error msg="Error creating TLS config: bad TLS Certificate KeyFile format, expected a path"
after that:
level=fatal msg="Error preparing server: bad TLS Certificate KeyFile format, expected a path"
And then:
traefik exited with code 1
I'm running Docker Version 17.06.0 on a Mac
Any clue as to what the issue could be here?
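I can't verify this without the full setup, but this error usually means traefik received something other than a bare file path after TLS: — quoting artifacts in the command string are a common culprit. One way to rule out quoting problems entirely is to write the command as a YAML list, so no shell-style parsing is involved (a sketch):

command:
  - -c
  - /dev/null
  - --web
  - --docker
  - --logLevel=INFO
  - --defaultEntryPoints=https
  - "--entryPoints=Name:https Address::443 TLS:/certs/cert.pem,/certs/key.pem"
  - "--entryPoints=Name:http Address::80 Redirect.EntryPoint:https"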
I have an Apache server installed and running, serving 3 PHP websites. I have also developed a mobile API in Django, running in 4 Docker containers (django, redis, elasticsearch, rabbitmq) managed with fig.sh. Apache is already running and I want to keep it, while configuring it to front the app running in the Docker containers. If the app were running under Apache itself I would configure mod_wsgi, but it is not, so I don't know how to proceed. Any ideas? Thanks a lot.
Note: I am using Docker 1.5 and Apache 2.2 on CentOS 6.6.
Edit:
Apache contains 3 <VirtualHost *:80> blocks for the 3 websites' domains:
1. website1.com
2. website2.com
3. website3.com
The API I want to deploy runs on api.website1.com, a subdomain of website1.com.
fig.yml
db:
  image: mysql
  volumes:
    - /var/lib/mysql:/var/lib/mysql
  volumes_from:
    - mysql_data
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: 123
  # command:
redis:
  image: redis:3
elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"
rabbitmq:
  image: tutum/rabbitmq
  environment:
    - RABBITMQ_PASS=123456
  ports:
    - "5672:5672"   # we forward this port because it's useful for debugging
    - "15672:15672" # here we can access the rabbitmq management plugin
web:
  build: .
  command: python3 manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db:db
    - elasticsearch:elasticsearch
    - rabbitmq:rabbit
    - redis:redis
# container with redis worker
worker:
  build: .
  command:
  volumes:
    - .:/code/mobile_api
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis
For more information about the general issues around proxying from Apache to backend Python web sites that use mod_wsgi, see:
http://blog.dscpl.com.au/2015/06/proxying-to-python-web-application.html
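As a rough illustration of that approach, a subdomain vhost could proxy to the published Django port (a sketch, assuming mod_proxy and mod_proxy_http are enabled and that the web container from the fig.yml above publishes port 8000 on the host):

<VirtualHost *:80>
    ServerName api.website1.com
    ProxyPreserveHost On
    # forward everything to the Django container published on host port 8000
    ProxyPass        "/" "http://127.0.0.1:8000/"
    ProxyPassReverse "/" "http://127.0.0.1:8000/"
</VirtualHost>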