I'm having trouble figuring out how to properly set up a web server with HTTPS that consists of multiple Docker containers.
I have a main container running Apache, using the "httpd" Docker image.
For simplicity, let's call this website "main.com". SSL works perfectly here: I have set up the httpd.conf configuration file to redirect all calls on port 80 to port 443 and loaded the SSL and proxy modules. (Ports 80 and 443 are both exposed.)
I have another Docker container which runs an API serving geodata to "main.com". Let's call this container "side-container". In the Dockerfile for "side-container" I expose port 8080. I could then call "main.com:8080" to send a query to the API running in "side-container".
The problem: at least I could, until I changed "main.com" to use HTTPS only.
Now I am stuck trying to get "side-container" working again. When I try to connect to "main.com:8080" I get a timeout error.
My "docker ps" looks like this:
IMAGE COMMAND PORTS NAMES
main-container "httpd-foreground" 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:9010->9010/tcp main
side-container:latest "/docker-entrypoint.…" 0.0.0.0:8080->8080/tcp side-container
I use docker-compose to control the containers, so perhaps I need to set something there?
I have made an attempt to get it working by using a reverse proxy setting in Apache (see the httpd.conf excerpt below), using port 9010 on the "main" container to point to port 8080 on "side-container".
I can get it to reply with an "internal server error" due to a failed SSL handshake, but no more than that.
My background is in pure physics, not software and web servers, so maybe I am missing something obvious. Any hint is greatly appreciated.
From httpd.conf:
<IfModule mod_ssl.c>
Listen 443
Listen 8080
Listen 0.0.0.0:9010 https
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
SSLProtocol all -SSLv3
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
SSLCompression off
SSLSessionTickets off
SSLRandomSeed startup file:/dev/urandom 512
SSLRandomSeed connect file:/dev/urandom 512
SSLSessionCache shmcb:/dev/ssl_gcache_data(512000)
</IfModule>
<VirtualHost *:443>
ServerName main.com
SSLEngine on
#Primary Certificate file
SSLCertificateFile /usr/local/apache2/conf/certificate.crt
#Private Key
SSLCertificateKeyFile /usr/local/apache2/conf/private.key
#Chain bundle file
SSLCertificateChainFile /usr/local/apache2/conf/ca_bundle.crt
</VirtualHost>
<VirtualHost 0.0.0.0:9010>
ServerName main.com
SSLEngine on
SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
SSLCertificateFile /usr/local/apache2/conf/certificate.crt
SSLCertificateKeyFile /usr/local/apache2/conf/private.key
SSLCertificateChainFile /usr/local/apache2/conf/ca_bundle.crt
ProxyPass /apptest http://0.0.0.0:8080/
ProxyPassReverse /apptest https://0.0.0.0:8080/
</VirtualHost>
docker-compose.yml:
version: '3'
services:
main-container:
build:
context: .
dockerfile: Dockerfile
container_name: "main"
restart: "always"
ports:
- "80:80"
- "443:443"
- "9010:9010"
links:
- side-container
networks:
- fu
side-container:
image: side-container:latest
container_name: "side-container"
ports:
- "8080:8080"
volumes:
- ${HOME}/data:/data
restart: "always"
networks:
- fu
networks:
fu:
driver: bridge
When linking Docker containers on the same network with docker-compose, you need to reference them by their Docker service name. So instead of 0.0.0.0, use side-container:
ProxyPass /apptest http://side-container:8080/
ProxyPassReverse /apptest http://side-container:8080/
NOTE: the server running in the side container must be listening on 0.0.0.0:8080 in its own httpd configuration.
Now you can remove the ports declaration from the docker-compose file altogether: since both containers are on the same Docker network, you don't need to publish any ports. Publishing ports is only necessary if you want to reach side-container from localhost on the host machine or from the internet.
So from the side container remove:
ports:
- "8080:8080"
Also, in the docker-compose file you should replace links with the newer depends_on syntax:
depends_on:
- side-container
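Putting the changes together, here is a sketch of the adjusted docker-compose.yml (assuming the same services as above, with the ports entry removed from side-container and links replaced by depends_on):

```yaml
version: '3'
services:
  main-container:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: "main"
    restart: "always"
    ports:
      - "80:80"
      - "443:443"
      - "9010:9010"
    depends_on:
      - side-container
    networks:
      - fu
  side-container:
    image: side-container:latest
    container_name: "side-container"
    # no ports: published; reachable as side-container:8080 inside the "fu" network
    volumes:
      - ${HOME}/data:/data
    restart: "always"
    networks:
      - fu
networks:
  fu:
    driver: bridge
```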
A note on the ports declaration, for educational purposes:
Please bear in mind that specifying a port as 8080:8080 is the same as 0.0.0.0:8080:8080, and 0.0.0.0 accepts requests from anywhere on the internet. To restrict a port to localhost (127.0.0.1) on the machine running Docker, use 127.0.0.1:8080:8080 instead.
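For example, if you did want side-container reachable from the host machine but not from the internet, the ports entry could look like this (a sketch; the service name matches the compose file in the question):

```yaml
side-container:
  image: side-container:latest
  ports:
    # only reachable from the docker host itself, not from outside
    - "127.0.0.1:8080:8080"
```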
Related
I have a similar problem to the one mentioned in "Apache redirect to another port", but the answer does not work for me.
I have Apache set up on a Debian VM, with an instance of Nextcloud.
I set up a vhost for cloud.mydomain.com on port 443 and it works fine.
I also installed GitLab on the same VM, and its external URL is https://debianvm.local:1234
How can I redirect https://gitlab.mydomain.com:443 to https://debianvm.local:1234?
I have tried
<VirtualHost *:443>
ServerName gitlab.mydomain.com
ServerAlias gitlab.mydomain.com
ProxyPass / https://debianvm:8508/
SSLEngine on
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
</VirtualHost>
I was hoping to later be able to call certbot -d gitlab.mydomain.com and change the certificate...
I also tried putting exactly the same file for *:80 (without the SSLEngine lines) and then calling certbot, but without success.
I also tried putting https://gitlab.mydomain.com directly in the GitLab configuration, in vain.
Any ideas?
Thanks.
On the DNS side, I set up two type-A DNS records: one for cloud.mydomain.com and one for gitlab.mydomain.com, both pointing to the same IP.
On the port forwarding side, the NAS with the host IP forwards ports 80 and 443 to ports 80 and 443 of debianvm.local.
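One detail worth checking in the vhost above: proxying to an https:// backend with mod_proxy requires SSLProxyEngine on, which the configuration in the question is missing. A sketch of what the vhost might look like (assuming the GitLab backend really is https://debianvm.local:1234, and keeping the snakeoil certificate paths from the question):

```apache
<VirtualHost *:443>
    ServerName gitlab.mydomain.com
    SSLEngine on
    # required when ProxyPass targets an https:// backend
    SSLProxyEngine on
    ProxyPass / https://debianvm.local:1234/
    ProxyPassReverse / https://debianvm.local:1234/
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
</VirtualHost>
```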
I have a collection of web applications, each running inside its own Docker container. I can access them locally via http://localhost:9001, for example. I want to access them remotely via https://site.example.com instead. I have a wildcard Let's Encrypt certificate for example.com.
I understand I need Apache to direct traffic from the FQDN to the port, so I have set up a VirtualHost (below). Normal web activity seems to work fine; I can navigate the website normally.
However, when I try to log in using OAuth (e.g. Bitbucket), I get a URI redirect mismatch error. This does not happen when I run the app outside of a container. I think there is something wrong with my proxy setup. Is anyone able to advise how to rectify this?
<VirtualHost *:443>
ServerAdmin admin@example.com
ServerName site.example.com
ServerSignature Off
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:9001/
ProxyPassReverse / http://127.0.0.1:9001/
<IfModule mod_headers.c>
Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
</IfModule>
SSLEngine On
SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/fullchain.pem
AllowEncodedSlashes NoDecode
</VirtualHost>
For such a use case, Traefik is a well-suited tool. Coupled with docker-compose, you can set up multiple Docker containers on the same host, each with its own endpoint. To access them remotely, you then just have to bind the remote host's IP address to all your endpoints (or use a public DNS entry that does it for you).
Here is a docker-compose.yml example using Traefik.
version: "3"
services:
traefik:
image: traefik:latest
command: --api --docker --logLevel=DEBUG
ports:
- "80:80"
- "443:443"
- "8082:8080"
volumes:
# So that Traefik can listen to the Docker events
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.enable=false"
your_first_container:
image: <YOUR_IMAGE>
labels:
- "traefik.frontend.rule=Host:site.example.com"
- "traefik.port=9001"
I'm trying to overcome the ingress-gce limitation of not redirecting traffic from HTTP to HTTPS.
The easiest setup would be a reverse proxy with Apache2, but it isn't working for me. This Apache runs in another VM, separate from my Kubernetes cluster; I just want to proxy the traffic so I can manipulate the request, redirect to HTTPS, and so on.
I need this specific solution to work, as I can't configure an nginx ingress at this point; it has to be done with this GCE ingress.
My ingress yaml configuration is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: my-reserved-address
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- hosts:
- mycustom.domain.com
secretName: mydomain-com-certificate
rules:
- host: mycustom.domain.com
http:
paths:
- path: /*
backend:
serviceName: tomcat-service
servicePort: 80
- path: /app/*
backend:
serviceName: spring-boot-app-service
servicePort: 80
My apache virtualhost configuration is:
<VirtualHost *:80>
ServerName myother.domain.com
Redirect permanent / https://myother.domain.com/
</VirtualHost>
<VirtualHost *:443>
ServerName myother.domain.com
ProxyPreserveHost On
ProxyRequests On
ProxyPass / https://mycustom.domain.com/
ProxyPassReverse / https://mycustom.domain.com/
SSLEngine on
SSLProxyEngine on
SSLProtocol All -SSLv2 -SSLv3
SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:!RC4:+HIGH:+MEDIUM
SSLCertificateKeyFile /etc/ssl/domain.com/domain.com-privatekey-nopass.pem
SSLCertificateFile /etc/ssl/domain.com/domain.com.crt
SSLCACertificateFile /etc/ssl/domain.com/IntermediateCA.crt
</VirtualHost>
Every piece of the puzzle works independently as expected. I mean, if I go to either of the following:
A) https://mycustom.domain.com/tomcat_context
B) https://mycustom.domain.com/app/hello
I get the desired results: for A) I get my web page, and for B) I get a simple response from my app.
However, when I use the proxy via http://myother.domain.com/tomcat_context I can see how it is transformed, but I always get a plain-text response from the cluster, which is always:
default backend - 404
I'm also checking the Apache2 logs, and I can see the correct invocation being made internally by Apache:
[Wed May 22 18:39:40.757619 2019] [proxy:debug] [pid 14196:tid 140335314564864] proxy_util.c(2213): [client xxx.xxx.xxx.xxx:52636] AH00944: connecting https://mycustom.domain.com/tomcat_context to mycustom.domain.com:443
I can't find an explanation for why this happens when all the pieces work properly on their own; at the end of the day, my ingress-gce is like an external service to my Apache proxy, so it should already work.
Also, both configurations (the ingress and Apache) have SSL configured, with the exact same certificate, since both run on the same domain.
Any help will be appreciated
The ingress controller doesn't have a handler for myother.domain.com, so it produces a 404.
You either need to set up an additional Ingress host for myother.domain.com, or turn ProxyPreserveHost Off so the proxy sends the mycustom.domain.com host name from the ProxyPass config.
How the Tomcat application makes use of the Host header is usually the decider for which way you need to map the header through the proxy.
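If you go the additional-Ingress-host route, a sketch of what the extra rule might look like (assuming the same tomcat-service backend as in the question; whether myother.domain.com is covered by your certificate secret is up to your setup):

```yaml
rules:
# existing rule for mycustom.domain.com stays as-is
- host: myother.domain.com
  http:
    paths:
    - path: /*
      backend:
        serviceName: tomcat-service
        servicePort: 80
```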
Hi, I'm a student working on a Docker project for school.
I have to configure Docker with Apache on Ubuntu 16.04. The requirement is that I can host multiple applications on one IP with different ports, but I have one problem: I can't link my URLs to the ports I assigned.
This is my virtual host file for the different containers:
<VirtualHost *:80>
    DocumentRoot "/var/www/html"
    ServerName site1.docker.biz
    Allow from localhost
    ProxyPass / http://localhost:80/
</VirtualHost>
<VirtualHost *:80>
    DocumentRoot /var/www/html
    ServerName site2.docker.biz
    Allow from localhost
    ProxyPass / http://site2.docker.biz:8080/
</VirtualHost>
When I use this file, site2.docker.biz ends up at site1.docker.biz, but that's not what I want: I want site2.docker.biz to go to port 8080 instead of port 80.
Can somebody tell me how to do this?
Thank you and kind regards,
Monkeyspree
Why don't you just use port mapping in your docker run statement?
docker run -p 8080:80
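For instance (a sketch; the image and container names are hypothetical), each site's container could publish a distinct host port mapped to the container's internal port 80, and each vhost would then proxy to its own host port:

```shell
# site1 on host port 80, site2 on host port 8080;
# both containers listen on port 80 internally
docker run -d --name site1 -p 80:80 httpd
docker run -d --name site2 -p 8080:80 httpd
```

With this in place, the site2 vhost would use ProxyPass / http://localhost:8080/ instead of pointing back at port 80.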
I have a local Kubernetes cluster on a single machine, and I have successfully deployed a Flask web app with an Apache server, so there shouldn't be any problem with the cluster setup. However, I need to upgrade the website to HTTPS, so I used Let's Encrypt to generate SSL certificates and volume-mapped them into the container. I also successfully deployed the app without Docker, i.e. by starting the Apache server directly with sudo /usr/sbin/apache2ctl -D FOREGROUND; I can then visit my website at https://XXX.XXX.XXX.edu without problems.
However, when I put everything into Docker and Kubernetes and visited https://XXX.XXX.XXX.edu:30001, the browser gave me this error:
This site can’t be reached
XXX.XXX.XXX.edu took too long to respond
Here is how I deployed:
I first started the service with kubectl create -f web-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: web
labels:
name: web
role: "ssl-proxy"
spec:
type: NodePort
ports:
- nodePort: 30001
name: "https"
port: 443
targetPort: 443
protocol: "TCP"
- nodePort: 30000
name: "http"
port: 80
targetPort: 80
protocol: "TCP"
selector:
name: web
role: "ssl-proxy"
Then I started the pod with kubectl create -f web-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: web
name: web-controller
spec:
replicas: 1
selector:
name: web
template:
metadata:
labels:
name: web
spec:
containers:
- image: XXX/web_app
command: ['/bin/sh', '-c']
args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND && python fake.py']
name: web
ports:
- containerPort: 443
name: http-server
volumeMounts:
- mountPath: /etc/letsencrypt/live/host
name: test-volume
readOnly: false
volumes:
- hostPath:
path: /etc/letsencrypt/archive/XXX.XXX.XXX.edu
name: test-volume
The log of the pod looks like:
root#XXX:~# kubectl logs web-controller-ontne
Considering dependency setenvif for ssl:
Module setenvif already enabled
Considering dependency mime for ssl:
Module mime already enabled
Considering dependency socache_shmcb for ssl:
Module socache_shmcb already enabled
Module ssl already enabled
* Restarting web server apache2
[Mon Jun 27 14:34:48.753153 2016] [so:warn] [pid 30:tid 140046645868416] AH01574: module ssl_module is already loaded, skipping
...done.
[Mon Jun 27 14:34:49.820047 2016] [so:warn] [pid 119:tid 139909591328640] AH01574: module ssl_module is already loaded, skipping
httpd (pid 33) already running
root#XXX:~#
The pod is running, but I got the following Apache error log entry:
[Mon Jun 27 17:13:50.912683 2016] [ssl:warn] [pid 33:tid 140513871427456] AH01909: RSA certificate configured for 0.0.0.0i:443 does NOT include an ID which matches the server name
I think the problem is that I am using a NodePort and exposing port 30001, so I have to visit https://XXX.XXX.XXX.edu:30001, which does not match XXX.XXX.XXX.edu (just the domain name, without the arbitrary port number 30001).
This is my /etc/apache2/sites-available/000-default.conf in the docker container:
<VirtualHost _default_:30001>
DocumentRoot /usr/local/my_app
LoadModule ssl_module /usr/lib64/apache2-prefork/mod_ssl.so
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/host/cert1.pem
SSLCertificateKeyFile /etc/letsencrypt/live/host/privkey1.pem
SSLCertificateChainFile /etc/letsencrypt/live/host/chain1.pem
WSGIDaemonProcess python-app user=www-data group=www-data threads=15 maximum-requests=10000 python-path=/usr/local/lib/python2.7/dist-packages
WSGIScriptAlias / /usr/local/my_app/apache/apache.wsgi
WSGIProcessGroup python-app
CustomLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/access.log.%Y%m%d-%H%M%S 5M" combined
ErrorLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/error.log.%Y%m%d-%H%M%S 5M"
LogLevel warn
<Directory /usr/local/my_app>
Order deny,allow
Allow from all
Require all granted
</Directory>
</VirtualHost>
How can I modify it so that Apache serves HTTPS requests on port 30001 rather than 443? Thank you very much!
I found the answer myself. There were two causes: (1) there is an environment variable specific to my web app that I forgot to set in apache.wsgi; (2) there were several small errors in the original Apache configuration file. I post the working /etc/apache2/sites-available/000-default.conf here:
ServerName 0.0.0.0
<VirtualHost _default_:443>
DocumentRoot /usr/local/my_app
LoadModule ssl_module /usr/lib64/apache2-prefork/mod_ssl.so
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/host/cert1.pem
SSLCertificateKeyFile /etc/letsencrypt/live/host/privkey1.pem
SSLCertificateChainFile /etc/letsencrypt/live/host/chain1.pem
WSGIDaemonProcess python-app user=www-data group=www-data threads=15 maximum-requests=10000 python-path=/usr/local/lib/python2.7/dist-packages
WSGIScriptAlias / /usr/local/my_app/apache/apache.wsgi
WSGIProcessGroup python-app
CustomLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/access.log.%Y%m%d-%H%M%S 5M" combined
ErrorLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/error.log.%Y%m%d-%H%M%S 5M"
LogLevel warn
<Directory /usr/local/my_app>
Order deny,allow
Allow from all
Require all granted
</Directory>
</VirtualHost>
Start the pod with the commands sudo a2enmod ssl && sudo /usr/sbin/apache2ctl -D FOREGROUND, and containerPort should be 443. The Kubernetes script for the service is then as simple as this:
apiVersion: v1
kind: Service
metadata:
name: web
labels:
name: web
spec:
type: NodePort
ports:
- nodePort: 30001
port: 443
targetPort: 443
protocol: TCP
selector:
name: web
Now I can visit my web site at https://XXX.XXX.XXX.XXX:30001.
Special thanks to the owner of this github repo and NorbertvanNobelen. Hope this helps!
I just ran into this issue this morning.
I exposed the deployment using --type=NodePort.
I can access it from either
http://<pod IP>:<target port>
http://<cluster IP>:<port>
but I cannot access it from
http://<node IP>:<NodePort>
Chrome says: ... took too long to respond.
I checked the pod's status: it is ready and running.
Later I fixed it by:
deleting the deployment and service,
creating the deployment again,
watching the pod until its status became 'running',
exposing the deployment using --type=NodePort.
I found that the pod was now running on another node. I checked
http://<new node IP>:<new NodePort>
and it works.
I do not know what the reason is. Just a guess:
make sure the pod is created and in 'running' status before exposing the deployment;
maybe it is related to the cluster IP allocated by k8s;
maybe there is something wrong with the node machine it was previously running on.