Kubernetes: Cannot deploy Flask web app with Apache and HTTPS

I have a local Kubernetes cluster on a single machine, and I have already deployed a Flask web app served by Apache on it successfully, so there shouldn't be any problem with the cluster setup. However, I need to upgrade the website to HTTPS, so I used Let's Encrypt to generate SSL certificates and volume-mapped them into the container. I also deployed the app successfully without Docker, i.e. by starting the Apache server directly with sudo /usr/sbin/apache2ctl -D FOREGROUND, and I can visit my website at https://XXX.XXX.XXX.edu without problems.
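For context, the certificate volume mapping into the container, outside Kubernetes, was along these lines (a sketch only; the image name XXX/web_app and the paths are copied from the manifests below, the exact command is an assumption):

docker run -d \
  -p 80:80 -p 443:443 \
  -v /etc/letsencrypt/archive/XXX.XXX.XXX.edu:/etc/letsencrypt/live/host \
  XXX/web_app \
  /bin/sh -c 'a2enmod ssl && /usr/sbin/apache2ctl -D FOREGROUND'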
However, when I put everything into Docker and Kubernetes and visited https://XXX.XXX.XXX.edu:30001, the browser gave me this error:
This site can’t be reached
XXX.XXX.XXX.edu took too long to respond
Here is how I deployed:
I first started the service kubectl create -f web-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
    role: "ssl-proxy"
spec:
  type: NodePort
  ports:
  - nodePort: 30001
    name: "https"
    port: 443
    targetPort: 443
    protocol: "TCP"
  - nodePort: 30000
    name: "http"
    port: 80
    targetPort: 80
    protocol: "TCP"
  selector:
    name: web
    role: "ssl-proxy"
Then I started the pod kubectl create -f web-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 1
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: XXX/web_app
        command: ['/bin/sh', '-c']
        args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND && python fake.py']
        name: web
        ports:
        - containerPort: 443
          name: http-server
        volumeMounts:
        - mountPath: /etc/letsencrypt/live/host
          name: test-volume
          readOnly: false
      volumes:
      - hostPath:
          path: /etc/letsencrypt/archive/XXX.XXX.XXX.edu
        name: test-volume
The log of the pod looks like:
root@XXX:~# kubectl logs web-controller-ontne
Considering dependency setenvif for ssl:
Module setenvif already enabled
Considering dependency mime for ssl:
Module mime already enabled
Considering dependency socache_shmcb for ssl:
Module socache_shmcb already enabled
Module ssl already enabled
* Restarting web server apache2
[Mon Jun 27 14:34:48.753153 2016] [so:warn] [pid 30:tid 140046645868416] AH01574: module ssl_module is already loaded, skipping
...done.
[Mon Jun 27 14:34:49.820047 2016] [so:warn] [pid 119:tid 139909591328640] AH01574: module ssl_module is already loaded, skipping
httpd (pid 33) already running
root@XXX:~#
The pod is running, but I got the following apache error log:
[Mon Jun 27 17:13:50.912683 2016] [ssl:warn] [pid 33:tid 140513871427456] AH01909: RSA certificate configured for 0.0.0.0:443 does NOT include an ID which matches the server name
I think the problem is that I am using NodePort and exposing port 30001, so I have to visit https://XXX.XXX.XXX.edu:30001, which does not match XXX.XXX.XXX.edu (just the domain name, without the arbitrary port number 30001).
This is my /etc/apache2/sites-available/000-default.conf in the docker container:
<VirtualHost _default_:30001>
    DocumentRoot /usr/local/my_app
    LoadModule ssl_module /usr/lib64/apache2-prefork/mod_ssl.so
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/host/cert1.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/host/privkey1.pem
    SSLCertificateChainFile /etc/letsencrypt/live/host/chain1.pem
    WSGIDaemonProcess python-app user=www-data group=www-data threads=15 maximum-requests=10000 python-path=/usr/local/lib/python2.7/dist-packages
    WSGIScriptAlias / /usr/local/my_app/apache/apache.wsgi
    WSGIProcessGroup python-app
    CustomLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/access.log.%Y%m%d-%H%M%S 5M" combined
    ErrorLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/error.log.%Y%m%d-%H%M%S 5M"
    LogLevel warn
    <Directory /usr/local/my_app>
        Order deny,allow
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>
How can I modify it so that Apache serves HTTPS requests on port 30001 rather than 443? Thank you very much!

I found the answer myself. There were two causes: (1) an environment variable specific to my web app that I forgot to set in apache.wsgi (a sketch of that fix follows the config below); (2) several small errors in the original Apache configuration file. Here is the working /etc/apache2/sites-available/000-default.conf:
ServerName 0.0.0.0
<VirtualHost _default_:443>
    DocumentRoot /usr/local/my_app
    LoadModule ssl_module /usr/lib64/apache2-prefork/mod_ssl.so
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/host/cert1.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/host/privkey1.pem
    SSLCertificateChainFile /etc/letsencrypt/live/host/chain1.pem
    WSGIDaemonProcess python-app user=www-data group=www-data threads=15 maximum-requests=10000 python-path=/usr/local/lib/python2.7/dist-packages
    WSGIScriptAlias / /usr/local/my_app/apache/apache.wsgi
    WSGIProcessGroup python-app
    CustomLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/access.log.%Y%m%d-%H%M%S 5M" combined
    ErrorLog "|/usr/bin/rotatelogs /usr/local/my_app/apache/logs/error.log.%Y%m%d-%H%M%S 5M"
    LogLevel warn
    <Directory /usr/local/my_app>
        Order deny,allow
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>
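For cause (1), the missing variable just needs to be exported inside apache.wsgi before the app is imported. A minimal sketch, with a hypothetical variable name and module path (the real ones are specific to my app):

# /usr/local/my_app/apache/apache.wsgi (sketch; names are placeholders)
import os
import sys

sys.path.insert(0, '/usr/local/my_app')
os.environ['MY_APP_CONFIG'] = '/usr/local/my_app/config.cfg'  # the forgotten variable (hypothetical name)

from my_app import app as application  # mod_wsgi looks for the name 'application'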
Start the pod with the commands sudo a2enmod ssl && sudo /usr/sbin/apache2ctl -D FOREGROUND, and containerPort should be 443. The Kubernetes manifest for the service is as simple as the following (a trimmed pod spec sketch is shown after it):
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: NodePort
  ports:
  - nodePort: 30001
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    name: web
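And a trimmed sketch of the matching pod template (image, ports and volume paths copied from the original manifest; everything else stripped to the essentials):

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-controller
spec:
  replicas: 1
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: XXX/web_app
        command: ['/bin/sh', '-c']
        args: ['sudo a2enmod ssl && sudo /usr/sbin/apache2ctl -D FOREGROUND']
        ports:
        - containerPort: 443   # must match the Service targetPort
        volumeMounts:
        - mountPath: /etc/letsencrypt/live/host
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /etc/letsencrypt/archive/XXX.XXX.XXX.edu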
Now I can visit my web site at https://XXX.XXX.XXX.XXX:30001.
Special thanks to the owner of this github repo and NorbertvanNobelen. Hope this helps!

I just ran into this issue this morning.
I exposed the deployment using --type=NodePort.
I could access it from either
http://<pod IP>:<target port>
http://<cluster IP>:<port>
But I could not access it from
http://<node IP>:<NodePort>
Chrome said: "... took too long to respond".
I checked the pod's status: it was ready and running.
Later I fixed it by:
delete the deployment and service
create the deployment again
watch the pod until its status becomes Running
expose the deployment using --type=NodePort
(The kubectl commands are sketched below.)
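A sketch of that sequence with kubectl (deployment name and image are placeholders):

kubectl delete service web
kubectl delete deployment web
kubectl create deployment web --image=XXX/web_app
kubectl get pods -w          # wait until the pod's STATUS is Running
kubectl expose deployment web --type=NodePort --port=443 --target-port=443
kubectl get service web      # note the NodePort that was assigned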
I found that the pod was now running on another node.
I checked
http://<new node IP>:<new NodePort>
and it works.
I do not know the reason. Just guesses:
make sure the pod is created and in Running status before exposing the deployment
maybe it is related to the cluster IP allocated by k8s
maybe there is something wrong with the node machine it was previously running on.

Related

Run Mercure in production: 404 Not Found

I am contacting you because I can't get Mercure to work in production.
The prebuilt binary runs fine, but when I try to connect to the hub, I get a 404 Not Found.
Here is the command I run:
sudo MERCURE_PUBLISHER_JWT_KEY='eyJhbGciOiJIUzI1NiIsInR5cCI6...' MERCURE_SUBSCRIBER_JWT_KEY='eyJhbGciOiJIUzI1NiIsInR5cCI6...' SERVER_NAME=:3000 ./mercure run
The server apparently launches without any problem:
2022/02/15 17:38:09.919 INFO using adjacent Caddyfile
2022/02/15 17:38:09.920 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile", "line": 3}
2022/02/15 17:38:09.921 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["[::1]:2019", "127.0.0.1:2019", "localhost:2019"]}
2022/02/15 17:38:09.922 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0000cb7a0"}
2022/02/15 17:38:09.935 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/02/15 17:38:09.935 INFO tls finished cleaning storage units
2022/02/15 17:38:09.935 INFO autosaved config (load with --resume flag) {"file": "/root/.config/caddy/autosave.json"}
2022/02/15 17:38:09.935 INFO serving initial configuration
My .env is configured as follows:
###> symfony/mercure-bundle ###
MERCURE_URL=https://monsite.com/.well-known/mercure
MERCURE_PUBLIC_URL=https://monsite.com/.well-known/mercure
MERCURE_JWT_SECRET="eyJhbGciOiJIUzI1NiIsInR5cCI6..."
###< symfony/mercure-bundle ###
My Caddyfile:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    {$GLOBAL_OPTIONS}
}

{$SERVER_NAME:monsite.com}

log

route {
    encode zstd gzip
    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Allow Subscribers
        anonymous
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    respond /healthz 200
    respond "Not Found" 404
}
When I try to access the hub with Postman at the following URL:
https://monsite.com/.well-known/mercure
I get a 404 Not Found.
I am on Debian 10 with Apache2. I don't understand what I did wrong. Thanks for your help.
EDIT 21/02/2022
Hi Mehmet, here is what I did:
In /etc/apache2/sites-available, monsite.conf and monsite-le-ssl.conf:
ProxyPass /mercure-hub http://localhost:8080/
ProxyPassReverse /mercure-hub http://localhost:8080/
In Caddyfile and Caddyfile.dev:
{
    {$GLOBAL_OPTIONS}
    auto_https off
}

{$SERVER_NAME::8080}
Apparently the hub launches fine; I have no errors in the console:
debian@vps-...:/var/www/monsite/mercure$ sudo MERCURE_PUBLISHER_JWT_KEY='eyJhbGciOiJIUzI1NiIsInR5cCI6I...' MERCURE_SUBSCRIBER_JWT_KEY='eyJhbGciOiJIUzI1NiIsInR5cCI6I...' ./mercure run -config Caddyfile.dev
2022/02/21 13:31:20.672 INFO using provided configuration {"config_file": "Caddyfile.dev", "config_adapter": ""}
2022/02/21 13:31:20.675 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile.dev", "line": 3}
2022/02/21 13:31:20.676 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022/02/21 13:31:20.676 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003fe700"}
2022/02/21 13:31:20.703 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/02/21 13:31:20.703 INFO tls finished cleaning storage units
2022/02/21 13:31:20.703 INFO autosaved config (load with --resume flag) {"file": "/root/.config/caddy/autosave.json"}
2022/02/21 13:31:20.704 INFO serving initial configuration
Whether I run Caddyfile or Caddyfile.dev, when accessing https://monsite.com/mercure-hub, I get a 500 error.
These are my Apache settings; maybe they will help you.
Open:
nano /etc/apache2/sites-available/yourdomain.com-le-ssl.conf
<IfModule mod_ssl.c>
    <VirtualHost *:443>
        DocumentRoot /var/www/html/yourdomain.com
        DirectoryIndex /index.php
        ServerName yourdomain.com

        # Settings for mercure
        ProxyPass /mercure-hub http://localhost:8080
        ProxyPassReverse /mercure-hub http://localhost:8080

        <Directory /var/www/html/yourdomain.com >
            AllowOverride None
            Order Allow,Deny
            Allow from All
            FallbackResource /index.php
            Options FollowSymLinks MultiViews
        </Directory>
        <Directory /var/www/html/yourdomain.com >
            DirectoryIndex disabled
            FallbackResource disabled
        </Directory>

        RewriteEngine on
        Include /etc/letsencrypt/options-ssl-apache.conf
        # YOUR SSL PEM FILES
        SSLCertificateFile /etc/letsencrypt/live …..
        SSLCertificateKeyFile /etc/letsencrypt/live …..
    </VirtualHost>
</IfModule>
Your Caddyfile options should look like this:
{
    {$GLOBAL_OPTIONS}
    auto_https off
}

{$SERVER_NAME::8080}   # this parameter will make the hub run on http://localhost:8080
Mercure command:
MERCURE_PUBLISHER_JWT_KEY='YOUR_KEY' MERCURE_SUBSCRIBER_JWT_KEY='YOUR_KEY' ./mercure run -config Caddyfile
You can try with Caddyfile.dev for testing:
MERCURE_PUBLISHER_JWT_KEY='YOUR_KEY' MERCURE_SUBSCRIBER_JWT_KEY='YOUR_KEY' ./mercure run -config Caddyfile.dev
After these settings, your Mercure hub will be reachable at yourdomain.com/mercure-hub.
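To check that the hub answers through the Apache proxy, a quick subscription test with curl might look like this (a sketch; the domain and topic are placeholders, and with anonymous enabled in the Caddyfile no JWT is needed to subscribe):

# -N disables buffering so server-sent events stream as they arrive
curl -N "https://yourdomain.com/mercure-hub/.well-known/mercure?topic=https://example.com/my-topic"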
After some digging, I tried to figure out why it was returning a 500 error. I checked the Apache logs and found this error message:
"No protocol handler was valid for the URL /. If you are using a DSO
version of mod_proxy, make sure the proxy submodules are included in
the configuration using LoadModule"
So I installed the missing components:
sudo a2enmod ssl
sudo a2enmod proxy
sudo a2enmod proxy_balancer
sudo a2enmod proxy_http
I also modified the proxy URLs by adding a slash at the end of mercure-hub, otherwise I would get a 404 error:
ProxyPass /mercure-hub/ http://localhost:8080/
ProxyPassReverse /mercure-hub/ http://localhost:8080/
I restarted Apache and updated the URLs in my .env:
MERCURE_URL=https://monsite.com/mercure-hub/.well-known/mercure
MERCURE_PUBLIC_URL=https://monsite.com/mercure-hub/.well-known/mercure
And it works, thanks a lot!

https in multiple docker containers

I have problems figuring out how to properly set up a web server with HTTPS that consists of multiple Docker containers.
I have a main container running Apache, built from the "httpd" Docker image.
For simplicity let's call this website "main.com". SSL works perfectly here. I have set up the httpd.conf configuration file to redirect all calls on port 80 to port 443 and loaded the SSL and proxy modules. (Ports 80 and 443 are both exposed.)
I have another Docker container which runs an API that serves geodata to "main.com". Let's call this container "side-container". In the Dockerfile for "side-container" I expose port 8080. Then I can call "main.com:8080" to make a query to my "side-container" which runs the API.
Problem: at least I could, until I changed "main.com" to use HTTPS only.
I am stuck trying to get "side-container" to work again. When trying to connect to "main.com:8080" I get a timeout error.
My "docker ps" looks like this:
IMAGE COMMAND PORTS NAMES
main-container "httpd-foreground" 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:9010->9010/tcp main
side-container:latest "/docker-entrypoint.…" 0.0.0.0:8080->8080/tcp side-container
I use docker-compose to control the containers, so perhaps I need to set something there?
I have made an attempt to get it working by using a reverse-proxy setting in Apache (see the httpd.conf below), using port 9010 on the "main" container to point to port 8080 on the "side-container".
I can get it to reply with an "internal server error" due to a failed SSL handshake, but no more than that.
My background is in pure physics, not software and web servers, so maybe I am missing something obvious. Any hint is greatly appreciated.
From httpd.conf:
<IfModule mod_ssl.c>
    Listen 443
    Listen 8080
    Listen 0.0.0.0:9010 https
    LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
    SSLProtocol all -SSLv3
    SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    SSLHonorCipherOrder on
    SSLCompression off
    SSLSessionTickets off
    SSLRandomSeed startup file:/dev/urandom 512
    SSLRandomSeed connect file:/dev/urandom 512
    SSLSessionCache shmcb:/dev/ssl_gcache_data(512000)
</IfModule>

<VirtualHost *:443>
    ServerName main.com
    SSLEngine on
    # Primary Certificate file
    SSLCertificateFile /usr/local/apache2/conf/certificate.crt
    # Private Key
    SSLCertificateKeyFile /usr/local/apache2/conf/private.key
    # Chain bundle file
    SSLCertificateChainFile /usr/local/apache2/conf/ca_bundle.crt
</VirtualHost>

<VirtualHost 0.0.0.0:9010>
    ServerName main.com
    SSLEngine on
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
    SSLCertificateFile /usr/local/apache2/conf/certificate.crt
    SSLCertificateKeyFile /usr/local/apache2/conf/private.key
    SSLCertificateChainFile /usr/local/apache2/conf/ca_bundle.crt
    ProxyPass /apptest http://0.0.0.0:8080/
    ProxyPassReverse /apptest https://0.0.0.0:8080/
</VirtualHost>
docker-compose.yml:
version: '3'
services:
  main-container:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: "main"
    restart: "always"
    ports:
      - "80:80"
      - "443:443"
      - "9010:9010"
    links:
      - side-container
    networks:
      - fu
  side-container:
    image: side-container:latest
    container_name: "side-container"
    ports:
      - "8080:8080"
    volumes:
      - ${HOME}/data:/data
    restart: "always"
    networks:
      - fu
networks:
  fu:
    driver: bridge
When linking Docker containers within the same network with docker-compose, you need to reference them by their Docker service name, so instead of 0.0.0.0 use side-container:
ProxyPass /apptest http://side-container:8080/
ProxyPassReverse /apptest http://side-container:8080/
NOTE: the server running in the side container must be listening on 0.0.0.0:8080 in its httpd configuration.
Now you can remove the ports declaration from the docker-compose file altogether, because both containers are in the same Docker network, so you don't need to expose any ports. Exposing ports is only necessary if you want to reach the side-container from localhost on the host machine or from the internet.
So from the side container remove:
ports:
  - "8080:8080"
Also, in the docker-compose file you should replace links with the newer depends_on syntax:
depends_on:
  - side-container
Ports declaration (for educational purposes)
Bear in mind that specifying a port as 8080:8080 is the same as 0.0.0.0:8080:8080, and 0.0.0.0 accepts requests from anywhere on the internet. To restrict the binding to localhost (127.0.0.1) of the machine running Docker, you would use 127.0.0.1:8080:8080. A revised docker-compose.yml with all of the above changes is sketched below.

Docker Reverse Proxy

I have a collection of web applications, each running inside its own Docker container. I can access them locally via, for example, http://localhost:9001. I want to access them remotely via https://site.example.com instead. I have a wildcard Let's Encrypt certificate for example.com.
I understand I need Apache to direct traffic from the FQDN to the port, so I have set up a VirtualHost (below). Normal web activity seems to work fine and I can navigate the website normally.
However, when I try to log in using OAuth (e.g. Bitbucket), I get a URI redirect mismatch error. This does not happen when I run this outside of a container. I think there is something wrong with my proxy setup. Is anyone able to advise how to rectify this?
<VirtualHost *:443>
    ServerAdmin admin@example.com
    ServerName site.example.com
    ServerSignature Off

    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:9001/
    ProxyPassReverse / http://127.0.0.1:9001/

    <IfModule mod_headers.c>
        Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    </IfModule>

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/example.com/fullchain.pem

    AllowEncodedSlashes NoDecode
</VirtualHost>
For such a use case, Traefik is a well-suited tool. Coupled with docker-compose, you can set up multiple Docker containers on the same host, each one having its own endpoint. To access them remotely, you then just have to bind the remote host's IP address to all your endpoints (or use a public DNS entry that does it for you).
Here is a docker-compose.yml example using Traefik.
version: "3"
services:
traefik:
image: traefik:latest
command: --api --docker --logLevel=DEBUG
ports:
- "80:80"
- "443:443"
- "8082:8080"
volumes:
# So that Traefik can listen to the Docker events
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.enable=false"
your_first_container:
image: <YOUR_IMAGE>
labels:
- "traefik.frontend.rule=Host:site.example.com"
- "traefik.port=9001"

Apache reverse proxy in front of an ingress-gce (GKE)

I'm trying to overcome the ingress-gce limitation that it cannot redirect traffic from HTTP to HTTPS.
The easiest configuration would be a reverse proxy with Apache2, but it isn't working for me. This Apache instance is in another VM, apart from my Kubernetes cluster; I just want to "proxy" the traffic so I can manipulate the request, redirect to HTTPS, etc.
I need this specific solution to work, as I can't configure an nginx ingress at this point; it has to be done with this GCE ingress.
My ingress yaml configuration is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-reserved-address
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - hosts:
    - mycustom.domain.com
    secretName: mydomain-com-certificate
  rules:
  - host: mycustom.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: tomcat-service
          servicePort: 80
      - path: /app/*
        backend:
          serviceName: spring-boot-app-service
          servicePort: 80
My apache virtualhost configuration is:
<VirtualHost *:80>
    ServerName myother.domain.com
    Redirect permanent / https://myother.domain.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName myother.domain.com

    ProxyPreserveHost On
    ProxyRequests On
    ProxyPass / https://mycustom.domain.com/
    ProxyPassReverse / https://mycustom.domain.com/

    SSLEngine on
    SSLProxyEngine on
    SSLProtocol All -SSLv2 -SSLv3
    SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:!RC4:+HIGH:+MEDIUM
    SSLCertificateKeyFile /etc/ssl/domain.com/domain.com-privatekey-nopass.pem
    SSLCertificateFile /etc/ssl/domain.com/domain.com.crt
    SSLCACertificateFile /etc/ssl/domain.com/IntermediateCA.crt
</VirtualHost>
Every piece of the puzzle works independently as expected. I mean, if I go to either of the following:
A) https://mycustom.domain.com/tomcat_context
B) https://mycustom.domain.com/app/hello
I get the desired results: for A) I get my web page and for B) I get a simple response from my app.
However, when I go through the proxy at http://myother.domain.com/tomcat_context, I can see the request being rewritten, but I always get a plain-text response from the cluster, which is always:
default backend - 404
I'm also checking the Apache2 logs, and I can see that the correct invocation is being made internally by Apache:
[Wed May 22 18:39:40.757619 2019] [proxy:debug] [pid 14196:tid 140335314564864] proxy_util.c(2213): [client xxx.xxx.xxx.xxx:52636] AH00944: connecting https://mycustom.domain.com/tomcat_context to mycustom.domain.com:443
I can't find an explanation for why this is happening if all the pieces work properly on their own; at the end of the day my ingress-gce is like an external service to my Apache proxy, so it should already be working.
Also, both configurations, the ingress and the Apache one, have SSL configured with the exact same certificate, as both run on the same domain.
Any help will be appreciated.
The ingress controller doesn't have a handler for myother.domain.com, so it produces a 404.
You either need to set up an additional Ingress host for myother.domain.com or turn ProxyPreserveHost Off so the proxy sends the mycustom.domain.com host name from the ProxyPass config (see the sketch below).
How the Tomcat application makes use of the Host header is usually the decider for which way you need to map the header through the proxy.
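A sketch of the ProxyPreserveHost Off variant, based on the VirtualHost above (only the relevant directives shown):

<VirtualHost *:443>
    ServerName myother.domain.com
    # Off: forward the Host of the ProxyPass target (mycustom.domain.com),
    # which the GCE ingress does have a rule for
    ProxyPreserveHost Off
    SSLEngine on
    SSLProxyEngine on
    ProxyPass / https://mycustom.domain.com/
    ProxyPassReverse / https://mycustom.domain.com/
</VirtualHost>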

Apache Webserver ReverseProxy to serve Apache Solr Admin Panel

I'm trying to run an Apache Solr service (on its embedded Jetty server) on a remote server. The admin has provided me the following information:
DNS: my.server.com
IP: xxx.xxx.xxx
Server OS: 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux
Only port 80 is accessible. On the server we want to deploy Apache Solr and a microservice which uses Solr as its search engine. I want to use the Apache web server to forward HTTP requests to the Solr Admin UI and to the microservice UI, but it doesn't seem to work. I use Apache Server version: Apache/2.4.10 (Debian),
Server built: Sep 15 2016 20:44:43.
I installed Apache and started the server; so far everything works as expected, and I can access the default Apache page by entering the DNS name in my browser.
I enabled a few modules following this article https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension:
a2enmod proxy
a2enmod proxy_http
a2enmod proxy_ajp
a2enmod rewrite
a2enmod deflate
a2enmod headers
a2enmod proxy_balancer
a2enmod proxy_connect
a2enmod proxy_html
Then I tried to configure a virtual host under /etc/apache2/sites-available/myconf.conf:
<VirtualHost *:80>
    DocumentRoot /var/www/html
    ErrorLog /var/log/apache2/error.log
    CustomLog /var/log/apache2/access.log combined

    ProxyPass /solr http://my.server.com:8983 retry=0 timeout=5
    ProxyPassReverse /solr http://my.server.com:8983
    ProxyPass /microservice http://my.server.com:6868 retry=0 timeout=5
    ProxyPassReverse /microservice http://my.server.com:6868

    LogLevel debug
</VirtualHost>
Solr uses its standard port 8983 and the microservice will be on port 6868. When I try to access Solr at http://my.server.com/solr I get an HTTP 503 Service Unavailable.
I first tried this:
/usr/sbin/setsebool -P httpd_can_network_connect 1
but it changed nothing. (I first had to install policycoreutils via
apt-get install policycoreutils
to make this option available.) The Solr service itself seems to be OK:
solr status
Found 1 Solr nodes:
Solr process 14082 running on port 8983
{
"solr_home":"/etc/apache-solr/solr-6.2.0/server/solr",
"version":"6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:41:37",
"startTime":"2016-10-07T12:02:05.300Z",
"uptime":"0 days, 1 hours, 29 minutes, 55 seconds",
"memory":"29.7 MB (%6.1) of 490.7 MB"}
The Apache log keeps saying:
The timeout specified has expired: AH00957: HTTP: attempt to connect to xxx.xxx.xxx:8983 (my.server.com) failed
AH00959: ap_proxy_connect_backend disabling worker for (my.server.com) for 0s
AH01114: HTTP: failed to make connection to backend: my.server.com
Without my timeout setting everything stays the same, except that it takes ages before I get the 503 error.
Any hints? After one day of struggling I'm depressed... all I want is to finish the task.
Thanks in advance!
It turns out that I needed to append a slash to the URLs:
ProxyPass /solr/ http://my.server.com:8983/ retry=0 timeout=5
ProxyPassReverse /solr/ http://my.server.com:8983/
ProxyPass /microservice/ http://my.server.com:6868/ retry=0 timeout=5
ProxyPassReverse /microservice/ http://my.server.com:6868/
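After changing the configuration, something along these lines should apply and verify it (a sketch; assumes Debian's Apache layout):

sudo apachectl configtest            # check the syntax first
sudo systemctl reload apache2        # or: sudo service apache2 reload
curl -I http://my.server.com/solr/   # expect a response from Solr instead of a 503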