Apache Docker container communicating with RStudio container

I have been playing about with Docker and Apache recently, neither of which I understand massively well.
I have a problem concerning communication between two Docker containers on the same host.
One container is running Apache with the option -p 80:80.
Going to localhost:80 shows the default Apache page.
I have a second container running the rocker/rstudio image with the option -p 8787:8787.
Going to localhost:8787 shows the RStudio login page as expected.
From inside my Apache container, I want localhost/rstudio to take me to the login page of the RStudio instance running in the rocker container.
As far as I understood, the Apache container should be able to see localhost:8787. Under sites-available I have the following rstudio.conf file:
<VirtualHost *:80>
    <Proxy *>
        Allow from localhost
    </Proxy>

    # Specify path for logs
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    RewriteEngine on

    # The following lines should open RStudio directly from the URL
    # Map /rstudio to /rstudio/
    RedirectMatch ^/rstudio$ /rstudio/

    RewriteCond %{HTTP:Upgrade} =websocket
    RewriteRule /rstudio/(.*) ws://localhost:8787/$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket
    RewriteRule /rstudio/(.*) http://localhost:8787/$1 [P,L]

    ProxyPass /rstudio/ http://localhost:8787/
    ProxyPassReverse /rstudio/ http://localhost:8787/
    ProxyRequests off
</VirtualHost>
as suggested by the RStudio Server configuration docs. However, localhost:80/rstudio returns a 404 and I don't understand why. Does anyone have any suggestions as to how to fix this?
The main reason I want to do this from inside the Apache container, rather than just installing Apache in the rocker container, is so that the Apache container can manage other connected containers too.

As far as I understood, the Apache container should be able to see localhost:8787. Under sites-available I have the following rstudio.conf file
Almost. From inside the Apache Docker container, localhost is that container, not the host.
If you want to see what I'm talking about, go into your running Apache container and curl localhost:8787. You will get a connection error, because nothing in that container is listening on 8787. Now add another vhost in the Apache container listening on 8787 and enable it, then curl localhost:8787 from inside the container again; you'll get the new vhost's content.
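A minimal sketch of such a test vhost (the file name and DocumentRoot are placeholders):
# e.g. /etc/apache2/sites-available/test8787.conf
Listen 8787
<VirtualHost *:8787>
    DocumentRoot /var/www/html
</VirtualHost>
Enable it with a2ensite and reload; the Listen directive is what makes Apache bind the port at all.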
The two most straightforward options to do what you're asking would be either a custom network or using docker-compose.
custom network
docker network create jamie-rowan-network
docker run -itd -p 80:80 --network jamie-rowan-network --name apache <image>
docker run -itd -p 8787:8787 --network jamie-rowan-network --name rstudio <image>
This creates a bridge network named jamie-rowan-network. When you run your containers, add them to this network. The embedded DNS server also provides service discovery, so your containers will be able to resolve each other by the --name given in docker run. (Suggested reading: the Docker container networking docs.)
Now you should be able to resolve your rstudio container from your apache container with curl rstudio:8787.
Important note: this behavior is a little bit different before and after Docker 1.10; definitely check the docs mentioned above. I'm assuming you're on > 1.10.
docker-compose
docker-compose is a tool designed to make container orchestration much simpler. In this case, it does pretty much all the lifting required for the custom network on its own, with no work required on your part. I won't go into how to write a docker-compose.yml, but any service listed in the docker-compose.yml is reachable by the other services by name.
Example:
version: '3'
services:
  apache:
    image: <image>
    ports:
      - "80:80"
  rstudio:
    image: <image>
    ports:
      - "8787:8787"
This would accomplish the same as the custom network: rstudio would be reachable from the apache container with curl rstudio:8787 and, going the other way, apache would be reachable from rstudio with curl apache:80.
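With either approach, the fix to the rstudio.conf from the question is then just swapping localhost for the container/service name; a sketch assuming the name rstudio from above:
RewriteCond %{HTTP:Upgrade} =websocket
RewriteRule /rstudio/(.*) ws://rstudio:8787/$1 [P,L]
RewriteCond %{HTTP:Upgrade} !=websocket
RewriteRule /rstudio/(.*) http://rstudio:8787/$1 [P,L]
ProxyPass /rstudio/ http://rstudio:8787/
ProxyPassReverse /rstudio/ http://rstudio:8787/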

Related

View error.log and access.log in Apache in Docker?

My Docker container has the following in its vhost.conf:
<VirtualHost *:80>
# ... (snipped)
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Inspecting these via docker exec, an ls -l in /var/log/apache shows:
access.log -> /dev/stdout
error.log -> /dev/stderr
What does this mean and is it possible to view their content?
docker logs on the container will show you this log output.
/dev/stdout and /dev/stderr are special "files" that actually point at the current process's standard output and error channels, respectively (they should themselves be symlinks to /proc/self/fd/1 and /proc/self/fd/2). Unless something causes them to get redirected somewhere else, this will become the main output of the container, and that gets captured by Docker's internal log subsystem.
If you wanted to capture these as concrete files on your local system, you could bind-mount a local directory over /var/log/apache (with a docker run -v option or the Docker Compose volumes: option). The (initially empty) directory hides the contents of that directory in the container, and when the HTTP daemon writes out its logs, they appear as real files in a directory shared with the host.
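For example, a sketch with a hypothetical host directory and image name:
docker run -v "$PWD/apache-logs:/var/log/apache" -p 80:80 my-apache-image
After a few requests, apache-logs/access.log and apache-logs/error.log appear as regular files on the host.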
You should not need docker exec in normal operation.

CSS & images breaking after mapping application deployed on Tomcat to my domain

I've installed Tomcat 9.0.27 on my Digital Ocean droplet running Ubuntu 18.04.3.
I deployed my Java WAR on Tomcat and am able to access it on the URL:
http://example.com:8080/app_name
I want to be able to directly access my WAR serving JSP through my domain.
So, when I hit example.com it directly serves my Java application.
I have tried several guides to do this. According to one of them (https://www.digitalocean.com/community/questions/how-to-tie-domain-name-with-application-running-on-tomcat), I did the following steps:
1. Enabled "proxy" and "proxy_http" using a2enmod
2. Restarted Apache2 service using systemctl restart
3. Created a new virtual host in a file named /etc/apache2/sites-available/tomcat.conf with the following contents:
<VirtualHost *:80>
    ServerName www.example.com
    ProxyRequests On
    ProxyPass / http://localhost:8080/app_name/
    ProxyPassReverse / http://localhost:8080/app_name/
</VirtualHost>
4. Enabled the 'tomcat' site using a2ensite
5. Restarted the Apache2 service using systemctl restart
Now when I hit example.com it does serve my homepage, but all the CSS styles and images seem to be broken, and the hyperlinks don't work anymore.
My application is still served at example.com:8080/app_name, and at that URL everything works perfectly.
Please help me out with this.
Fixed this by renaming my webapp to "ROOT" and copying it to Tomcat's webapps directory.
Now the proxying is to http://localhost:8080.
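For anyone needing the concrete steps, a sketch of that fix (/opt/tomcat is a hypothetical install path; adjust to your setup):
sudo systemctl stop tomcat
sudo rm -rf /opt/tomcat/webapps/ROOT
sudo cp app_name.war /opt/tomcat/webapps/ROOT.war
sudo systemctl start tomcat
With the app at the root context, ProxyPass / http://localhost:8080/ no longer rewrites paths, so the CSS and image URLs the app emits match what Apache serves.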

How do I deploy a golang app with Apache installed on Ubuntu 16.04 on digitalocean?

I am learning Go at the moment, and following some tutorials I have built really simple webapps with the net/http package. I have created a simple wishlist where I add an item and it then goes into a simple table of things I want; pretty simple.
Now I want to deploy this app to my Digital Ocean droplet, but I just don't know how. I already have some PHP websites with different domains, served by Apache.
I am really a beginner at this "server configuration" thing; with PHP it's usually pretty easy on web hosts, so I never needed much experience here. Can you point me in the right direction to make my Go app available at a domain I own, without the port bit? Preferably with Apache.
Thanks :)
Note: almost everything in this answer needs to be customized to your specific circumstances. It is written with the assumption that your Go app is called "myapp" and that you have made it listen on port 8001 (among many other such details).
You should make a systemd unit file to make your app start up automatically at boot. Put the following in /etc/systemd/system/myapp.service (adapt to your needs):
[Unit]
Description=MyApp webserver

[Service]
ExecStart=/www/myapp/bin/webserver
WorkingDirectory=/www/myapp
EnvironmentFile=-/www/myapp/config/myapp.env
StandardOutput=journal
StandardError=inherit
SyslogIdentifier=myapp
User=www-data
Group=www-data
Type=simple
Restart=on-failure

[Install]
WantedBy=multi-user.target
For documentation of these settings see: man systemd.unit, man systemd.service and man systemd.exec
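As a hypothetical example of the optional environment file referenced above (the leading - in EnvironmentFile= tells systemd to skip it if absent):
# /www/myapp/config/myapp.env -- the variable name is made up; use whatever your app reads
LISTEN_ADDR=127.0.0.1:8001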
Start it:
systemctl start myapp
Check that it is ok:
systemctl status myapp
Enable automatic startup:
systemctl enable myapp
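At this point a quick local check should get a response from the app itself (assuming port 8001 as above):
curl -i http://127.0.0.1:8001/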
Then it is time to configure an Apache virtual host for your app. Put the following in /etc/apache2/sites-available/myapp.conf:
<VirtualHost *:80>
    ServerName myapp.example.com
    ServerAdmin webmaster@example.com
    DocumentRoot /www/myapp/public
    ErrorLog ${APACHE_LOG_DIR}/myapp-error.log
    CustomLog ${APACHE_LOG_DIR}/myapp-access.log combined
    ProxyPass "/" "http://localhost:8001/"
</VirtualHost>
Documentation of the proxy related settings: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
Enable the configuration:
a2ensite myapp
Make sure you did not make a mistake in the Apache configuration:
apachectl configtest
In case the proxy modules are not previously enabled you will get an error at this point. In that case enable the proxy modules and try again:
a2enmod proxy
a2enmod proxy_http
apachectl configtest
Reload Apache configuration:
systemctl reload apache2
Remember to make the name myapp.example.com available in DNS.
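A quick way to verify the record resolves (assuming dig is installed):
dig +short myapp.example.com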
That's it!
EDIT: Added pointers to documentation and instructions for enabling Apache modules if needed. Use apachectl for config test.

Docker GitLab: Decoupling GitLab "external url" and "ssh port" from the GUI copy and paste links, web resources, certificates and actual ports

Please note that this question relates to personal use of a dedicated server, not professional.
I want to run two GitLab Docker containers on the same machine with two different volumes, both of them "made available" on port 443 of the host machine. Port 80 for HTTP content is not made available. The host's HTTP will not be a 3xx redirect; it will be a web page using HSTS. The SSH ports on the host will be something like 10703 and 10803.
The URLs will be https://gitlab.example.com and https://sources.example.com. The certificates are maintained by Let's Encrypt on the host.
The content will be served by Apache, using mod_proxy and virtual hosts. Apache does not run inside Docker. There are other enabled virtual hosts unrelated to GitLab. In order to simplify certificates, I'm trying not to put certificates inside the GitLab containers themselves. Instead the Apache configuration holds the certificates.
https://gitlab.example.com will forward to http://127.0.0.1:10701 or whatever port is used to serve the web content depending on the current GitLab configuration
https://sources.example.com will forward to http://127.0.0.1:10801
Now here come the issues.
If I specify https://gitlab.example.com as the external_url:
HTTPS will be enabled, and GitLab will refuse to even start because the certificates are missing. As I said, I'm using mod_proxy; I don't need certificates in GitLab because Apache is already doing all the work. I would like GitLab to serve insecure content locally to Apache so that it's Apache's job to make it secure over the untrusted network.
If I specify http://gitlab.example.com as the external_url and let Apache forward to http://127.0.0.1:10701, which is port 80 on the Docker Container:
almost all GitLab web resources will be served through http://gitlab.example.com, causing the browser to indicate the site is in practice insecure, which is understandable.
the copy-and-paste link to clone a repository will be http://gitlab.example.com/group/something.git, causing the clone links to fail because it's not https.
SSH is forwarded from port 10703 on the host machine. Inside the Docker container it runs on port 22. The current SSH clone link is still git@gitlab.example.com:group/something.git. I want it to be ssh://git@gitlab.example.com:10703/group/something.git (see answer about cloning on other ports).
My X problem is:
To serve GitLab web interfaces securely on the standard https port of the host (443).
To preserve the usability of the copy and paste links of the web interface content, with no compromise on security.
Constraint: Apache must not be replaced.
My current Y ideal solution is:
I would like to strongly decouple GitLab's configuration from intent. Currently, when I configure the external URL to https, it recognizes the intent to serve secure content, which requires certificates, when in fact I just want the external URL to change. Same deal with SSH: separate the externally displayed port (used for copy-and-paste links) from the actual networking port from the container's perspective. Maybe there is a configuration that allows this.
My current Y quick and dirty solutions are:
Use https://... as the external url, and add placeholder certificate files to /etc/gitlab/ssl/. GitLab will ignore these certificates completely, but as long as they are present in the filesystem, GitLab will be able to start and the host will be able to deliver secure content. I would like to avoid doing this if there is a better alternative.
To solve the SSH problem, maybe I could add gitlab_rails['gitlab_shell_ssh_port'] = 10703 (see answer about changing the SSH port) and then use docker run --publish 10703:10703 ... instead of how it's currently done (docker run --publish 10703:22 ...). EDIT: It turns out that gitlab_rails['gitlab_shell_ssh_port'] only changes the displayed port. Since it's sshd that manages port 22 and not GitLab, docker run --publish 10703:10703 ... would cause port 10703 on the host to be forwarded to port 10703 on the container, which is closed. docker run --publish 10703:22 ... + gitlab_rails['gitlab_shell_ssh_port'] = 10703 is how it should be done.
How can I solve this problem? Both elegant and quick and dirty ways are appreciated.
After 4 months, I'm going to answer my own question in a way that does not answer the original question, but describes how I managed to do what I wanted so far, for any reader who may be stumbling upon this question.
I did this a while ago, so the contents of this answer may not be correct, and it could also be bad practice, but I wouldn't know.
Remember this question relates to personal use of a dedicated server, not professional.
Container configuration
There are no guarantees this configuration is sufficient to have a removable container with data preservation. Use this at your own risk.
This is my docker run command:
docker create \
  --name example-gitlab \
  --restart=unless-stopped \
  --hostname gitlab.example.com \
  --publish 2222:22 \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'https://gitlab.example.com/'; gitlab_rails['gitlab_signup_enabled'] = false; gitlab_rails['gitlab_shell_ssh_port'] = 2222; nginx['real_ip_trusted_addresses'] = [ '172.17.0.0/16' ]; nginx['real_ip_header'] = 'X-Forwarded-For'" \
  --volume /data/docker-data/example-gitlab/config:/etc/gitlab \
  --volume /data/docker-data/example-gitlab/logs:/var/log/gitlab \
  --volume /data/docker-data/example-gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
I'm not sure the --hostname gitlab.example.com flag serves any purpose.
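Note that docker create only prepares the container; it still has to be started once (after which the unless-stopped restart policy takes over):
docker start example-gitlab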
SSH port exposure and UI display
GitLab doesn't care about the SSH port in its logic. The SSH port provided in the environment variables is only used for display, when showing the SSH URL of a repository to the user.
I've exposed the real port 22 of the container onto port 2222: --publish 2222:22
I've passed to GitLab the port number to display: gitlab_rails['gitlab_shell_ssh_port'] = 2222
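With those two settings, the UI shows clone URLs with the explicit port, so a clone looks like this (the repository path is hypothetical):
git clone ssh://git@gitlab.example.com:2222/group/project.git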
HTTPS
As for HTTPS, I must put https:// in the environment variable; it must not be http. If it is http, the page will not be served securely, as almost all resources and links will be http, and some third-party resources will also be http.
When adding https to the environment variable, GitLab WILL check for valid TLS certificates.
I was unable to find a way around this, so I gave up and now I'm feeding GitLab with real certificates.
Since it's a personal server, I'm using an external script to copy the Let's Encrypt certificates into the volume I've exposed through the docker run command above.
cp /data/docker-data/http-realm/certs/live/gitlab.example.com/cert.pem /data/docker-data/example-gitlab/config/ssl/gitlab.example.com.crt
cp /data/docker-data/http-realm/certs/live/gitlab.example.com/privkey.pem /data/docker-data/example-gitlab/config/ssl/gitlab.example.com.key
Not pretty, but it works.
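Since Let's Encrypt certificates rotate every few months, the copy has to be repeated and the bundled nginx reloaded; a sketch of such a renewal script, assuming the container name from above:
#!/bin/sh
# re-copy the renewed certificates into the GitLab config volume
cp /data/docker-data/http-realm/certs/live/gitlab.example.com/cert.pem /data/docker-data/example-gitlab/config/ssl/gitlab.example.com.crt
cp /data/docker-data/http-realm/certs/live/gitlab.example.com/privkey.pem /data/docker-data/example-gitlab/config/ssl/gitlab.example.com.key
# ask the bundled nginx inside the container to pick them up
docker exec example-gitlab gitlab-ctl hup nginx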
Reverse proxy
There has been a major change since the original question: I'm now running Apache inside Docker as well. This gives Apache access to Docker's internal DNS resolution.
This is my virtual host configuration using Apache. I could have used nginx, but I prefer using something I'm confident working with.
<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerAdmin webmaster@localhost
        ServerName gitlab.example.com

        ## REQUIRED so that external resources such as gravatar are fetched
        ## using https by the underlying nginx wizard stuff
        #RequestHeader set X-Forwarded-Proto "https"
        Header add X-Forwarded-Proto "https"

        ## http://stackoverflow.com/questions/6764852/proxying-with-ssl
        SSLProxyEngine On
        RewriteEngine On
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyAddHeaders On

        ## docker alias
        ProxyPass / https://example-gitlab:443/
        <Location />
            ProxyPassReverse /
            Require all granted
        </Location>

        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/gitlab.example.com/cert.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/gitlab.example.com/privkey.pem
        SSLCertificateChainFile /etc/letsencrypt/live/gitlab.example.com/chain.pem
        SSLProtocol all -SSLv2 -SSLv3
        SSLHonorCipherOrder on
        SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"

        # Custom log file locations
        ErrorLog /var/log/apache2/example-gitlab_error.log
        CustomLog /var/log/apache2/example-gitlab_access.log combined
    </VirtualHost>
</IfModule>
In the line ProxyPass / https://example-gitlab:443/: the hostname example-gitlab is resolvable through Docker's own DNS because it is the container's name. I'm using Docker 1.12, in case that's a behavior specific to this version.
Therefore I don't need to publish port 443 or port 80 of my GitLab container to the host; my reverse proxy takes care of this.
Network layers and X-Forwarded-For
GitLab needs to be set up to trust the X-Forwarded-For header coming from your network layer.
Otherwise, in the admin panel, all users' IPs will appear to come from within the Docker network layer.
If you are using a network layer of this type:
docker network create --driver=bridge \
--subnet=192.168.127.0/24 --gateway=192.168.127.1 \
--ip-range=192.168.127.128/25 strawberry
I believe you will need to change the docker run configuration and replace this:
nginx['real_ip_trusted_addresses'] = [ '172.17.0.0/16' ]
With this:
nginx['real_ip_trusted_addresses'] = [ '192.168.127.128/25' ]
(in addition to adding --net=strawberry to the docker run configuration)
Also if you are using nginx, you will probably have to switch this:
nginx['real_ip_header'] = 'X-Forwarded-For'
With this:
nginx['real_ip_header'] = 'X-Real-IP'

Load balancing an app server with Apache web servers

Our current setup has 2 load-balanced web servers that point their application requests to a second load balancer in front of 2 app servers:
        LB1
       /   \
   Web1     Web2
       \   /
        LB2
       /   \
   App1     App2
The 3rd-party app we use now recommends we switch the app portion from a hardware LB to software.
(note: Any information from Apache will be cut down a bit to remove IPs, directories, etc. It's just paranoia)
I've added a load balancing configuration that, very cut down, looks like this:
<Proxy balancer://mycluster>
    BalancerMember ajp://FIRSTIP:8009 route=node1
    BalancerMember ajp://SECONDIP:8009 route=node2
    ProxySet stickysession=JSESSIONID
</Proxy>
As you can see we're balancing ajp requests. There's a ton of ProxyPass rules after this for various parts of the site.
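For illustration, one such rule might look like this (the path is hypothetical):
# send one section of the site through the balancer defined above
ProxyPass /app balancer://mycluster/app
ProxyPassReverse /app balancer://mycluster/app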
I have this loaded by the main httpd.conf
In that httpd.conf I have the following modules loaded, in this order
mod_headers.so
mod_proxy.so
mod_proxy_http.so
mod_proxy_balancer.so
mod_proxy_connect.so
mod_proxy_scgi.so
mod_deflate.so
mod_proxy_ajp.so
The problem is that when I put it all in place and try to restart httpd it throws this:
httpd: Syntax error on line 62 of httpd.conf: Cannot load modules/mod_proxy_ajp.so into server: modules/mod_proxy_ajp.so: undefined symbol: ajp_send_header
Also, of course, all server requests now throw 500 and log an error message in error.log:
No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
I don't see why this is happening. According to my research, that error should only be thrown if mod_proxy_ajp is loaded BEFORE mod_proxy. Since it's the very last module in the list, everything else should have been loaded beforehand.
I have fixed it just now by running the following:
# rebuild the proxy modules from the matching Apache source tree, so that
# mod_proxy_ajp is compiled together with the ajp*.c sources that define ajp_send_header
cd /media/httpd-2.4.16/modules/proxy
/usr/apache24/bin/apxs -c -i -a mod_proxy.c proxy_util.c
/usr/apache24/bin/apxs -c -i -a mod_proxy_ajp.c ajp*.c
/usr/apache24/bin/apxs -c -i -a mod_proxy_balancer.c mod_proxy_connect.c mod_proxy_http.c
Hopefully this is useful to others.