Load balancing an app server with Apache web servers

Our current setup has 2 load-balanced web servers that forward application requests to a second load balancer in front of 2 app servers:
LB1
/ \
Web1 Web2
\ /
LB2
/ \
App1 App2
The 3rd party app we use now recommends we switch from a hardware LB on the app portion to software.
(note: Any information from Apache will be cut down a bit to remove IPs, directories, etc. It's just paranoia)
I've added a load balancing configuration that, very cut down, looks like this
<Proxy balancer://mycluster>
BalancerMember ajp://FIRSTIP:8009 route=node1
BalancerMember ajp://SECONDIP:8009 route=node2
ProxySet stickysession=JSESSIONID
</Proxy>
As you can see we're balancing ajp requests. There's a ton of ProxyPass rules after this for various parts of the site.
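For reference, one of those rules would look roughly like this (the /app path is a hypothetical placeholder):
ProxyPass /app balancer://mycluster/app
ProxyPassReverse /app balancer://mycluster/app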
I have this loaded by the main httpd.conf
In that httpd.conf I have the following modules loaded, in this order
mod_headers.so
mod_proxy.so
mod_proxy_http.so
mod_proxy_balancer.so
mod_proxy_connect.so
mod_proxy_scgi.so
mod_deflate.so
mod_proxy_ajp.so
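(In httpd.conf each of these corresponds to a LoadModule directive; a minimal sketch of the form, with module paths assumed to be relative to ServerRoot:)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so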
The problem is that when I put it all in place and try to restart httpd it throws this:
httpd: Syntax error on line 62 of httpd.conf: Cannot load modules/mod_proxy_ajp.so into server: modules/mod_proxy_ajp.so: undefined symbol: ajp_send_header
Also of course now all server requests throw 500 and have an error message in error.log:
No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
I don't see why this is happening. According to my research, that error should only be thrown if mod_proxy_ajp is loaded BEFORE mod_proxy. Since it's the very last module in the list, everything it depends on should already have been loaded.

I have fixed it just now by running the following (rebuilding and reinstalling the proxy modules from the httpd source tree with apxs):
cd /media/httpd-2.4.16/modules/proxy
# rebuild mod_proxy together with proxy_util
/usr/apache24/bin/apxs -c -i -a mod_proxy.c proxy_util.c
# rebuild mod_proxy_ajp together with the ajp_* sources (which provide ajp_send_header)
/usr/apache24/bin/apxs -c -i -a mod_proxy_ajp.c ajp*.c
# rebuild the remaining proxy submodules
/usr/apache24/bin/apxs -c -i -a mod_proxy_balancer.c mod_proxy_connect.c mod_proxy_http.c
In hindsight, the undefined symbol suggests the installed mod_proxy_ajp.so had been built without the ajp_* sources linked in, rather than a LoadModule ordering problem, which would explain why rebuilding it fixed things. Hopefully this is useful for others.


Wildfly, Tomcat, Apache and Subdomains

I have an Ubuntu server in AWS that is running multiple application servers -- a Wildfly serving up some pages and two Tomcats running a separate app.
I am trying to get subdomains working.
I have DNS records set up for subdomain1.example.com and subdomain2.example.com. That works fine.
Wildfly is listening on port 80 (I think?), and the Tomcats are listening on 8080 and 8090. The goal is to have www.example.com go to Wildfly, subdomain1.example.com go to the Tomcat on 8080, and subdomain2.example.com go to the Tomcat on 8090.
I've found numerous posts that talk about setting up virtual hosts in Apache that should solve my problem. But I keep getting sent down rabbit holes. Some suggest adding to /opt/bitnami/apache2/bin/httpd.config and some suggest putting it in /opt/bitnami/apache2/sites-available/subdomain1.example.com.conf
My first issue: I don't think that Apache is even running. I was under the impression that Apache was baked into Wildfly, but when I execute:
service apache2 status
I get:
apache2.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
Running sudo service --status-all also doesn't show it running so I think that it is not. It seems to be installed (Bitnami stack) in /opt/bitnami/apache2
Do I have to turn Apache on as part of Wildfly (and how to turn it on)? If I do, then I would assume that Wildfly is no longer getting traffic.
Second: my research tells me I need to enable proxy and proxy_http using a2enmod and a2ensite, but I don't have these. Research suggests that all Ubuntu installs will have those scripts... do they get created if I turn on Apache?
Sorry for all the noob questions... I'm a developer without a DevOps guy. This seems like it would be so common that it would be baked in, or there would be a definite solution that I am probably missing.
For those looking for something similar, here is the solution that worked for me.
My server is a Wildfly-Apache2-MySQL AMI image on AWS. I did not need to use a2enmod nor a2ensite as my research suggested. It seems many of those modules are already enabled by the pre-built image.
NOTE: THESE INSTRUCTIONS ARE BITNAMI AMI SPECIFIC - YOUR FLAVOR'S CONFIGURATION MAY BE SLIGHTLY DIFFERENT
To have a subdomain point to a simple Apache text site (yada.example.com):
Create a directory in ~/stack/apache2/htdocs called yada
Add an entry to the virtual hosts configuration file (sudo nano /opt/bitnami/apache2/conf/extra/httpd-vhosts.conf)
<VirtualHost *:80>
ServerAdmin info@example.com
DocumentRoot "/opt/bitnami/apache2/htdocs/yada"
ServerName yada.example.com
ErrorLog "logs/yada-subdomain-error-log"
CustomLog "logs/yada-subdomain-access-log" common
</VirtualHost>
Modify the Apache configuration file to include the virtual hosts (sudo nano /opt/bitnami/apache2/conf/httpd.conf):
...snip...
# Supplemental configuration
#
# The configuration files in the conf/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.
...snip...
# Virtual hosts
Include conf/extra/httpd-vhosts.conf
# ADDED THE ABOVE LINE
...snip...
Restart Apache (sudo /opt/bitnami/ctlscript.sh restart apache)
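A quick way to check that the new vhost answers before DNS is pointed at it is to fake the Host header from the server itself (hostname as above):
curl -H "Host: yada.example.com" http://127.0.0.1/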
To make it point to a Tomcat server, add this to the httpd-vhosts.conf:
<VirtualHost *:80>
ServerAdmin info@example.com
ServerName yada.example.com
ProxyPreserveHost On
# setup the proxy
<Proxy *>
Order allow,deny
Allow from all
</Proxy>
ProxyPass / http://localhost:8090/
ProxyPassReverse / http://localhost:8090/
</VirtualHost>
Your port may differ.
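One note: the Order/Allow directives above are Apache 2.2 syntax. If the bundled Apache is 2.4, they only work through mod_access_compat; the 2.4-native equivalent would be something like:
<Proxy *>
Require all granted
</Proxy>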
FYI, I found this helpful: https://docs.bitnami.com/virtual-machine/components/apache/#how-to-configure-your-web-application-to-use-a-virtual-host
Good luck and shout out to @stdunbar for his guidance.

How do I deploy a golang app with Apache installed on Ubuntu 16.04 on digitalocean?

I am learning Go at the moment and I have built really simple webapps following some tutorials with the net/http package. I have created a simple wishlist, where I add an item and then it goes into a simple table of things I want; pretty simple.
Now I want to deploy this app to my Digital Ocean droplet, but I just don't know how. I already have some PHP websites on different domains served by Apache.
I am really a beginner at this server-configuration thing; PHP is usually pretty easy on web hosts and I didn't need this much experience. Can you point me in the right direction to make my Go app available at a domain I own, without the port in the URL? Preferably with Apache.
Thanks :)
Note: Almost everything in this answer needs to be customized to your specific circumstances. It is written with the assumption that your Go app is called "myapp" and that you have made it listen on port 8001 (among other assumptions).
You should make a systemd unit file to make your app start up automatically at boot. Put the following in /etc/systemd/system/myapp.service (adapt to your needs):
[Unit]
Description=MyApp webserver
[Service]
ExecStart=/www/myapp/bin/webserver
WorkingDirectory=/www/myapp
EnvironmentFile=-/www/myapp/config/myapp.env
StandardOutput=journal
StandardError=inherit
SyslogIdentifier=myapp
User=www-data
Group=www-data
Type=simple
Restart=on-failure
[Install]
WantedBy=multi-user.target
For documentation of these settings see: man systemd.unit, man systemd.service and man systemd.exec
Start it:
systemctl start myapp
Check that it is ok:
systemctl status myapp
Enable automatic startup:
systemctl enable myapp
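Before moving on to Apache, you can verify that the app answers locally (port 8001 assumed from above):
curl -I http://localhost:8001/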
Then it is time to configure Apache virtualhost for your app. Put the following in /etc/apache2/sites-available/myapp.conf:
<VirtualHost *:80>
ServerName myapp.example.com
ServerAdmin webmaster@example.com
DocumentRoot /www/myapp/public
ErrorLog ${APACHE_LOG_DIR}/myapp-error.log
CustomLog ${APACHE_LOG_DIR}/myapp-access.log combined
ProxyPass "/" "http://localhost:8001/"
</VirtualHost>
Documentation of the proxy related settings: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
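If the app issues redirects containing its own host and port, you may also want a matching reverse mapping inside the VirtualHost; a sketch, using the same assumed backend:
ProxyPassReverse "/" "http://localhost:8001/"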
Enable the configuration:
a2ensite myapp
Make sure you did not make mistake in Apache configuration:
apachectl configtest
In case the proxy modules are not previously enabled you will get an error at this point. In that case enable the proxy modules and try again:
a2enmod proxy
a2enmod proxy_http
apachectl configtest
Reload Apache configuration:
systemctl reload apache2
Remember to make the name myapp.example.com available in DNS.
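Typically that just means an A record pointing at your droplet; for example (the IP is a placeholder):
myapp.example.com.    300    IN    A    203.0.113.10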
That's it!
EDIT: Added pointers to documentation and instructions for enabling Apache modules if needed. Use apachectl for config test.

Docker GitLab: Decoupling GitLab "external url" and "ssh port" from the GUI copy and paste links, web resources, certificates and actual ports

Please note that this question relates to personal use of a dedicated server, not professional.
I want to run two GitLab Docker containers on the same machine with two different volumes, both of them "made available" on port 443 on the host machine. Port 80 for HTTP content is not made available. The host's HTTP will not be a 3xx redirect; it will be a web page using HSTS. SSH ports on the host will be something like 10703 and 10803.
The urls will be https://gitlab.example.com and https://sources.example.com. The certificates are maintained by Let's Encrypt on the host.
The content will be served by Apache, using mod_proxy and virtual hosts. Apache does not run inside Docker. There are other virtual hosts enabled that are unrelated to GitLab. In order to simplify certificates, I'm trying not to put certificates inside the GitLab containers themselves. Instead, the Apache configuration holds the certificates.
https://gitlab.example.com will forward to http://127.0.0.1:10701 or whatever port is used to serve the web content depending on the current GitLab configuration
https://sources.example.com will forward to http://127.0.0.1:10801
Now here come the issues.
If I specify https://gitlab.example.com as the external_url:
HTTPS will be enabled and GitLab will refuse to even start because the certificates are missing. As I said, I'm using mod_proxy; I don't need certificates in GitLab because Apache is already doing all the work. I would like GitLab to serve insecure content locally to Apache so that it's Apache's job to make it secure over the untrusted network.
If I specify http://gitlab.example.com as the external_url and let Apache forward to http://127.0.0.1:10701, which is port 80 on the Docker Container:
almost all gitlab web resources will be served through http://gitlab.example.com, causing the browser to indicate the site is in practice insecure, which is understandable.
the copy and paste link to clone a gitlab repository will be http://gitlab.example.com/group/something.git, causing the clone links to fail because it's not https.
SSH is forwarded from the port 10703 on the host's machine. Inside the Docker Container, it's running on port 22. The current SSH cloning copy and paste link is still git@gitlab.example.com:group/something.git. I want it to be ssh://git@gitlab.example.com:10703/group/something.git (see answer about cloning on other ports)
My X problem is:
To serve GitLab web interfaces securely on the standard https port of the host (443).
To preserve the usability of the copy and paste links of the web interface content, with no compromise on security.
Constraint: Apache must not be replaced.
My current Y ideal solution is:
I would like to strongly decouple GitLab's configuration from intent. Currently when I configure the external URL to https, it recognizes the intent to be serving secure content, which requires certificates. When in fact, I just want the external URL to change. Same deal with SSH, separate external displayed port (used for copy and paste links) from actual networking port from the container's perspective. Maybe there is a configuration that allows this.
My current Y quick and dirty solutions are:
Use https://... as the external url, and add placeholder certificate files to /etc/gitlab/ssl/. GitLab will ignore these certificates completely, but as long as they are present in the filesystem, GitLab will be able to start and the host will be able to deliver secure content. I would like to avoid doing this if there is a better alternative.
To solve the SSH problem, maybe I could add gitlab_rails['gitlab_shell_ssh_port'] = 10703 (see answer about changing SSH port) and then use docker run --publish 10703:10703 ... instead of how it's currently done ( docker run --publish 10703:22 ... ). EDIT: It turns out that gitlab_rails['gitlab_shell_ssh_port'] only changes the displayed port. Since it's sshd that manages port 22 and not GitLab, docker run --publish 10703:10703 ... would cause port 10703 on the host to be forwarded to port 10703 on the container, which is closed. docker run --publish 10703:22 ... + gitlab_rails['gitlab_shell_ssh_port'] = 10703 is how it should be done.
How can I solve this problem? Both elegant and quick and dirty ways are appreciated.
After 4 months, I'm going to answer my question in a way that does not answer the original question but explains how I managed to achieve what I wanted so far, for any reader who may stumble upon this question.
I did this a while ago, so the contents of this answer may not be entirely correct, and it could also be bad practice, but I wouldn't know.
Remember this question relates to personal use of a dedicated server, not professional.
Container configuration
There are no guarantees this configuration is sufficient to have a removable container with data preservation. Use this at your own risk.
This is my docker run command:
docker create \
--name example-gitlab \
--restart=unless-stopped \
--hostname gitlab.example.com \
--publish 2222:22 \
--env GITLAB_OMNIBUS_CONFIG="external_url 'https://gitlab.example.com/'; gitlab_rails['gitlab_signup_enabled'] = false; gitlab_rails['gitlab_shell_ssh_port'] = 2222; nginx['real_ip_trusted_addresses'] = [ '172.17.0.0/16' ]; nginx['real_ip_header'] = 'X-Forwarded-For'" \
--volume /data/docker-data/example-gitlab/config:/etc/gitlab \
--volume /data/docker-data/example-gitlab/logs:/var/log/gitlab \
--volume /data/docker-data/example-gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
I'm not sure the --hostname gitlab.example.com \ serves any purpose.
SSH port exposure and UI display
GitLab doesn't care about the SSH port in its logic. The SSH port provided in the environment variables is only for display purposes, when displaying the SSH URL of a repository to the user.
I've exposed the real port 22 of the container onto port 2222: --publish 2222:22
I've passed to GitLab the port number to display: gitlab_rails['gitlab_shell_ssh_port'] = 2222
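With both of those in place, the clone URL shown in the UI should take the ssh:// form with the custom port, along the lines of:
ssh://git@gitlab.example.com:2222/group/something.git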
HTTPS
As for https, I must put https:// in the environment variable. It must not be http. If it is http, the page will not be served securely as almost all resources and links will be through http, and some third-party resources will also be http.
When adding https to the environment variable, GitLab WILL check for valid TLS certificates.
I was unable to find a way around this, so I gave up and now I'm feeding GitLab with real certificates.
Since it's a personal server, I'm using an external script to copy the Let's Encrypt certificates into the volume I've exposed through the docker run command above.
cp /data/docker-data/http-realm/certs/live/gitlab.example.com/cert.pem /data/docker-data/example-gitlab/config/ssl/gitlab.example.com.crt
cp /data/docker-data/http-realm/certs/live/gitlab.example.com/privkey.pem /data/docker-data/example-gitlab/config/ssl/gitlab.example.com.key
Not pretty, but it works.
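One caveat: after replacing the certificate files, the bundled nginx inside the container usually has to be restarted to pick them up; something along these lines (container name from the docker create command above):
docker exec example-gitlab gitlab-ctl restart nginx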
Reverse proxy
There has been a major change from the original question: I'm now running Apache inside Docker as well. This gives Apache access to Docker's internal DNS resolution.
This is my virtual host configuration using Apache. I could have used nginx, but I stuck with something I'm comfortable working with.
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerAdmin webmaster@localhost
ServerName gitlab.example.com
## REQUIRED so that external resources such as gravatar are fetched
## using https by the underlying nginx wizard stuff
#RequestHeader set X-Forwarded-Proto "https"
Header add X-Forwarded-Proto "https"
## http://stackoverflow.com/questions/6764852/proxying-with-ssl
SSLProxyEngine On
RewriteEngine On
ProxyRequests Off
ProxyPreserveHost On
ProxyAddHeaders On
## docker alias
ProxyPass / https://example-gitlab:443/
<Location />
ProxyPassReverse /
Require all granted
</Location>
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/gitlab.example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/gitlab.example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/gitlab.example.com/chain.pem
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"
# Custom log file locations
ErrorLog /var/log/apache2/example-gitlab_error.log
CustomLog /var/log/apache2/example-gitlab_access.log combined
</VirtualHost>
</IfModule>
In the line ProxyPass / https://example-gitlab:443/, the hostname example-gitlab is resolvable through Docker's own DNS because it is the container's name. I'm using Docker 1.12, in case that's a behavior specific to this version.
Therefore I don't need to publish port 443 or port 80 of my GitLab container to the host; my reverse proxy takes care of this.
Network layers and X-Forwarded-For
GitLab needs to be set up to trust the X-Forwarded-For header coming from your Docker network layer.
Otherwise, in the admin panel, all users' IPs will appear to come from within the Docker network layer.
If you are using a network layer of this type:
docker network create --driver=bridge \
--subnet=192.168.127.0/24 --gateway=192.168.127.1 \
--ip-range=192.168.127.128/25 strawberry
I believe you will need to change the docker run configuration and replace this:
nginx['real_ip_trusted_addresses'] = [ '172.17.0.0/16' ]
With this:
nginx['real_ip_trusted_addresses'] = [ '192.168.127.128/25' ]
(in addition to adding --net=strawberry in the docker run configuration)
Also if you are using nginx, you will probably have to switch this:
nginx['real_ip_header'] = 'X-Forwarded-For'
With this:
nginx['real_ip_header'] = 'X-Real-IP'

How to enable Apache SSL Reverse Proxy on HTTP application

I've been having problems attempting to implement a reverse SSL proxy on Apache for an HTTP application on Ubuntu 14.04. As a baseline, the application works fine when I access it via port 8000 in the browser normally. For all intents and purposes, let's say the IP of my app is 192.141.56.11 (I do not have a domain name yet). The application runs with HTTP Basic Auth, I don't know if it's relevant. Basically I'm fishing for some glaring error here and would be grateful if you could help me out. Here is a log of my process:
I created my SSL cert and key and put them in the following locations:
/etc/apache/ssl/apache.crt (I performed chmod 644 here)
/etc/apache/ssl/apache.key (I performed chmod 400 here)
I then installed Apache and enabled the required modules:
apt-get install apache2
a2enmod proxy
a2enmod ssl
a2enmod proxy_http
I then disabled the default config with:
a2dissite 000-default
I then created the file "/etc/apache2/sites-available/redirect.conf" and copied the text below:
<VirtualHost *:80>
Redirect "/" "https://192.141.56.11"
</VirtualHost>
After, I created the file "/etc/apache2/sites-available/reverse_proxy.conf" and copied below:
<VirtualHost *:443>
SSLEngine On
SSLCertificateFile /etc/apache/ssl/apache.crt
SSLCertificateKeyFile /etc/apache/ssl/apache.key
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
and did:
service apache2 restart
I now attempt to access the UI of the application on another machine in the Chrome browser. When trying:
https://192.141.56.11
I get a general SSL connection error.
However, trying
http://192.141.56.11:8000
gives me the application, as if none of my config changed anything. However,
192.141.56.11:80
gives me an "Index Of" page with an html folder that says "Apache/2.4.7 (Ubuntu) Server at 192.141.56.11 Port 80"
192.141.56.11:443
gives me the same result except with "Apache/2.4.7 (Ubuntu) Server at 192.141.56.11 Port 443"
I've tried all manners of configurations but can't get what I want -- any ideas here?
EDIT: I tried https://192.141.56.11 and got a more specific SSL error:
received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)
EDIT2: After starting Apache, I get this warning:
apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
I suppose this is fine as I am using an IP and not a domain name.
EDIT3: It turns out I needed to do:
a2ensite reverse_proxy.conf
Now https://192.141.56.11 works but defaults to an Apache page. Working on this.
EDIT4: I had to do
a2dissite default-ssl.conf
Now it actually redirects to the app on https://192.141.56.11! But I can still access the app via port 8000, which is bad (still working on this).
EDIT5: In the end, I couldn't figure out how to block access to the original app via port 8000 through Apache. Instead, I just added iptables rules on the server so that it can only be accessed via HTTPS. This is probably not the correct method, but it's all I could think of.
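For anyone trying the same thing, the kind of rule I mean looks roughly like this; it drops any traffic to port 8000 that does not arrive over the loopback interface, so Apache's proxy to 127.0.0.1:8000 keeps working (adapt to your setup):
iptables -A INPUT -p tcp --dport 8000 ! -i lo -j DROP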

How do I use Apache http to proxy to two different tomcat servers?

I have apache httpd that I want to proxy to two different tomcat servers.
I see this:
http://tomcat.apache.org/connectors-doc-archive/jk2/proxy.html
But that is only for one tomcat server. What if I had one server running on 8081 in addition to a tomcat running at 8080?
There's an easier way to set up load balancing using mod_proxy_balancer. Simply list the Tomcat servers in a balancer block, then put that balancer in your ProxyPass:
<Proxy balancer://mycluster>
BalancerMember http://tomcat1:8080/
BalancerMember http://tomcat2:8081/
</Proxy>
ProxyPass /test balancer://mycluster
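A matching reverse mapping is usually added as well so that redirects issued by the backends get rewritten to the proxy's address; depending on your httpd version you may need one line per backend, e.g.:
ProxyPassReverse /test http://tomcat1:8080/
ProxyPassReverse /test http://tomcat2:8081/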
Apache httpd has two out-of-the-box options for proxying to any number of backend Tomcat instances:
mod_proxy_http
mod_proxy_ajp
They are configured identically to each other, except that the former uses the HTTP protocol for communication and the latter uses the AJP protocol and URLs that start with ajp:// instead of http:// for the backend server. Both can be configured for load-balancing, failover, etc. in the same way. You can proxy to completely separate Tomcat instances (i.e. no load-balancing: just separate backends) by providing separate proxy configuration for separate URL spaces (e.g. /app1 -> Tomcat1 and /app2 -> Tomcat2) or you can configure the two (or more) backend instances for load-balancing, etc.
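For example, the no-load-balancing case described above could look roughly like this with mod_proxy_http (hostnames, ports and context paths are placeholders):
ProxyPass /app1 http://localhost:8080/app1
ProxyPassReverse /app1 http://localhost:8080/app1
ProxyPass /app2 http://localhost:8081/app2
ProxyPassReverse /app2 http://localhost:8081/app2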
Specifically, look at the documentation for the following httpd configuration directives:
<Proxy>
BalancerMember
ProxyPass
ProxyPassReverse
You can find documentation for all of these here:
http://httpd.apache.org/docs/2.2/mod/mod_proxy.html (General)
http://httpd.apache.org/docs/2.2/mod/mod_proxy_http.html (HTTP)
http://httpd.apache.org/docs/2.2/mod/mod_proxy_ajp.html (AJP)
http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html (load-balancer)
If you want to use the AJP protocol and you have more complex configuration needs, you can also use mod_jk (not mod_jk2, which is an old, dead, abandoned, completely irrelevant project, now). You can find out more about mod_jk on the Tomcat site here: http://tomcat.apache.org/connectors-doc/
mod_jk has a radically different configuration procedure and a lot more AJP-specific options than mod_proxy_ajp.
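To give a flavour of that difference, a stripped-down mod_jk setup typically involves a workers.properties file plus JkMount rules in httpd.conf; a rough sketch, with hostnames, ports and paths as assumptions:
# workers.properties
worker.list=tomcat1,tomcat2
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
worker.tomcat2.type=ajp13
worker.tomcat2.host=localhost
worker.tomcat2.port=8010
# httpd.conf
JkWorkersFile conf/workers.properties
JkMount /app1/* tomcat1
JkMount /app2/* tomcat2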
The (short) documentation you mentioned in your original post (from the old mod_jk2 docs) points to Apache httpd's mod_proxy_ajp and mod_proxy_balancer modules (though it points to the unstable httpd 2.1, which was the bleeding-edge at the time that documentation was written). You were on the right track: you just needed to keep reading. You can definitely proxy to as many backend instances of Tomcat as you want with any of the modules described here.
You can install HAProxy on a third server that will act as the LB for both of them, or you can install HAProxy on either one of them and then do the following configuration.
To install HAProxy (if you're running an Ubuntu/Debian distro):
$ sudo apt-get install haproxy
# Set up the config file in /etc/haproxy/haproxy.cfg per your requirements
# Set ENABLED=1 in /etc/default/haproxy and restart the haproxy service
After setup, make the following modifications to the config:
$ sudo vim /etc/haproxy/haproxy.cfg
global
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen webcluster *:80
    mode http
    stats enable
    stats auth us3r:passw0rd
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    option forwardfor
    cookie LSW_WEB insert
    option httpclose
    server web01 192.168.0.1:8080 cookie LSW_WEB01 check
    server web02 192.168.0.2:8081 cookie LSW_WEB02 check
Once done, restart HAProxy service by:
$ sudo service haproxy restart
Here 192.168.0.1 and 192.168.0.2 are your two servers; one can be running on port 8080 and the other on 8081.
Ref. Post: http://www.leaseweblabs.com/2011/07/high-availability-load-balancing-using-haproxy-on-ubuntu-part-1/
If you're not using Ubuntu/Debian, you will also find help online if you google how to set up HAProxy on your Linux distribution. But yes, you can bet on it, as it's a proven tool for the job.