Converting Nginx config to HAProxy - reverse-proxy

I am trying to migrate my reverse proxy from Nginx to HAProxy to use its health checks, maps, and monitoring capabilities.
HAProxy config is not as straightforward as Nginx's. Can someone help me convert the nginx rules below to an HAProxy config?
location = / {
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_pass https://myapp.azurewebsites.net;
}
location /test {
    proxy_pass https://myapp.azurewebsites.net;
}
location /test/ {
    proxy_pass https://myapp.azurewebsites.net;
}
So I have the same backend for all the rules. I am using the HAProxy config below, but it only works for /test/, not for /test:
acl is_domain hdr(host) -i www.domain.com
use_backend be_test if is_domain { path -i / } || { path -i -m beg /test }

backend be_test
    mode http
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    http-request replace-header Host .* myapp.azurewebsites.net
    server srv01 myapp.azurewebsites.net:443 weight 1 maxconn 100 check ssl verify none

What do the logs say?
I'm not sure the replace-header Host is required, as your Azure website should answer to www.domain.com as a virtual host.
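Also worth double-checking the condition logic: HAProxy conditions have no parentheses, and || binds looser than the implicit AND, so "is_domain { path -i / } || { path -i -m beg /test }" evaluates as (is_domain AND path /) OR (path beg /test), meaning the host check never applies to the /test branch. A minimal sketch using named ACLs instead (the frontend name and bind line are assumptions, not from the question):

frontend fe_main
    bind :443 ssl crt /etc/haproxy/cert.pem   # assumed bind/certificate
    acl is_domain hdr(host) -i www.domain.com
    acl path_root path -i /
    acl path_test path -i -m beg /test
    # two use_backend lines instead of ||, so is_domain applies to both paths
    use_backend be_test if is_domain path_root
    use_backend be_test if is_domain path_test

Note that "path -i -m beg /test" matches /test, /test/, and anything below it.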

Related

How to change a URL host link in a backend server to another host

I want to modify link content with any reverse proxy (HAProxy, nginx, or Apache).
The backend server has a simple link that redirects to another host (this host is on an isolated network that only the proxy can access).
But when I try to connect, this link redirects me to a host I cannot reach; the proxy does not know about it and does not receive any request.
proxy = 10.10.10.1
backend = 30.30.30.1
link_to_another_host = 30.30.30.2
final_user = 10.10.10.3 (can't connect to net 30.30.30.x)
Is there any way to solve this?
Simple HAProxy example:
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend main
    bind 10.10.10.1:443 ssl crt /etc/haproxy/haproxy.pem
    # use_backend static if url_static
    default_backend app

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance roundrobin
    server app1 30.30.30.1:443 check ssl verify none
The link on backend server app1 points to link_to_another_host.
Solved with nginx: replace the URL in the link with sub_filter (important: disable all compression).
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name null;

    location / {
        proxy_pass http://30.30.30.1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding ""; # no compression allowed or next won't work
        sub_filter "http://30.30.30.2:8080" "http://10.10.10.1:80/new_link";
        sub_filter_once off;
    }
}
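For completeness: HAProxy cannot rewrite response bodies the way sub_filter does (not without Lua), but if the backend's link is actually an HTTP redirect, the Location header can be rewritten instead. A sketch under that assumption, reusing the addresses above (the /new_link path mirrors the nginx example):

backend app
    balance roundrobin
    server app1 30.30.30.1:443 check ssl verify none
    # rewrite redirects that point at the isolated host so they
    # go back through the proxy instead
    http-response replace-header Location http://30.30.30.2:8080(.*) http://10.10.10.1:80/new_link\1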

How to make docker application use port 80 (http) instead of 443 (https)

I recently installed this web application on my Ubuntu server, which runs Apache (SSL disabled).
No matter how much I try, I can't get the application to use HTTP. I tried the -p flag; it then exposes port 443 and binds something else. I hate browser warnings about SSL. I just want to use HTTP with port 8080.
The application uses nginx, which only listens on 443. I want my application URL to look like http://localhost:8080. This application uses Google OAuth for logins; I'm assuming it will work over HTTP.
How do I get it to work over HTTP?
You must edit nginx.conf in order to use plain HTTP (nginx will never speak plain HTTP on an HTTPS port, except for some errors).
Change:
listen 443;
server_name localhost;
access_log /dev/stdout;
error_log /dev/stderr;
ssl on;
ssl_certificate /src/openseedbox/conf/host.cert;
ssl_certificate_key /src/openseedbox/conf/host.key;
To:
listen 8080;
server_name localhost;
access_log /dev/stdout;
error_log /dev/stderr;
Then after docker build, run with:
docker run -p 8080:8080 .......
Alternatively, you can set up Apache as an HTTP virtual host that reverse-proxies to the secure HTTPS nginx. But I think it is easier to modify the nginx config.
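A minimal sketch of that Apache alternative, assuming the container still serves HTTPS on local port 443 with a self-signed certificate (names and ports here are assumptions):

# requires mod_proxy, mod_proxy_http, mod_ssl and a "Listen 8080" directive
<VirtualHost *:8080>
    ServerName localhost
    # accept the container's self-signed certificate (assumption)
    SSLProxyEngine On
    SSLProxyVerify none
    SSLProxyCheckPeerName off
    ProxyPreserveHost On
    ProxyPass        / https://127.0.0.1:443/
    ProxyPassReverse / https://127.0.0.1:443/
</VirtualHost>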
Approach #2
You can add another nginx container to act as a reverse proxy. I'm not sure whether the application behind it will break, but it acts as an HTTP "plainer":
docker-compose.yml
# Add this:
plain_nginx:
  image: nginx
  volumes:
    - ./plain_nginx.conf:/etc/nginx/conf.d/default.conf
  ports:
    - 8080:80
  links:
    - openseedbox
plain_nginx.conf
server {
    listen 80;
    server_name _;
    access_log /dev/stdout;
    error_log /dev/stderr;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_pass https://openseedbox;
    }
}
Then, from the ./docker/ directory in that repo, run:
docker-compose up
Then you have http://localhost:8080 acting as a reverse proxy for the SSL stuff.
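To quickly verify once the stack is up:

# should return the app's response headers over plain HTTP
curl -I http://localhost:8080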

Multiple docker containers (domains) with NGINX as proxy

I have one server, one IP, and multiple websites.
What I am trying to do is isolate all the websites, each website in its own Docker container.
What I need is NGINX as a proxy, and one Docker container for each website.
I tried something like this (as root on the server):
/etc/hosts:
127.0.0.1 example.com
NGINX config:
http {
    upstream app-a {
        server 127.0.0.1:3000;
    }

    server {
        listen 80;
        server_name example.com www.example.com;

        location / {
            proxy_pass http://app-a;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Docker:
docker run -d -p 3000:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
curl example.com returns:
<title>Welcome to nginx!</title>
Why do I get the NGINX response and not the Apache server response?
If I connect to the Docker container, the Apache server is started:
sudo docker exec -i -t ID /bin/bash
curl 127.0.0.1 returns:
<html><body><h1>It works!</h1>
Also, on a remote computer (my PC), I have test.com pointing to my server's public IP. If I access example.com, I get the same NGINX response.
And one more question: is this a good approach, to isolate/run live websites in Docker containers? Or should I look for another VM solution?
Thanks in advance.
Edit: the problem was inside my NGINX config file. The strange thing is that I got no error from the NGINX server.
Changed from
location \ {
to
location / {
It works fine now. Thanks @Oliver Charlesworth.
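For the multiple-websites part of the question, the same pattern repeats per domain inside the http block: one upstream plus one server block, with each container published on its own host port. A sketch with hypothetical names (app-b, example2.com, port 3001 are placeholders, not from the question):

upstream app-b {
    server 127.0.0.1:3001;   # second container: docker run -d -p 3001:80 ...
}

server {
    listen 80;
    server_name example2.com www.example2.com;

    location / {
        proxy_pass http://app-b;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}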

Simple reverse proxy with Nginx (equivalent to Apache)

With Apache, I can make a reverse proxy work with this VirtualHost configuration.
I have executed nanoc view -p 8080 to use port 8080 for the nanoc web app.
With this setup, http://scalatra.prosseek is mapped to nanoc.
<VirtualHost *:80>
    ProxyPreserveHost On
    ServerName scalatra.prosseek
    ProxyPass /excluded !
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
I need to have the same setup with Nginx. With some trial and error, I could make it work with this configuration.
upstream aha { # ??? (1)
    server 127.0.0.1:8080;
    keepalive 8;
}

# the nginx server instance
server {
    listen 0.0.0.0:80;
    server_name scalatra.prosseek;
    access_log /usr/local/etc/nginx/logs/error_prosseek.log;

    location / {
        # ??? (2)
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://aha/; # ??? (1)
        proxy_redirect off; # ??? (3)
    }
}
It works, but I'm not sure if this is the best setup.
Here are my questions:
Is the setup OK for proxying http://scalatra.prosseek to localhost:8080?
Are these the correct proxy_set_header settings, or did I miss something?
For proxy_pass and upstream, is it OK as long as the two names are the same?
Do I need proxy_redirect off;?
Your configuration looks close.
Proxy headers should be fine. Normally Nginx passes headers through, so proxy_set_header is used when you want to modify them - for example, forcing the Host header to be present even if the client does not provide one.
For the proxy_pass and upstream, yes the names need to match.
Consider leaving proxy_redirect on (the default). This option controls whether Nginx rewrites responses like 301 and 302 redirects, including the port number. Turning it off means your upstream application must take responsibility for putting the correct public domain name and port in any redirect responses. Leaving it at the default means that if you accidentally direct the client to port 8080, Nginx would in some cases correct it to port 80 instead.
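Concretely, with the proxy_pass above, the implicit default is equivalent to this explicit form (a sketch for illustration):

# with "proxy_pass http://aha/;" in "location /", the default acts like:
proxy_redirect http://aha/ /;
# so a backend "Location: http://aha/login" becomes "Location: /login",
# and the client never sees the internal upstream name or port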
You also did not include the /excluded path in your nginx config. Add that in with:
location /excluded {
    return 403;
}

Nginx on SSL (443)

My goal is to redirect from port 80 to 443 (force https), but I can't manage to get a working https configuration first. I get a 503 Server Error, and nothing appears in the logs. I've looked at all the posts on SO and SF; none of them worked (the X_FORWARDED_PROTO and X-Forwarded-For headers don't make a difference). I'm on EC2 behind a load balancer, so I don't need the SSL-related directives, as I've configured my certificate on the ELB already. I'm using Tornado for the web server.
Here's the config; if anyone has ideas, thank you!
http {
    # Tornado server
    upstream frontends {
        server 127.0.0.1:8002;
    }

    server {
        listen 443;
        client_max_body_size 50M;
        root <redacted>/static;

        location ^~/static/ {
            root <redacted>/current;
            if ($query_string) {
                expires max;
            }
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
Well, there are two different tasks here:
If you need to redirect all your HTTP traffic to HTTPS, you'll need to create an HTTP server in nginx:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
Second note: if your SSL is terminated at the ELB, then you don't need an SSL-enabled nginx server at all. Simply pass traffic from the ELB to your server's port 80.
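Combining the two behind an ELB: both the HTTP and HTTPS listeners usually forward to the same backend port, so the redirect has to key off the X-Forwarded-Proto header the ELB adds. A minimal sketch, reusing the frontends upstream from the question (the example.com name is an assumption):

server {
    listen 80;
    server_name example.com;

    # the ELB terminates SSL and forwards everything to port 80;
    # redirect only requests that reached the ELB as plain http
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://frontends;
    }
}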