How to use an HAProxy server as a load balancer

I am trying to use an HAProxy server as a load balancer between my two web servers.
The individual servers are up and running on the specified ports.
But I am not able to reach the web servers through the HAProxy server.
I have configured HAProxy with standard settings, as shown below:
/etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local0
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:80
mode http
default_backend app
backend app
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server app1 10.368.240.116:9091
server app2 10.368.240.317:9092
#server app3 127.0.0.1:5003 check
#server app4 127.0.0.1:5004 check
I can successfully curl the above servers from my HAProxy host machine:
curl -i "10.368.240.116:9091"
HTTP/1.1 200 OK
curl -i "10.368.240.317:9091"
HTTP/1.1 200 OK
My HAProxy server is listening on port 80.
But when I hit it, I get a 503 error, as shown below:
curl -iv "127.0.0.1"
* About to connect() to 127.0.0.1 port 80 (#0)
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
HTTP/1.0 503 Service Unavailable
< Cache-Control: no-cache
Cache-Control: no-cache
< Connection: close
Connection: close
< Content-Type: text/html
Content-Type: text/html
<
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
* Closing connection 0
Can anyone point out where I am going wrong?
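One way to narrow down a 503 like this (a sketch, assuming the config above is the one actually loaded and that socat is available on the host) is to validate the file, make the health checks explicit, and then ask the running process how it sees the backend:
# validate the exact file the service loads
haproxy -c -f /etc/haproxy/haproxy.cfg
# "option httpchk" only takes effect on servers that carry the "check" keyword
server app1 10.368.240.116:9091 check
server app2 10.368.240.317:9092 check
# query the runtime stats socket declared in the global section
echo "show stat" | socat stdio UNIX-CONNECT:/var/lib/haproxy/stats
A persistent "No server is available to handle this request" means HAProxy considers every server in backend "app" unusable, so the status column of "show stat" (UP/DOWN/MAINT) is usually the quickest pointer to the cause.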

Related

Haproxy seemingly substitutes brotli with gzip in "Accept-Encoding" header

I'm struggling to figure out why HAProxy seemingly replaces br with gzip in the "Accept-Encoding" header as the request passes through HAProxy.
My application is currently structured like this:
HAPROXY(tls termination) -> varnish -> apache
So I test like this:
curl -I --http2 -H 'Accept-Encoding: br' -I https://mysite.dev:31753?tru
So I am sending a single request to HAProxy that strictly asks for brotli only (using curl)...
That is what I would expect to see arriving at Varnish, but what actually arrives at Varnish is these two requests:
HEAD request with br
GET request with gzip instead...
I'm so confused - why are there two requests now? I did not configure compression in HAProxy, so how can it be rewriting br to gzip?
Requests arriving at Varnish (captured with the tcpflow program):
172.030.000.035.41382-172.030.000.034.00080: HEAD /?tru HTTP/1.1
user-agent: curl/7.68.0
accept: */*
accept-encoding: br
host: mysite.dev:31753
x-client-ip: 192.168.10.103
x-forwarded-port: 31753
x-forwarded-proto: https
x-forwarded-for: 192.168.10.103
connection: close
172.030.000.034.41882-172.030.000.033.00080: GET /?tru HTTP/1.1
user-agent: curl/7.68.0
accept: */*
x-client-ip: 192.168.10.103
x-forwarded-port: 31753
x-forwarded-proto: https
X-Forwarded-For: 192.168.10.103, 172.30.0.35
host: mysite:31753
Accept-Encoding: gzip
X-Varnish: 328479
My HAProxy config looks like this:
global
maxconn 1024
log stdout format raw local0
ssl-default-bind-options ssl-min-ver TLSv1.2
defaults
log global
option httplog
option http-server-close
mode http
option dontlognull
timeout connect 5s
timeout client 20s
timeout server 45s
frontend fe-wp-combined
mode tcp
bind *:31753
tcp-request inspect-delay 2s
tcp-request content accept if HTTP
tcp-request content accept if { req.ssl_hello_type 1 }
use_backend be-wp-recirc-http if HTTP
default_backend be-wp-recirc-https
backend be-wp-recirc-http
mode tcp
server loopback-for-http abns@wp-haproxy-http send-proxy-v2
backend be-wp-recirc-https
mode tcp
server loopback-for-https abns@wp-haproxy-https send-proxy-v2
frontend fe-wp-https
mode http
bind abns@wp-haproxy-https accept-proxy ssl crt /certs/fullkeychain.pem alpn h2,http/1.1
# whatever you need to do for HTTPS traffic
default_backend be-wp-real
frontend fe-wp-http
mode http
bind abns@wp-haproxy-http accept-proxy
# whatever you need to do for HTTP traffic
redirect scheme https code 301 if !{ ssl_fc }
backend be-wp-real
mode http
balance roundrobin
option forwardfor
# Send these request to check health
option httpchk
http-check send meth HEAD uri / ver HTTP/1.1 hdr Host haproxy.local
http-response del-header Server
http-response del-header via
server wp-backend1 proxy-varnish:80 check
http-request set-header x-client-ip %[src]
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
Please help if anyone knows what's happening here - I'm extremely stumped.
Never mind. Upon further investigation it turned out that I was mixing up the tcpflow results.
Apparently it was Varnish all along that was automatically requesting gzip; after I disabled it, I got br back as expected.
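For anyone who wants to confirm on the HAProxy side what is actually forwarded, a small sketch (assuming the fe-wp-https frontend above and HAProxy 1.6 or later) is to capture the client's Accept-Encoding header so it shows up in the request log:
frontend fe-wp-https
mode http
bind abns@wp-haproxy-https accept-proxy ssl crt /certs/fullkeychain.pem alpn h2,http/1.1
# captured headers appear between braces in the httplog output
http-request capture req.hdr(Accept-Encoding) len 64
default_backend be-wp-real
The two flows in the tcpflow output are also consistent with Varnish's defaults: the first (towards .034.00080) is HAProxy talking to Varnish with br intact, while the second (towards .033.00080) is Varnish's own backend fetch, which it issues as a GET with Accept-Encoding normalized to gzip while its http_gzip_support parameter is enabled.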

Reverse proxy setup on haproxy gives 500; wget on base server & nginx works

I am trying to set up HAProxy as a reverse proxy for a server. I am on CentOS.
The config goes like this:
global
#log /dev/log local0
#log /dev/log local1 notice
log 127.0.0.1 local2 info
log 127.0.0.1 local2 notice
log 127.0.0.1 local2 debug
chroot /var/lib/haproxy
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend http_front
bind *:801
option forwardfor
stats enable
default_backend http_back
backend http_back
mode http
option httpchk
option forwardfor
http-send-name-header Host
balance roundrobin
server server1 stg-hostserv.com:80
But if I do a wget against it, I get the error below.
# wget http://0.0.0.0:801
--2018-07-16 14:26:24-- http://0.0.0.0:801/
Connecting to 0.0.0.0:801... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2018-07-16 14:26:24 ERROR 500: Internal Server Error.
haproxy -f /etc/haproxy/haproxy.cfg -d
[WARNING] 197/200148 (13833) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
00000000:http_front.accept(0004)=0006 from [127.0.0.1:60696]
00000000:http_front.clireq[0006:ffffffff]: GET / HTTP/1.1
00000000:http_front.clihdr[0006:ffffffff]: User-Agent: Wget/1.14 (linux-gnu)
00000000:http_front.clihdr[0006:ffffffff]: Accept: */*
00000000:http_front.clihdr[0006:ffffffff]: Host: 0.0.0.0:801
00000000:http_front.clihdr[0006:ffffffff]: Connection: Keep-Alive
00000000:http_back.srvrep[0006:0007]: HTTP/1.1 500 Internal Server Error
00000000:http_back.srvhdr[0006:0007]: Content-Type: text/html
00000000:http_back.srvhdr[0006:0007]: Server: Microsoft-IIS/8.0
00000000:http_back.srvhdr[0006:0007]: X-Powered-By: ASP.NET
00000000:http_back.srvhdr[0006:0007]: Date: Tue, 17 Jul 2018 12:02:00 GMT
00000000:http_back.srvhdr[0006:0007]: Connection: close
00000000:http_back.srvhdr[0006:0007]: Content-Length: 1208
00000001:http_front.clicls[0006:ffffffff]
00000001:http_front.closed[0006:ffffffff]
^C
[root@izp0w3tkx2yr8zhes26ajqz ~]#
I tried different configs for the server and consistently hit the 500 error. wget to the base server works without any issues.
I set up nginx to do the same thing and it works beautifully. Just HAProxy does not seem to work. The customer wants it on HAProxy. :)
Can you please advise where I can look to debug this further? I appreciate your assistance.
This update from nuster cache server helped solve the problem:
Does your backend (Microsoft-IIS/8.0) check the Host header? Since you set http-send-name-header Host, the request from HAProxy to stg-hostserv.com:80 looks like GET / HTTP/1.1 Host: izp0w3tkx2yr8zhes26ajqz
HAProxy worked when I set:
http-request set-header Host stg-hostserv.com
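In context, the fix amounts to dropping http-send-name-header and setting a fixed Host header in the backend; a sketch of the relevant section based on the config above:
backend http_back
mode http
option httpchk
option forwardfor
balance roundrobin
# send the hostname the IIS site expects instead of the name HAProxy would substitute
http-request set-header Host stg-hostserv.com
server server1 stg-hostserv.com:80
(http-send-name-header Host overwrites whatever Host header the client sent, and the IIS site behind stg-hostserv.com evidently does not answer for that value, hence the 500.)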

HAProxy times out (504 error) with large POST bodies

I have 3 Node.js web servers spun up on an Ubuntu box and HAProxy load-balancing those servers on the same box. HAProxy listens on port 80 (HTTP) and port 443 (HTTPS, with SSL termination). There's no SSL between the HAProxy server and the web servers.
A POST call to one of the APIs without SSL passes through with any value of Content-Length, but when I try the POST call with a Content-Length greater than 8055 over the HAProxy SSL connection (port 443), HAProxy times out with a 504 Gateway Timeout error.
Also, if I send an "Expect: 100-continue" header with the curl command, the server responds with some delay, which I don't want. Below is what the HAProxy config file looks like:
global
stats socket /var/run/haproxy.sock mode 0777
log 127.0.0.1 local0 info
log 127.0.0.1 local1 info
chroot /usr/share/haproxy
uid nobody
gid nobody
nbproc 1
daemon
maxconn 50000
frontend localnodes:https
bind *:443 ssl crt /etc/ssl/private/443_private_ssl_in.pem no-sslv3
mode http
reqadd X-Forwarded-Proto:\ https
default_backend nodes
timeout client 30000
frontend localnodes-http
bind *:80
mode http
reqadd X-Forwarded-Proto:\ http
default_backend nodes
timeout client 30000
backend nodes
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
log global
timeout connect 3000
timeout server 30000
option httplog
option ssl-hello-chk
option httpchk GET /
http-check expect status 404
server nodejsweb01 127.0.0.1:8000 check
server nodejsweb02 127.0.0.1:8001 check
server nodejsweb03 127.0.0.1:8002 check
I have ensured that the Node.js web servers behind it have no problem; they work fine. I have tried increasing the 'timeout server' period, with no effect.
I also tried a solution from this link that says to add an ssl ca-file option to the backend servers, as follows:
server nodejsweb01 127.0.0.1:8000 ssl ca-file /etc/ssl/certs/ca.pem check
server nodejsweb02 127.0.0.1:8001 ssl ca-file /etc/ssl/certs/ca.pem check
server nodejsweb03 127.0.0.1:8002 ssl ca-file /etc/ssl/certs/ca.pem check
but after this option HAProxy throws an error saying no servers are available in the backend.
Please tell me what I am doing wrong in the HAProxy conf file, so that the web servers respond successfully over the SSL connection.
Try this minimal config:
frontend localnodes
bind *:80
bind *:443 ssl crt /etc/ssl/private/443_private_ssl_in.pem
mode http
default_backend nodes
backend nodes
mode http
option forwardfor
option httplog
server nodejsweb01 127.0.0.1:8000 check
server nodejsweb02 127.0.0.1:8001 check
server nodejsweb03 127.0.0.1:8002 check
I suspect it's those additional options.
It could also be the SSL cert file. Is your PEM file created from a self-signed cert? How is it structured?
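On that last point: the crt argument expects a single PEM file containing the certificate, any intermediates, and the private key concatenated together. A sketch of assembling and testing one, with hypothetical input filenames:
# hypothetical files: leaf certificate, intermediate chain, private key
cat server.crt intermediate.crt server.key > /etc/ssl/private/443_private_ssl_in.pem
# confirm HAProxy parses the bundle and the rest of the config
haproxy -c -f /etc/haproxy/haproxy.cfg
As for the earlier "no servers available" error: adding ssl ca-file ... to the server lines makes HAProxy speak TLS to backends that only serve plain HTTP, so the health checks fail and the backend empties; the servers themselves do not need ssl when HAProxy terminates TLS at the frontend.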

HAProxy using 100% CPU at 200 concurrent connections

I've recently started to load test my app and found that HAProxy, for some reason, is not able to handle a lot of concurrent connections.
I'm only using HAProxy to load-balance my SSL traffic; for non-SSL (99% of my traffic is SSL) I use nginx.
I have tested my setup on blitz.io, and when sending traffic to non-SSL (200 concurrent connections) I get no timeouts or errors. However, when doing the same test over SSL (which HAProxy handles), I immediately get 100% CPU and requests time out.
This leads me to believe there is something wrong in my HAProxy config.
Below is my config; any ideas what could be wrong?
Oh, and I am running this on a medium EC2 CPU-optimized instance.
My haproxy.cfg:
global
maxconn 400000
ulimit-n 800019
nbproc 1
debug
daemon
log 127.0.0.1 local0 notice
defaults
mode http
option httplog
log global
stats enable
stats refresh 60s
stats uri /stats
maxconn 32768
frontend secured
timeout client 86400000
mode http
timeout client 120s
option httpclose
#option forceclose
option forwardfor
bind 0.0.0.0:443 ssl crt /etc/nginx/ssl/ssl-bundle.pem
acl is_sockjs path_beg /echo /broadcast /close # SockJS
acl is_express path_beg /probe /loadHistory /activity # Express
use_backend www_express if is_express
use_backend sockjs if is_sockjs
default_backend www_nginx
backend tcp_socket
mode http
server server1 xx.xx.xx.xx:8080 check port 8080
backend www_express
mode http
option forwardfor #this sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 xx.xx.xx.xx:8008 weight 1 maxconn 32768 check
backend sockjs
mode http
option forwardfor #this sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 xx.xx.xx.xx:8081 weight 1 maxconn 32768 check
backend www_nginx
mode http
option forwardfor #this sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 localhost:80 weight 1 maxconn 8024 check
listen stats :8181
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth helloxx:xx
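As a general note on the TLS cost (a sketch under assumptions: that the bottleneck is the TLS handshake and that the HAProxy version supports threads, i.e. 1.8 or later): option httpclose forces every request onto a fresh connection, so each request on port 443 pays a full TLS handshake, and a single process saturates a core quickly under that load. Keeping client connections open and spreading the handshakes across threads looks roughly like:
global
maxconn 50000
nbthread 4
# drop "debug" for load tests; it dumps every exchange to stdout
defaults
mode http
option http-server-close
timeout connect 5s
timeout client 120s
timeout server 120s
frontend secured
bind 0.0.0.0:443 ssl crt /etc/nginx/ssl/ssl-bundle.pem
# acl / use_backend lines as in the original config
default_backend www_nginx
With http-server-close (which still reuses the client-side connection) and TLS session resumption, most requests in a load test reuse an existing session instead of paying a new handshake every time.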

Haproxy not matching path (with express.js & Socket.IO)

My config file:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 2
daemon
log 127.0.0.1 local1 notice
defaults
mode http
frontend all 0.0.0.0:80
timeout client 86400000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
acl is_websocket path_beg /socket.io
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 localhost:9001 weight 1 maxconn 1024 check
server server2 localhost:9002 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
stats enable
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server server1 localhost:5000 weight 1 maxconn 1024 check
As far as I can tell www_backend matches everything. When my web app requests http://myapp.com/socket.io/1/?t=1335831853491 it returns a 404, and the header shows the response came from Express. The odd thing is when I do curl -I http://myapp.com/socket.io/1/?t=1335831853491 it returns:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: keep-alive
When I run sudo netstat -lptu I can confirm that my socket.io process is running on port 5000. Any thoughts?
Agreed with the response above. BTW, you should not use a 1-day timeout for the TCP connection to establish (timeout connect); it makes no sense at all and will cause connections to accumulate when your server goes down. A connection (especially a local one) is supposed to establish immediately. I tend to set a 5s timeout for connect, which is long enough even across slow networks.
Concerning the other long timeouts, I'm planning on implementing a "timeout tunnel" so that users don't have to use such large timeouts for normal traffic.
Answer found here:
https://serverfault.com/questions/248897/haproxy-access-list-using-path-dir-having-issues-with-firefox
"ust add "option http-server-close" to your defaults section and it should work."