My config file:
global
    maxconn 4096 # Total Max Connections. This is dependent on ulimit
    nbproc 2
    daemon
    log 127.0.0.1 local1 notice

defaults
    mode http

frontend all 0.0.0.0:80
    timeout client 86400000
    default_backend www_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    acl is_websocket path_beg /socket.io
    use_backend socket_backend if is_websocket

backend www_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server server1 localhost:9001 weight 1 maxconn 1024 check
    server server2 localhost:9002 weight 1 maxconn 1024 check

backend socket_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    stats enable
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server server1 localhost:5000 weight 1 maxconn 1024 check
As far as I can tell, www_backend matches everything. When my web app requests http://myapp.com/socket.io/1/?t=1335831853491 it returns a 404, and the headers show the response came from Express. The odd thing is that when I run curl -I http://myapp.com/socket.io/1/?t=1335831853491 it returns:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: keep-alive
When I run sudo netstat -lptu I can confirm that my socket.io process is running on port 5000. Any thoughts?
Agreed with the response above. BTW, you should not use a 1-day timeout for the TCP connection to establish (timeout connect); it makes no sense at all and will cause connections to accumulate when your server goes down. A connection (especially a local one) is supposed to establish immediately. I tend to set a 5s connect timeout, which is more than enough even across slow networks.
Concerning the other long timeouts, I'm planning on implementing a "timeout tunnel" so that users don't have to use such large timeouts for normal traffic.
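For reference, timeout tunnel has since been implemented (HAProxy 1.5 and later); a minimal sketch of how it lets you keep short base timeouts while still supporting long-lived WebSocket connections:

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    # only applies once the connection has become a tunnel
    # (e.g. after a completed WebSocket upgrade), so regular
    # HTTP traffic keeps the short timeouts above
    timeout tunnel 1h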
Answer found here:
https://serverfault.com/questions/248897/haproxy-access-list-using-path-dir-having-issues-with-firefox
"ust add "option http-server-close" to your defaults section and it should work."
I have a sample app running in a Kubernetes cluster with 3 replicas. I am exposing the app with type=LoadBalancer using metallb.
The external IP issued is 10.10.10.11.
When I run curl 10.10.10.11 I get a different pod responding to each request, as you would expect from round-robin. This is the behaviour I want.
I have now set up HAProxy with a backend pointing to 10.10.10.11; however, each time I access the HAProxy frontend, I get the same pod responding to every request. If I keep refreshing I intermittently get a different pod, sometimes after 20 refreshes, sometimes after 50+. I have tried clearing my browser history, but that has no effect.
I assume my HAProxy config is the cause of the problem, perhaps caching? But I have not configured any caching. I am an HAProxy newbie, so I might be missing something.
Here is my HAProxy config.
I have tried both mode tcp and mode http, but both give the same result (the same pod responding to each request).
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /home/simon/haproxy/haproxy_certs
    crt-base /home/simon/haproxy/haproxy_certs

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend https
    bind *:443 ssl crt /home/simon/haproxy/haproxy_certs
    timeout client 60s
    mode tcp

    # Hello App
    acl ACL_hello_app hdr(host) -i hello.xxxxxxxxxdomain2.com
    use_backend hello_app if ACL_hello_app

    # Nginx App
    acl ACL_nginx_app hdr(host) -i nginx.xxxxxxxxxdomain1.com
    use_backend nginx_app if ACL_nginx_app

backend hello_app
    timeout connect 10s
    timeout server 100s
    mode tcp
    server hello_app 10.10.10.11:80

backend nginx_app
    mode tcp
    server nginx_app 10.10.10.10:80
UPDATE
Upon further testing, the issue seems to be related to timeout client, timeout connect, and timeout server. If I reduce these to 1 second, I get a different pod every second; however, with timeouts that short I also get intermittent connection failures.
So I also have this question: is HAProxy able to work as a reverse proxy in front of another load balancer, or do I need to use another technology such as Nginx?
I eventually found the answer. I needed to use option http-server-close in my frontend settings. (With keep-alive, HAProxy reuses one TCP connection to the backend, so the load balancer behind it only ever sees that single connection; closing the server-side connection after each response lets every request be balanced anew.)
frontend https
    bind *:443 ssl crt /home/simon/haproxy/haproxy_certs
    http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
    timeout client 5000s
    option http-server-close
    mode http

    # Hello App
    acl ACL_hello_app hdr(host) -i hello.soxprox.com
    use_backend hello_app if ACL_hello_app

    # Nginx App
    acl ACL_nginx_app hdr(host) -i nginx.soxprox.com
    use_backend nginx_app if ACL_nginx_app
With these settings I get correct round-robin results from metallb.
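One caveat worth noting: option http-server-close is an HTTP-mode option, so for it to have any effect the backends must run in mode http as well (the backends in the question were mode tcp). A sketch of a matching backend, reusing the addresses from the question:

backend hello_app
    mode http
    timeout connect 10s
    timeout server 100s
    # a fresh server-side connection per request means the
    # load balancer behind HAProxy can balance every request
    server hello_app 10.10.10.11:80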
I have a load-balancing host at 192.168.1.12 that receives incoming HTTP/HTTPS traffic and balances it across the backends 10.0.1.12 and 10.0.1.13.
I'm using HA-Proxy version 1.8.4-1deb90d 2018/02/08.
My config:
global
    log 127.0.0.1 local2
    chroot /var/opt/rh/rh-haproxy18/lib/haproxy
    pidfile /var/run/rh-haproxy18-haproxy.pid
    maxconn 20000
    daemon

    # turn on stats unix socket
    stats socket /var/opt/rh/rh-haproxy18/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 15s
    timeout server 15s
    timeout http-keep-alive 5s
    timeout check 3s
    maxconn 20001

frontend http_frontend
    bind *:80
    default_backend http_backend

backend http_backend
    mode http
    server server1 10.0.1.12:8081 check
    server server1 10.0.1.13:8081 check
The service starts OK. Checking with curl:
# curl -iv 10.0.1.12:8081
# curl -iv 10.0.1.13:8081
Both return OK.
Why does
curl http://localhost
return 503 Service Unavailable ("No server is available to handle this request")?
Here is my haproxy.cfg file; I hope this helps you resolve your issue.
# Nur Load Balancer #
frontend tomcat-service
    bind *:8081
    default_backend tomcat-server
    mode http

backend tomcat-server
    balance roundrobin
    server mfsys-cm-01 192.168.10.31:8080 check
    server mfsys-cm-02 192.168.10.30:8080 check

listen stats
    bind *:8082
    stats enable
    stats hide-version
    stats show-node
    stats uri /stats
    stats auth admin:mypassword
    stats refresh 5s
I got the same error when accessing stats, and the solution was simple: I was not using the proper URL. It should be http://192.168.10.1:8082/stats.
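You can also check it from the command line; stats auth is plain HTTP basic auth, so with the credentials from the config above:

curl -u admin:mypassword http://192.168.10.1:8082/stats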
I am fairly new to HAProxy. I was able to successfully set up routing of frontend requests to a specific port on the backend. But now I have a request to route traffic to the same servers where the backend port the request needs to be routed to is the same as the incoming port. I tried the config below, among many options, but nothing seems to work.
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 1m
    timeout server 1m

frontend haproxynode_https
    bind 0.0.0.0:6443
    bind 0.0.0.0:10111
    bind 0.0.0.0:10121
    bind 0.0.0.0:10131
    bind 0.0.0.0:10141
    bind 0.0.0.0:10181
    bind 0.0.0.0:10191
    bind 0.0.0.0:10011
    bind 0.0.0.0:10021
    bind 0.0.0.0:10041
    bind 0.0.0.0:10051
    bind 0.0.0.0:10061
    bind 0.0.0.0:10071
    bind 0.0.0.0:10091
    bind 0.0.0.0:10241
    mode tcp
    option tcplog
    timeout client 1h
    default_backend backendnodes_https

backend backendnodes_https
    mode tcp
    timeout server 1h
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master XX.XXX.XX.XX weight 1 port 80 maxconn 512 check
    server master-1 XX.XXX.XX.XXX weight 1 port 80 maxconn 512 check
    server master-2 XX.XXX.XX.XX weight 1 port 80 maxconn 512 check
Any pointers are highly appreciated.
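For what it's worth, the HAProxy server keyword is documented so that when the address carries no port, HAProxy connects to the same port the client connected to on the frontend, while the separate port keyword only changes where health checks are sent. So a same-port pass-through backend would look roughly like this sketch (addresses elided as in the question):

backend backendnodes_https
    mode tcp
    balance roundrobin
    # no port after the address: traffic is forwarded to the same
    # port the client hit on the frontend; health checks probe 80
    server master XX.XXX.XX.XX check port 80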
First, run:
haproxy -f /etc/haproxy/haproxy.cfg -c
Is everything OK with the config file?
Add at the end:
listen stats
    bind :20000
    mode http
    stats enable
    stats uri /stats
    stats hide-version
    stats refresh 60
    stats realm Haproxy-Statistics
    stats auth admin:password
    stats admin if TRUE
Check the stats page: connect with a browser to
http://ip:20000/stats
Please send more info.
I know it is possible to make connections sticky based on a URL parameter:
https://serverfault.com/questions/495049/using-url-parameters-for-load-balancing-with-haproxy?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa
Is it also possible to do it based on an ID in the URL path?
If my URL is /objects/:objectId, can I somehow use that :objectId to make the connection sticky?
EDIT
I was able to load balance with requests made sticky on the URL path using the configuration below:
global
    #daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    stick-table type string size 200k expire 30m
    stick on path
    server server1 127.0.0.1:8000
    server server2 127.0.0.1:8001

listen stats
    bind 127.0.0.1:9000
    mode http
    log global
    maxconn 10
    stats enable
    stats hide-version
    stats refresh 5s
    stats show-node
    stats auth admin:password
    stats uri /haproxy?stats
The problem now is that if one of the servers goes down, the stick-table is not updated. How can I make it so that if one of the servers is not reachable, the entries in the stick-table that point to that server are deleted?
Final Answer
OK, I was able to figure that out. The configuration below makes requests stick on the URL path; HAProxy makes an HTTP GET to /health every 250 ms, and if it doesn't return 200, it considers the server down and removes that server's entries from the stick-table.
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    stick-table type string size 200k expire 30m
    option httpchk GET /health
    http-check expect status 200
    stick on path,word(2,/) if { path_beg /objects/ }
    server server1 127.0.0.1:8000 check inter 250
    server server2 127.0.0.1:8001 check inter 250

listen stats
    bind 127.0.0.1:9000
    mode http
    log global
    maxconn 10
    stats enable
    stats hide-version
    stats refresh 5s
    stats show-node
    stats auth admin:password
    stats uri /haproxy?stats
Use this:
stick on path,word(2,/) if { path_beg /objects/ }
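For example, for a request to /objects/12345, path,word(2,/) extracts the second slash-delimited word, 12345, so every request for the same object ID sticks to the same server.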
I've recently started to load test my app and found that HAProxy for some reason is not able to handle a lot of concurrent connections.
I'm only using HAProxy to load balance my SSL traffic; for non-SSL (99% of my traffic is SSL) I use nginx.
I have tested my setup on blitz.io, and when sending traffic to non-SSL (200 concurrent) I get no timeouts or errors. However, when doing the same test over SSL (which HAProxy handles) I immediately get 100% CPU and requests start timing out.
This leads me to believe there is something wrong in my HAProxy config.
Below is my config; any ideas what could be wrong?
Oh, and I am running this on a medium CPU-optimized EC2 instance.
My haproxy.cfg:
global
    maxconn 400000
    ulimit-n 800019
    nbproc 1
    debug
    daemon
    log 127.0.0.1 local0 notice

defaults
    mode http
    option httplog
    log global
    stats enable
    stats refresh 60s
    stats uri /stats
    maxconn 32768

frontend secured
    timeout client 86400000
    mode http
    timeout client 120s
    option httpclose
    #option forceclose
    option forwardfor
    bind 0.0.0.0:443 ssl crt /etc/nginx/ssl/ssl-bundle.pem
    acl is_sockjs path_beg /echo /broadcast /close # SockJS
    acl is_express path_beg /probe /loadHistory /activity # Express
    use_backend www_express if is_express
    use_backend sockjs if is_sockjs
    default_backend www_nginx

backend tcp_socket
    mode http
    server server1 xx.xx.xx.xx:8080 check port 8080

backend www_express
    mode http
    option forwardfor # this sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server server1 xx.xx.xx.xx:8008 weight 1 maxconn 32768 check

backend sockjs
    mode http
    option forwardfor # this sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server server1 xx.xx.xx.xx:8081 weight 1 maxconn 32768 check

backend www_nginx
    mode http
    option forwardfor # this sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server server1 localhost:80 weight 1 maxconn 8024 check

listen stats :8181
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth helloxx:xx