I have my backend maxconn set to 5000, but the limit will not go above 1000. The global maxconn shown in the screenshot is 2k; I changed that to 10k, but the backend limit still will not go above 1k.
Here is my config:
global
user haproxy
group haproxy
log /dev/log local0
log-tag loggy
chroot /var/lib/haproxy
daemon
quiet
stats socket /var/lib/haproxy/stats mode 777 level admin
pidfile /var/run/haproxy.pid
maxconn 10000
defaults
timeout connect 10s
timeout client 60s
timeout server 120s
timeout tunnel 1h
log global
mode http
balance roundrobin
option httplog
option dontlognull
option redispatch
stats uri /haproxy-status
frontend http-in
default_backend servers
bind *:80
maxconn 10000
acl is_record_http hdr(Upgrade) -i websocket
use_backend servers-record if is_record_http
use_backend servers if !is_record_http
frontend httpssl-in
default_backend servers-ssl
bind *:443
maxconn 10000
use_backend servers-ssl-record if { req_ssl_sni -i something.something.com }
use_backend servers-ssl if { req_ssl_sni -i www.something.com }
tcp-request inspect-delay 10s
tcp-request content accept if { req_ssl_hello_type 1 }
mode tcp
backend servers
server server-app something.com
backend servers-record
server server-record something.com
backend servers-ssl
server server-app-ssl something.com
acl clienthello req_ssl_hello_type 1
acl serverhello rep_ssl_hello_type 2
tcp-request inspect-delay 5s
tcp-request content accept if clienthello
stick on payload_lv(43,1) if clienthello
stick store-response payload_lv(43,1) if serverhello
maxconn 5000
mode tcp
stick-table type binary len 32 size 30k expire 30m
tcp-response content accept if serverhello
backend servers-ssl-record
server server-record-ssl something.com
acl clienthello req_ssl_hello_type 1
acl serverhello rep_ssl_hello_type 2
tcp-request inspect-delay 5s
tcp-request content accept if clienthello
stick on payload_lv(43,1) if clienthello
stick store-response payload_lv(43,1) if serverhello
maxconn 5000
mode tcp
stick-table type binary len 32 size 30k expire 30m
tcp-response content accept if serverhello
Per other answers and the documentation:
The backend limit is the value of fullconn, which by default is 10% of the maxconn of the frontends that can reach that backend. You only need to worry about the fullconn parameter if you have set the minconn parameter on server lines (to use dynamic maxconn); otherwise you can ignore it.
So the maximum number of connections is the sum of your backends' maxconn values, which will only be further limited if the global maxconn value is lower than that sum.
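Note also that maxconn is only accepted in the global, defaults, frontend and listen sections, not in a backend, so the maxconn 5000 lines inside the backends above have no effect; with a frontend maxconn of 10000, the backend's default fullconn works out to 10% of 10000 = 1000, which matches the 1k ceiling described in the question. Here is a minimal sketch (hypothetical names and addresses, not the asker's real setup) of how fullconn, minconn and per-server maxconn interact:
global
    maxconn 10000
frontend fe_https
    bind *:443
    maxconn 10000
    default_backend be_app
backend be_app
    # without this line, fullconn defaults to 10% of the maxconn of the
    # frontends that can reach this backend (10% of 10000 = 1000)
    fullconn 5000
    # with minconn set, each server's effective limit scales between
    # minconn and maxconn as the backend load approaches fullconn
    server app1 192.0.2.10:443 minconn 500 maxconn 2500 check
    server app2 192.0.2.11:443 minconn 500 maxconn 2500 check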
I am fairly new to HAProxy. I was able to successfully set it up to route frontend requests to a specific port on the backend, but now I have been asked to route requests to the same servers on different ports, where the backend port the request needs to be routed to is the same as the incoming port. I tried the config below, among many other options, but nothing seems to work:
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
defaults
mode http
log global
option httplog
option dontlognull
timeout connect 10s
timeout client 1m
timeout server 1m
frontend haproxynode_https
bind 0.0.0.0:6443
bind 0.0.0.0:10111
bind 0.0.0.0:10121
bind 0.0.0.0:10131
bind 0.0.0.0:10141
bind 0.0.0.0:10181
bind 0.0.0.0:10191
bind 0.0.0.0:10011
bind 0.0.0.0:10021
bind 0.0.0.0:10041
bind 0.0.0.0:10051
bind 0.0.0.0:10061
bind 0.0.0.0:10071
bind 0.0.0.0:10091
bind 0.0.0.0:10241
mode tcp
option tcplog
timeout client 1h
default_backend backendnodes_https
backend backendnodes_https
mode tcp
timeout server 1h
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server master XX.XXX.XX.XX weight 1 port 80 maxconn 512 check
server master-1 XX.XXX.XX.XXX weight 1 port 80 maxconn 512 check
server master-2 XX.XXX.XX.XX weight 1 port 80 maxconn 512 check
Any pointers are highly appreciated.
First, run:
haproxy -f /etc/haproxy/haproxy.cfg -c
Is everything OK with the config file?
Then add this at the end:
listen stats
bind :20000
mode http
stats enable
stats uri /stats
stats hide-version
stats refresh 60
stats realm Haproxy-Statistics
stats auth admin:password
stats admin if TRUE
Check the stats page: connect with a browser to
http://ip:20000/stats
If that doesn't help, please send more info.
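On the original routing question: as far as I understand, if no port is given on a server line, HAProxy forwards the connection to the same port the client connected to, so a single frontend/backend pair can cover all the ports without needing one backend per port. A minimal sketch under that assumption (hypothetical addresses; only a few of the binds shown):
frontend haproxynode_https
    mode tcp
    option tcplog
    bind 0.0.0.0:6443
    bind 0.0.0.0:10111
    bind 0.0.0.0:10121
    default_backend backendnodes_https
backend backendnodes_https
    mode tcp
    balance roundrobin
    # no port on the server address: the connection is forwarded to the
    # same port the client connected to on the frontend
    # (health checks need an explicit port, hence "check port 6443")
    server master 192.0.2.20 check port 6443
    server master-1 192.0.2.21 check port 6443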
I am looking for an HAProxy (version 1.5.18) configuration that will allow WebSocket load balancing as well as RabbitMQ load balancing. I have tried many options but none seem to work; below is my haproxy config file:
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 15s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
timeout tunnel 3600s
frontend http_web *:80
mode http
default_backend rgw
backend rgw
balance roundrobin
server rgw1 173.36.22.49:8080 maxconn 10000 weight 10 cookie rgw1 check
server rgw2 10.42.139.69:8080 maxconn 10000 weight 10 cookie rgw2 check
listen stats :9000
mode http
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats # Stats URI
stats auth websocketadmin:websocketadmin
listen ampq
bind *:61613
mode tcp
option clitcpka
server rabbit1 10.42.6.112:61613 check inter 1s rise 3 fall 1
server rabbit2 10.42.6.113:61613 check inter 1s rise 3 fall 1
server rabbit3 10.42.6.114:61613 check inter 1s rise 3 fall 1
server rabbit4 10.42.6.115:61613 check inter 1s rise 3 fall 1
HAProxy doesn't give any errors; it prints the messages below, but it doesn't work: I cannot connect to the WebSocket or to RabbitMQ. But as soon as I remove the "listen ampq" section, everything starts working fine.
Sep 8 21:00:40 localhost haproxy[3184]: Proxy http_web started.
Sep 8 21:00:40 localhost haproxy[3184]: Proxy rgw started.
Sep 8 21:00:40 localhost haproxy[3184]: Proxy stats started.
The problem was port 61613, which was already taken by another process. So I had to change to a new port and add it to the firewall rules, and it is working now.
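For illustration, a sketch of that kind of change (the replacement port 61614 is an assumption, not necessarily the one actually used): bind HAProxy on a free port while still forwarding to RabbitMQ's STOMP port 61613 on the backends, and open the new port in the firewall. Clients then connect to 61614 instead of 61613.
listen ampq
    # 61613 was already taken by another process on this host,
    # so bind HAProxy on a different (hypothetical) port
    bind *:61614
    mode tcp
    option clitcpka
    server rabbit1 10.42.6.112:61613 check inter 1s rise 3 fall 1
    server rabbit2 10.42.6.113:61613 check inter 1s rise 3 fall 1
    server rabbit3 10.42.6.114:61613 check inter 1s rise 3 fall 1
    server rabbit4 10.42.6.115:61613 check inter 1s rise 3 fall 1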
I'm trying to securely connect two servers (using reverse connectivity) with HAProxy. I'm using the following config for the proxy:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
#log loghost local0 info
maxconn 4096
uid 99
gid 99
daemon
debug
defaults
log global
log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}
mode http
option httplog
option dontlognull
retries 3
option redispatch
option http-server-close
maxconn 250
timeout connect 5000
timeout client 50000
timeout server 50000
frontend front_reverse
mode http
bind haproxy:8081 ssl crt /x509/certs/example.com.pem
use_backend back_reverse
backend back_reverse
mode http
option ssl-hello-chk
server onpremsrv example.com:8882 check
http-request set-header X-Real-IP %[src]
option forwardfor
listen stats
bind haproxy:9000
mode http
stats enable
stats uri /
stats hide-version
stats auth admin:admin
The server that receives the traffic from the backend outputs the following:
onprem_1 | TRACE [ssl#8 172.32.0.4:39376] RECEIVED: RESPONSE: 503 Service Unavailable HTTP/1.0 HEADERS: {Cache-Control=[no-cache], Connection=[close], Content-Type=[text/html]} CONTENT: HeapBuffer[pos=0 lim=0 cap=0: empty] [...] [...]
onprem_1 | TRACE [ssl#8 172.32.0.4:39376] RECEIVED: CONTENT: HeapBuffer[pos=105 lim=212 cap=272: 3C 68 74 6D 6C 3E 3C 62 6F 64 79 3E 3C 68 31 3E...] [...]
onprem_1 | TRACE [tcp#7 172.32.0.4:39376] RECEIVED: SESSION_UNSECURED
The connection to the second server gets closed. I believe it's related to the ssl part of the HAProxy config. Any ideas?
I managed to connect the two servers using SSL passthrough. The whole setup runs in Docker containers. First of all, I changed the hostname I used when generating the certificates (to the haproxy hostname). Then I slightly modified haproxy.cfg to reflect the changes in docker-compose.yml:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
#log loghost local0 info
maxconn 4096
uid 99
gid 99
daemon
debug
defaults
log global
log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}
mode http
option httplog
option dontlognull
retries 3
option redispatch
option http-server-close
maxconn 250
timeout connect 5000
timeout client 50000
timeout server 50000
# SSL/TLS Passthrough
frontend front_forward
mode tcp
bind haproxy:8080
use_backend back_forward
backend back_forward
server onpremsrv cloud:8881
mode tcp
timeout server 30s
frontend front_reverse
mode tcp
bind haproxy:8081
use_backend back_reverse
backend back_reverse
server onpremsrv cloud:8882
mode tcp
timeout server 30s
# SSL/TLS Passthrough
listen stats
bind haproxy:9000
mode http
stats enable
stats uri /
stats hide-version
stats auth admin:admin
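One trade-off of the passthrough approach: since HAProxy no longer terminates TLS in mode tcp, the http-request set-header X-Real-IP line from the original config can no longer be applied. If the backend still needs the original client address, one option (a sketch, assuming the application listening on cloud:8882 can parse the PROXY protocol, which is not shown in the question) is to enable it on the server line:
backend back_reverse
    mode tcp
    timeout server 30s
    # send-proxy prepends a PROXY protocol header carrying the original
    # client address; the receiving application must expect it
    server onpremsrv cloud:8882 send-proxy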
I've recently started to load test my app and found that HAProxy for some reason is not able to handle a lot of concurrent connections.
I'm only using HAProxy to load balance my SSL traffic; for non-SSL (99% of my traffic is SSL) I use nginx.
I have tested my setup on blitz.io, and when sending traffic to non-SSL (200 concurrent connections) I get no timeouts or errors. However, when doing the same test over SSL (which HAProxy handles), the CPU immediately hits 100% and requests start timing out.
This leads me to believe there is something wrong in my HAProxy config.
Below is my config, any ideas what could be wrong?
Oh, and I am running this on a medium EC2 CPU-optimized instance.
My haproxy.cfg:
global
maxconn 400000
ulimit-n 800019
nbproc 1
debug
daemon
log 127.0.0.1 local0 notice
defaults
mode http
option httplog
log global
stats enable
stats refresh 60s
stats uri /stats
maxconn 32768
frontend secured
timeout client 86400000
mode http
timeout client 120s
option httpclose
#option forceclose
option forwardfor
bind 0.0.0.0:443 ssl crt /etc/nginx/ssl/ssl-bundle.pem
acl is_sockjs path_beg /echo /broadcast /close # SockJS
acl is_express path_beg /probe /loadHistory /activity # Express
use_backend www_express if is_express
use_backend sockjs if is_sockjs
default_backend www_nginx
backend tcp_socket
mode http
server server1 xx.xx.xx.xx:8080 check port 8080
backend www_express
mode http
option forwardfor #this sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 xx.xx.xx.xx:8008 weight 1 maxconn 32768 check
backend sockjs
mode http
option forwardfor #this sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 xx.xx.xx.xx:8081 weight 1 maxconn 32768 check
backend www_nginx
mode http
option forwardfor #this sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 localhost:80 weight 1 maxconn 8024 check
listen stats :8181
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth helloxx:xx
My config file:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 2
daemon
log 127.0.0.1 local1 notice
defaults
mode http
frontend all 0.0.0.0:80
timeout client 86400000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
acl is_websocket path_beg /socket.io
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 localhost:9001 weight 1 maxconn 1024 check
server server2 localhost:9002 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
stats enable
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server server1 localhost:5000 weight 1 maxconn 1024 check
As far as I can tell, www_backend matches everything. When my web app requests http://myapp.com/socket.io/1/?t=1335831853491 it returns a 404, and the headers show the response came from Express. The odd thing is that when I do curl -I http://myapp.com/socket.io/1/?t=1335831853491 it returns:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: keep-alive
When I run sudo netstat -lptu I can confirm that my socket.io process is running on port 5000. Any thoughts?
Agreed with the response above. BTW, you should not use a one-day timeout for the TCP connection to establish (timeout connect); it makes no sense at all and will cause connections to accumulate when your server goes down. A connection (especially a local one) is supposed to establish immediately. I tend to set a 5s connect timeout, which is more than enough even across slow networks.
Concerning the other long timeouts, I'm planning on implementing a "timeout tunnel" so that users don't have to use such large timeouts for normal traffic.
Answer found here:
https://serverfault.com/questions/248897/haproxy-access-list-using-path-dir-having-issues-with-firefox
"ust add "option http-server-close" to your defaults section and it should work."