HAProxy, session sticky and balance algorithm - load-balancing

I have an HAProxy setup with the basic configuration below:
frontend fe_http
    bind *:80
    default_backend be_http
    capture request header AWESOME-HEADER len 40
backend be_http
    mode http
    option forwardfor
    balance hdr(AWESOME-HEADER)
    stick-table type string len 40 size 5M expire 1m
    stick on hdr(AWESOME-HEADER)
    server s1 x.x.x.x1:8080 check
    server s2 x.x.x.x2:8080 check
According to balance hdr(AWESOME-HEADER), requests with the same AWESOME-HEADER value will go to the same server, and my tests confirm that.
This is so-called "session stickiness", right? So do we still need the stick-table and stick on lines? (I did try removing these two lines and HAProxy still behaved like sticky sessions, as I expected; see the variant below.)
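For reference, a minimal sketch of that test variant (the same backend with only the two stick lines removed, so persistence relies on the header hash alone):
backend be_http
    mode http
    option forwardfor
    balance hdr(AWESOME-HEADER)
    server s1 x.x.x.x1:8080 check
    server s2 x.x.x.x2:8080 check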
Thanks.

Related

get client IP for an MQTT request over haproxy

I've configured X_FORWARDED_FOR to capture the client IP for HTTPS requests and it works as expected.
However, for MQTT the data is sent over SSL and HTTP/S does not come into the picture:
ssl://<HOST_NAME>:<PORT>
I've tried adding the following to the backend server in the HAProxy config. No luck so far.
backend TestServer
    mode tcp
    server TestServer01 10.6.186.24:48080 send-proxy-v2
------
    server TestServer01 10.6.186.24:48080 send-proxy
------
    server TestServer01 10.6.186.24:48080 send-proxy-v2-ssl
Is there a way to capture client (source) IP for an incoming MQTT request by changing HAProxy configuration?
No, there is nowhere in the MQTT protocol to store the original client IP address (unlike adding extra headers to HTTP requests).
The proxy is literally just forwarding packets that arrive on its public port to the backend servers (with the possible exception of doing SSL termination); it doesn't change the packets at all.
If you want the IP address for stick-table based abuse protection, you will need to key your stick-table with the MQTT client identifier instead.
For example, this will reject clients if their connection rate is greater than 1 per second over a 10-second window:
tcp-request content set-var(txn.client_id) req.payload(0,0),mqtt_field_value(connect,client_identifier) if data_in_buffer
stick-table type string len 64 size 100k expire 5m store gpc0,gpc0_rate(10s)
tcp-request content track-sc0 var(txn.client_id)
tcp-request content sc-inc-gpc0(0)
tcp-request content reject if { sc0_gpc0_rate gt 10 }
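For context, a minimal sketch of how those rules might sit inside a TCP frontend (assuming HAProxy 2.4+ for the mqtt_* converters; the bind port, frontend and backend names are placeholders). The inspect-delay is what gives HAProxy time to buffer the MQTT CONNECT packet before the content rules run:
frontend fe_mqtt
    mode tcp
    bind *:1883
    # wait for the CONNECT packet before evaluating content rules
    tcp-request inspect-delay 10s
    # drop anything that is not a valid MQTT stream
    tcp-request content reject unless { req.payload(0,0),mqtt_is_valid }
    # extract the client identifier and rate-track it in the stick-table
    tcp-request content set-var(txn.client_id) req.payload(0,0),mqtt_field_value(connect,client_identifier)
    stick-table type string len 64 size 100k expire 5m store gpc0,gpc0_rate(10s)
    tcp-request content track-sc0 var(txn.client_id)
    tcp-request content sc-inc-gpc0(0)
    tcp-request content reject if { sc0_gpc0_rate gt 10 }
    default_backend mqtt_servers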

haproxy failover active-passive

I want to set up HAProxy so that it switches to the passive server s2 after s1 fails, but does not switch back to s1 when s1 becomes healthy again. In other words, once it has switched to s2, HAProxy should keep sending requests to s2 even if s1 becomes available, and s1 should act as the passive server until s2 fails.
HAProxy configuration:
listen http_web 192.168.1.3:80
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    server server1 192.168.1.1:80 weight 1 maxconn 512 check backup
    server server2 192.168.1.2:80 weight 1 maxconn 512 check backup
I set backup on both servers, but when s1 fails HAProxy sends requests to s2, and when s1 becomes available again it sends requests back to s1.
Round-robin balancing mode means that both servers will get requests one by one.
If you want persistence, you should use the source balancing method or add cookies (see the sketch below).
Otherwise, if you don't need the load-balancing feature and just want an active-passive solution, you can use the keepalived service ;)
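A minimal sketch of the cookie-based persistence mentioned above, reusing the addresses from the question (the cookie name SERVERID is just an example):
listen http_web 192.168.1.3:80
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    # insert a persistence cookie so a client keeps hitting the same server
    cookie SERVERID insert indirect nocache
    server server1 192.168.1.1:80 weight 1 maxconn 512 check cookie s1
    server server2 192.168.1.2:80 weight 1 maxconn 512 check cookie s2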

Plone taking a long time to respond to byte-range request

We have two recently upgraded Plone 4.3.2 instances behind an HAProxy load balancer, which is itself behind Apache.
We limit each Plone instance to serving two concurrent requests using the HAProxy configuration.
We recently encountered an issue whereby a client sent 4 byte-range requests in quick succession for a PDF, each of which took between 6 and 8 minutes to get a response. This locked up all available request slots for 6 minutes, so HAProxy timed out other requests in the queue. The PDF is stored as an ATFile object in Plone, which I believe should have been migrated to blob storage in our recent upgrade.
My question is what steps should we take to prevent a similar scenario in the future?
I'm also interested in:
how to debug why the byte-range requests on an otherwise lightly loaded server should take so long to respond
how plone.app.blob deals with byte-range requests
is it possible to configure Apache such that byte-range requests are served from its cache but not from the back-end server
As requested, here is the haproxy.cfg with superfluous configuration stripped out.
global
    maxconn 450
    spread-checks 3
defaults
    log /dev/log local0
    mode http
    option http-server-close
    option abortonclose
    option redispatch
    option httplog
    timeout connect 7s
    timeout client 300s
    timeout queue 120s
    timeout server 300s
listen cms 127.0.0.1:18181
    id 3
    balance leastconn
    option httpchk
    http-check send-state
    timeout check 10s
    acl cms_edit url_dom xxx.xxx.xxx.xxx
    acl cms_not_ok nbsrv() lt 2
    block if cms_edit cms_not_ok
    server cms_instance1 app:18081 check downinter 10s maxconn 2 rise 1 slowstart 300s
    server cms_instance2 app:18082 check downinter 10s maxconn 2 rise 1 slowstart 300s
You can install https://pypi.python.org/pypi/Products.LongRequestLogger and check its log file to see where the request gets stuck.
I've opted to disable byte-range requests to the back-end Zope server. I've added the following to the cms listen section in HAProxy:
reqidel ^Range:.*
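A note in case you are on a newer HAProxy: the req* directives were removed in version 2.1, so on recent releases the equivalent rule (a sketch with the same effect of stripping the Range header) would be:
http-request del-header Range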

HAProxy with SSL (https) and Sticky Session

I need to set up a load balancer as an alternative to ELB on Amazon, as they have issues with connection timeouts.
Currently I'm using HAProxy and it works normally. However, I need to use SSL for users who want to connect over HTTPS (port 443) to the backend Apache servers, plus sticky sessions.
What would the configuration look like? I heard that HAProxy doesn't support SSL natively and can use stunnel or nginx/apache to handle the SSL termination.
I would appreciate anyone sharing their knowledge and experiences.
Thanks.
James
For HTTP, use something like this.
Change XXX.XXX.XXX.XXX to your IP address.
listen example-cluster XXX.XXX.XXX.XXX:80
    mode http
    stats enable
    stats auth user:password
    stick store-request src
    stick-table type ip size 200k expire 2m
    balance source
    cookie JSESSIONID prefix
    option httplog
    option httpclose
    option forwardfor
    option persist
    option redispatch
    option httpchk HEAD /check.txt HTTP/1.0
    server example-web1 XXX.XXX.XXX.XXX:80 cookie A check
    server example-web2 XXX.XXX.XXX.XXX:80 cookie B check
    server example-web3 XXX.XXX.XXX.XXX:80 cookie C check
    server example-web4 XXX.XXX.XXX.XXX:80 cookie D check
    server example-web5 XXX.XXX.XXX.XXX:80 cookie E check
For SSL, use mode tcp with balance source:
listen example-cluster-ssl XXX.XXX.XXX.XXX:443
    mode tcp
    stick store-request src
    stick-table type ip size 200k expire 2m
    option persist
    option redispatch
    option ssl-hello-chk
    balance source
    server example-web1 XXX.XXX.XXX.XXX:443 check
    server example-web2 XXX.XXX.XXX.XXX:443 check
    server example-web3 XXX.XXX.XXX.XXX:443 check
    server example-web4 XXX.XXX.XXX.XXX:443 check
    server example-web5 XXX.XXX.XXX.XXX:443 check
Another way is to upgrade your HAProxy to version 1.5, which has native SSL support, but that version isn't stable yet.
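If you do go the 1.5 route, a minimal sketch of native SSL termination might look like this (the certificate path and backend name are placeholders; the PEM file must contain the private key and the certificate chain):
frontend https_in
    bind XXX.XXX.XXX.XXX:443 ssl crt /etc/haproxy/certs/example.pem
    mode http
    reqadd X-Forwarded-Proto:\ https
    default_backend be_apache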
Take a look at the Stud project on GitHub, which combines extremely well with HAProxy; it is very performant, scalable, and uses very few resources. Many users are switching to it right now because it's simple and efficient.

How to do sticky load-balancing with HAProxy with Session transfer to new servers

I am using the appsession config element for sticky sessions. I have 5 WebLogic instances, 3 of which are active and serving load; when load increases I start the additional 2 instances. HAProxy marks them "healthy" but does not send any traffic to them because of stickiness.
How do I transfer existing sessions to the new WebLogic servers? I am using Terracotta for session clustering, so it does not matter which server serves the request. Below is my config for HAProxy.
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local0
    maxconn 1024
    daemon
    # debug
    #quiet
defaults
    log global
    mode http
    option httplog
    option httpchk
    option httpclose
    retries 3
    option redispatch
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    stats uri /admin?stats
    stats refresh 5s
listen terracotta 0.0.0.0:10001
    # balance url_param JSESSIONID
    balance roundrobin
    option httpchk OPTIONS /Townsend
    server L1_1 10.211.55.1:7003 check
    server L1_2 10.211.55.2:7004 check
    server L1_3 10.211.55.3:7004 check
    appsession JSESSIONID len 52 timeout 3h
Then, if it does not matter which server serves the request, disable stickiness and remove the appsession line (see the sketch below). You must understand that stickiness is the opposite of load balancing. If your issue is that you don't scale, don't stick in the first place.
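A minimal sketch of the listen section with stickiness removed, per the advice above (same servers as in the question):
listen terracotta 0.0.0.0:10001
    balance roundrobin
    option httpchk OPTIONS /Townsend
    server L1_1 10.211.55.1:7003 check
    server L1_2 10.211.55.2:7004 check
    server L1_3 10.211.55.3:7004 check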