I have the following haproxy backend configuration.
I'm surprised that on the HAProxy stats page the check is reported as "L6OK".
Why layer 6 and not layer 7?
backend back:lb
option httpchk GET / HTTP/1.1\r\nHost:\ www.domain.tld
default-server ca-file my.ca inter 10s maxconn 50 maxqueue 10 ssl check
server lb-01.{{ domain }} 10.10.10.30:443 weight 100
server lb-02.{{ domain }} 10.10.10.31:443 weight 100
For reference, I added the expected HTTP status code (302):
http-check expect status 302
and now the checks are reported at layer 7:
L7OK/302 in 12ms
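For comparison, here is the full backend with the expect rule in place — a minimal sketch reusing the values from above (CA file, domain template, and status code are the poster's own):

```
backend back:lb
    option httpchk GET / HTTP/1.1\r\nHost:\ www.domain.tld
    http-check expect status 302
    default-server ca-file my.ca inter 10s maxconn 50 maxqueue 10 ssl check
    server lb-01.{{ domain }} 10.10.10.30:443 weight 100
    server lb-02.{{ domain }} 10.10.10.31:443 weight 100
```

With the expect rule in place, HAProxy parses the HTTP response to validate the status code, which matches the L7OK/302 result reported above.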
I'd like to store a custom value in a stick-table and use it in another ACL to select the server.
I have this config, which creates a stick-table with the header value "x-external-id" as key and the server_id as its value.
frontend frontend
bind 125.213.51.144:8080
default_backend backend
backend backend
balance roundrobin
stick store-request req.hdr(x-external-id)
stick-table type string len 50 size 200k nopurge
server gw1 125.213.51.100:8080 check id 1
server gw2 125.213.51.101:8080 check id 2
This config produced this stick table:
# table: backend, type: string, size:204800, used:3
0x558955d52ac4: key=00000000000 use=0 exp=0 server_id=1
0x558955d53114: key=11111111111 use=0 exp=0 server_id=2
0x558955d87a34: key=22222222222 use=0 exp=0 server_id=2
The value (server_id) is set by HAProxy based on the server that handled the request. But I'd like to save a custom value here. Is that possible?
Apparently HAProxy doesn't allow storing custom values: only the server_id and tracking counters can be stored in a stick-table.
So I defined two backends, each with its own stick-table. Each client hits its own backend and populates that backend's table.
From another HAProxy section, I can then use the table_server_id converter to look up both tables and route to the backend whose table contains the entry.
############## Frontend ################
frontend my-frontend
bind 125.213.51.100:38989
acl is_service1 req.hdr(x-external-id),table_server_id(my-backend) -m int gt 0
use_backend my-backend if is_service1
acl is_service2 req.hdr(x-external-id),table_server_id(my-backend-2) -m int gt 0
use_backend my-backend-2 if is_service2
default_backend my-backend-default
############## Backend 1 ################
backend my-backend
balance roundrobin
stick-table type string len 50 size 200k nopurge
stick store-request req.hdr(x-external-id)
server service1 125.213.51.100:18989 check id 1 inter 10s fall 1 rise 1
server service2 125.213.51.200:18989 check id 2 backup
############## Backend 2 ################
backend my-backend-2
balance roundrobin
stick-table type string len 50 size 200k nopurge
stick store-request req.hdr(x-external-id)
server service2 125.213.51.100:18989 check id 2 inter 10s fall 1 rise 1
server service1 125.213.51.200:18989 check id 1 backup
############## Backend Default ################
backend my-backend-default
balance roundrobin
server service1 125.213.51.100:18989 check id 1
server service2 125.213.51.200:18989 check id 2
HAProxy: how to log the response body
I am able to capture the request body, but I am unable to log the response body.
I have tried multiple options without success.
Is there any way to log the response body?
Also, can it be done only for POST requests?
haproxy.cfg
global
log 127.0.0.1 local0
debug
maxconn 2000
user haproxy
group haproxy
defaults
log global
mode http
option httplog
option dontlognull
option http-keep-alive
timeout http-keep-alive 5m
timeout http-request 5s
timeout connect 10s
timeout client 300s
timeout server 300s
timeout check 2s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
balance roundrobin
option httpchk
frontend LB
bind *:80
option httpclose
option forwardfor
option http-buffer-request
declare capture request len 400000
#declare capture response len 400000
#http-response capture res.header id 0
http-request capture req.body id 0
log-format "%ci:%cp-[%t]-%ft-%b/%s-[%Tw/%Tc/%Tt]-%B-%ts-%ac/%fc/%bc/%sc/%rc-%sq/%bq body:%[capture.req.hdr(0)]/ Response: %[capture.res(0)]"
monitor-uri /
#default_backend LB
# BEGIN YPS ACLs
acl is_yps path_beg /ems
use_backend LB if is_yps
# END YPS ACLs
backend LB
option httpchk GET /ems/login.html
server 10.164.29.225 10.164.30.50:8080 maxconn 300 check
server 10.164.27.31 10.164.30.50:8080 maxconn 300 check backup
You can log the body by adding the lines below to the cfg file.
In the frontend you need to add these two lines:
declare capture request len 400000
http-request capture req.body id 0
As the default log line length is 1024, you need to raise the maximum length to log the full request:
log 127.0.0.1 len 65535 local0
There is no need to change the log format; the default log format works.
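Putting those lines together, a minimal frontend sketch (the backend name is a placeholder). The POST-only restriction from the question can be handled with a condition on the capture rule, since http-request rules accept an optional if clause:

```
global
    log 127.0.0.1 len 65535 local0

frontend LB
    bind *:80
    mode http
    # buffer the whole request so the body is available to the capture
    option http-buffer-request
    declare capture request len 400000
    # capture the request body; the trailing condition limits it to POSTs
    http-request capture req.body id 0 if { method POST }
    default_backend some_backend
```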
This answer is a little outdated, but the question was still important for me and I did not find the answer anywhere else. You can capture and log both the request body and the response body.
The tricky part is that you have to define the response capture in the backend section. It should look like this:
frontend
...
declare capture request len 80000
declare capture response len 80000
http-request capture req.body id 0
log /path/to/some/unix/socket format raw daemon debug debug
log-format Request\ %[capture.req.hdr(0)]\nResponse\ %[capture.res.hdr(0)]
backend
...
http-response capture res.body id 0
It works for me in version 2.2.
You cannot log the request/response body. Take a look at the values you can log: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.2.4
I am using HAProxy for load balancing my HTTP requests. I would like to know if there is any way to customize the selection of backend server based on the responses returned by each server. I have a servlet which can return the responses (number of clients connected to it). I would like to use this information and route the request to the backend server which has the lowest number.
My HAProxy configuration looks like:
listen http_front xx.xx.xx.xx:8080
mode http
option httpchk GET /servlet/GetClientCountServlet
server app1 xx.xx.xx.xx:8080 check port 8080
server app2 xx.xx.xx.xx:8080 check port 8080
server app3 xx.xx.xx.xx:8080 check port 8080
Would leastconn balance mode not work for your use case? Otherwise you can use Lua scripts to customize how HAProxy does the load balancing.
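For reference, a minimal leastconn sketch (server addresses are placeholders): each new request goes to the server with the fewest current connections, which approximates routing by client count without polling the servlet:

```
backend app_servers
    mode http
    # pick the server with the fewest established connections
    balance leastconn
    option httpchk GET /servlet/GetClientCountServlet
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
    server app3 10.0.0.3:8080 check
```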
As I am searching for a solution in the same direction, maybe this helps as a base:
Load balancing via a custom Lua script
Create a file called least_sessions.lua and add the following code:
local function backend_with_least_sessions(txn)
    -- Get the frontend that was used
    local fe_name = txn.f:fe_name()
    local least_sessions_backend = ""
    local least_sessions = 99999999999

    -- Loop through all the backends. You could change this
    -- so that the backend names are passed into the function too.
    for _, backend in pairs(core.backends) do
        -- Look only at backends whose names start with the name of
        -- the frontend, e.g. a "www_" prefix for the "www" frontend.
        if backend and backend.name:sub(1, #fe_name + 1) == fe_name .. '_' then
            local total_sessions = 0
            -- Using the backend, loop through each of its servers
            for _, server in pairs(backend.servers) do
                -- Get the server's stats
                local stats = server:get_stats()
                -- Add up the current sessions of all UP servers
                if stats['status'] == 'UP' then
                    total_sessions = total_sessions + stats['scur']
                    core.Debug(backend.name .. ": " .. total_sessions)
                end
            end
            if least_sessions > total_sessions then
                least_sessions = total_sessions
                least_sessions_backend = backend.name
            end
        end
    end

    -- Return the name of the backend that has the fewest sessions
    core.Debug("Returning: " .. least_sessions_backend)
    return least_sessions_backend
end

core.register_fetches('leastsess_backend', backend_with_least_sessions)
This code loops through all of the backends whose names start with the name of the current frontend, for example finding the backends www_dc1 and www_dc2 for the frontend www. It then finds the backend that currently has the fewest sessions and returns its name.
Use a lua-load directive to load the file into HAProxy. Then add a use_backend line to your frontend to route traffic to the backend that has the fewest active sessions.
global
lua-load /path/to/least_sessions.lua
frontend www
bind :80
use_backend %[lua.leastsess_backend]
backend www_dc1
balance roundrobin
server server1 192.168.10.5:8080 check maxconn 30
backend www_dc2
balance roundrobin
server server1 192.168.11.5:8080 check maxconn 30
More details:
https://www.haproxy.com/de/blog/5-ways-to-extend-haproxy-with-lua/
I am trying to solve a scenario with HAProxy. The scenario is as follows:
Block all IPs by default.
Allow connections only from specific IP addresses.
If a connection comes from a whitelisted IP, it should be rejected if there are more than 10 concurrent connections within 30 seconds.
I want to do this to reduce the number of API calls to my server. Could anyone please help me with this?
Thanks
The first two requirements are easy: simply allow only the whitelisted IP:
acl whitelist src 10.12.12.23
use_backend SOMESERVER if whitelist
The third one, throttling, requires a stick-table (there are many data types: conn, sess, and http counters and rates) used as a rate counter:
# up to 200k entries, counting requests over 60s periods
stick-table type ip size 200k expire 100s store http_req_rate(60s)
Next you have to fill the table by tracking each request, e.g. by source IP:
tcp-request content track-sc0 src
# more info at http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4.2-tcp-request%20connection
and finally the ACL:
# more than 5 requests per minute from this IP?
acl http_rate_abuse sc0_http_req_rate gt 5
# update use_backend condition
use_backend SOMESERVER if whitelist !http_rate_abuse
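To match the original requirement (10 connections within 30 seconds), the table could track a connection rate instead of an HTTP request rate — a sketch along the same lines:

```
stick-table type ip size 200k expire 60s store conn_rate(30s)
tcp-request content track-sc0 src
# more than 10 connections per 30s from this IP?
acl conn_rate_abuse sc0_conn_rate gt 10
use_backend SOMESERVER if whitelist !conn_rate_abuse
```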
For example, a working config file with customized errors:
global
log /dev/log local1 debug
defaults
log global
mode http
option httplog
retries 3
option redispatch
maxconn 2000
timeout connect 5000
timeout client 50000
timeout server 50000
frontend http
bind *:8181
stick-table type ip size 200k expire 100s store http_req_rate(60s)
tcp-request content track-sc0 src
acl whitelist src 127.0.0.1
acl http_rate_abuse sc0_http_req_rate gt 5
use_backend error401 if !whitelist
use_backend error429 if http_rate_abuse
use_backend realone
backend realone
server local stackoverflow.com:80
# too many requests
backend error429
mode http
errorfile 503 /etc/haproxy/errors/429.http
# unauthenticated
backend error401
mode http
errorfile 503 /etc/haproxy/errors/401.http
Note: the error handling is a bit tricky. Because the error backends above have no server entries, HAProxy throws HTTP 503; the errorfile directives catch it and send a different error (with a different code).
Example /etc/haproxy/errors/401.http content:
HTTP/1.0 401 Unauthenticated
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>401 Unauthenticated</h1>
</body></html>
Example /etc/haproxy/errors/429.http content:
HTTP/1.0 429 Too many requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>429 Too many requests</h1>
</body></html>
Is it possible to have HAProxy fail over when it encounters certain HTTP status codes?
I have the following generic HAProxy config that works fine if the Tomcat server itself stops/fails. However, I would also like to fail over when HTTP status 502 Bad Gateway or 500 Internal Server Error is returned by Tomcat. The following configuration continues to send traffic to a node even when 500 or 404 status codes are encountered on it.
backend db01_replication
mode http
bind 192.168.0.1:80
server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2
Thanks in advance
I found the following HAProxy http-check expect rules, which make health checks fail based on HTTP status codes:
# Only accept status 200 as valid
http-check expect status 200
# Consider SQL errors as errors
http-check expect ! string SQL\ Error
# Consider all http status 5xx as errors
http-check expect ! rstatus ^5
In order to fail over when a 5xx error is encountered, the HAProxy configuration would look like (note that http-check expect requires option httpchk, and bind is not valid in a backend section):
backend App1_replication
mode http
option httpchk
http-check expect ! rstatus ^5
server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2
Source
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#http-check%20expect