
HAProxy: How to log Response Body
I am able to capture the request body, but I am unable to log the response body. I have tried multiple options, but I cannot capture the response body.
Is there any way to log the response body?
Also, can it be done only for POST requests?
HaProxy.cfg
global
    log 127.0.0.1 local0
    debug
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-keep-alive
    timeout http-keep-alive 5m
    timeout http-request 5s
    timeout connect 10s
    timeout client 300s
    timeout server 300s
    timeout check 2s
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    balance roundrobin
    option httpchk

frontend LB
    bind *:80
    option httpclose
    option forwardfor
    option http-buffer-request
    declare capture request len 400000
    #declare capture response len 400000
    #http-response capture res.header id 0
    http-request capture req.body id 0
    log-format "%ci:%cp-[%t]-%ft-%b/%s-[%Tw/%Tc/%Tt]-%B-%ts-%ac/%fc/%bc/%sc/%rc-%sq/%bq body:%[capture.req.hdr(0)]/ Response: %[capture.res(0)]"
    monitor-uri /
    #default_backend LB
    # BEGIN YPS ACLs
    acl is_yps path_beg /ems
    use_backend LB if is_yps
    # END YPS ACLs

backend LB
    option httpchk GET /ems/login.html
    server 10.164.29.225 10.164.30.50:8080 maxconn 300 check
    server 10.164.27.31 10.164.30.50:8080 maxconn 300 check backup

You can log the body by adding the following to the cfg file.
In the frontend you need to add these two lines:
declare capture request len 400000
http-request capture req.body id 0
The default log line length is 1024, so to log the full request you also need to raise the maximum length on the log directive:
log 127.0.0.1 len 65535 local0
There is no need to change the log format; the default log format works.
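Putting that together, a minimal sketch of a configuration that logs the request body only for POST requests might look like the following (the frontend/backend names, port, and len values are taken from the question or chosen as examples; METH_POST is HAProxy's predefined ACL matching POST requests):

global
    # raise the maximum log line length so the captured body is not truncated
    log 127.0.0.1 len 65535 local0

frontend LB
    bind *:80
    # buffer the whole request so req.body is available to the capture
    option http-buffer-request
    # reserve a capture slot; the usable size is still bounded by tune.bufsize
    declare capture request len 400000
    # store the request body in capture slot 0, but only for POST requests
    http-request capture req.body id 0 if METH_POST
    default_backend LB

This is consistent with the note above that the default log format does not need to be changed.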

This answer is a little bit outdated, but it is still important for me, and I did not find the answer anywhere else. You can capture and log both the response body and the request body.
The tricky thing is that you have to define the response capture in the backend section. It should look like this:
frontend
    ...
    declare capture request len 80000
    declare capture response len 80000
    http-request capture req.body id 0
    log /path/to/some/unix/socket format raw daemon debug debug
    log-format Request\ %[capture.req.hdr(0)]\nResponse\ %[capture.res.hdr(0)]
backend
    ...
    http-response capture res.body id 0
It works for me in version 2.2

You cannot log the request/response body. Take a look at the values you can log: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.2.4

Related

HAProxy health check on HTTPS backend: strange results

I have the following haproxy backend configuration.
I'm surprised that on the haproxy status page the check is reported as "L6OK".
Why Layer 6 and not Layer 7?
backend back:lb
    option httpchk GET / HTTP/1.1\r\nHost:\ www.domain.tld
    default-server ca-file my.ca inter 10s maxconn 50 maxqueue 10 ssl check
    server lb-01.{{ domain }} 10.10.10.30:443 weight 100
    server lb-02.{{ domain }} 10.10.10.31:443 weight 100
For information, I added the expected HTTP return code (302):
http-check expect status 302
and now checks are reported at L7:
L7OK/302 in 12ms
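For reference, a sketch of the backend from the question with the expect rule added (same placeholder servers and ca-file; everything else is unchanged):

backend back:lb
    option httpchk GET / HTTP/1.1\r\nHost:\ www.domain.tld
    # with this rule the check result is reported at layer 7 (L7OK/302)
    # instead of L6OK, as described above
    http-check expect status 302
    default-server ca-file my.ca inter 10s maxconn 50 maxqueue 10 ssl check
    server lb-01.{{ domain }} 10.10.10.30:443 weight 100
    server lb-02.{{ domain }} 10.10.10.31:443 weight 100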

Only a dash written in the Apache access log

A normal log looks like this:
111.111.111.111 222.222.222.222 - - [06/Jun/2017:02:19:00 +0900] "GET /monitor/l7check.nhn HTTP/1.1" 200 4 1222 "-" "-"
but some logs look like this:
111.111.111.111 333.333.333.333 - - [06/Jun/2017:02:18:58 +0900] "-" 408 - 13 "-" "-"
I can't understand the meaning of this log entry.
Why does it have only a dash instead of a request line ("GET <URL> ...")?
Is it possible for a request to be logged without any URL being requested?
https://www.rfc-editor.org/rfc/rfc7231#section-6.5.7
6.5.7. 408 Request Timeout
The 408 (Request Timeout) status code indicates that the server did not receive a complete request message within the time that it was prepared to wait. A server SHOULD send the "close" connection option (Section 6.1 of [RFC7230]) in the response, since 408 implies that the server has decided to close the connection rather than continue waiting. If the client has an outstanding request in transit, the client MAY repeat that request on a new connection.
So, the client connected, but did not send any HTTP request. The server waited, and eventually closed the connection.

HAProxy ACL: block all connections to HAProxy by default and allow only a specific IP

I am trying to solve a scenario using haproxy. The scenario is as follows:
Block all IPs by default
Allow connections only from a specific IP address
If a connection comes from a whitelisted IP, reject it if there are more than 10 concurrent connections in 30 seconds
I want to do this to reduce the number of API calls to my server. Could anyone please help me with this?
Thanks
The first two are easy: simply allow only the whitelisted IP.
acl whitelist src 10.12.12.23
use_backend SOMESERVER if whitelist
The third, throttling, requires stick tables (there are many data types: conn, sess and http counters, rates, ...) used as a rate counter:
# 200k entries max; count the HTTP request rate over 60s periods
stick-table type ip size 200k expire 100s store http_req_rate(60s)
Next you have to fill the table by tracking each request, e.g. by source IP:
tcp-request content track-sc0 src
# more info at http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4.2-tcp-request%20connection
And finally the ACL:
# is there more than 5 req/1 min from this IP?
acl http_rate_abuse sc0_http_req_rate gt 5
# update the use_backend condition
use_backend SOMESERVER if whitelist !http_rate_abuse
For example, here is a working config file with customized errors:
global
    log /dev/log local1 debug

defaults
    log global
    mode http
    option httplog
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http
    bind *:8181
    stick-table type ip size 200k expire 100s store http_req_rate(60s)
    tcp-request content track-sc0 src
    acl whitelist src 127.0.0.1
    acl http_rate_abuse sc0_http_req_rate gt 5
    use_backend error401 if !whitelist
    use_backend error429 if http_rate_abuse
    use_backend realone

backend realone
    server local stackoverflow.com:80

# too many requests
backend error429
    mode http
    errorfile 503 /etc/haproxy/errors/429.http

# unauthenticated
backend error401
    mode http
    errorfile 503 /etc/haproxy/errors/401.http
Note: the error handling is a bit tricky. Because the error backends above have no server entries, haproxy will return HTTP 503; the errorfile directives catch it and send a different error (with a different code).
Example /etc/haproxy/errors/401.http content:
HTTP/1.0 401 Unauthenticated
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>401 Unauthenticated</h1>
</body></html>
Example /etc/haproxy/errors/429.http content:
HTTP/1.0 429 Too many requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>429 Too many requests</h1>
</body></html>
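If you want to verify that the throttling is counting requests as expected, one option (not part of the answer above; the socket path is just a placeholder) is to enable the runtime API and dump the stick table of the "http" frontend:

global
    # expose the runtime API so the stick table can be inspected, e.g. with:
    #   echo "show table http" | socat stdio /var/run/haproxy.sock
    stats socket /var/run/haproxy.sock mode 600 level admin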

HAProxy failover based on HTTP status

Is it possible to have HAProxy fail over when it encounters certain HTTP status codes?
I have the following generic haproxy configuration that works fine if the tomcat server itself stops/fails. However, I would also like to fail over when HTTP status codes 502 Bad Gateway or 500 Internal Server Error are returned by tomcat. The following configuration keeps sending traffic even when 500 or 404 status codes are encountered on a node.
backend db01_replication
    mode http
    bind 192.168.0.1:80
    server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
    server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
    server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2
Thanks In Advance
I found that the following HAProxy http-check expect rules solve load balancing based on HTTP status codes:
# Only accept status 200 as valid
http-check expect status 200
# Consider SQL errors as errors
http-check expect ! string SQL\ Error
# Consider all http status 5xx as errors
http-check expect ! rstatus ^5
In order to fail over when a 500 error is encountered, the HAProxy configuration would look like:
backend App1_replication
    mode http
    bind 192.168.0.1:80
    http-check expect ! rstatus ^5
    server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
    server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
    server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2
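One detail that is an assumption on my part rather than part of the answer: http-check expect evaluates the response of the HTTP health check enabled by option httpchk, so the backend would normally carry both directives (the bind line from the question is omitted here, since bind belongs in a frontend or listen section). A sketch with the same placeholder servers:

backend App1_replication
    mode http
    # enable HTTP health checks (defaults to "OPTIONS /")
    option httpchk
    # mark a server as failing when its check response has a 5xx status
    http-check expect ! rstatus ^5
    server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
    server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
    server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2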
Source
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#http-check%20expect

Why is a Rebol raw HTTP HEAD request to get a remote file size very slow compared to the info? function?

The response with info? is very quick:
i: info? http://cdimage.ubuntu.com/daily/current/natty-alternate-i386.iso
i/size
With a raw HTTP HEAD request it takes maybe 10 times longer. Why?
port: open tcp://cdimage.ubuntu.com:80
insert port "HEAD /daily/current/natty-alternate-i386.iso HTTP/1.1 ^/"
insert port "Host: cdimage.ubuntu.com ^/^/"
out: copy ""
while [data: copy port][append out data]
block: parse out rejoin [": " newline]
select block "Content-Length"
The port modes are responsible in this case. You were using buffered I/O with the wait mode (which is on by default).
In HTTP, the client is responsible for closing the port once it has read all the bytes from the server.
Since you are basically using TCP directly, via insert port, you are also responsible for detecting the end of the response and closing the port when enough bytes have arrived. This can only be done with /lines or /no-wait when doing low-level TCP work.
That is something read and info? do for you.
while [data: copy port][append out data]
doesn't terminate until a timeout occurs (which is 30 seconds by default in REBOL).
Also, your request seems to be in error...
try this:
port: open/lines tcp://cdimage.ubuntu.com:80
insert port {HEAD /daily/current/natty-alternate-i386.iso HTTP/1.0
Accept: */*
Connection: close
User-Agent: REBOL View 2.7.7.3.1
Host: cdimage.ubuntu.com
}
out: form copy port
block: parse out none ;rejoin [": ^/"]
probe select block "Content-Length:"
Here it seems that adding /lines prevents the wait; it's probably related to how the http scheme handles line mode on open.
Look up REBOL port modes in the documentation and on the net; they are well explained all over the place.
If you had used trace/net on, you'd have realized that all the packets were received and that the interpreter was just still waiting. By the way, your code actually returned an error 400 in my tests.