haproxy 504 timeout to apache

Very new to haproxy and loving it, apart from a 504 issue that we're getting. The relevant log output is:
Jun 21 13:52:06 localhost haproxy[1431]: 192.168.0.2:51435 [21/Jun/2017:13:50:26.740] www-https~ beFootprints/foorprints 0/0/2/-1/100003 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 13:54:26 localhost haproxy[1431]: 192.168.0.2:51447 [21/Jun/2017:13:52:46.577] www-https~ beFootprints/foorprints 0/0/3/-1/100005 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 14:15:57 localhost haproxy[1431]: 192.168.0.1:50225 [21/Jun/2017:14:14:17.771] www-https~ beFootprints/foorprints 0/0/2/-1/100004 504 195 - - sH-- 3/3/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 14:22:26 localhost haproxy[1431]: 192.168.0.1:50258 [21/Jun/2017:14:20:46.608] www-https~ beFootprints/foorprints 0/0/2/-1/100003 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
We are using the following timeout values in haproxy.cfg:
defaults
log global
mode http
option forwardfor
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 100000
Running on Ubuntu 16.04.2 LTS
Any help and comment very much appreciated!

The problem appears to be with the web server. Check the logs there, and you should find long-running requests.
Here's how I conclude that.
Note sH-- in your logs. This is the session state at disconnection. It's extremely valuable for troubleshooting. The values are positional and case-sensitive.
s: the server-side timeout expired while waiting for the server to send or receive data.
...so, timeout server fired, while...
H: the proxy was waiting for complete, valid response HEADERS from the server (HTTP only).
The server had not finished (perhaps had not even started) returning the response headers to the proxy, but the connection was established and the request had been sent.
HAProxy returns 504 Gateway Timeout, indicating that the backend did not respond in a timely fashion.
Note that the timer fields 0/0/2/-1/100003 in your log lines are Tq/Tw/Tc/Tr/Tt: Tr = -1 means the response headers never arrived, and the total time Tt of roughly 100000 ms matches your timeout server. If your backend genuinely needs longer than 100 seconds (?!), then you need to increase timeout server. Otherwise, your Apache server has a problem: it is simply too slow to respond.
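If the long-running login POST is expected behaviour, here is a minimal sketch of the change, keeping the rest of the defaults section as posted (the 300s value is only an example; pick whatever your application genuinely needs):
defaults
log global
mode http
option forwardfor
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
# values without a unit are milliseconds; an explicit unit such as "s" reads more clearly
timeout server 300s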

I had a similar issue and found the problem was with how I had configured my backend server section.
backend no_match_backend
mode http
balance roundrobin
option forwardfor
option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
server nginx-example 192.168.0.10 check port 80
My problem was that I did not specify a port on the server line.
Connecting via plain HTTP worked, but since SSL is terminated on my haproxy, it attempted to connect to the backends on port 443.
The backends cannot complete the SSL session setup with haproxy, and that failed handshake is what caused the gateway timeout.
I needed to force unencrypted communication to the backends.
backend no_match_backend
mode http
balance roundrobin
option forwardfor
option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
server nginx-example 192.168.0.10:80 check port 80
The change might be hard to spot: the line server nginx-example 192.168.0.10 check port 80 now has :80 after the IP, i.e. 192.168.0.10:80.
This problem was made more complicated by my backend servers having HTTP-to-HTTPS redirects configured, so requests arriving as HTTP were redirected to HTTPS, which made it difficult to identify where the problem was.
It looked as if HTTPS requests were being forwarded correctly to the backend servers. I needed to disable this redirect on the backend servers and move it into the haproxy config instead.
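For reference, a minimal sketch of doing that redirect in HAProxy itself instead of on the backends (the frontend names and certificate path here are placeholders, not taken from my real config):
frontend www-http
bind *:80
# send plain-HTTP clients to HTTPS at the proxy, so the backends only ever see traffic on port 80
redirect scheme https code 301 if !{ ssl_fc }
frontend www-https
bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
default_backend no_match_backend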

Related

metallb round robin not working when accessed from external HAProxy

I have a sample app running in a kubernetes cluster with 3 replicas. I am exposing the app with type=LoadBalancer using metallb.
The external ip issued is 10.10.10.11
When I run curl 10.10.10.11 I get a different pod responding for each request as you would expect from round robin. This is the behaviour I want.
I have now setup HAProxy with a backend pointing to 10.10.10.11, however each time I access the HAProxy frontend, I get the same node responding to each request. If I keep refreshing I intermittently get different pods, sometimes after 20 refreshes, sometimes after 50+ refreshes. I have tried clearing my browser history, but that has no effect.
I assume it is my HAProxy config that is causing the problem, perhaps caching? But I have not configured any caching. I am an HAProxy newbie, so I might be missing something.
Here is my HAProxy config.
I have tried both mode tcp and mode http, but both give the same result (the same pod responding to each request)
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /home/simon/haproxy/haproxy_certs
crt-base /home/simon/haproxy/haproxy_certs
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
defaults
log global
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend https
bind *:443 ssl crt /home/simon/haproxy/haproxy_certs
timeout client 60s
mode tcp
#Hello App
acl ACL_hello_app hdr(host) -i hello.xxxxxxxxxdomain2.com
use_backend hello_app if ACL_hello_app
#Nginx App
acl ACL_nginx_app hdr(host) -i nginx.xxxxxxxxxdomain1.com
use_backend nginx_app if ACL_nginx_app
backend hello_app
timeout connect 10s
timeout server 100s
mode tcp
server hello_app 10.10.10.11:80
backend nginx_app
mode tcp
server nginx_app 10.10.10.10:80
UPDATE
Upon further testing, the issue seems to be related to timeout client, timeout connect and timeout server. If I reduce these to 1 second, I get a different pod every second, but with timeouts that short I also get intermittent connection failures.
So I also have the question: is HAProxy able to work as a reverse proxy in front of another load balancer, or do I need to use another technology such as Nginx?
I eventually found the answer. I needed to use option http-server-close in my frontend settings.
frontend https
bind *:443 ssl crt /home/simon/haproxy/haproxy_certs
http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
timeout client 5000s
option http-server-close
mode http
#Hello App
acl ACL_hello_app hdr(host) -i hello.soxprox.com
use_backend hello_app if ACL_hello_app
#Nginx App
acl ACL_nginx_app hdr(host) -i nginx.soxprox.com
use_backend nginx_app if ACL_nginx_app
With these settings I get correct round-robin results from metallb. My understanding is that without option http-server-close HAProxy keeps its server-side connection to 10.10.10.11 open and reuses it, so the load balancer behind it only ever sees one long-lived connection and keeps sending every request to the same pod; closing the server-side connection after each response forces a new connection, and therefore a new balancing decision, for each request.

Reverse proxy setup on haproxy gives 500; wget on base server & nginx works

I am trying to set up haproxy as a reverse proxy for a server. I am on CentOS.
The config goes like this:
global
#log /dev/log local0
#log /dev/log local1 notice
log 127.0.0.1 local2 info
log 127.0.0.1 local2 notice
log 127.0.0.1 local2 debug
chroot /var/lib/haproxy
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend http_front
bind *:801
option forwardfor
stats enable
default_backend http_back
backend http_back
mode http
option httpchk
option forwardfor
http-send-name-header Host
balance roundrobin
server server1 stg-hostserv.com:80
But, if I do a wget against it, I am getting the below error.
# wget http://0.0.0.0:801
--2018-07-16 14:26:24-- http://0.0.0.0:801/
Connecting to 0.0.0.0:801... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2018-07-16 14:26:24 ERROR 500: Internal Server Error.
haproxy -f /etc/haproxy/haproxy.cfg -d
[WARNING] 197/200148 (13833) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
00000000:http_front.accept(0004)=0006 from [127.0.0.1:60696]
00000000:http_front.clireq[0006:ffffffff]: GET / HTTP/1.1
00000000:http_front.clihdr[0006:ffffffff]: User-Agent: Wget/1.14 (linux-gnu)
00000000:http_front.clihdr[0006:ffffffff]: Accept: */*
00000000:http_front.clihdr[0006:ffffffff]: Host: 0.0.0.0:801
00000000:http_front.clihdr[0006:ffffffff]: Connection: Keep-Alive
00000000:http_back.srvrep[0006:0007]: HTTP/1.1 500 Internal Server Error
00000000:http_back.srvhdr[0006:0007]: Content-Type: text/html
00000000:http_back.srvhdr[0006:0007]: Server: Microsoft-IIS/8.0
00000000:http_back.srvhdr[0006:0007]: X-Powered-By: ASP.NET
00000000:http_back.srvhdr[0006:0007]: Date: Tue, 17 Jul 2018 12:02:00 GMT
00000000:http_back.srvhdr[0006:0007]: Connection: close
00000000:http_back.srvhdr[0006:0007]: Content-Length: 1208
00000001:http_front.clicls[0006:ffffffff]
00000001:http_front.closed[0006:ffffffff]
^C
[root#izp0w3tkx2yr8zhes26ajqz ~]#
I tried different configs for the server line and consistently hit the 500 error. wget to the base server works without any issues.
I set up nginx to do the same thing and it works beautifully; only haproxy does not seem to work. The customer wants it on haproxy. :)
Can you please advise where I can look to debug this further? I appreciate your assistance.
This update from nuster cache server helped solve the problem:
Does your backend (Microsoft-IIS/8.0) check the Host header? Since you set http-send-name-header Host, the request from HAProxy to stg-hostserv.com:80 looks like GET / HTTP/1.1 with Host: izp0w3tkx2yr8zhes26ajqz
HAProxy worked when I set:
http-request set-header Host stg-hostserv.com
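For reference, a minimal sketch of the fixed backend, assuming the only changes are dropping http-send-name-header (which was overwriting the Host header with the server name) and forcing the Host value the IIS site expects:
backend http_back
mode http
option httpchk
option forwardfor
balance roundrobin
# IIS selects the site by Host header, so send the name the virtual host expects
http-request set-header Host stg-hostserv.com
server server1 stg-hostserv.com:80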

HAProxy + Keycloak redirect issue

I have an HAProxy acting as a load balancer in front of 2 machines running Keycloak in standalone mode.
Versions
HAProxy version 1.6.3, released 2015/12/25
Keycloak version
2.4.0.Final
HAProxy config
global
user haproxy
group haproxy
log /dev/log local0
log-tag WARDEN
chroot /var/lib/haproxy
daemon
quiet
stats socket /var/lib/haproxy/stats level admin
maxconn 256
pidfile /var/run/haproxy.pid
tune.bufsize 262144
defaults
timeout connect 5000ms
timeout client 5000ms
timeout server 5000ms
log global
mode http
option httplog
option dontlognull
option redispatch
retries 5
stats uri /haproxy-status
frontend http-in
mode http
bind *:80
maxconn 2000
redirect scheme https code 301 if !{ ssl_fc }
frontend https
mode http
default_backend servers
bind *:443 ssl crt /etc/letsencrypt/live/authhomolog2.portaltecsinapse.com.br/combined.pem
maxconn 2000
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request set-header X-Forwarded-For %[src]
http-request set-header X-Forwarded-Proto https
backend servers
mode http
balance source
cookie JSESSIONID prefix
server master 172.30.0.74:8080 maxconn 32 check cookie master
server slave 172.30.0.124:8080 maxconn 32 check cookie slave
Keycloak relevant configs
<subsystem xmlns="urn:jboss:domain:undertow:3.0">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default"
socket-binding="http"
proxy-address-forwarding="true"
redirect-socket="proxy-https"/>
...
...
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="proxy-https" port="443"/>
...
...
When I try to log in to a Java application that uses Keycloak for single sign-on, I get a 403 Forbidden error on the screen.
HAProxy log
Dec 16 13:18:49 keycloak-haproxy-test WARDEN[8714]: 191.205.78.16:35794 [16/Dec/2017:13:18:48.582] https~ servers/master 487/0/0/72/559 302 2765 - - --NN 2/2/0/1/0 0/0 "GET /realms/BMW/protocol/openid-connect/auth?response_type=code&client_id=BMWGestaoDealer&redirect_uri=https%3A%2F%2Fhomolog2gd.bmwbic.com.br%2Ffavicon.ico&state=81%2F4ad46389-fe45-4dec-b804-5563c29c51db&login=true&scope=openid HTTP/1.1"
Dec 16 13:18:49 keycloak-haproxy-test WARDEN[8714]: 54.233.89.231:54608 [16/Dec/2017:13:18:48.606] https~ servers/slave 552/0/0/4/556 400 457 - - --NN 2/2/0/1/0 0/0 "POST /realms/BMW/protocol/openid-connect/token HTTP/1.1"
I realized that the GET request started from my machine (191.205.78.16) was answered by the master Keycloak machine, while the follow-up POST request started by the application server (54.233.89.231) was answered by the slave Keycloak machine. I want all of these requests to be answered by the same machine (master or slave). Do you know how I can do that? I've tried a lot of different HAProxy configurations without success. :-(
One more piece of information: if I leave only the master or only the slave Keycloak instance up, it works fine.
Keycloak slave log
2017-12-16 14:43:13.235 WARN [org.keycloak.events] (default task-1)
type=CODE_TO_TOKEN_ERROR, realmId=BMW, clientId=BMWGestaoDealer,
userId=null, ipAddress=54.233.89.231, error=invalid_code,
grant_type=authorization_code,
code_id=52204563-53c8-4c72-bd8c-cb7540ebda3b,
client_auth_method=client-secret
I'd appreciate any help here.
I'm not really familiar with HAProxy or Keycloak, but it looks like a problem with session stickiness. My guess is that sticky sessions should be enabled on the HAProxy side so that the client remains on the same backend while the redirection occurs. Hope it gives you a hint.
You could try adding nocache in the haproxy configuration file, exactly here: cookie JSESSIONID prefix nocache
I hope this helps!
Leaving this here in case it is useful: try using
option httpclose
or
option http-server-close
in the backend configuration. Looking at https://www.haproxy.org/download/2.0/doc/configuration.txt, use of the prefix option to prepend the selected server's name to the cookie requires HTTP close mode (i.e. HAProxy has to use a new backend connection every time).
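Putting the two suggestions together, a minimal sketch of the backend (only the cookie line and the close-mode option differ from the original config):
backend servers
mode http
balance source
# close the server-side connection after each response, as prefix-mode cookies expect; option httpclose would also work
option http-server-close
# nocache marks responses as non-cacheable when HAProxy manipulates the cookie,
# so a shared cache between the client and the proxy cannot pin users to one server
cookie JSESSIONID prefix nocache
server master 172.30.0.74:8080 maxconn 32 check cookie master
server slave 172.30.0.124:8080 maxconn 32 check cookie slave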

How to configure HAProxy to log the time for each request twice

I have a web application environment as follows:
Apache web server (172.17.0.82) -> HAProxy -> |- "php-fpm-1" App server
                                           -> |- "php-fpm-2" App server
Requests arrive at the Apache server, are then passed to HAProxy, which in turn passes them to the two PHP app servers in round-robin fashion.
I wanted this web app to be structured this way. rsyslog is enabled and receives log entries from haproxy, which are written to /var/log/haproxy.log.
My questions:
How do I configure HAProxy so that each request is logged with two timestamps:
1) the time when HAProxy receives the incoming request forwarded from the Apache server, and
2) the time when the request has finished processing on one of the PHP app servers and the response is sent back to HAProxy to be forwarded to the Apache server?
Is it possible to have these two timestamps in a single record in haproxy.log?
Can I have all these timestamps recorded in microseconds in haproxy.log?
Thank you all.
Here is my haproxy configuration file:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
user haproxy
group haproxy
daemon
defaults
mode tcp
option tcplog
#option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend http-in
mode tcp
option tcplog
bind 0.0.0.0:9000
log global
#which backend
default_backend php_appservers
backend php_appservers
mode tcp
option tcplog
balance roundrobin
server php-fpm-1 172.17.0.125:9000 weight 5 check slowstart 5000ms
server php-fpm-2 172.17.0.126:9000 weight 5 check slowstart 5000ms
When I checked the haproxy.log, it looks like:
May 7 08:28:00 localhost haproxy[4884]: 172.17.0.82:53369 [07/May/2015:08:28:00.287] http-in php_appservers/php-fpm-2 1/0/9 92320 -- 0/0/0/0/0 0/0
May 7 08:28:01 localhost haproxy[4884]: 172.17.0.82:53373 [07/May/2015:08:28:01.683] http-in php_appservers/php-fpm-1 1/0/4 92344 -- 0/0/0/0/0 0/0
May 7 08:28:02 localhost haproxy[4884]: 172.17.0.82:53376 [07/May/2015:08:28:02.514] http-in php_appservers/php-fpm-2 1/0/4 92320 -- 0/0/0/0/0 0/0
May 7 08:28:04 localhost haproxy[4884]: 172.17.0.82:53380 [07/May/2015:08:28:04.808] http-in php_appservers/php-fpm-1 1/0/8 92344 -- 0/0/0/0/0 0/0
May 7 08:28:05 localhost haproxy[4884]: 172.17.0.82:53382 [07/May/2015:08:28:05.247] http-in php_appservers/php-fpm-2 1/0/4 92320 -- 0/0/0/0/0 0/0
May 7 08:28:06 localhost haproxy[4884]: 172.17.0.82:53386 [07/May/2015:08:28:06.754] http-in php_appservers/php-fpm-1 1/0/4 92344 -- 0/0/0/0/0 0/0
Try using the timing fields in the HAProxy logs: %Tr is the time the backend took to send its response headers, and %Tc is the time taken to connect to the backend. These should give you the latency added by the backend server.
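Note that %Tr is only measured when HAProxy parses the traffic as HTTP; the traffic between Apache and php-fpm on port 9000 is presumably FastCGI, which HAProxy of that era cannot parse, so the proxy has to stay in mode tcp and only the connection-level timers are available. A minimal sketch of a custom TCP log format that puts both timestamps you asked about into one record, with millisecond (not microsecond) resolution, assuming an HAProxy version with log-format support:
frontend http-in
mode tcp
bind 0.0.0.0:9000
log global
# %t  = timestamp when HAProxy accepted the connection from Apache (millisecond resolution)
# %Tw = time spent waiting in queues
# %Tc = time taken to connect to the chosen php-fpm server
# %Tt = total duration until the last byte went back towards Apache;
#       accept time plus %Tt gives the second timestamp you asked for
log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B"
default_backend php_appservers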

Haproxy not matching path (with express.js & Socket.IO)

My config file:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 2
daemon
log 127.0.0.1 local1 notice
defaults
mode http
frontend all 0.0.0.0:80
timeout client 86400000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
acl is_websocket path_beg /socket.io
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 localhost:9001 weight 1 maxconn 1024 check
server server2 localhost:9002 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
stats enable
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server server1 localhost:5000 weight 1 maxconn 1024 check
As far as I can tell www_backend matches everything. When my web app requests http://myapp.com/socket.io/1/?t=1335831853491 it returns a 404, and the header shows the response came from Express. The odd thing is when I do curl -I http://myapp.com/socket.io/1/?t=1335831853491 it returns:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: keep-alive
When I run sudo netstat -lptu I can confirm that my socket.io process is running on port 5000. Any thoughts?
Agreed with the response above. By the way, you should not use a 1-day timeout for the TCP connection to establish (timeout connect); it makes no sense at all and will cause connections to accumulate when your server goes down. A connection (especially a local one) is supposed to establish immediately. I tend to set a 5s timeout for connect, which is more than enough even across slow networks.
Concerning the other long timeouts, I'm planning on implementing a "timeout tunnel" so that users don't have to use such large timeouts for normal traffic.
Answer found here:
https://serverfault.com/questions/248897/haproxy-access-list-using-path-dir-having-issues-with-firefox
"ust add "option http-server-close" to your defaults section and it should work."