Reverse proxy setup on haproxy gives 500; wget on base server & nginx works

I am trying to set up HAProxy as a reverse proxy for a server. I am on CentOS.
The config goes like this:
global
    #log /dev/log local0
    #log /dev/log local1 notice
    log 127.0.0.1 local2 info
    log 127.0.0.1 local2 notice
    log 127.0.0.1 local2 debug
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

frontend http_front
    bind *:801
    option forwardfor
    stats enable
    default_backend http_back

backend http_back
    mode http
    option httpchk
    option forwardfor
    http-send-name-header Host
    balance roundrobin
    server server1 stg-hostserv.com:80
But if I do a wget against it, I get the error below.
# wget http://0.0.0.0:801
--2018-07-16 14:26:24-- http://0.0.0.0:801/
Connecting to 0.0.0.0:801... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2018-07-16 14:26:24 ERROR 500: Internal Server Error.
haproxy -f /etc/haproxy/haproxy.cfg -d
[WARNING] 197/200148 (13833) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
00000000:http_front.accept(0004)=0006 from [127.0.0.1:60696]
00000000:http_front.clireq[0006:ffffffff]: GET / HTTP/1.1
00000000:http_front.clihdr[0006:ffffffff]: User-Agent: Wget/1.14 (linux-gnu)
00000000:http_front.clihdr[0006:ffffffff]: Accept: */*
00000000:http_front.clihdr[0006:ffffffff]: Host: 0.0.0.0:801
00000000:http_front.clihdr[0006:ffffffff]: Connection: Keep-Alive
00000000:http_back.srvrep[0006:0007]: HTTP/1.1 500 Internal Server Error
00000000:http_back.srvhdr[0006:0007]: Content-Type: text/html
00000000:http_back.srvhdr[0006:0007]: Server: Microsoft-IIS/8.0
00000000:http_back.srvhdr[0006:0007]: X-Powered-By: ASP.NET
00000000:http_back.srvhdr[0006:0007]: Date: Tue, 17 Jul 2018 12:02:00 GMT
00000000:http_back.srvhdr[0006:0007]: Connection: close
00000000:http_back.srvhdr[0006:0007]: Content-Length: 1208
00000001:http_front.clicls[0006:ffffffff]
00000001:http_front.closed[0006:ffffffff]
^C
[root@izp0w3tkx2yr8zhes26ajqz ~]#
I tried different configs for the server and consistently hit the 500 error. wget to the base server works without any issues.
I set up nginx to do the same thing and it works beautifully; just HAProxy does not seem to work. The customer wants it on HAProxy. :)
Can you please advise where I can look to debug this further? I appreciate your assistance.

This update from nuster cache server helped solve the problem:
Does your backend (Microsoft-IIS/8.0) check the Host header? Since you set http-send-name-header Host, the request from HAProxy to stg-hostserv.com:80 looks like GET / HTTP/1.1 with Host: izp0w3tkx2yr8zhes26ajqz
HAProxy worked when I set:
http-request set-header Host stg-hostserv.com
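For reference, a minimal sketch of the corrected backend section (same server as in the question above; it swaps http-send-name-header for the explicit Host header that made it work):
backend http_back
    mode http
    option httpchk
    option forwardfor
    # send the Host header the IIS site expects instead of HAProxy's server name
    http-request set-header Host stg-hostserv.com
    balance roundrobin
    server server1 stg-hostserv.com:80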

Related

Getting 404 when call request by haproxy (directly works fine)

I directly call a web service with the URL
curl http://venesh.ir/webservice/oauth/token
and I get a 403 error,
but when I call it through a reverse proxy on another server I get a 404. Is it possible that HAProxy changes my address?
haproxy config:
frontend localhost
    bind *:8081
    option tcplog
    mode tcp
    acl isVenesh dst_port 8081
    use_backend venesh if isVenesh
    default_backend venesh

backend venesh
    mode tcp
    balance roundrobin
    server web01 venesh.ir:80 check
When I call myServerIp:8081/webservice/oauth/token I expect to get the same result as the direct call
curl http://venesh.ir/webservice/oauth/token, which is 403,
but when I call curl myServerIp:8081/webservice/oauth/token I get a 404 error.
Is the problem with my HAProxy or my config, or is it possible that the problem lies with the venesh.ir website?
It appears that http://venesh.ir/webservice/oauth/token expects the host header to be venesh.ir. You can test this from the command line. If the host header is not venesh.ir, it will return 404:
$ curl -I -H 'Host: 1.1.1.1' http://venesh.ir/webservice/oauth/token
HTTP/1.1 404 Not Found
Date: Mon, 24 Jun 2019 17:48:56 GMT
Server: Apache/2
Content-Type: text/html; charset=iso-8859-1
You can add the host header to your configuration if you change your mode to http:
frontend localhost
    bind *:8081
    option httplog
    mode http
    default_backend venesh

backend venesh
    mode http
    balance roundrobin
    http-request set-header Host venesh.ir
    server web01 venesh.ir:80 check
The answer of @mweiss was correct, and an alternative way I found is setting the Host value to venesh.ir in my request header; then the TCP reverse proxy works fine.
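As a quick check of that alternative (a sketch; myServerIp stands in for the proxy's address), the Host header can be supplied explicitly while still going through the tcp-mode proxy:
curl -I -H 'Host: venesh.ir' http://myServerIp:8081/webservice/oauth/token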

How to use HA Proxy server as Load Balancer

I am trying to use an HAProxy server as a load balancer between my two web servers.
The individual servers are up and running on the specified ports,
but I am not able to reach the web servers through the HAProxy server.
I have configured HAProxy using standard settings, as below:
/etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:80
    mode http
    default_backend app

backend app
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server app1 10.368.240.116:9091
    server app2 10.368.240.317:9092
    #server app3 127.0.0.1:5003 check
    #server app4 127.0.0.1:5004 check
I can successfully curl the above servers from my HAProxy host machine:
curl -i "10.368.240.116:9091"
HTTP/1.1 200 OK
curl -i "10.368.240.317:9091"
HTTP/1.1 200 OK
My HAProxy server is listening on port 80,
but when I hit it, it gives the 503 error below:
curl -iv "127.0.0.1"
* About to connect() to 127.0.0.1 port 80 (#0)
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
HTTP/1.0 503 Service Unavailable
< Cache-Control: no-cache
Cache-Control: no-cache
< Connection: close
Connection: close
< Content-Type: text/html
Content-Type: text/html
<
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
* Closing connection 0
Can anyone point out where I am going wrong?
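One way to narrow this down (a sketch, using the stats socket already declared in the config above) is to validate the configuration and then ask HAProxy what state it thinks the backend servers are in:
haproxy -c -f /etc/haproxy/haproxy.cfg
echo "show stat" | socat stdio /var/lib/haproxy/stats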

HAProxy + Keycloak redirect issue

I have an HAProxy acting as a load balancer in front of 2 machines running Keycloak in standalone mode.
Versions
HAProxy version 1.6.3, released 2015/12/25
Keycloak version 2.4.0.Final
HAProxy config
global
    user haproxy
    group haproxy
    log /dev/log local0
    log-tag WARDEN
    chroot /var/lib/haproxy
    daemon
    quiet
    stats socket /var/lib/haproxy/stats level admin
    maxconn 256
    pidfile /var/run/haproxy.pid
    tune.bufsize 262144

defaults
    timeout connect 5000ms
    timeout client 5000ms
    timeout server 5000ms
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    retries 5
    stats uri /haproxy-status

frontend http-in
    mode http
    bind *:80
    maxconn 2000
    redirect scheme https code 301 if !{ ssl_fc }

frontend https
    mode http
    default_backend servers
    bind *:443 ssl crt /etc/letsencrypt/live/authhomolog2.portaltecsinapse.com.br/combined.pem
    maxconn 2000
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-For %[src]
    http-request set-header X-Forwarded-Proto https

backend servers
    mode http
    balance source
    cookie JSESSIONID prefix
    server master 172.30.0.74:8080 maxconn 32 check cookie master
    server slave 172.30.0.124:8080 maxconn 32 check cookie slave
Keycloak relevant configs
<subsystem xmlns="urn:jboss:domain:undertow:3.0">
    <buffer-cache name="default"/>
    <server name="default-server">
        <http-listener name="default"
            socket-binding="http"
            proxy-address-forwarding="true"
            redirect-socket="proxy-https"/>
        ...
...
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="proxy-https" port="443"/>
    ...
...
When I try to log in to a Java application that uses Keycloak for single sign-on, I get a 403 Forbidden error on the screen:
HAProxy log
Dec 16 13:18:49 keycloak-haproxy-test WARDEN[8714]:
191.205.78.16:35794 [16/Dec/2017:13:18:48.582] https~ servers/master 487/0/0/72/559 302 2765 - - --NN 2/2/0/1/0 0/0 "GET
/realms/BMW/protocol/openid-connect/auth?response_type=code&client_id=BMWGestaoDealer&redirect_uri=https%3A%2F%2Fhomolog2gd.bmwbic.com.br%2Ffavicon.ico&state=81%2F4ad46389-fe45-4dec-b804-5563c29c51db&login=true&scope=openid
HTTP/1.1" Dec 16 13:18:49 keycloak-haproxy-test WARDEN[8714]:
54.233.89.231:54608 [16/Dec/2017:13:18:48.606] https~ servers/slave 552/0/0/4/556 400 457 - - --NN 2/2/0/1/0 0/0 "POST
/realms/BMW/protocol/openid-connect/token HTTP/1.1"
I realized that the GET request started from my machine (191.205.78.16) was answered by the master Keycloak machine, while the redirected POST request started by the application server (54.233.89.231) was answered by the slave Keycloak machine. I want all of these requests to be answered by the same machine (master or slave). Do you know how I can do that? I've tried a lot of different configurations in HAProxy without success. :-(
Just one more piece of information: if I leave only the master or only the slave Keycloak instance up, it works fine.
Keycloak slave log
2017-12-16 14:43:13.235 WARN [org.keycloak.events] (default task-1)
type=CODE_TO_TOKEN_ERROR, realmId=BMW, clientId=BMWGestaoDealer,
userId=null, ipAddress=54.233.89.231, error=invalid_code,
grant_type=authorization_code,
code_id=52204563-53c8-4c72-bd8c-cb7540ebda3b,
client_auth_method=client-secret
I'd appreciate any help here.
I'm not really familiar with HAProxy or Keycloak, but it looks like a problem with session stickiness. So my guess is that sticky sessions should be enabled on the HAProxy side so that the client stays on the same backend while the redirection occurs. Hope that gives you a hint.
You could try adding nocache in the HAProxy configuration file, exactly here:
cookie JSESSIONID prefix nocache
I hope this can help you!
Leaving this here in case it is useful: try adding
option httpclose
or
option http-server-close
to the backend configuration. Looking at https://www.haproxy.org/download/2.0/doc/configuration.txt, use of the prefix option to append the selected server requires HTTP close mode (i.e. HAProxy has to use a new backend connection every time).
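Putting those suggestions together, a sketch of the backend section (same servers as in the question; whether it resolves the redirect flow depends on the clients honoring the cookie):
backend servers
    mode http
    option http-server-close
    balance source
    cookie JSESSIONID prefix nocache
    server master 172.30.0.74:8080 maxconn 32 check cookie master
    server slave 172.30.0.124:8080 maxconn 32 check cookie slave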

haproxy 504 timeout to apache

Very new to haproxy and loving it, apart from a 504 issue that we're getting. The relevant log output is:
Jun 21 13:52:06 localhost haproxy[1431]: 192.168.0.2:51435 [21/Jun/2017:13:50:26.740] www-https~ beFootprints/foorprints 0/0/2/-1/100003 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 13:54:26 localhost haproxy[1431]: 192.168.0.2:51447 [21/Jun/2017:13:52:46.577] www-https~ beFootprints/foorprints 0/0/3/-1/100005 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 14:15:57 localhost haproxy[1431]: 192.168.0.1:50225 [21/Jun/2017:14:14:17.771] www-https~ beFootprints/foorprints 0/0/2/-1/100004 504 195 - - sH-- 3/3/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Jun 21 14:22:26 localhost haproxy[1431]: 192.168.0.1:50258 [21/Jun/2017:14:20:46.608] www-https~ beFootprints/foorprints 0/0/2/-1/100003 504 195 - - sH-- 2/2/0/0/0 0/0 "POST /MRcgi/MRlogin.pl HTTP/1.1"
Using the following timeout values in the haproxy.cfg
defaults
    log global
    mode http
    option forwardfor
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 100000
Running on Ubuntu 16.04.2 LTS
Any help and comment very much appreciated!
The problem appears to be with the web server. Check the logs there, and you should find long-running requests.
Here's how I conclude that.
Note sH-- in your logs. This is the session state at disconnection. It's extremely valuable for troubleshooting. The values are positional and case-sensitive.
s: the server-side timeout expired while waiting for the server to send or receive data.
...so, timeout server fired, while...
H: the proxy was waiting for complete, valid response HEADERS from the server (HTTP only).
The server had not finished (perhaps not even started) returning all the response headers to the proxy, but the connection was established and the request had been sent.
HAProxy returns 504 Gateway Timeout, indicating that the backend did not respond in a timely fashion.
If your backend genuinely needs longer than 100 seconds (?!), then you need to increase timeout server. Otherwise, your Apache server seems to have a problem and is simply too slow to respond.
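If the long-running POSTs are legitimate, a sketch of the change (300000 is only an illustrative value, not one taken from the thread; plain numbers in HAProxy timeouts are milliseconds):
defaults
    log global
    mode http
    option forwardfor
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 300000   # raised from 100000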
I had a similar issue and found the problem was with how I had configured my backend server section.
backend no_match_backend
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
    server nginx-example 192.168.0.10 check port 80
My problem was that I did not specify the port for the connection. When connecting via HTTP it would work, but since SSL is terminated on my HAProxy, the proxy attempted to connect to the backends via 443. Because the backends cannot correctly handle that, the attempted SSL session between HAProxy and the backend causes the gateway to time out. I needed to force unencrypted communication to the backends:
backend no_match_backend
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
    server nginx-example 192.168.0.10:80 check port 80
The change might be hard to spot: server nginx-example 192.168.0.10 check port 80 now has :80 after the IP, i.e. 192.168.0.10:80.
This problem was made more complicated by my backend servers having SSL redirects configured, so all my requests would arrive as HTTP and be redirected to HTTPS, which made it difficult to identify where the problem was.
It looked as if HTTPS requests were being redirected correctly to the backend servers. I needed to disable this redirect on the backend servers and move it forward into the HAProxy config.
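A sketch of what moving that redirect into HAProxy can look like (the frontend name and certificate path are placeholders; the redirect line is the same idiom used in the Keycloak config earlier on this page):
frontend www-https
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # redirect plain HTTP to HTTPS at the proxy, instead of on the backends
    redirect scheme https code 301 if !{ ssl_fc }
    default_backend no_match_backend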

Haproxy not matching path (with express.js & Socket.IO)

My config file:
global
    maxconn 4096 # Total Max Connections. This is dependent on ulimit
    nbproc 2
    daemon
    log 127.0.0.1 local1 notice

defaults
    mode http

frontend all 0.0.0.0:80
    timeout client 86400000
    default_backend www_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    acl is_websocket path_beg /socket.io
    use_backend socket_backend if is_websocket

backend www_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server server1 localhost:9001 weight 1 maxconn 1024 check
    server server2 localhost:9002 weight 1 maxconn 1024 check

backend socket_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    stats enable
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server server1 localhost:5000 weight 1 maxconn 1024 check
As far as I can tell www_backend matches everything. When my web app requests http://myapp.com/socket.io/1/?t=1335831853491 it returns a 404, and the header shows the response came from Express. The odd thing is when I do curl -I http://myapp.com/socket.io/1/?t=1335831853491 it returns:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: keep-alive
When I run sudo netstat -lptu I can confirm that my socket.io process is running on port 5000. Any thoughts?
Agreed with the response above. BTW, you should not use a one-day timeout for the TCP connection to establish (timeout connect); it makes no sense at all and will cause connections to accumulate when your server goes down. A connection (especially a local one) is supposed to establish immediately. I tend to set a 5s connect timeout, which is largely enough even across slow networks.
Concerning the other long timeouts, I'm planning on implementing a "timeout tunnel" so that users don't have to use such large timeouts for normal traffic.
Answer found here:
https://serverfault.com/questions/248897/haproxy-access-list-using-path-dir-having-issues-with-firefox
"Just add "option http-server-close" to your defaults section and it should work."