I am trying to put an HTTPS web server behind an HAProxy load balancer.
The HTTPS server runs on Node.js on port 8000, has all the correct SSL certs and keys, and works if I access it directly via URL, e.g. https://ssltest.mydomain.com:8000/. I can also check the SSL connection using openssl s_client -connect ssltest.mydomain.com:8000 and it shows data that seems correct.
The front end for this HTTPS server is another subdomain, e.g. app.mydomain.com, and it runs on the default SSL port 443.
But when I try to access https://app.mydomain.com from my browser, I get an SSL error. The equivalent command-line check also fails: openssl s_client -connect app.mydomain.com:443 -state -debug
Below is what my HAProxy config looks like:
global
log /dev/log local0 info
log /dev/log local0 notice
maxconn 20000
user haproxy
group haproxy
daemon
spread-checks 5
defaults
log global
mode tcp
option dontlognull
retries 3
option redispatch
timeout connect 30s
timeout client 30s
timeout server 30s
timeout check 5s
balance roundrobin
frontend https_proxy
bind :443
acl is_app hdr_dom(host) -i app.mydomain.com
acl is_rest hdr_dom(host) -i rest.mydomain.com
acl is_io hdr_dom(host) -i io.mydomain.com
use_backend core_https if is_app
use_backend core_rest_api if is_rest
use_backend core_www_io if is_io
backend core_https
server www1 10.0.0.174:8000 maxconn 25 check inter 5s rise 18 fall 2
backend core_www_io
server io1 10.0.0.174:8100 maxconn 25 check inter 5s rise 18 fall 2
backend core_rest_api
server api1 10.0.0.174:8200 maxconn 25 check inter 5s rise 18 fall 2
I can't figure out why this does not work via HAProxy but works directly, and I would REALLY appreciate your suggestions.
You're confusing layer 4 and layer 7 load balancing. To separate requests using hdr_dom you need layer 7, which is only available for HTTP, and as you may guess, HTTPS that HAProxy does not terminate is handled at layer 4.
So, since HAProxy can't inspect the host, none of your ACLs match and no backend gets selected. To fix this, you should add a default_backend entry.
frontend https_proxy
bind :443
acl is_app hdr_dom(host) -i app.mydomain.com
acl is_rest hdr_dom(host) -i rest.mydomain.com
acl is_io hdr_dom(host) -i io.mydomain.com
use_backend core_https if is_app
use_backend core_rest_api if is_rest
use_backend core_www_io if is_io
default_backend core_https
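As an aside, not part of the original answer: if you do need host-based routing without terminating TLS in HAProxy, a commonly used approach is to switch backends on the SNI value in the TLS ClientHello while staying in mode tcp. A minimal sketch, reusing the backends from the question:
frontend https_proxy
    bind :443
    mode tcp
    # wait for the TLS ClientHello so the SNI value can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend core_https    if { req_ssl_sni -i app.mydomain.com }
    use_backend core_rest_api if { req_ssl_sni -i rest.mydomain.com }
    use_backend core_www_io   if { req_ssl_sni -i io.mydomain.com }
    default_backend core_https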
Related
I have a sample app running in a kubernetes cluster with 3 replicas. I am exposing the app with type=LoadBalancer using metallb.
The external ip issued is 10.10.10.11
When I run curl 10.10.10.11 I get a different pod responding for each request as you would expect from round robin. This is the behaviour I want.
I have now set up HAProxy with a backend pointing to 10.10.10.11; however, each time I access the HAProxy frontend, I get the same pod responding to every request. If I keep refreshing I intermittently get different pods, sometimes after 20 refreshes, sometimes after 50+ refreshes. I have tried clearing my browser history, but that has no effect.
I assume my HAProxy config is the cause of the problem, perhaps caching? But I have not configured any caching. I am an HAProxy newbie, so I might be missing something.
Here is my HAProxy config.
I have tried both mode tcp and mode http, but both give the same result (the same pod responding to each request)
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /home/simon/haproxy/haproxy_certs
crt-base /home/simon/haproxy/haproxy_certs
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
defaults
log global
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend https
bind *:443 ssl crt /home/simon/haproxy/haproxy_certs
timeout client 60s
mode tcp
#Hello App
acl ACL_hello_app hdr(host) -i hello.xxxxxxxxxdomain2.com
use_backend hello_app if ACL_hello_app
#Nginx App
acl ACL_nginx_app hdr(host) -i nginx.xxxxxxxxxdomain1.com
use_backend nginx_app if ACL_nginx_app
backend hello_app
timeout connect 10s
timeout server 100s
mode tcp
server hello_app 10.10.10.11:80
backend nginx_app
mode tcp
server nginx_app 10.10.10.10:80
UPDATE
Upon further testing, the issue seems to be related to timeout client, timeout connect, and timeout server. If I reduce these to 1 second, I get a different pod every second; however, these times are so short that I also get intermittent connection failures.
So I also have the question: is HAProxy able to work as a reverse proxy in front of another load balancer, or do I need to use another technology such as Nginx?
I eventually found the answer. I needed to use option http-server-close in my frontend settings.
frontend https
bind *:443 ssl crt /home/simon/haproxy/haproxy_certs
http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
timeout client 5000s
option http-server-close
mode http
#Hello App
acl ACL_hello_app hdr(host) -i hello.soxprox.com
use_backend hello_app if ACL_hello_app
#Nginx App
acl ACL_nginx_app hdr(host) -i nginx.soxprox.com
use_backend nginx_app if ACL_nginx_app
With these settings I get correct round robin results from metallb.
I have an nginx server that I can POST to successfully from my client with:
curl -v --cacert ca.crt --cert client.crt --key client.key -POST https://nginx:8443/api/ -H 'Content-Type: application/json' -H 'cache-control: no-cache' -d#test.json
Now I have installed HAProxy in front of nginx and I'm trying to do a POST the same way, unsuccessfully:
curl -v --cacert ca.crt --cert client.crt --key client.key -POST http://haproxy:8443/api/ -H 'Content-Type: application/json' -H 'cache-control: no-cache' -d#test.json
Error:
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx</center>
Here is my haproxy configuration:
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
log global
option tcplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend main *:8443
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
use_backend static if url_static
default_backend app
backend static
balance roundrobin
server static 127.0.0.1:8443
backend app
mode tcp
balance roundrobin
server nginx nginx01:8443
I want to forward SSL traffic through HAProxy and pass the certificates for authentication to nginx.
I know it doesn't make any sense to have two LBs, but I can't modify nginx or the API server behind it, and the clients will be internal.
As you can see, at this point I'm able to reach nginx, but HAProxy doesn't pass the certificates and keys from the request to the nginx backend.
Am I missing something? Is this something that I can achieve?
PS: If I set 'ssl verify none' at the backend, I get 'No required SSL certificate was sent'.
If I set 'send-proxy' at the backend, I get '400 Bad Request' from nginx.
You will need to add the ssl configuration to HAProxy and set some headers which will be forwarded to nginx.
# your other config from above
backend app
mode tcp
balance roundrobin
server nginx nginx01:8443 ssl ca-file <The ca from nginx backend>
The solution implemented was SSL/TLS pass-through, from https://www.haproxy.com/documentation/haproxy/deployment-guides/tls-infrastructure/
With both the frontend and the backend set to mode tcp, I was able to pass the certificates through, and nginx validated them and performed the authentication.
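A minimal sketch of such a pass-through setup, reusing the frontend and backend names from the config above (illustrative only, not the exact config that was deployed):
frontend main
    bind *:8443
    mode tcp
    option tcplog
    default_backend app

backend app
    mode tcp
    balance roundrobin
    # no "ssl" and no "send-proxy" on the server line: the TLS session,
    # including the client certificate exchange, goes straight through to nginx
    server nginx nginx01:8443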
We have an architecture where we have several APIs on multiple hosts. Each API sits behind an AWS ELB. We want to put a proxy in front that will route requests based on URI to the appropriate ELB.
What I have so far mostly works, but about 3 out of 10 requests result in the following error (using cURL, but the problem isn't cURL):
curl: (35) Unknown SSL protocol error in connection to test-router.versal.com:-9847
I have a feeling that the ELB is the culprit. The SSL is terminated there.
Here is our HAProxy config:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 10240
user haproxy
group haproxy
daemon
debug
stats socket /var/run/haproxy.sock
log-send-hostname test-router.domain.com
description "HAProxy on test-router.domain.com"
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
option forwardfor
option httpclose
option dontlognull
option tcpka
maxconn 10240
timeout connect 10000ms
timeout client 600000ms
timeout server 600000ms
frontend public
bind *:80
bind *:443
acl elb0 path_beg /elb0
acl elb1 path_beg /elb1
use_backend elb0 if elb0
use_backend elb1 if elb1
bind 0.0.0.0:443 ssl crt /etc/ssl/cert.pem no-sslv3 ciphers AES128+EECDH:AES128+EDH
backend elb0
server server_vcat elb0.domain.com:443 ssl verify none
backend elb1
server server_laapi elb1.domain.com:443 ssl verify none
The SSL from curl is not terminated at the ELB. It is terminated at HAProxy, in your configuration...
bind 0.0.0.0:443 ssl crt /etc/ssl/cert.pem no-sslv3 ciphers AES128+EECDH:AES128+EDH
...and then an entirely different SSL session is established by HAProxy on the connection to the ELB:
server ... ssl verify none
An SSL problem with ELB could not possibly be propagated back to curl through HAProxy in this configuration.
The problem is in your HAProxy configuration, here:
bind *:443
Remove that line. It's redundant (and incorrect).
You are telling HAProxy to bind to port 443 twice: once speaking SSL, and once not speaking SSL.
So, statistically, on approximately 50% of your connection attempts, curl finds HAProxy is not speaking SSL on port 443 -- it's just speaking HTTP, and curl can't (and shouldn't) handle that gracefully.
I believe this (mis)configuration goes undetected by HAProxy, not because of an actual bug, but rather because of the way some things are implemented in the HAProxy internals, related to multi-process deployments and hot reloads, in which case it would be valid to have HAProxy bound to the same socket more than once.
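For reference, the corrected frontend would keep the plain bind on port 80 and only the single TLS-enabled bind on 443 (a sketch assembled from the lines already present in the posted config):
frontend public
    bind *:80
    bind *:443 ssl crt /etc/ssl/cert.pem no-sslv3 ciphers AES128+EECDH:AES128+EDH
    acl elb0 path_beg /elb0
    acl elb1 path_beg /elb1
    use_backend elb0 if elb0
    use_backend elb1 if elb1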
I am working on HAProxy. I want my site to open over HTTPS. I have purchased an SSL certificate and installed it on the server.
In ha.cfg I have configured it as follows:
global
tune.bufsize 32786
tune.maxrewrite 16384
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 8192
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
balance roundrobin
stats enable
stats refresh
stats uri /ssproxy_stats
stats realm Haproxy\ Statistics
stats auth haproxy:haproxy
maxconn 4000
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend http
bind *:80
acl hari path_beg /customers
acl css path_beg /assets
reqadd X-Forwarded-Proto:\ http
use_backend appointpress_app if hari
use_backend appointpress_app if css
default_backend appointpress_site
frontend https
bind *:443 ssl crt /etc/ssl/ssl.key/mydomain.crt
default_backend appointpress_site
backend appointpress_app :80
stats enable
stats auth haproxy:haproxy
cookie SERVERID insert
option httpclose
option forwardfor
server app_server ec2-elastic-domain:80 cookie haproxy_app check
backend appointpress_site :80
stats enable
stats auth haproxy:haproxy
cookie SERVERID insert
option httpclose
option forwardfor
server wordpress someip:443 cookie haproxy_site check
After running the command haproxy -f ha.cfg I get no errors, and when I open the URL http://ec2-instance it works fine, but when I open https://ec2-instance I get an error:
in Chrome: Error code: ERR_SSL_PROTOCOL_ERROR
in Firefox: Error code: ssl_error_rx_record_too_long
How do I resolve this error?
Check to make sure that your EC2 security rules allow port 443 to your running instance. A simple way to test this is to use telnet from your client:
telnet ec2-instance 443
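If the port turns out to be reachable, you can also check whether a TLS handshake completes at all, the same kind of check used in the first question (ec2-instance is a placeholder for your instance's hostname):
openssl s_client -connect ec2-instance:443 -state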
My config file:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 2
daemon
log 127.0.0.1 local1 notice
defaults
mode http
frontend all 0.0.0.0:80
timeout client 86400000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
acl is_websocket path_beg /socket.io
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server server1 localhost:9001 weight 1 maxconn 1024 check
server server2 localhost:9002 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
stats enable
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server server1 localhost:5000 weight 1 maxconn 1024 check
As far as I can tell, www_backend matches everything. When my web app requests http://myapp.com/socket.io/1/?t=1335831853491 it returns a 404, and the header shows the response came from Express. The odd thing is that when I do curl -I http://myapp.com/socket.io/1/?t=1335831853491 it returns:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: keep-alive
When I run sudo netstat -lptu I can confirm that my socket.io process is running on port 5000. Any thoughts?
Agreed with the response above. BTW, you should not use a 1-day timeout for the TCP connection to establish (timeout connect); it makes no sense at all and will cause connections to accumulate when your server goes down. A connection (especially a local one) is supposed to establish immediately. I tend to set a 5s timeout for connect, which is plenty even across slow networks.
Concerning the other long timeouts, I'm planning on implementing a "timeout tunnel" so that users don't have to use such large timeouts for normal traffic.
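For example, the socket backend from the question could keep its long server timeout for established WebSocket traffic while using a short connect timeout; a sketch based on the posted config, not a verified fix:
backend socket_backend
    balance roundrobin
    option forwardfor          # sets X-Forwarded-For
    timeout queue 5000
    timeout connect 5000       # 5s is plenty for a (local) TCP connection to establish
    timeout server 86400000    # long timeout kept for long-lived WebSocket connections
    server server1 localhost:5000 weight 1 maxconn 1024 check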
Answer found here:
https://serverfault.com/questions/248897/haproxy-access-list-using-path-dir-having-issues-with-firefox
"ust add "option http-server-close" to your defaults section and it should work."