I have set up the new version of HAProxy, but I need to disable TLS and the "notlsv1" keyword doesn't work.
In my current configuration, I use stud to manage HTTPS sessions with these parameters:
-B 1000 -n 8 -b 127.0.0.1 8080 -f *,443 --ssl -c ALL --write-proxy
And I'm trying to replace it with the new HAProxy version.
My configuration file:
global
    log 127.0.0.1 local0 info
    maxconn 32000
    user haproxy
    group haproxy
    daemon
    nbproc 1
    stats socket /tmp/haproxy.sock

defaults
    timeout connect 10000
    timeout client 30000
    timeout server 30000

listen ha_stats 0.0.0.0:8088
    balance source
    mode http
    timeout client 30000ms
    stats enable
    stats uri /lb?stats

frontend https-requests
    mode http
    bind :80
    bind :443 ssl crt ./haproxy.pem notlsv1
    acl is_front hdr(host) -i front.mydomain.com
    acl is_service hdr(host) -i service.mydomain.com
    use_backend bkfront if is_front
    use_backend bkservice if is_service
    default_backend mydomain.com

backend mydomain.com
    mode http
    server mywebsite www.mydomain.com:80

backend bkfront
    mode http
    balance roundrobin
    option httpchk GET / HTTP/1.1\r\nHost:\ front.mydomain.com
    server web05 192.168.200.5:80 check

backend bkservice
    mode http
    balance roundrobin
    option httpchk GET / HTTP/1.1\r\nHost:\ service.mydomain.com
    server web01 192.168.200.1:80 check
HTTP and HTTPS sessions work very well with Firefox, but I have problems with Chrome and Internet Explorer. With stud, I had to add --ssl to solve them.
Thanks,
SOLUTION:
Thanks to Willy for his help. Below are the commands that solved this problem:
wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev12.tar.gz
wget http://haproxy.1wt.eu/download/1.5/src/snapshot/haproxy-1.5-dev12-patches-LATEST.tar.gz
tar xvzf haproxy-1.5-dev12.tar.gz
mv haproxy-1.5-dev12-patches-LATEST.tar.gz haproxy-1.5-dev12
cd haproxy-1.5-dev12/
tar xvzf haproxy-1.5-dev12-patches-LATEST.tar.gz
cat haproxy-1.5-dev12-patches-20121009/*.diff | patch -p1
make TARGET=linux26 USE_OPENSSL=1
sudo make PREFIX=/opt/haproxy-ssl install
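After installing, it's worth confirming that the binary was actually built with OpenSSL support; `haproxy -vv` prints the build options. The path below follows the PREFIX used above, but your layout may differ:

```shell
# Check that the freshly built binary reports OpenSSL support.
HAPROXY=/opt/haproxy-ssl/sbin/haproxy   # path follows the PREFIX above
if [ -x "$HAPROXY" ]; then
    "$HAPROXY" -vv | grep -i openssl    # should print the OpenSSL build line
else
    echo "haproxy not found at $HAPROXY"
fi
```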
And replace:
bind :443 ssl crt ./haproxy.pem notlsv1
with:
bind :443 ssl crt ./haproxy.pem force-sslv3
This is because in OpenSSL, notlsv1 only disables TLSv1.0, not later versions! If you need this, you'd better download the latest snapshot from the site and use "force-sslv3" instead of "notlsv1". It forces the exclusive use of SSLv3 and does what you currently have with stud.
I'm having a problem configuring HAProxy in TCP mode to forward SSL connections to my backends.
Here's my haproxy configuration:
global
    maxconn 10000
    user haproxy
    group haproxy

defaults
    timeout connect 10s
    timeout client 30s
    timeout server 30s
    log global
    mode tcp
    maxconn 3000

backend lamacorp_ynh_web
    mode tcp
    balance roundrobin
    server lamacorp_ynh 172.20.20.2:443

backend lamacorp_ynh
    mode tcp
    balance roundrobin
    server lamacorp_ynh 172.20.20.2

backend duck
    mode tcp
    balance roundrobin
    server duck 127.0.0.1

frontend www
    mode tcp
    option tcplog
    bind 200.200.200.200:80
    bind 200.200.200.200:443
    acl lamacorp_www1 hdr(host) -i domain.com
    acl lamacorp_www2 hdr(host) -i www.domain.com
    acl lamacorp_hub hdr(host) -i hub.domain.com
    acl lamacorp_chat hdr(host) -i chat.domain.com
    acl lamacorp_cloud hdr(host) -i cloud.domain.com
    acl lamacorp_git hdr(host) -i git.domain.com
    acl lamacorp_mail hdr(host) -i mail.domain.com
    acl lamacorp_apps hdr(host) -i apps.domain.com
    acl risson_www1 hdr(host) -i domain2.com
    acl risson_www2 hdr(host) -i www.domain2.com
    use_backend duck if lamacorp_www1
    use_backend duck if lamacorp_www2
    use_backend lamacorp_ynh_web if lamacorp_hub
    use_backend lamacorp_ynh_web if lamacorp_chat
    use_backend lamacorp_ynh_web if lamacorp_cloud
    use_backend lamacorp_ynh_web if lamacorp_git
    use_backend lamacorp_ynh_web if lamacorp_mail
    use_backend lamacorp_ynh_web if lamacorp_apps
    use_backend duck if risson_www1
    use_backend duck if risson_www2

frontend mail
    mode tcp
    option tcplog
    bind 200.200.200.200:25
    bind 200.200.200.200:587
    bind 200.200.200.200:993
    default_backend lamacorp_ynh
curl -vvvkL https://172.20.20.2:443 from behind HAProxy gives a correct answer (as does curling the other backend). Both backends use self-signed certificates.
However, curl -vvvkL https://200.200.200.200:443 gives:
* TCP_NODELAY set
* Connected to domain.com (200.200.200.200) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to domain.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to domain.com:443
and curl -vvvkL http://hub.lama-corp.space:80 gives:
* TCP_NODELAY set
* Connected to hub.lama-corp.space (200.200.200.200) port 80 (#0)
> GET / HTTP/1.1
> Host: domain.com
> User-Agent: curl/7.65.3
> Accept: */*
>
* Empty reply from server
* Connection #0 to host domain.com left intact
curl: (52) Empty reply from server
The mail forwarding seems to be working correctly. SSL termination is expected to be done by the backends. Any tips on refactoring this config are also welcome.
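One thing worth noting: in mode tcp HAProxy never parses the HTTP layer, so the hdr(host) ACLs in the www frontend can never match, and with no default_backend the connection is simply closed, which would explain the empty replies. Routing TLS passthrough traffic usually inspects the SNI field of the ClientHello instead. A minimal sketch along those lines, reusing the backend names from the config above (this is an assumption about the intended fix, not a tested configuration):

```
frontend www_tls
    mode tcp
    bind 200.200.200.200:443
    # wait for the TLS ClientHello before routing
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend lamacorp_ynh_web if { req_ssl_sni -i hub.domain.com }
    default_backend lamacorp_ynh_web
```

Plain HTTP on port 80 has no SNI, so that listener would still need a separate mode http frontend to route by Host header.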
Thanks in advance,
I have a proxy pass which redirects all BE service calls to the API gateway. To debug one particular scenario, I want to proxy all URLs with base path /abc to a netcat proxy that dumps the complete request to the console.
At the moment I am using the following ProxyPass:
ProxyPass /abc/ http://localhost:8089/apigateway/api/
Whereas I am listening on port 8089 as follows:
nc -p 8089 localhost 8080
But the nc connection closes within a few seconds of running the above command. Any idea what I am doing wrong?
When I curl the URL http://localhost/abc/messaage, I see a 503 response.
The following worked for me:
sudo nc -l localhost 8089 < abc.txt | tee -a in | nc localhost 8080 | tee -a out.html > def.txt
nc listens on port 8089 (httpd forwards everything to 8089) and then forwards each request to port 8080 (the actual API gateway). In the middle, it dumps the request and the response to separate files.
I am trying to develop a proxy server that will log clients into my application automatically, without a password, by setting cookies on the session in the proxy.
That is, if a user accesses the site through the proxy, they are logged in automatically via a cookie set on that traffic.
I have already installed and configured Squid on CentOS 6.8 x64.
After setting everything up and using
request_header_add Cookie name=value
in /etc/squid/squid.conf,
the cookie is set on all HTTP traffic, but my application uses HTTPS.
So I tried to set up OpenSSL, ssl-bump, and everything related to SSL, including iptables.
This is what my squid.conf looks like:
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_port 3130
http_port 3128 intercept
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=8MB cert=/etc/squid/ssl_cert/myca.pem key=/etc/squid/ssl_cert/myca.pem
request_header_add Cookie name=value all
#always_direct allow all
ssl_bump server-first all
#sslproxy_cert_error deny all
#sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 8MB
sslcrtd_children 8 startup=1 idle=1
After researching more, I also activated iptables rules to forward packets to the proxy for interception.
iptables -t nat -A PREROUTING -p tcp -s 0.0.0.0/0 -j DNAT --to-destination 192.2xx.xx4.xx4:3128 --dport 80
iptables -t nat -A PREROUTING -p tcp -s 0.0.0.0/0 -j DNAT --to-destination 192.2xx.xx4.xx4:3129 --dport 443
The above configuration works fine on HTTP traffic without any issue,
but the Cookie header is still not added to HTTPS traffic.
My main goal is to log users into the application automatically, without them entering login details, via a cookie set on the HTTPS traffic whenever they use this proxy.
Can anyone tell me whether setting a cookie (i.e. changing a header) on HTTPS traffic can be done with Squid or not?
If it is possible, please help me find the error or what else I have to do.
Thanks in advance!
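One thing worth checking, offered as a guess since the Squid version isn't stated: request_header_add can only modify requests that Squid actually decrypts, so the ssl_bump decision has to end in a bump, not a splice. On Squid 3.5 and later, the server-first mode used above was replaced by the peek-and-splice syntax; the equivalent bump-everything configuration would look roughly like this:

```
# Peek at the TLS ClientHello in step 1, then bump (decrypt) everything.
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

With the connection bumped, `request_header_add Cookie name=value all` also applies to the decrypted HTTPS requests. Note that clients must trust the myca.pem CA certificate, or every bumped HTTPS site will show a certificate warning.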
From inside a docker container, I'm running
# openssl s_client -connect rubygems.org:443 -state -nbio 2>&1 | grep "^SSL"
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
That's all I get.
I can't connect to any HTTPS site from within the Docker container. The container is running on an OpenStack VM. The VM itself can connect via HTTPS.
Any advice?
UPDATE
root@ce239554761d:/# curl -vv https://google.com
* Rebuilt URL to: https://google.com/
* Hostname was NOT found in DNS cache
* Trying 216.58.217.46...
* Connected to google.com (216.58.217.46) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
and then it hangs.
Also, I'm getting intermittent successes now.
Sanity checks:
Changing the Docker IPs doesn't fix the problem.
The Docker containers work on my local machine.
The Docker containers work on other clouds.
Docker 1.10.0 doesn't work in the VMs.
Docker 1.9.1 works in the VMs.
I was given a solution by the Docker community.
The OpenStack network seems to use lower MTU values, and since 1.10 Docker no longer infers the MTU settings from the host's network card.
To run the Docker daemon with custom MTU settings, you can follow this blog post, which says:
$ cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
Edit a line in the new file to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// --mtu=1454
Or (as suggested below by Dionysius), create and edit the file /etc/systemd/system/docker.service.d/fixmtu.conf as follows:
[Service]
# Reset ExecStart & update mtu (see original command in /lib/systemd/system/docker.service)
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --mtu=1454
An MTU of 1454 seems to be the common value with OpenStack. You can look it up on your host using ifconfig.
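If ifconfig isn't available, sysfs exposes the same value. A quick way to list every interface's MTU (then pick the interface your containers actually route through; which one that is depends on your setup):

```shell
# List each network interface with its MTU, read from sysfs.
for iface in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$iface")" "$(cat "$iface/mtu")"
done
```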
Finally restart Docker:
$ sudo systemctl daemon-reload
$ sudo service docker restart
I need some help with my HAProxy config. I am currently using HAProxy 1.5 to access geoblocked websites by reverse-proxying them through altered DNS entries
(https://github.com/trick77/tunlr-style-dns-unblocking).
Now I've stumbled upon a problem, as I have to proxy many subdomains of one server (let's say abc.xyz.com, def.xyz.com, ...).
Is it possible to create a wildcard in my config, using something like *.xyz.com, that actually works with SNI for all subdomains of this domain?
Thank you very much in advance!
global
    daemon
    maxconn 200
    user haproxy
    group haproxy
    stats socket /var/run/haproxy.sock mode 0600 level admin
    log /dev/log local0 debug
    pidfile /var/run/haproxy.pid
    spread-checks 5

defaults
    maxconn 195
    log global
    mode http
    option httplog
    option abortonclose
    option http-server-close
    option persist
    option accept-invalid-http-response
    timeout connect 20s
    timeout server 120s
    timeout client 120s
    timeout check 10s
    retries 3

# catchall ------------------------------------------------------------------------
frontend f_catchall
    mode http
    bind *:80
    log global
    option httplog
    option accept-invalid-http-request
    capture request header Host len 50
    capture request header User-Agent len 150
    #--- xyz.com
    use_backend b_catchall if { hdr(host) -i abc.xyz.com }
    default_backend b_deadend

backend b_catchall
    log global
    mode http
    option httplog
    option http-server-close
    #--- xyz.com
    use-server abc.xyz.com if { hdr(host) -i abc.xyz.com }
    server abc.xyz.com abc.xyz.com:80 check inter 10s fastinter 2s downinter 2s fall 1800

frontend f_catchall_sni
    bind *:443
    mode tcp
    log global
    option tcplog
    no option http-server-close
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    #--- abc
    use_backend b_catchall_sni if { req_ssl_sni -i abc.xyz.com }
    default_backend b_deadend_sni

backend b_catchall_sni
    log global
    option tcplog
    mode tcp
    no option http-server-close
    no option accept-invalid-http-response
    #--- xyz.com
    use-server abc.xyz.com if { req_ssl_sni -i abc.xyz.com }
    server abc.xyz.com abc.xyz.com:443 check inter 10s fastinter 2s downinter 2s fall 1800

# deadend ------------------------------------------------------------------------
backend b_deadend
    mode http
    log global
    option httplog

backend b_deadend_sni
    mode tcp
    log global
    option tcplog
    no option accept-invalid-http-response
    no option http-server-close
I finally found a solution. It wasn't in the documentation, though.
Use -m end instead of -i for wildcard matching:
if { req.ssl_sni -m end .abc.xyz.com }
You can use ssl_fc_sni. Something like this works:
use_backend api if { ssl_fc_sni api.example.com }
use_backend web if { ssl_fc_sni app.example.com }
If someone is struggling with why it's not working: I was missing this on the frontend:
tcp-request inspect-delay 3s
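Putting the suffix match together with the original f_catchall_sni frontend, a wildcard passthrough setup might look like this (a sketch based on the answers above; domain names follow the question, and the inspect-delay value is the one from the original config):

```
frontend f_catchall_sni
    bind *:443
    mode tcp
    # wait for the TLS ClientHello so the SNI is available for routing
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # -m end gives a suffix match, so any subdomain of xyz.com is caught
    use_backend b_catchall_sni if { req.ssl_sni -m end .xyz.com }
    default_backend b_deadend_sni
```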