HAProxy - SSL SNI inconvenience

I ran into an inconvenience with HAProxy 1.5 when trying to configure SSL SNI.
Here is a fragment of my haproxy configuration: pastebin
I would like to pass the client IP to the backend. No matter how I configure reqadd / set-header for X-Forwarded-For / Real-IP, I always get HAProxy's own IP address in X-Forwarded-For.
Has anyone managed to pass the real client IP with SSL SNI on HAProxy? :/

From this configuration, you seem to be doing SNI-sniffing, yet all of the backends are looping back to HAProxy itself... which is not a case where SNI-sniffing is required. Perhaps I'm overlooking something else that would require this.
It should be apparent why you are getting the proxy's IP in X-Forwarded-For -- HAProxy is talking to itself. On the second pass, as far as the proxy can determine, the incoming connection from the first pass is the client connection, because only the second pass speaks HTTP. All it sees is that an incoming TCP connection has arrived... from itself.
The solution is for the first-pass backend to pass the original client information using the Proxy Protocol and the second-pass frontend to decode it.
Add accept-proxy to the bind lines for the second-pass frontends, and add send-proxy to the server lines on the first-pass backends. This way, on the connection where HAProxy is talking to itself, the first-pass backend will send the Proxy protocol preamble and the second-pass frontend will decode the incoming value and place it in X-Forwarded-For.
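As a rough illustration (frontend/backend names, ports and certificate paths here are placeholders, not taken from the pastebin configuration), the relevant pieces would look something like this:
frontend ft_sni
    bind :443
    mode tcp
    # SNI inspection and routing rules omitted; this frontend only passes raw TCP through
    default_backend bk_loop_site1
backend bk_loop_site1
    mode tcp
    # send-proxy prepends the Proxy Protocol preamble carrying the real client address
    server loop1 127.0.0.1:8443 send-proxy
frontend ft_site1
    # accept-proxy decodes the preamble, so 'src' is the original client again
    bind 127.0.0.1:8443 accept-proxy ssl crt /etc/haproxy/site1.pem
    mode http
    # insert the (now correct) client address into X-Forwarded-For
    option forwardfor
    default_backend bk_app
backend bk_app
    mode http
    server app1 192.0.2.10:8080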

Related

How to know at the app level whether SSL was established at the HAProxy level?

I have an HAProxy instance that is available from the web, and redirects incoming requests to my local app.
The communication between the client and my HAProxy can be secured via SSL, but not necessarily, and I need to know at my application level whether the communication is secure or not.
Unfortunately, from my understanding, the traffic my app receives has already been "decrypted", since HAProxy handles the "SSL wrapping".
Is there a way to know for sure whether the client is using SSL/TLS or not?
Thank you in advance.
Update: the communication at HAProxy is not HTTP but TCP.
This is the purpose of the X-Forwarded-Proto header.
You can make HAProxy insert that header whenever SSL traffic was decrypted by HAProxy:
http-request set-header X-Forwarded-Proto https if { ssl_fc }
Then your application just has to check the X-Forwarded-Proto header.
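For context, a minimal sketch of where that line would sit in an HTTP-mode frontend (frontend name, ports and certificate path are placeholders):
frontend ft_app
    bind :80
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    # ssl_fc is true only when the client connection to this frontend was made over SSL/TLS
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    default_backend bk_app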

How is TLS termination implemented in AWS NLB?

AWS NLB supports TLS termination
https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
NLB being a Layer 4 load balancer, I would expect it to work in a passthrough mode, directing incoming packets to one of the backends without maintaining much state (apart from flow tracking).
Are there any details available on how AWS implements TLS termination in NLB?
Is it possible to do the same with open source tooling (like IPVS or HAProxy), or does AWS have some secret sauce here?
The TLS termination itself is just what it says it is. TLS is a generic streaming protocol, one level up from TCP, so you can unwrap it at the load balancer in a generic way. The magic is that they keep the client IPs intact, probably with some very fancy routing, but it seems unlikely that AWS will tell you how they did it.
In my SO question here, I have an example of how to terminate TLS in HAProxy and pass the unencrypted traffic to a backend.
In short, you need to use ssl on the frontend bind line, and both the frontend and backend configurations must use tcp mode. Here is an example that terminates on port 443 and forwards to port 4567.
frontend tcp-proxy
    bind :443 ssl crt combined-cert-key.pem
    mode tcp
    default_backend bk_default
backend bk_default
    mode tcp
    server server1 1.2.3.4:4567
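A quick way to sanity-check such a setup (hostname is a placeholder) is to connect with openssl and confirm that HAProxy answers the handshake while the backend on port 4567 only ever sees plain TCP:
openssl s_client -connect load-balancer.example.com:443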

How to set up SSL bumping for content adaptation

I need to setup a reverse proxy which intercepts HTTPS requests, decrypts them, performs body adaptation and finally forwards the re-encrypted request.
I'm currently using Squid, which provides support for eCAP plugins and SSL bumping: http://wiki.squid-cache.org/Features/SslBump
If I understood correctly, by configuring SSL bumping I can do exactly what I described above. However, SSL bumping is not working so far.
Here is my Squid configuration:
https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
http_port 3128 ssl-bump cert=/etc/squid/cert.pem key=/etc/squid/key.pem dynamic_cert_mem_cache_size=4MB generate-host-certificates=on
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS
#always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
Client-side, when trying to send a request to https://127.0.0.1:8080 I'm getting the following error:
Connection reset by peer
This happens when the destination server is running HTTPS. It looks like Squid is trying to establish a plain HTTP connection instead of an HTTPS one; indeed, on the server side I'm getting an SSL23_GET_CLIENT_HELLO error.
Is there anything wrong in my configuration? Is there anything I missed in how SSL bump works?
I dug into the problem and here is what I found:
1) the ssl-bump option is not needed
2) the problem was that the ssl option was missing from the following line:
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl
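Putting the two findings together, the changed lines relative to the configuration in the question would look roughly like this (the dynamic-certificate options were only relevant to bumping, so they are dropped here as well):
http_port 3128 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
# 'ssl' tells Squid to speak HTTPS to the origin server instead of plain HTTP
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl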

How to provide TCP/SSL support on the same port

Let's say you open a TCP socket on port 80 to handle HTTP requests, and an SSL socket on port 443 to deal with HTTPS... how can a proxy provide access to both of them on the same port?
I found only this link, but it wasn't very useful. Can you provide an Erlang example or suggest some resources from which I can learn more on the topic?
Thanks in advance
how can a proxy provide access to both of them on the same port?
By implementing the HTTP CONNECT method, the (non-transparent) proxy may switch to providing a TCP tunnel over which a browser may, for example, access an HTTPS resource.
A rather sparse specification:
https://www.rfc-editor.org/rfc/rfc2616#section-9.9
As outlined in the link you provided, you will need to write your own custom server that sniffs the incoming request and then hands it off to the correct protocol handler accordingly.
As http://www.faqs.org/rfcs/rfc2818.html indicates, an HTTP session starts with an initial request line (e.g. GET /), whereas a TLS session starts with a ClientHello (more on the TLS handshake on Wikipedia).
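As a rough sketch of that first-byte check (in Python rather than Erlang, and with hypothetical names and ports): a TLS ClientHello arrives as a handshake record whose first byte is 0x16, while an HTTP request line starts with a printable ASCII method.
import socket

LISTEN_PORT = 8443  # placeholder port

def classify(first_byte):
    # 0x16 is the TLS handshake record type; anything else is assumed to be plain HTTP here
    return "tls" if first_byte == 0x16 else "http"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", LISTEN_PORT))
srv.listen(16)
while True:
    conn, addr = srv.accept()
    # MSG_PEEK inspects the first byte without consuming it, so the real
    # HTTP or TLS handler can still read the full request afterwards
    first = conn.recv(1, socket.MSG_PEEK)
    if first:
        print(addr, "speaks", classify(first[0]))
    conn.close()  # a real server would hand the socket to the matching handler instead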
There are lots of resources online about writing servers in Erlang, e.g. How to write a simple webserver in Erlang?
Incidentally, your terminology is a little off: HTTP, HTTPS, SSL and TLS are all protocols, and all of them operate (over the web) on top of TCP sockets.

CONNECT request to a forward HTTP proxy over an SSL connection?

I am writing an HTTP proxy and I am having trouble understanding some details of making a CONNECT request over TLS. To get a better picture, I am experimenting with Apache to observe how it interacts with clients. This is from my default virtual host.
NameVirtualHost *:443
<VirtualHost *:443>
ServerName example.com
DocumentRoot htdocs/example.com
ProxyRequests On
AllowConnect 22
SSLEngine on
SSLCertificateFile /root/ssl/example.com-startssl.pem
SSLCertificateKeyFile /root/ssl/example.com-startssl.key
SSLCertificateChainFile /root/ssl/sub.class1.server.ca.pem
SSLStrictSNIVHostCheck off
</VirtualHost>
The conversation between Apache and my client goes like this.
a. client connects to example.com:443 and sends example.com in the TLS handshake.
b. client sends HTTP request.
CONNECT 192.168.1.1:22 HTTP/1.1
Host: example.com
Proxy-Connection: Keep-Alive
c. Apache says HTTP/1.1 400 Bad Request. The Apache error log says
Hostname example.com provided via SNI and hostname 192.168.1.1
provided via HTTP are different.
It appears that Apache does not look at the Host header other than to check that it is present, since HTTP/1.1 requires it. I get the same failure if the client sends Host: foo. If I make the HTTP request to example.com:80 without TLS, then Apache will connect me to 192.168.1.1:22.
I don't completely understand this behavior. Is there something wrong with the CONNECT request? I can't seem to locate the relevant parts of the RFCs that explain all this.
It's not clear whether you're trying to use Apache Httpd as a proxy server; that would explain the 400 status code you're getting.
CONNECT is used by the client, and sent to the proxy server (possibly Apache Httpd, but usually not), not to the destination web server.
CONNECT is used between the client and the proxy server before establishing the TLS connection between the client and the end server. The client (C) connects to the proxy (P) proxy.example.com and sends this request (including blank line):
C->P: CONNECT www.example.com:443 HTTP/1.1
C->P: Host: www.example.com:443
C->P:
The proxy opens a TCP connection to www.example.com:443 (P-S) and responds to the client with a 200 status code, accepting the request:
P->C: 200 OK
P->C:
After this, the connection between the client and the proxy (C-P) is kept open, and the proxy relays everything on it to and from the P-S connection. The client then initiates the SSL/TLS handshake over this tunnel; since everything is now relayed to the server, it's as if the TLS exchange was done directly with www.example.com:443.
The proxy doesn't play any role in the handshake (and thus with SNI). The TLS handshake effectively happens directly between the client and the end server.
If you're writing a proxy server, all you need to do to allow your clients to connect to HTTPS servers is read the CONNECT request, open a connection from the proxy to the end server (the one given in the CONNECT request), reply to the client with a 200 status, and then forward everything you read from the client to the server, and vice versa.
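A minimal sketch of that loop in Python (hypothetical names, blocking I/O, no authentication or error handling) could look like this:
import socket
import threading

def pipe(src, dst):
    # relay bytes in one direction until either side closes the connection
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle_client(client):
    # read the request line, e.g. "CONNECT www.example.com:443 HTTP/1.1"
    request = client.recv(4096).decode("latin-1")
    host, _, port = request.split()[1].rpartition(":")
    upstream = socket.create_connection((host, int(port)))
    client.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")
    # from here on the proxy is a dumb tunnel; the TLS handshake passes through untouched
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 3128))  # placeholder listening port
srv.listen(8)
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()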
RFC 2616 treats CONNECT as a way to establish a simple tunnel (which it is). There is more about it in RFC 2817, although the rest of RFC 2817 (upgrading to TLS within a non-proxy HTTP connection) is rarely used.
It looks like what you're trying to do is have the connection between the client (C) and the proxy (P) go over TLS. That's fine, but the client won't use CONNECT to connect to external web servers (unless it's connecting to an HTTPS server too).
You're doing everything right. It's Apache that got things wrong. Support for CONNECT over TLS was only added recently (https://issues.apache.org/bugzilla/show_bug.cgi?id=29744) and there are still some things to be ironed out. The issue you're hitting is one of them.
From RFC 2616 (section 14.23):
The Host request-header field specifies the Internet host and port
number of the resource being requested, as obtained from the original
URI given by the user or referring resource (generally an HTTP URL,
as described in section 3.2.2). The Host field value MUST represent
the naming authority of the origin server or gateway given by the
original URL.
My understanding is that you need to copy the address from the CONNECT line to the Host line. All in all, the address of the resource is 192.168.1.1, and the fact that you are connecting via example.com doesn't change anything from the RFC's point of view.
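Applied to the request from the question, that reading would mean sending something like:
CONNECT 192.168.1.1:22 HTTP/1.1
Host: 192.168.1.1:22
Proxy-Connection: Keep-Alive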
It is quite rare to see the CONNECT method used inside TLS (i.e. over an HTTPS connection to the proxy). I actually don't know of any client that does that (and I would be interested to hear of one, because I think it is actually a good feature).
Normally the client connects to the proxy over plain TCP (HTTP), sends the CONNECT method (and Host header) for host:443, the proxy makes a transparent connection to the endpoint, and the client then sends the SSL handshake through it.
In this scenario the data is SSL-protected "end to end".
The CONNECT method is not really specified; it is only reserved in the HTTP RFC. But it is typically quite simple, so it is interoperable. The method line specifies host[:port]; the Host: header can simply be ignored. Some additional proxy authentication headers might be needed. Once the body of the connection begins, the proxy no longer has to do any parsing (some do anyway, because they check for a valid SSL handshake).