How is TLS termination implemented in AWS NLB?

AWS NLB supports TLS termination
https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
Since NLB is a Layer 4 load balancer, I would expect it to work in passthrough mode, directing incoming packets to one of the backends without maintaining much state (apart from flow tracking).
Are there any details available on how AWS implements TLS termination in NLB?
Is it possible to do the same with open source tooling (like IPVS or HAProxy), or does AWS have some secret sauce here?

The TLS termination itself is just what it says it is. TLS is a generic streaming protocol, sitting one level above TCP, so you can unwrap it at the load balancer in a generic way. The magic is that they keep the client IPs intact, probably with some very fancy routing, but it seems unlikely AWS will tell you how they did it.
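To make the "unwrap it in a generic way" part concrete, here is a rough Python sketch of generic TLS termination. This is only an illustration of the idea, not how AWS implements it; the certificate file, listen port and backend address are placeholders.

import socket
import ssl
import threading

BACKEND = ("127.0.0.1", 8080)  # plaintext backend (placeholder)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("combined-cert-key.pem")  # certificate + key in one PEM file

def relay(src, dst):
    # Copy bytes one way until the source closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass

def handle(client):
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=relay, args=(backend, client), daemon=True).start()
    relay(client, backend)  # client -> backend carries the decrypted bytes

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 443))
listener.listen(5)

while True:
    conn, _ = listener.accept()
    try:
        tls_conn = ctx.wrap_socket(conn, server_side=True)  # TLS handshake happens here
    except ssl.SSLError:
        conn.close()
        continue
    threading.Thread(target=handle, args=(tls_conn,), daemon=True).start()

The point is simply that once the handshake is done, the proxy sees a plain byte stream and can forward it over ordinary TCP; the hard part NLB solves is doing this at scale while preserving source IPs.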

In my SO question here, I have an example of how to terminate a TLS session in HAProxy and pass the unencrypted traffic to a backend.
In short, you need to add ssl to the frontend bind line, and both the frontend and backend need to run in tcp mode. Here is an example of terminating on port 443 and forwarding to port 4567.
frontend tcp-proxy
    bind :443 ssl crt combined-cert-key.pem   # TLS is terminated here
    mode tcp
    default_backend bk_default

backend bk_default
    mode tcp
    server server1 1.2.3.4:4567               # receives the decrypted stream

Related

HAProxy TCP (443) Loadbalancing with different backend ports

I'm implementing a frontend load balancer which passes the traffic coming in on ports 80 and 443 through to different backend ports. SSL termination happens in the backend, and HAProxy should do nothing other than forward the traffic arriving on frontend ports 80 and 443 to the respective backend ports.
Port 80 forwarding works fine, but 443 is not working as expected and gives an SSL handshake failure. My backend service also does not come up in the web browser; it shows a warning saying the site is not trusted. I have no clue why this is happening, my HAProxy experience is limited, and below is the current configuration. Please correct me if I'm wrong.
HAProxy is installed on Ubuntu 18.04.5 LTS
Config after the defaults section
frontend k8s_lb
    mode tcp
    bind x.x.x.x:80
    default_backend kube_minions

frontend k8s_lb_https
    mode tcp
    bind x.x.x.x:443
    default_backend kube_minions_https

backend kube_minions
    mode tcp
    balance roundrobin
    server k8s_worker-01 x.x.x.x:32080
    server k8s_worker-02 x.x.x.x:32080

backend kube_minions_https
    mode tcp
    balance roundrobin
    server k8s_worker-01 x.x.x.x:32443
    server k8s_worker-02 x.x.x.x:32443
The backend story:
I have a k8s cluster with a Traefik ingress running as a DaemonSet on every node, and the minions are my backend servers. cert-manager is in place to automate certificates via the Let's Encrypt ACME protocol in the ingress resources, so SSL termination should happen at the ingress.
The certificates are in place and everything seems fine; I have already implemented a similar setup on AWS with a TCP load balancer, and it is working perfectly and running prod workloads.
So the backend services are all good and up and running. Here I replaced the AWS load balancer with HAProxy and need to implement the same behaviour.
Please help me fix this, as I'm struggling with it and still have no luck with the issue.
Thank you.
Sorry, I was able to figure it out, and it had nothing to do with Traefik or HAProxy. My client's DNS is configured in Cloudflare, and Universal SSL was enabled there, which caused the issue.
I checked with a new DNS record in Route 53 and it works as expected, so my HAProxy config does what I need.

Connect to RabbitMQ via URL

I'm trying to connect to RabbitMQ, which is hidden behind an nginx proxy. It's declared as:
location ^~ /rabbitmq/ {
    proxy_pass http://127.0.0.1:5672/;
}
The problem is that, as far as I can tell, AMQP only specifies a host; it doesn't know anything about URL paths.
Can I connect a RabbitMQ client to www.myserver.com/rabbitmq somehow? I'm using EasyNetQ to connect, but it looks like a protocol limitation, so the client implementation shouldn't matter.
If it's not possible at all, maybe there are some workarounds?
For AMQP, if you are using nginx, TCP load balancing could help: https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/; otherwise, if you can use HAProxy, you could try something like this:
frontend rabbitmq
    mode tcp
    bind *:5672
    use_backend bunny

backend bunny
    mode tcp
    server bunny 127.0.0.1:5672 check
If you want to publish messages over HTTP, you probably want to expose the RabbitMQ HTTP API instead:
http://localhost:15672/api/index.html
Notice the port 15672, from the docs:
Note that the UI and HTTP API port — typically 15672 — does not support AMQP 0-9-1, AMQP 1.0, STOMP or MQTT connections. Separate ports should be used by those clients.
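For illustration, here is a rough Python sketch of publishing one message through the management HTTP API. The host, credentials, exchange and routing key are placeholders; the API port (15672) would be what you proxy or expose, not 5672.

import base64
import json
import urllib.request

# Publish one message via the RabbitMQ management HTTP API (not AMQP).
# "%2F" is the URL-encoded default vhost "/"; "amq.default" addresses the
# default exchange, so the routing key is simply the queue name.
url = "http://localhost:15672/api/exchanges/%2F/amq.default/publish"
body = json.dumps({
    "properties": {},
    "routing_key": "my_queue",
    "payload": "hello",
    "payload_encoding": "string",
}).encode()

req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization", "Basic " + base64.b64encode(b"guest:guest").decode())

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # e.g. {"routed": true}

This is only suitable for low-volume or test traffic; for real workloads keep AMQP on its own TCP port, as in the HAProxy example above.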

haproxy reverse ssl termination

How can I achieve reverse SSL termination with HAProxy?
From my backend, via HAProxy, I need to reach an HTTPS-enabled web service. How can I successfully proxy all traffic to that service via HAProxy?
The configuration below results in: Unable to communicate securely with peer: requested domain name does not match the server's certificate.
frontend foofront
    bind 127.0.0.1:443
    mode tcp
    default_backend foo

backend fooback
    mode tcp
    balance leastconn
    server foo foo.bar.com:443 check
With HAProxy you usually have two options for handling TLS-related scenarios: TLS Passthrough and TLS Termination.
TLS Passthrough
Looks like you're trying to do this in the example you gave.
In this mode, HAProxy does not touch the traffic in any way; it just forwards it to the backend. When TLS is involved, that means the backend has to have a proper certificate for the domain it is accessed through: if your HAProxy is handling traffic for myexample.com, the backend servers will need appropriate certificates for myexample.com installed.
You can always check which certificate is served by using openssl s_client:
openssl s_client -connect localhost:443
TLS Termination
Alternatively, you can terminate TLS traffic on HAProxy itself. This allows you to use any backend (encrypted or unencrypted). In this case, HAProxy itself decrypts traffic for myexample.com and forwards it to the backend.
In your case, the configuration would look something like this:
frontend foofront
    bind 127.0.0.1:80
    bind 127.0.0.1:443 ssl crt /path/to/cert/for/myexample.com
    mode tcp
    default_backend foo

backend foo
    mode tcp
    balance leastconn
    server foo foo.bar.com:443 check ssl verify none # or "verify required" with a ca-file to enforce certificate checking
You can find more info on both approaches here.
Hope this helps.

WebSockets: wss from client to Amazon AWS EC2 instance through ELB

How can I connect over ssl to a websocket served by GlassFish on an Amazon AWS EC2 instance through an ELB?
I am using Tyrus 1.8.1 in GlassFish 4.1 b13 pre-release as my websocket implementation.
Port 8080 is unsecured, and port 8181 is secured with ssl.
ELB dns name: elb.xyz.com
EC2 dns name: ec2.xyz.com
websocket path: /web/socket
I have successfully used both ws & wss to connect directly to my EC2 instance (bypassing my ELB). i.e. both of the following urls work:
ws://ec2.xyz.com:8080/web/socket
wss://ec2.xyz.com:8181/web/socket
I have successfully used ws (non-ssl) over my ELB by using a tcp 80 > tcp 8080 listener. i.e. the following url works:
ws://elb.xyz.com:80/web/socket
I have not, however, been able to find a way to use wss through my ELB.
I have tried many things.
I assume that the most likely way of getting wss to work through my ELB would be to create a tcp 8181 > tcp 8181 listener on my ELB with proxy protocol enabled and use the following url:
wss://elb.xyz.com:8181/web/socket
Unfortunately, that does not work. I guess I might have to enable the proxy protocol on GlassFish, but I haven't been able to find out how to do that (or whether it's possible, or whether it's even necessary for wss to work through my ELB).
Another option might be to somehow have ws or wss run over an SSL connection that's terminated on the ELB and continues unsecured to GlassFish, using an SSL > TCP 8080 listener. That didn't work for me either, but maybe some setting was incorrect.
Does anyone have any modifications to my two attempts above, or any other suggestions?
Thanks.
I had a similar setup and originally configured my ELB listeners as follows:
Load balancer protocol/port -> instance protocol/port:
HTTP 80 -> HTTP 80
HTTPS 443 -> HTTPS 443
Although this worked fine for the website itself, the websocket connection failed. In the listeners, you need to use SSL (Secure TCP) rather than HTTPS so that wss can pass through as well:
HTTP 80 -> HTTP 80
SSL (Secure TCP) 443 -> SSL (Secure TCP) 443
I would also recommend raising the Idle timeout of the ELB.
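If you prefer scripting the change, here is a rough boto3 sketch of swapping the HTTPS listener for an SSL (Secure TCP) one. The load balancer name and certificate ARN are placeholders; the same change can be made in the console.

import boto3

elb = boto3.client("elb")  # classic ELB API

# Remove the existing HTTPS listener on 443...
elb.delete_load_balancer_listeners(
    LoadBalancerName="my-load-balancer",
    LoadBalancerPorts=[443],
)

# ...and recreate it as SSL (secure TCP) so WebSocket upgrades pass through.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-load-balancer",
    Listeners=[{
        "Protocol": "SSL",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "SSL",
        "InstancePort": 443,
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
    }],
)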
I recently enabled wss between my browser and an EC2 Node.js instance.
There were 2 things to consider:
In the ELB listeners tab, add a row for the wss port with SSL as the load balancer protocol.
In the ELB description tab, set a higher idle timeout (under connection settings), which is 60 seconds by default. The ELB was killing the websocket connections after 1 minute; setting the idle timeout to 3600 (the maximum value) allows much longer-lived connections.
It is obviously not the ultimate solution since the timeout is still there, but 1 hour is probably good enough for what we usually do.
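For reference, a rough boto3 sketch of the idle-timeout change (the load balancer name is a placeholder; the same setting is available in the console):

import boto3

elb = boto3.client("elb")  # classic ELB API

# Raise the idle timeout so long-lived websocket connections are not
# dropped after the default 60 seconds.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",
    LoadBalancerAttributes={"ConnectionSettings": {"IdleTimeout": 3600}},
)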
Hope this helps.

how to provide tcp/ssl support on the same port

Let's say you open a TCP socket on port 80 to handle HTTP requests, and an SSL socket on port 443 to deal with HTTPS. How can some proxy provide access to both of them on the same port?
I found only this link, but it wasn't very useful. Can you provide an Erlang example or suggest some resources from which I can learn more about the topic?
Thanks in advance
How can some proxy provide access to both of them on the same port?
By implementing the HTTP CONNECT method, the (non-transparent) proxy may switch to providing a TCP tunnel over which a browser may, for example, access an HTTPS resource.
A rather sparse specification:
https://www.rfc-editor.org/rfc/rfc2616#section-9.9
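As a rough illustration (in Python; the proxy address and target host are placeholders), this is what a client sends to ask the proxy for such a tunnel:

import socket

# Ask an HTTP proxy for a raw tunnel to www.example.com:443 (RFC 2616, section 9.9).
with socket.create_connection(("proxy.example.com", 8080)) as s:
    s.sendall(b"CONNECT www.example.com:443 HTTP/1.1\r\n"
              b"Host: www.example.com:443\r\n\r\n")
    print(s.recv(4096).decode())  # expect "HTTP/1.1 200 Connection established"
    # After the 200 response the socket is an opaque byte tunnel, and the client
    # can start its TLS handshake with www.example.com over it.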
As outlined in the link you provide, you will need to write your own custom server that sniffs the request and then redirects to the correct protocol accordingly.
As http://www.faqs.org/rfcs/rfc2818.html indicates, an HTTP session will start with an Initial Request Line (e.g. GET /), whereas a TLS session will start with a ClientHello (more on the TLS session on wikipedia)
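Not Erlang, but as a sketch of that sniffing idea in Python (ports and backend addresses are placeholders): peek at the first byte of each new connection; a TLS ClientHello starts with the handshake record type 0x16, while an HTTP request starts with an ASCII method.

import socket
import threading

HTTP_BACKEND = ("127.0.0.1", 8080)  # placeholder plain-HTTP backend
TLS_BACKEND = ("127.0.0.1", 8443)   # placeholder TLS backend

def relay(src, dst):
    # One-way byte pump; run once per direction for a full tunnel.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass

def handle(client):
    first = client.recv(1, socket.MSG_PEEK)  # look at the first byte without consuming it
    target = TLS_BACKEND if first == b"\x16" else HTTP_BACKEND
    backend = socket.create_connection(target)
    threading.Thread(target=relay, args=(backend, client), daemon=True).start()
    relay(client, backend)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8000))  # single port serving both protocols
srv.listen(5)
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()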
There are lots of resources online about writing servers in Erlang, e.g. How to write a simple webserver in Erlang?
Incidentally, your terminology is slightly off: HTTP, HTTPS, SSL and TLS are all protocols, and all of them operate (over the web) on top of TCP sockets.