AWS - SSL/HTTPS on load balancer - ssl

I'm having a problem adding HTTPS to my EC2 instance, and maybe you can help me make it work.
I have a load balancer that forwards connections to my EC2 instance. I've added the SSL certificate to the load balancer and everything went fine, I've added a listener on port 443 that forwards to port 443 of my instance, and I've configured Apache to listen on both ports 443 and 80. Here is a screenshot of my load balancer:
The SSL certificate is valid and everything works on port 80 (HTTP), but if I try with HTTPS the request does not go through.
Any ideas?
Cheers

An Elastic Load Balancer cannot simply forward your HTTPS requests to the server unmodified. That is what SSL is there for: to prevent a man-in-the-middle attack (amongst other things).
The way you can get this working is the following:
configure your ELB to accept TCP 443 connections and install an SSL certificate through IAM (just like you did)
relay traffic on TCP 80 to your fleet of web servers
configure your web servers to accept traffic on TCP 80 (having SSL between the load balancer and the web servers is also supported, but not required most of the time)
configure your web servers' security group to only accept traffic from the load balancer.
(optional) make sure your web servers are running in a private subnet, i.e. with only private IP addresses and no route to the Internet Gateway
If you really need an end-to-end SSL tunnel between your client and your backend servers (for example, to perform client-side SSL authentication), then you'll have to configure your load balancer in TCP mode, not in HTTP mode (see Support for two-way TLS/HTTPS with ELB for more details).
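For reference, here is a rough AWS CLI sketch of the 443-terminating listener described above; the load balancer name and certificate ARN are hypothetical placeholders:
# Classic ELB: terminate SSL on the ELB and forward plain HTTP to the instances on port 80.
aws elb create-load-balancer-listeners \
    --load-balancer-name my-load-balancer \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"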
More details:
SSL Load Balancers : http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_SettingUpLoadBalancerHTTPS.html
Load Balancers in VPC :
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/UserScenariosForVPC.html

Do you have an HTTPS listener on your EC2 instance? If not, your instance port should be 80 for both load balancer listeners.

Related

HAProxy TCP (443) Loadbalancing with different backend ports

I'm implementing a frontend load balancer that passes the traffic coming in on ports 80 and 443 through to different backend ports. SSL termination happens in the backend, and HAProxy should not do anything other than forward the traffic arriving on frontend ports 80 and 443 to the respective backend ports.
Port 80 forwarding seems fine, but 443 is not working as expected and gives an SSL handshake failure. My backend service also does not come up in the web browser, which shows a warning saying it is not trusted. I have no clue why this is happening, my HAProxy experience is limited, and the current configuration is below. Please correct me if I'm wrong.
HAProxy is installed on Ubuntu 18.04.5 LTS
Config after the defaults section
# Plain HTTP passthrough: TCP mode, HAProxy does not touch the traffic
frontend k8s_lb
    mode tcp
    bind x.x.x.x:80
    default_backend kube_minions

# HTTPS passthrough: still TCP mode, TLS is terminated by the Traefik ingress on the nodes
frontend k8s_lb_https
    mode tcp
    bind x.x.x.x:443
    default_backend kube_minions_https

backend kube_minions
    mode tcp
    balance roundrobin
    server k8s_worker-01 x.x.x.x:32080
    server k8s_worker-02 x.x.x.x:32080

backend kube_minions_https
    mode tcp
    balance roundrobin
    server k8s_worker-01 x.x.x.x:32443
    server k8s_worker-02 x.x.x.x:32443
The backend story:
I have a k8s cluster with a Traefik ingress running as a DaemonSet on each and every node, and the minions are my backend servers. cert-manager is in place to automate certificates via the Let's Encrypt ACME protocol in the ingress resources, so SSL termination should happen at the ingress.
The certificates are in place and everything seems fine: I have already implemented a similar setup on AWS with a TCP load balancer and it is running prod workloads without problems.
So, to be clear, the backend services are all good and up and running. Here I have simply replaced the AWS load balancer with HAProxy and need to implement the same behaviour.
Please help me fix this, as I'm struggling with it and have had no luck so far.
Thank you.
Sorry, I was able to figure it out, and the SSL issue had nothing to do with Traefik or HAProxy. My client's DNS is hosted at Cloudflare, and they had enabled Universal SSL, which caused the issue.
I checked with a new DNS record in Route 53 and it works as expected, so my HAProxy config does what I need.
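For anyone debugging something similar, a quick way to see which certificate the client actually receives (Cloudflare's edge certificate or the one issued via cert-manager) is an openssl check; the hostname is a placeholder:
# Print the issuer and subject of the certificate presented on port 443
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject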

How to deliver websocket traffic from GCP load balancer to websocket server

Currently my WebSocket traffic goes from the GCP load balancer to nginx and then to the WebSocket server. I'm planning to remove one hop by dropping nginx. How do I configure my WebSocket port (a reserved port) on the GCP load balancer so that the WebSocket traffic comes straight from the load balancer?
Does the GCP load balancer support libwebsocket?
Can I configure my own port on the GCP load balancer (other than 443/80)?
I created a load balancer setup today that supports WebSockets, using the newer SSL Proxy (TCP) load balancer from GCP.
Here's how (a rough gcloud sketch follows below):
You need to use SSL on the frontend configuration with your SSL certificate.
Then you need a TCP backend configuration pointing to your instance group and the correct WebSocket port on your server.
You need to have session affinity enabled on the backend configuration.
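A hedged gcloud sketch of that setup, assuming an existing instance group and SSL certificate resource; all names, the zone, the named port, and the backend port are hypothetical:
# TCP health check on the (hypothetical) WebSocket port
gcloud compute health-checks create tcp ws-health-check --port 32443
# Global TCP backend service with session affinity; the instance group must expose a named port matching --port-name
gcloud compute backend-services create ws-backend --global --protocol TCP \
    --port-name websocket --health-checks ws-health-check --session-affinity CLIENT_IP
gcloud compute backend-services add-backend ws-backend --global \
    --instance-group my-ws-group --instance-group-zone us-central1-a
# TLS is terminated at the proxy with an existing SSL certificate resource
gcloud compute target-ssl-proxies create ws-ssl-proxy \
    --backend-service ws-backend --ssl-certificates my-cert
# Frontend: SSL proxy load balancers only accept a fixed set of ports, 443 among them
gcloud compute forwarding-rules create ws-rule --global --target-ssl-proxy ws-ssl-proxy --ports 443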

cloudflare - ssl error

I'm trying to get Cloudflare to work with my website.
I have my website running on port 80 and my API on port 8443.
My proxy doesn't have an SSL certificate; I rely only on the one at Cloudflare.
I have set SSL to Flexible.
I can access my website, but when I make an API call to my API on port 8443, I get the following message: CloudFlare is unable to establish an SSL connection to the origin server.
Do I need to have a certificate on my proxy for the API?
Thanks for your help.
It sounds like you're using Cloudflare's Flexible SSL option, whereby traffic is unencrypted between Cloudflare and the origin web server (but encrypted from Cloudflare's edge to the end user).
This setting only works for port 443 -> 80, not for the other ports Cloudflare supports, like 2053 (or 8443 in your case).
If you want to serve SSL traffic through a port other than 443, you will need to ensure your web server is configured to work with Cloudflare in either Full or Full (Strict) SSL mode.
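Before switching modes, it is worth confirming that the origin actually answers TLS on that port; a hedged curl sketch, where the IP and hostname are placeholders:
# Does the origin speak TLS on 8443 at all? (-k skips certificate validation)
curl -vk https://203.0.113.10:8443/
# For Full (Strict), the certificate must also be valid for the hostname:
curl -v --resolve api.example.com:8443:203.0.113.10 https://api.example.com:8443/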
For more info:
What do the SSL options mean?

Can't get https working on Elastic Load Balancer (AWS)

I have a load balancer in front of an EC2-Classic instance. I have checked that the load balancer is working properly by browsing directly to the DNS Name value listed in the Description tab for my load balancer: this gives me the main page of the site that lives on the EC2 instance, so the load balancer is working. My load balancer and my EC2 instance are in the same availability zone.
My load balancer has an SSL certificate set up, and I have two listeners configured to forward HTTP (port 80) and HTTPS (port 443) to instance port 80 as HTTP. My EC2 instance has a security group set to accept HTTP and HTTPS with protocol TCP on ports 80 and 443 respectively, although my understanding is that only port 80 would actually be used, right? The certificate data is in PEM format. I have also added to my instance's security group a custom TCP rule on port range 0 - 65535 for amazon-elb/amazon-elb-sg. This did nothing.
I can access my site using HTTP just fine. If I try to access it using HTTPS, I get Error code: ERR_CONNECTION_REFUSED in Chrome and Unable to Connect in Firefox.
I have checked similar posts on this question and nothing seems to help.
Any help or ideas would be greatly appreciated. Thanks
Have you made sure that the ELB is in a security group that allows https on port 443?
I had a similar problem with both the classic and the newer load balancer. The thing that was missing for me is that the HTTPS-to-HTTP translation only works AFTER you create an A record in DNS for the domain your SSL certificate covers, ALIASED to the load balancer you just created. Once I did that, all was well through that new A record. Your instance doesn't need to accept port 443, and your LB definitely should not be forwarding over 443.
Hopefully it is something straightforward like this for you as well.
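If it helps, creating that alias record can also be scripted; a rough AWS CLI sketch where the hosted zone IDs, domain, and ELB DNS name are placeholders (note that the AliasTarget hosted zone ID must be the ELB's own region-specific zone ID, not your domain's):
aws route53 change-resource-record-sets --hosted-zone-id Z_MY_DOMAIN_ZONE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z_ELB_REGION_ZONE",
        "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'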
Wait, what SSL certificate in PEM format? I used an Amazon SSL certificate I just got from the dropdown. Are you sure you used an SSL certificate?
From your description it looks like you may not be following Step 6 of Amazon's "Elastic Load Balancing in Amazon EC2-Classic -> Create HTTPS/SSL Load Balancer
Using the AWS Management Console -> Configure Listeners" guide.
There, it says that you should configure "HTTPS (...) in the Load Balancer Protocol [and] HTTPS (Secure HTTP) (...) in the Instance Protocol box", whereas in your configuration you are forwarding the ELB's port 443 to port 80 on the instance.
For further reference, this is the guide that I'm talking about: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/configure-https-listener.html
Also, check if your SSL certificate is well built according to the rules specified here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
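As a quick hedged check that the PEM files uploaded to IAM actually belong together, you can compare the certificate and key moduli (for an RSA key; the file names are placeholders):
# The two digests should match if the certificate and private key are a pair
openssl x509 -noout -modulus -in server-cert.pem | openssl md5
openssl rsa -noout -modulus -in private-key.pem | openssl md5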

WebSockets: wss from client to Amazon AWS EC2 instance through ELB

How can I connect over ssl to a websocket served by GlassFish on an Amazon AWS EC2 instance through an ELB?
I am using Tyrus 1.8.1 in GlassFish 4.1 b13 pre-release as my websocket implementation.
Port 8080 is unsecured, and port 8181 is secured with ssl.
ELB dns name: elb.xyz.com
EC2 dns name: ec2.xyz.com
websocket path: /web/socket
I have successfully used both ws & wss to connect directly to my EC2 instance (bypassing my ELB). i.e. both of the following urls work:
ws://ec2.xyz.com:8080/web/socket
wss://ec2.xyz.com:8181/web/socket
I have successfully used ws (non-ssl) over my ELB by using a tcp 80 > tcp 8080 listener. i.e. the following url works:
ws://elb.xyz.com:80/web/socket
I have not, however, been able to find a way to use wss through my ELB.
I have tried many things.
I assume that the most likely way of getting wss to work through my ELB would be to create a tcp 8181 > tcp 8181 listener on my ELB with proxy protocol enabled and use the following url:
wss://elb.xyz.com:8181/web/socket
Unfortunately, that does not work. I guess I might have to enable the proxy protocol on GlassFish, but I haven't been able to find out how to do that (or whether it's possible, or whether it's even necessary for wss to work over my ELB).
Another option might be to somehow have ws or wss run over an SSL connection that is terminated at the ELB and continues unsecured to GlassFish, by using an SSL > TCP 8080 listener. That didn't work for me either, but maybe some setting was incorrect.
Does anyone have any modifications to my two attempts above, or some other suggestions?
Thanks.
I had a similar setup and originally configured my ELB listeners as follows:
HTTP 80 -> HTTP 80
HTTPS 443 -> HTTPS 443
Although this worked fine for the website itself, the WebSocket connection failed. In the listener, you need to use SSL (secure TCP) rather than HTTPS so that wss traffic passes through as well:
HTTP 80 -> HTTP 80
SSL (Secure TCP) 443 -> SSL (Secure TCP) 443
I would also recommend raising the Idle timeout of the ELB.
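For reference, a rough AWS CLI sketch of switching a Classic ELB's 443 listener from HTTPS to SSL (secure TCP); the load balancer name, instance port, and certificate ARN are placeholders:
# Drop the existing HTTPS listener on 443...
aws elb delete-load-balancer-listeners --load-balancer-name my-elb --load-balancer-ports 443
# ...and recreate it as SSL -> SSL so the WebSocket upgrade is not handled by the HTTP layer
aws elb create-load-balancer-listeners --load-balancer-name my-elb \
    --listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=SSL,InstancePort=443,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"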
I recently enabled wss between my browser and an EC2 Node.js instance.
There were 2 things to consider:
In the ELB Listeners tab, add a row for the wss port with SSL as the load balancer protocol.
In the ELB Description tab, set a higher idle timeout (connection settings), which is 60 seconds by default. The ELB was killing the WebSocket connections after one minute; setting the idle timeout to 3600 (the maximum) allows much longer-lived connections (see the CLI sketch below).
It is obviously not the ultimate solution since the timeout is still there, but one hour is probably good enough for what we usually do.
Hope this helps.
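The idle timeout can also be raised from the command line; a sketch assuming a Classic ELB named my-elb:
# Raise the ELB idle timeout from the default 60 seconds to the 3600-second maximum
aws elb modify-load-balancer-attributes --load-balancer-name my-elb \
    --load-balancer-attributes "ConnectionSettings={IdleTimeout=3600}"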