How to use port 8443 with ECS Fargate and ALB?

Is it possible to run Spring Boot containerized apps on port 8443 behind a 443 ALB listener, deployed on ECS Fargate in AWS? The 443 listener would have an issued cert, not a self-signed cert. I would use an NLB, but I need path-based routing, so that's a no-go. Would nginx as a proxy be needed in a situation like this?

Is it possible to run Spring Boot containerized apps on port 8443 behind a 443 ALB listener, deployed on ECS Fargate in AWS?
Yes, it is absolutely possible. What you are describing is a very standard ECS/Fargate setup: the ALB terminates TLS on its 443 listener with the issued certificate and forwards traffic to a target group that reaches your containers on port 8443.
Would nginx as a proxy be needed in a situation like this?
Only if you want to. You don't need Nginx just to make this work.
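To make that concrete, here is a minimal sketch of the wiring with boto3; the ARNs, VPC ID, and health check path are placeholders, not values from your setup. Note that the ALB does not validate the certificate a target presents, so a self-signed cert in the Spring Boot keystore on 8443 works fine behind the issued cert on the 443 listener.

```python
# Hypothetical sketch: issued (ACM) cert on the ALB's 443 listener, forwarding
# to a target group that reaches the Fargate tasks on port 8443.
# All ARNs, the VPC ID, and the health check path are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

# Target group pointing at the containers; TargetType="ip" is what Fargate
# (awsvpc networking) requires.
tg = elbv2.create_target_group(
    Name="spring-boot-8443",
    Protocol="HTTPS",   # assumes the Spring Boot app serves TLS on 8443; use "HTTP" if it does not
    Port=8443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
    HealthCheckProtocol="HTTPS",
    HealthCheckPath="/actuator/health",
)

# 443 listener with the issued certificate, forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```

Path-based routing rules can then be added to the same 443 listener, which is the reason an ALB (rather than an NLB) fits this case.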

Related

How to use SSL/TLS on ECS Fargate

I am trying to use SSL/TLS for a Python Flask/Waitress server running on ECS Fargate, and I haven't found a solution for our use case.
Here is the design for the ECS Fargate service:
The container only interacts with a backend AWS Lambda.
Public IP is disabled; only a private IP is enabled.
No load balancer is used. The Python server is stateful, and spinning up a new container on request is more cost-effective.
How should I make an HTTPS request from the Lambda to the ECS Fargate task?
Why do you even need to make an HTTPS request from the Lambda?
Answer to your question
Open port 443 in the security group of your ECS Fargate task and you should be able to make requests even without SSL certs, since it is mainly browsers that block them; a sketch of the security group change follows below.
Second, if for any reason you need an SSL cert on localhost, you can use this tool: https://github.com/FiloSottile/mkcert
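As a rough illustration of that security group change, here is a boto3 sketch; both group IDs are placeholders, and limiting the source to the Lambda's security group is my own addition rather than part of the answer above.

```python
# Hypothetical sketch: allow inbound 443 to the Fargate task's security group,
# only from the Lambda's security group. Both IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0fargate0000000000",   # security group attached to the Fargate task
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-0lambda00000000000"}],  # Lambda's security group
    }],
)
```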
Solved the issue:
Create a self-signed cert using OpenSSL for the Flask server
Trust the self-signed cert in the Lambda
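A minimal sketch of the Lambda side is below, assuming the self-signed cert is bundled with the function and its SAN matches the host used in the URL; the private IP and file paths are placeholders. On the Flask side, passing ssl_context=("cert.pem", "key.pem") to app.run() is one way to serve the self-signed pair, since Waitress itself does not terminate TLS.

```python
# Hypothetical Lambda handler calling the Fargate task over HTTPS and trusting
# the bundled self-signed certificate. The private IP, path, and cert location
# are placeholders; the cert's SAN must match the host used in the URL.
import requests

def handler(event, context):
    resp = requests.get(
        "https://10.0.1.23/health",   # private IP of the Fargate task
        verify="/var/task/cert.pem",  # self-signed cert shipped in the deployment package
        timeout=5,
    )
    return {"statusCode": resp.status_code, "body": resp.text}
```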

How to expose minikube service urls to outside system

I have an Apache Camel application deployed on Kubernetes. My application is exposed in the Kubernetes cluster and is accessible at http://192.168.99.100:31750, so how do I make it accessible from outside?
I suggest you do two things:
Run an NGINX Ingress Controller in your minikube and expose it with a NodePort service, meaning it will be available on a high port, much like your service right now.
Run HAProxy on the host that runs minikube and forward ports 80/443 to the corresponding high ports on minikube (i.e. 80 -> 32080, 443 -> 32443).
That way you expose your ingress controller on standard ports and expose your services on those ports with regular Kubernetes Ingress definitions; a sketch of such an Ingress is below.
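For illustration only, here is what one of those Ingress definitions might look like, written with the official Kubernetes Python client rather than YAML; the service name camel-app-svc, its port 8080, and the namespace are assumptions, not values from the question.

```python
# Hypothetical Ingress for the Camel app, routed through the NGINX ingress
# controller. Service name, port, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="camel-app",
        annotations={"kubernetes.io/ingress.class": "nginx"},
    ),
    spec=client.V1IngressSpec(
        rules=[client.V1IngressRule(
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/",
                path_type="Prefix",
                backend=client.V1IngressBackend(
                    service=client.V1IngressServiceBackend(
                        name="camel-app-svc",
                        port=client.V1ServiceBackendPort(number=8080),
                    )
                ),
            )])
        )]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```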

What is the recommended way to update SSL certs in a Nginx cluster behind HAProxy?

So I want to have this:
            / Nginx1 (SSL)
HAProxy ----- Nginx2 (SSL)
            \ Nginx3 (SSL)
But I have questions:
How do I update Let's Encrypt certs on all nodes?
If I can't do this with certbot (plus some config), how do you do this? Maybe some distributed key/value storage?
The best approach is to run HTTP-only services (not HTTPS) on the Nginx nodes and terminate SSL on the balancer.
Options:
Traefik, which can be configured to automatically renew Let's Encrypt certs.
Fabio, which can also be configured to use SSL certs (I've used HashiCorp Vault to store them; a sketch of that part follows below), though you need to handle cert updates yourself.
Both integrate well with service discovery tools like Consul.
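As a rough sketch of the Vault approach, the snippet below stores a renewed cert/key pair in a KV v2 secrets engine with the hvac client and reads it back; the Vault address, token handling, mount point, and paths are all assumptions.

```python
# Hypothetical sketch: publish a renewed cert/key pair to Vault's KV v2 store
# so the balancer can pick it up; address, token, and paths are placeholders.
import hvac

client = hvac.Client(url="https://vault.internal:8200", token="s.xxxxxxxx")

with open("/etc/letsencrypt/live/example.com/fullchain.pem") as c, \
     open("/etc/letsencrypt/live/example.com/privkey.pem") as k:
    client.secrets.kv.v2.create_or_update_secret(
        path="tls/example.com",
        secret={"cert": c.read(), "key": k.read()},
    )

# Consumers read the latest version back:
latest = client.secrets.kv.v2.read_secret_version(path="tls/example.com")
cert_pem = latest["data"]["data"]["cert"]
```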

AWS - SSL/HTTPS on load balancer

I have a problem adding HTTPS to my EC2 instance, and maybe you can help me make it work.
I have a load balancer forwarding connections to my EC2 instance. I've added the SSL certificate to the load balancer and everything went fine, I've added a listener on port 443 that forwards to port 443 of my instance, and I've configured Apache to listen on both ports 443 and 80.
[screenshot of the load balancer listeners]
The SSL certificate is valid, and on port 80 (HTTP) everything is fine, but if I try with HTTPS the request does not go through.
Any idea?
Cheers
An Elastic Load Balancer cannot simply forward your HTTPS requests to the server as-is; that is what SSL is there for: to prevent man-in-the-middle attacks (amongst others).
The way to get this working is the following (sketched in code below):
configure your ELB to accept TCP 443 connections and install an SSL certificate through IAM (just like you did);
relay traffic to TCP 80 on your fleet of web servers;
configure your web servers to accept traffic on TCP 80 (having SSL between the load balancer and the web servers is also supported, but not required most of the time);
configure your web servers' security group to only accept traffic from the load balancer;
(optional) make sure your web servers are running in a private subnet, i.e. with only private IP addresses and no route to the Internet Gateway.
If you really need an end-to-end SSL tunnel between your client and your backend servers (for example, to perform client-side SSL authentication), then you'll have to configure your load balancer in TCP mode, not in HTTP mode (see Support for two-way TLS/HTTPS with ELB for more details).
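To make the listener part concrete, here is a minimal boto3 sketch against a classic ELB; the load balancer name and certificate ARN are placeholders, and the same wiring can be done from the console as described above.

```python
# Hypothetical sketch of the listener described above on a classic ELB:
# terminate SSL on 443 at the load balancer and relay plain HTTP to port 80
# on the instances. The ELB name and certificate ARN are placeholders.
import boto3

elb = boto3.client("elb")
elb.create_load_balancer_listeners(
    LoadBalancerName="my-web-elb",
    Listeners=[{
        "Protocol": "HTTPS",          # client -> ELB is encrypted
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",   # ELB -> instances is plain HTTP
        "InstancePort": 80,
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
    }],
)
```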
More details:
SSL Load Balancers: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_SettingUpLoadBalancerHTTPS.html
Load Balancers in VPC: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/UserScenariosForVPC.html
Do you have an HTTPS listener on your EC2 instance? If not, your instance port should be 80 for both load balancer listeners.

WebSockets: wss from client to Amazon AWS EC2 instance through ELB

How can I connect over ssl to a websocket served by GlassFish on an Amazon AWS EC2 instance through an ELB?
I am using Tyrus 1.8.1 in GlassFish 4.1 b13 pre-release as my websocket implementation.
Port 8080 is unsecured, and port 8181 is secured with ssl.
ELB dns name: elb.xyz.com
EC2 dns name: ec2.xyz.com
websocket path: /web/socket
I have successfully used both ws & wss to connect directly to my EC2 instance (bypassing my ELB). i.e. both of the following urls work:
ws://ec2.xyz.com:8080/web/socket
wss://ec2.xyz.com:8181/web/socket
I have successfully used ws (non-ssl) over my ELB by using a tcp 80 > tcp 8080 listener. i.e. the following url works:
ws://elb.xyz.com:80/web/socket
I have not, however, been able to find a way to use wss through my ELB.
I have tried many things.
I assume that the most likely way of getting wss to work through my ELB would be to create a tcp 8181 > tcp 8181 listener on my ELB with proxy protocol enabled and use the following url:
wss://elb.xyz.com:8181/web/socket
Unfortunately, that does not work. I guess that I might have to enable the proxy protocol on glassfish, but I haven't been able to find out how to do that (or if it's possible, or if it's necessary for wss to work over my ELB).
Another option might be to somehow have ws or wss run over an ssl connection that's terminated on the ELB, and have it continue unsecured to glassfish, by using an ssl > tcp 8080 listener. That didn't work for me, either, but maybe some setting was incorrect.
Does anyone have any modifications to my two attempts above, or some other suggestions?
Thanks.
I had a similar setup and originally configured my ELB listeners as follows (load balancer protocol/port -> instance protocol/port):
HTTP 80 -> HTTP 80
HTTPS 443 -> HTTPS 443
Although this worked fine for the website itself, the websocket connection failed. In the listener, you need to use SSL (Secure TCP) rather than HTTPS so that wss traffic is passed through as well:
HTTP 80 -> HTTP 80
SSL (Secure TCP) 443 -> SSL (Secure TCP) 443
I would also recommend raising the Idle timeout of the ELB.
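For reference, here is a rough boto3 sketch of those two changes on a classic ELB; the load balancer name, backend port, and certificate ARN are placeholders rather than values from the question.

```python
# Hypothetical sketch: swap the 443 listener to SSL (Secure TCP) so wss frames
# pass through, and raise the idle timeout for long-lived connections.
# The ELB name, instance port, and certificate ARN are placeholders.
import boto3

elb = boto3.client("elb")

# Replace the existing HTTPS 443 listener with an SSL (Secure TCP) one.
elb.delete_load_balancer_listeners(LoadBalancerName="my-elb", LoadBalancerPorts=[443])
elb.create_load_balancer_listeners(
    LoadBalancerName="my-elb",
    Listeners=[{
        "Protocol": "SSL",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "SSL",
        "InstancePort": 443,
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
    }],
)

# Raise the idle timeout so websocket connections are not dropped after 60 s.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-elb",
    LoadBalancerAttributes={"ConnectionSettings": {"IdleTimeout": 3600}},
)
```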
I recently enabled wss between my browser and an EC2 Node.js instance.
There were 2 things to consider:
in the ELB listeners tab, add a row for the wss port with SSL as the load balancer protocol.
in the ELB description tab, set a higher idle timeout (connection settings), which is 60 seconds by default. The ELB was killing the websocket connections after 1 minute; setting the idle timeout to 3600 (the max value) allows much longer communication.
It is obviously not the ultimate solution since the timeout is still there, but 1 hour is probably good enough for what we usually do.
Hope this helps.