Manual ALB Creation for AWS EKS Fargate - amazon-eks

How can we create an Application Load Balancer for AWS EKS on Fargate manually, instead of using the ALB ingress controller?
I was able to create the Application Load Balancer and associate it with Fargate pods. However, is there a way to automatically register new pods as targets in the ALB?
When using IP targets in the target groups, I cannot find a way to define an IP range. It only takes individual IPs, in which case I need to manually add targets every time a new pod comes up.

What you are trying to achieve is usually done with an Ingress object. Is there a reason you can't use the ALB ingress controller, which would give you that out of the box? Note that there were some limitations in the past (e.g. one ALB per Ingress object), but we have just made available a new version of the ALB ingress controller that overcomes some of those limitations: https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/
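As a sketch of what the controller gives you, a minimal Ingress for the AWS Load Balancer Controller might look like the following (the app and Service names are placeholders). With `target-type: ip` — which is required on Fargate — the controller registers and deregisters pod IPs in the target group automatically, which is exactly the part that has to be done by hand otherwise:

```yaml
# Hypothetical example: an Ingress handled by the AWS Load Balancer Controller.
# With target-type "ip" (required on Fargate), the controller keeps the ALB's
# target group in sync with pod IPs as pods come and go.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # placeholder name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # placeholder Service backing the Fargate pods
                port:
                  number: 80
```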

Related

EKS subdomain for each namespace?

I have the following setup, where I manually add new DNS records when adding new services:
Route53 ------> AWS ALB -----> Ingress-Nginx in EKS ---> Ingress rules -------> Service
The (app|api).ex.de A records point at the ALB, the ALB's listener target group points at Ingress-Nginx, and the ingress rules route to the Service.
Now I want to "duplicate" my environment using namespaces. To do that, I need subdomains and automatic wiring of the domains. For example, I want to have
dev-namespace -> (app|api).dev.ex.de and
pr-1-namespace -> (app|api).pr-1.ex.de
The twist is that the domains should be wired and set up automatically when I spin up a new environment.
Does anyone have an idea how to do this in Kubernetes and AWS? Any help would be appreciated.
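For illustration, the per-namespace host pattern described above could be expressed as one Ingress per environment namespace (the hostnames are the asker's examples; the Service name is a placeholder). A tool such as external-dns can watch Ingress hosts and create the matching Route53 records automatically, which covers the "automatic wiring" part:

```yaml
# Sketch: one Ingress per environment namespace, using the asker's hostnames.
# A tool like external-dns (not shown) can watch Ingress hosts and create
# the corresponding Route53 records automatically.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev-namespace
spec:
  ingressClassName: nginx
  rules:
    - host: app.dev.ex.de        # pr-1-namespace would use app.pr-1.ex.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app        # placeholder Service name
                port:
                  number: 80
```

Spinning up a new environment then means templating this manifest with the new namespace and hostname, e.g. via Helm or Kustomize.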

Running multiple applications in the same AKS cluster with ingress controller(s) tls termination

I managed to run successfully:
- multiple applications in different namespaces with HTTP
- one application with HTTPS (using cert-manager and Let's Encrypt)
But I need to run multiple HTTPS apps.
I tried two paths:
- using multiple dedicated ingress controllers + cert-managers
- using only one controller + cert-manager and routing traffic with ingress rules
Is there an open-source (complete) example of a working solution for this configuration? One based on the Azure Application Gateway Ingress Controller (AGIC) would also do.
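For the second path (a single controller plus one cert-manager), a sketch of an Ingress terminating TLS for two apps might look like this; the issuer name, hosts, and Services are placeholders:

```yaml
# Sketch: one ingress controller terminating TLS for two hosts, with
# cert-manager requesting a certificate per host via the annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder issuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts: [app1.example.com]
      secretName: app1-tls       # cert-manager creates and renews this Secret
    - hosts: [app2.example.com]
      secretName: app2-tls
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: app1, port: {number: 80}}
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: app2, port: {number: 80}}
```

Note that an Ingress can only reference Services in its own namespace, so apps living in different namespaces each need their own Ingress object; the controller merges the rules from all of them.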

Is it possible to use dynamic routes in the NGINX ingress controller?

Our services use a K8s Service with a reverse proxy that receives requests for multiple domains and routes them to our services; additionally, we manage SSL certificates, powered by Let's Encrypt, for every user who configures their domain with our service. In summary, I have multiple .conf files in nginx, one for every configured domain. It works really well.
But now we need to raise our levels of security and availability, and we are ready to configure an Ingress in K8s to handle this problem for us, because that is what it is built for.
Everything looked fine until we discovered that every time I need to configure a new domain as a host in the Ingress, I need to alter the config file and re-apply it.
So that's the problem: I want to apply the same concept that I already have running, but in the NGINX ingress controller. Is that possible? I have more than 10k domains up and running; I can't configure them all by hand in my Ingress resource file.
Any thoughts?
In terms of Kubernetes scaling, 10k domains should be fine to configure in an Ingress resource. You might want to check how much storage you have on the etcd nodes to make sure you can store that much data there.
The default etcd storage quota is 2 GB; if you keep adding domains, that is something to keep in mind.
You can also refer to the K8s best practices for building large clusters.
Another good practice is to use kubectl apply rather than kubectl create when changing the Ingress resource, so that changes are incremental. Furthermore, if you are using K8s 1.18 or later, you can take advantage of Server-Side Apply.
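To make the storage point concrete, each configured customer domain is just one more host rule in the Ingress spec (domains and the backend Service here are placeholders):

```yaml
# Sketch: each customer domain becomes one host rule, all routing to the
# same backend proxy Service; adding a domain is an incremental change
# applied with `kubectl apply` (or Server-Side Apply).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-domains
spec:
  ingressClassName: nginx
  rules:
    - host: customer-a.example.com      # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: proxy, port: {number: 80}}
    - host: customer-b.example.com      # ...one rule per configured domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: proxy, port: {number: 80}}
```

In practice, at 10k domains the rules are usually split across several Ingress objects (the controller merges them), since a single etcd object is limited to roughly 1.5 MB by default.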

How to set up an architecture of scalable custom domains & auto-SSL on Google Kubernetes Engine

We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
- customers can point their domain name to our server via a CNAME or A record
- our server should be able to generate SSL certs for them automatically using Let's Encrypt
- our server should be able to handle custom SSL certs uploaded by customers
- it should be robust and reliable when adding new customers (new confs, SSL certs, etc.) to our servers
- it should be scalable and able to handle a large number of custom domains and traffic (e.g. from 0 to 10,000)
- minimum operating costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.
I would recommend going through this link to get started with setting up a GKE cluster.
For SSL on GKE, I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting LB's configuration (ports, host/path rules, certificates, backend services, etc.) is defined by the Ingress object itself. You can then point the domain name at the load balancer's IP.
If you want to configure your Ingress (and consequently the resulting LB) to use certs created by Let's Encrypt, you would modify the Ingress's YAML configuration accordingly.
To actually integrate Let's Encrypt with Kubernetes, you can use a tool called cert-manager to automate obtaining TLS/SSL certificates and store them in Secrets.
This link shows how to use cert-manager with GKE.
If you want to use self-managed SSL certificates, please see this link for more information. GKE itself scales via the cluster autoscaler, which automatically resizes clusters based on the demands of the workloads you want to run.
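Putting the pieces above together, a GKE Ingress using the default GLBC controller and terminating TLS with a certificate that cert-manager maintains might be sketched as follows (hostname, Secret, and Service names are placeholders):

```yaml
# Sketch: GKE Ingress (default GLBC controller) terminating TLS with a
# certificate that cert-manager has stored in the Secret "example-tls".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  tls:
    - hosts: [www.example.com]
      secretName: example-tls    # created and renewed by cert-manager
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: web, port: {number: 80}}
```

A customer-uploaded certificate works the same way: store it in a Secret and reference that Secret from the `tls` section instead.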

SSHing into an AWS load balancer and configuring it for subdomain routing?

We want to use the Amazon Elastic Beanstalk service for deployment on EC2 instances.
We want to deploy our Ruby on Rails application in such a way that we can do subdomain-based routing to different Rails apps.
We want to use a single SSL certificate on our load balancer and configure the load balancer in such a way that subdomain-based routing takes place.
HAProxy does this job well, but when we try to use Elastic Beanstalk for our deployment, AWS creates a load balancer without associating it with any key pair.
So we are not able to SSH into the load balancer and add our configuration for subdomain-based routing.
Can someone please point me to a solution?
Thanks,
Ankit.
You don't SSH into AWS load balancers; they are basically a black box for which you have only a limited set of configuration options. You probably need to look at the Route 53 service for DNS routing.
Your configuration would route traffic, based on the domain's DNS, to different load balancers, one for each separate service you need. A Classic ELB can't route traffic to different EC2 instances based on the host or URI; that kind of host-based routing requires an Application Load Balancer (or a proxy such as HAProxy behind the ELB).