AKS in a private VNET behind a corporate proxy

We have our AKS cluster running in a private VNet, behind a corporate proxy. The proxy is not a "transparent" proxy and needs to be configured manually on all nodes. Is this supported behavior? Is it possible to configure the worker nodes and all system containers to work via the proxy?

Actually, Azure Kubernetes Service is managed by Azure and runs in a private VNet that either you create yourself or Azure creates for you. You can use the Load Balancer to transfer the traffic, or use an ingress. However, you can only select one size and type for the nodes when you create the cluster; multiple node sizes do not seem to be supported currently. Maybe that will be supported in the future on Azure.
For more details about AKS, see Azure Kubernetes Service.
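If a newer toolchain is an option, AKS also offers an HTTP proxy configuration that is applied to the nodes and system components at cluster creation time. Below is a minimal sketch of that approach; the resource names and proxy URLs are placeholders for your environment, and you should check the current documentation for availability and exact behavior.

```bash
# Sketch only: apply corporate proxy settings to AKS nodes and system components
# via the cluster-level HTTP proxy configuration. All names/URLs are placeholders.
cat > aks-proxy-config.json <<'EOF'
{
  "httpProxy": "http://proxy.corp.example.com:3128/",
  "httpsProxy": "https://proxy.corp.example.com:3129/",
  "noProxy": ["localhost", "127.0.0.1", "10.0.0.0/8", ".svc.cluster.local"]
}
EOF

# Create the cluster with the proxy configuration applied to every node.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --http-proxy-config aks-proxy-config.json
```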

Which channels should use SSL in a Kubernetes cluster?

I have the following Kubernetes setup (forgive the poor ASCII art):
Azure SQL DB_1 > deployment_1 > service_1 \
Azure SQL DB_2 > deployment_2 > service_2 > -> nginx_ingress
Azure SQL DB_N > deployment_N > service_N /
The DBs are outside the Kubernetes cluster. They are exposed through a Private Endpoint to the VNet the Kubernetes cluster is on. They obtain a private IP address inside that VNet, and are otherwise unreachable.
Every deployment is a different microservice. Each one has a service in front of it to handle communication. In turn, all these services can be reached through the NGINX ingress. All services are configured as ClusterIPs, so they cannot be reached from outside the cluster. The only entrypoint from outside the VNet is through the ingress.
My question is, which of these channels should be secured with SSL, and where is it not worth it (for example, because of impact on performance)?
The Ingress of course, will have SSL in front of it. This is a given.
Should there be SSL between the ingress and the services?
Should there be SSL between the services and the microservices behind them?
The DB itself seems to already do encrypted connections automatically. Is there any reason why this would be unnecessary, or conversely, can/should it be made more secure somehow?
Of course, I understand that more encryption is usually A Good Thing. But for example, is it worth generating and keeping track of certificates for comms between the microservices and the services, since these are internal to the cluster and cannot be reached in any other way?
Thank you for any information / examples / experiences you can provide!
The simple approach is to terminate TLS at the ingress layer only. Assuming the ingress runs inside AKS, the cluster's VNet is secure, so there is no direct exposure to the external world and only the NGINX ingress controller is exposed.
For the DB communication, if you are using SQL Server, it is already encrypted with TLS under the hood.
Apart from this, you can also define CORS wherever required.
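As a concrete illustration of terminating TLS at the ingress only, here is a minimal sketch; the hostname, secret, and service names are placeholders, and it assumes the NGINX ingress controller is already installed in the cluster.

```bash
# Sketch: TLS terminates at the NGINX ingress; traffic from the ingress to the
# ClusterIP services stays plain HTTP inside the cluster. Names are placeholders.
kubectl create secret tls api-tls --cert=tls.crt --key=tls.key

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 80
EOF
```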

Static IP address for IoT Hub

For the scenario where a firewall/proxy doesn't support IoT Hub's FQDN.
The recommended approach is to script the updating of the firewall's whitelist - not going to happen in our case.
My plan B is to introduce a "gateway" on the IoT Hub side to provide a static IP address and forward traffic to IoT Hub. I can see a few Azure appliances that might serve here:
Azure Application Gateway
Azure Firewall
Azure Load Balancer
Proxy Server on VM
Has somebody been through this? What was your experience, and where did you land?
I have implemented something like this by building a highly available proxy solution (based on Squid) on a VM Scale Set with a Load Balancer in front. You can find the full solution here: https://github.com/sebader/azure-samples-collection/tree/master/VmssProxySolution
This one uses an internal LB (private IP), but you can also easily modify it to expose a static, public IP.
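For reference, a rough sketch of the scale-set part of that setup is below; all resource names are placeholders, it assumes the proxy software (for example Squid) is installed on the instances separately, and the linked repository remains the authoritative version.

```bash
# Sketch: a small VM scale set to host the forward proxy, fronted by an Azure
# Load Balancer so devices see one stable IP instead of IoT Hub's changing
# addresses. All resource names are placeholders.
az vmss create \
  --resource-group iot-proxy-rg \
  --name iot-proxy-vmss \
  --image Ubuntu2204 \
  --instance-count 2 \
  --vnet-name iot-vnet \
  --subnet proxy-subnet \
  --lb iot-proxy-lb \
  --lb-sku Standard \
  --public-ip-address iot-proxy-pip
```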

Where can I find the node public IP on an AKS-made cluster?

I've been asked by Azure support to open the question here, though I think this is an AKS bug.
When deploying a cluster, each node's 'node.status.addresses' should by design show an ExternalIP or Hostname for the node, but in an AKS-made cluster the Hostname address contains the VM name instead. This makes it really hard to find node public IPs, which we need for various reasons.
Is there any standard or non-standard way to get a node's public IP?
There is a public IP exposed for the Azure Kubernetes Service, but it does not belong directly to a node. The Kubernetes nodes are not exposed to the internet with a public IP.
The AKS nodes are created in a VNet on Azure and send and receive traffic through the Azure Load Balancer, which has a public IP. The VNet is a private network and a resource in Azure. There are two networking types for it, Basic and Advanced. For more details, see Network concepts for applications in Azure Kubernetes Service (AKS).
AKS nodes are not exposed to the public internet and therefore will not have an exposed public IP.
With that said, I’ve been investigating an issue where nodes either lose or fail to ever get an internal IP. We (AKS) have implemented an initial fix, which restarts kubelet, and does seem to at least temporarily mitigate the lack of an internal IP. There are ongoing efforts upstream to find and fix the real root cause.
I don’t think I’ve come across the scenario of a node not having a hostname address though. I’m going to log a backlog item to investigate any clusters that appear to be experiencing this symptom. I can’t promise an immediate fix, but I am definitely going to look into this further early next week.
There is a preview of a feature enabling a public IP per node. Please see https://learn.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-in-a-node-pool
In common scenarios, the AKS nodes sit behind a Load Balancer, which in turn has a public IP. You can get the public IP by going to your AKS cluster -> Services & Ingresses -> and checking for a service of type LoadBalancer; it will have a public IP.
You can also configure the cluster so that each node has a public IP. You can then access the details from the Node pools tab.
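If you do want an IP per node, a minimal sketch of the node-pool route mentioned above looks like this; the resource and pool names are placeholders, and the feature's availability should be checked against the linked docs.

```bash
# Sketch: add a node pool whose nodes each get their own public IP, then list
# the addresses Kubernetes reports for the nodes. Names are placeholders.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ippool \
  --enable-node-public-ip

kubectl get nodes -o wide   # shows INTERNAL-IP / EXTERNAL-IP columns per node
```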

Will there be support to establish a private connection to Azure AKS

My client is currently evaluating AKS, which seems really promising. Our current platform is based on Azure VMs we provision ourselves. We would like to create private communication between our existing platform and the managed AKS cluster, but so far that does not seem to be supported.
Some example use cases for us are:
- Proxying incoming HTTP traffic via our main entry point, a Varnish server, to the new AKS environment so we don't have to change URLs
- Accessing non-publicly exposed APIs from the AKS environment
Right now the AKS cluster is in a different subscription and resource group than the other parts of our platform. The main reason we can't connect, though, seems to be that it's not possible to specify which private IP range should be used when creating an AKS cluster.
Is there support planned for this or is there a reliable workaround?
Thanks for the inquiry; there's a workaround for the stated case through the use of ACS Engine: "ACS Engine, for Azure Container Service Engine, is a CLI tool that helps to generate Azure Resource Manager templates to deploy Docker-enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes."
Using this solution will allow you to integrate an Azure Container Service cluster into an existing virtual network. More details and a step-by-step guide can be found here: https://blogs.msdn.microsoft.com/jcorioland/2017/01/10/how-to-integrate-a-new-azure-container-service-cluster-into-an-existing-virtual-network-using-acs-engine/
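As a side note, newer AKS releases let you deploy directly into an existing subnet and choose the private address ranges yourself, which covers this scenario without ACS Engine; the sketch below assumes a current az CLI, and every name, ID, and CIDR is a placeholder.

```bash
# Sketch: create the AKS cluster inside an existing VNet/subnet with explicitly
# chosen private ranges. All names, IDs, and CIDRs are placeholders.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/platform-rg/providers/Microsoft.Network/virtualNetworks/platform-vnet/subnets/aks-subnet" \
  --service-cidr 10.240.0.0/16 \
  --dns-service-ip 10.240.0.10
```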

Azure Container Services Port Load Balancer

While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the load balancer part of Azure.
Locally I have an HAProxy container running on the Swarm master and multiple web containers.
The web containers just expose their ports; they are not mapped to the machines on which they run.
The HAProxy container has its port mapped on the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service, I see that the Azure load balancer will only talk to ports that are mapped. That means I can either run only one container per agent, or keep an internal load balancer in my containers, which implies users will go through two load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement that "everything works the same in Azure containers" goes for a toss?
What solutions are available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical: use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master; you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if, for example, there were a DDoS attack and you couldn't reach your masters? Having Swarm deploy the proxy for you also means you can have Swarm monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy just as you currently do.
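For completeness, a minimal sketch of letting Swarm place the proxy on an agent might look like the following. It assumes Swarm mode (classic Swarm, as used by ACS at the time, would instead use docker run with a scheduling constraint), the image tag and port are placeholders, and the Azure LB rule for the published port still has to exist as it does today.

```bash
# Sketch: run HAProxy as a Swarm service constrained to worker (agent) nodes and
# publish its port, so the Azure Load Balancer can forward public traffic to it.
docker service create \
  --name haproxy \
  --constraint 'node.role == worker' \
  --publish 80:80 \
  --replicas 1 \
  haproxy:2.9
```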