When does AKS delete the public IPs it assigns to LoadBalancer-type services? - azure-container-service

I have an AKS cluster on which I constantly deploy and then delete namespaces containing applications that are mostly exposed by LoadBalancer-type services.
There are at most 20 applications with public IPs at any given time, yet in the resource group that holds the AKS nodes I see roughly 20 * 4.5 public IP addresses.
My question is: when are these deleted, if at all?

Every new service that you expose with type LoadBalancer creates a public IP and attaches it to the Azure load balancer.
When you delete the service, the public IP is removed by Azure.
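For reference, a minimal sketch of such a service (the name and selector are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
# Deleting the service releases the public IP again:
kubectl delete service my-app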

Related

Connection to Azure SQL database on Azure Private Link/Endpoint using Azure VPN Client not working

I'm trying to set up an Azure SQL database with a P2S VPN for users who are working remotely. They use applications such as SSMS and Visual Studio that require access to the database. We currently allow them to connect by whitelisting their IP addresses, but we would like to stop doing that and use the "deny public network access" option on the SQL server in Azure.
Whenever I try to connect using SSMS I get the following error message:
I've followed the steps outlined in the documentation and tutorials on MS Docs, but I have not been able to get the private endpoint to work with the database.
I have created the virtual network gateway and connected it to Azure Active Directory, and I can see the sessions being created by the users as they log in.
I have created the virtual network with the address range 10.1.0.0/16 and the subnet address range 10.1.0.0/24. I have attached the private endpoint connection to the Azure SQL server and added the virtual network to the firewall.
Is there some setting required to allow the users to connect to the database from their PCs without whitelisting IP addresses?
WAY 1:
You may use a domain name instead of an IP directly from your virtual network, so you need a service in Azure that can translate the domain name to an IP.
It is necessary to configure your DNS settings so that the fully qualified domain name (FQDN) in the connection string resolves to the private endpoint IP address.
For on-premises workloads, use a DNS forwarder to resolve the FQDN of a private endpoint against the Azure service public DNS zone in Azure.
A DNS forwarder is a virtual machine running on the virtual network linked to the private DNS zone that can proxy DNS queries coming from other virtual networks or from on-premises. This is required because the query must originate from the virtual network to Azure DNS.
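A quick way to check which resolution path you are on (yourserver is a placeholder for your SQL server name): from a machine inside the VNet, or behind the DNS forwarder, the FQDN should resolve to the private endpoint IP, not a public address:

nslookup yourserver.database.windows.net
# Expected inside the VNet: an answer in 10.1.0.x via privatelink.database.windows.net
# Expected from the public internet: a public IP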
Use the hosts file on a virtual machine to override the DNS: Azure creates a canonical name (CNAME) record on the public DNS. The CNAME record redirects the resolution to the private domain name (privatelink.database.windows.net). You can override that resolution with the private IP address of your private endpoints. See azure-provided-name-resolution.
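As an illustration of that override, with a hypothetical server name and 10.1.0.5 standing in for your private endpoint IP, the hosts-file entry (Windows: C:\Windows\System32\drivers\etc\hosts, Linux: /etc/hosts) would look like:

# map the SQL server FQDN to the private endpoint IP
10.1.0.5 yourserver.database.windows.net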
References:
Azure services DNS zone configuration and on-premises-workloads-using-a-dns-forwarder
Refer to this for connectivity troubleshooting using Private Link
See how to resolve-azure-internal-dns-from-your-on-prem-network
WAY 2:
You may go for SQL Managed Instance, which is another Azure SQL PaaS offering. It is deployed within a VNet with no public service endpoints and uses root and client certificates to authenticate in Azure.
(Go for this when you prefer not to use a private endpoint.)
To configure a P2S VPN using certificates, refer to:
configure-p2s-vpn-using-certificates-and-connect-to-sql-managed-instance-from-on-premise-machine.
Other references:
DNS-Client-Configuration-Options
DNS-Integration-Scenarios
DNS-Scenario-Using-AD

ACI - VNET - IP Address

I have created a virtual network and connected API Management to it.
I am thinking of hosting my REST API in Azure Container Instances in my VNet and then exposing it in Azure API Management by configuring the IP address of the Azure Container Instances REST API as the API Management web service URL.
I have one doubt about whether this is the right way of doing it.
I am wondering: if the Azure Container Instance gets restarted and its IP address changes, then my API exposed in API Management will be broken. Does the IP address change if Azure Container Instances get restarted for some reason?
There are some limitations for Azure container instances.
The IP address of a container won't typically change between updates, but it's not guaranteed to remain the same. As long as the container group is deployed to the same underlying host, the container group retains its IP address. Although rare, and while Azure Container Instances makes every effort to redeploy to the same host, there are some Azure-internal events that can cause redeployment to a different host. To mitigate this issue, always use a DNS name label for your container instances.
Terminated or deleted container groups can't be updated. Once a container group has stopped (is in the Terminated state) or has been deleted, the group is deployed as new.
However, it's a rare case that Azure Container Instances will be redeployed to a different host. Also, if you have a container instance in a VNet, you're unable to directly set a --dns-name-label value, and the instance can only be accessed via its private IP address from within the virtual network and from other container groups, not from the outside world. Note: containers in a group are not discoverable through DNS; they can only be accessed through 'localhost', in combination with their exposed ports. You can find more references in More about networking in this blog.
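For the non-VNet case, a hedged sketch of the DNS name label mitigation mentioned above (resource group, name, and image are placeholders):

az container create \
  --resource-group myResourceGroup \
  --name myapp \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label myapp-demo \
  --ports 80
# The FQDN (myapp-demo.<region>.azurecontainer.io) stays stable across
# redeployments even if the underlying IP changes.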

Where can I find a node's public IP on an AKS-made cluster?

I've been asked by Azure support to open the question here, though I think this is an AKS bug.
When deploying a cluster, each node's node.status.addresses should by design show an ExternalIP or Hostname address of the node, but in an AKS-made cluster there is a VM name in the Hostname address instead. That makes it really hard to find the node public IPs, which we need for various reasons.
Is there any standard or non-standard way to get a node's public IP?
There is a public IP exposed for the Azure Kubernetes Service, but it is not attached directly to a node. Kubernetes nodes in AKS are not exposed to the internet with a public IP.
The AKS nodes are created in a VNet on Azure and access, or can be accessed through, the Azure load balancer with a public IP. The VNet is a private network and a resource of Azure. For AKS networking, there are two configurations, Basic and Advanced. You can get more details in Network concepts for applications in Azure Kubernetes Service (AKS).
AKS nodes are not exposed to the public internet and therefore will not have an exposed public IP.
With that said, I’ve been investigating an issue where nodes either lose or fail to ever get an internal IP. We (AKS) have implemented an initial fix, which restarts kubelet, and does seem to at least temporarily mitigate the lack of an internal IP. There are ongoing efforts upstream to find and fix the real root cause.
I don’t think I’ve come across the scenario of a node not having a hostname address though. I’m going to log a backlog item to investigate any clusters that appear to be experiencing this symptom. I can’t promise an immediate fix, but I am definitely going to look into this further early next week.
There is a preview of a feature enabling a public IP per node. Please see https://learn.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-in-a-node-pool
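At the time of that preview, enabling it looked roughly like this (cluster and pool names are placeholders; check the linked page for the current syntax):

az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool2 \
  --enable-node-public-ip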
In common scenarios, each AKS cluster will be behind a load balancer, which in turn will have a public IP. You can get the public IP by going to your AKS cluster -> Services and ingresses -> and checking for a service with type LoadBalancer. This will have a public IP.
You can also configure the cluster so that each node has a public IP. You can then access the details from the Node pools tab.
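From the command line, the same public IP shows up in the EXTERNAL-IP column of kubectl get services:

kubectl get services --all-namespaces
# Services of type LoadBalancer list their public address under EXTERNAL-IP.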

AKS in a private VNET behind a corporate proxy

We have our AKS running in a private VNet, behind a corporate proxy. The proxy is not a "transparent" proxy and needs to be configured manually on all nodes. Is this a supported behavior? Is it possible to configure worker nodes and all system containers to work via the proxy?
Actually, Azure Kubernetes Service is managed by Azure and sits in a private VNet, created either by you or by Azure. You can use the load balancer to transfer the traffic, or use an ingress. But you can only select one size and type for the nodes when you create the cluster, and multiple node sizes do not seem to be supported currently. Maybe it will be supported in the future on Azure.
For more details about AKS, see Azure Kubernetes Service.

How can you publish a Kubernetes Service without using the type LoadBalancer (on GCP)

I would like to avoid using type: "LoadBalancer" for a certain Kubernetes Service, but still to be able to publish it on the Internet. I am using Google Cloud Platform (GCP) to run a Kubernetes cluster currently running on a single node.
I tried to use the externalIPs Service configuration and to give, in turn, the IPs of:
the instance hosting the Kubernetes cluster (external IP, which also coincides with the IP address of the Kubernetes node as reported by kubectl describe node)
the Kubernetes cluster endpoint (as reported by the Google Cloud Console in the details of the cluster)
the public/external IP of another Kubernetes Service of type LoadBalancer running on the same node.
None of the above helped me reach my application using the Kubernetes Service with an externalIPs configuration.
So, how can I publish a service on the Internet without using a LoadBalancer-type Kubernetes Service?
If you don't want to use a LoadBalancer service, other options for exposing your service publicly are:
Type NodePort
Create your service with type set to NodePort, and Kubernetes will allocate a port on all of your node VMs on which your service will be exposed (docs). E.g. if you have 2 nodes, w/ public IPs 12.34.56.78 and 23.45.67.89, and Kubernetes assigns your service port 31234, then the service will be available publicly on both 12.34.56.78:31234 & 23.45.67.89:31234
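A minimal NodePort sketch (name and selector are illustrative; Kubernetes picks a port in the 30000-32767 range unless you pin one with nodePort):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF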
Specify externalIPs
If you have the ability to route public IPs to your nodes, you can specify externalIPs in your service to tell Kubernetes "If you see something come in destined for that IP w/ my service port, route it to me." (docs)
The cluster endpoint won't work for this because that is only the IP of your Kubernetes master. The public IP of another LoadBalancer service won't work because the LoadBalancer is only configured to route the port of that original service. I'd expect the node IP to work, but it may conflict if your service port is a privileged port.
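For completeness, a hedged externalIPs sketch, reusing the 12.34.56.78 example IP from above (it must actually be routable to one of your nodes):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-external-ip-svc
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 12.34.56.78
EOF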
Use the /proxy/ endpoint
The Kubernetes API includes a /proxy/ endpoint that allows you to access services on the cluster endpoint IP. E.g. if your cluster endpoint is 1.2.3.4, you could reach my-service in namespace my-ns by accessing https://1.2.3.4/api/v1/proxy/namespaces/my-ns/services/my-service with your cluster credentials. This should really only be used for testing/debugging, as it takes all traffic through your Kubernetes master on the way to the service (extra hops, SPOF, etc.).
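For quick testing, kubectl proxy handles the cluster credentials for you; note that on current clusters the proxy path style is .../services/<name>/proxy/ rather than the older /api/v1/proxy/... form shown above:

kubectl proxy &
curl http://127.0.0.1:8001/api/v1/namespaces/my-ns/services/my-service/proxy/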
There's another option: set the hostNetwork flag on your pod.
For example, you can use helm3 to install nginx this way:
helm install --set controller.hostNetwork=true nginx-ingress nginx-stable/nginx-ingress
Nginx is then available on ports 80 and 443 at the IP address of the node that runs the pod. You can use node selectors, affinity, or other tools to influence this choice.
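Under the hood this amounts to setting hostNetwork on the pod spec; a bare-pod sketch of the same idea (name and image are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  hostNetwork: true
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF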
There are a few idiomatic ways to expose a service externally in Kubernetes (see note#1):
Service.Type=LoadBalancer, as OP pointed out.
Service.Type=NodePort, which exposes the service on each node's IP at a static port.
Service.Type=ExternalName, which maps the Service to the contents of the externalName field by returning a CNAME record (you need kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type).
Ingress. This is a newer concept that exposes external HTTP and/or HTTPS routes to services within the Kubernetes cluster; you can even map a route to multiple services. However, it maps HTTP and/or HTTPS routes only; a minimal sketch follows. (See note#2)
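A minimal Ingress sketch for that last option (host and service names are illustrative, and an ingress controller, such as the nginx one above, must already be running in the cluster):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
EOF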