I have an Azure standard internal load balancer inside a VNET that contains several virtual machines. Two of the VMs are not listed as options when I want to add them to a backend pool of the load balancer. They were created under ARM and are not included in any other load balancer pool. They are also in the same VNET that is associated with the backend pool.
If I create a basic load balancer, I can see them and successfully add them to the pool. Is there documentation on the VM requirements that must be met before you can add a VM to a pool within a standard load balancer?
When you add VMs to the backend pool, you will see the message: "Only VMs in the same region with standard SKU public IP or no public IP can be attached to this load balancer."
In this case, you can simply dissociate the basic public IP address from the VM's network interface: virtual machine > network interface > IP configurations > ipconfig1 > set Public IP address to Disabled > Save. Then you can add the desired VMs to the backend pool again.
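If you prefer the CLI to the portal, a minimal sketch with the Azure CLI (the resource group, NIC, load balancer, and pool names are placeholders for your own) would be:

    # Dissociate the public IP from the VM's NIC IP configuration
    az network nic ip-config update \
      --resource-group myResourceGroup \
      --nic-name myVmNic \
      --name ipconfig1 \
      --remove publicIpAddress

    # Add the NIC's IP configuration to the standard load balancer's backend pool
    az network nic ip-config address-pool add \
      --resource-group myResourceGroup \
      --nic-name myVmNic \
      --ip-config-name ipconfig1 \
      --lb-name myStandardLB \
      --address-pool myBackendPool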
When using the Microsoft internal load balancer (ILB), I would like to create a pool for VMs that reside in a different VNET than the one where the ILB is located. The UI would seem to support this, as I can select any VNET in my environment when creating the pool. Yet, when I create this pool, I receive the following error, which would imply this is not allowed.
NetworkInterfaceAndInternalLoadBalancerMustUseSameVnet

    {"code":"BadRequest","message":"{
      \"error\": {
        \"code\": \"NetworkInterfaceAndInternalLoadBalancerMustUseSameVnet\",
        \"message\": \"Network interface /subscriptions/2f46d973-XXXX-XXXX-80a7-7222a103acb4/resourceGroups/ihde_operations/providers/Microsoft.Network/networkInterfaces/op-vm-ftp1463 uses internal load balancer /subscriptions/2f46d973-cea1-XXXX-XXXX-7222a103acb4/resourceGroups/ihde_dev/providers/Microsoft.Network/loadBalancers/dev-lb-CSL-Internal but does not use the same VNET (/subscriptions/2f46d973-cea1-4856-80a7-7222a103acb4/resourceGroups/IHDE_DEV/providers/Microsoft.Network/virtualNetworks/VNET_BACKEND) as the load balancer.\",
        \"details\": []
      }
    }"}
As a side note, the public version of the load balancer does support this scenario without any issues.
Per this doc:
An internal Load Balancer differs from a public Load Balancer. Azure infrastructure restricts access to the load-balanced frontend IP addresses of a virtual network. An internal Load Balancer enables load balancing of VMs in a virtual network to a set of VMs that reside within the same virtual network.
So you cannot create a pool for VMs that reside in a different VNET than the one where the ILB is located.
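To confirm which VNET each side is using, a quick check with the Azure CLI might look like this (a sketch reusing the resource names from the error above; adjust them to your environment):

    # Subnet (and therefore VNET) used by the NIC
    az network nic show \
      --resource-group ihde_operations \
      --name op-vm-ftp1463 \
      --query "ipConfigurations[0].subnet.id" -o tsv

    # Subnet used by the internal load balancer's frontend
    az network lb frontend-ip list \
      --resource-group ihde_dev \
      --lb-name dev-lb-CSL-Internal \
      --query "[].subnet.id" -o tsv

Both subnet IDs must belong to the same virtualNetworks resource before the NIC can be added to the ILB's backend pool.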
Following along from the "Use a static IP address with the Azure Container Service (AKS) load balancer" documentation, I have created a static IP and assigned it to the load balancer. This worked fine on the initial run, but now I am getting the following error and the external IP for my load balancer is stuck at <pending> (personal info omitted):
    Failed to ensure load balancer for service default/[...]: network.LoadBalancersClient#CreateOrUpdate:
    Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error.
    Status=400 Code="PublicIPReferencedByMultipleIPConfigs"
    Message="Public ip address /subscriptions/[...]/providers/Microsoft.Network/publicIPAddresses/[PublicIPName]
    is referenced by multiple ipconfigs in resource
As far as I can tell, this IP isn't referenced by multiple configs - just the load balancer service that I'm trying to run. Removing the loadBalancerIP option from my YAML file allows this to work, but then I don't think the server address is static, which is not ideal for the applications trying to communicate with this container.
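For context, a minimal sketch of the kind of Service manifest being described (the service name, selector, ports, and IP are placeholders; the IP would be the pre-provisioned static address):

    # Placeholder names and IP -- substitute your own values
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.10   # placeholder: the pre-provisioned static public IP
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: my-app
    EOF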
Is this supposed to be happening? Is there a way to configure this so that the same IP can be reused after the container restarts?
Seeing as this issue appears to still be present: for anyone else stumbling upon it, it seems that the Azure load balancer resource itself may be claiming the first configured static IP address.
GitHub issue response:
the first public IP address created is used for egress traffic
Microsoft Docs:
Once a Kubernetes service of type LoadBalancer is created, agent nodes are added to an Azure Load Balancer pool. For outbound flow, Azure translates it to the first public IP address configured on the load balancer.
As far as I can tell, once you provision an IP address and configure an AKS load balancer to use it, that IP gets picked up by the provisioned load balancer resource in Azure. My best guess is that when Kubernetes attempts to provision a new load balancer with the same IP address, the IP config will fail if the previous Azure load balancer still exists, since the IP is still in use.
The workaround was to provision an extra static IP (one specifically for the Azure load balancer resource, and one for the actual AKS load balancer service) to avoid conflicts. It's obviously not ideal, but it solves the issue.
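A sketch of that workaround with the Azure CLI, assuming the IPs go into the AKS node resource group (the MC_* group; all names here are placeholders):

    # One static IP the Azure load balancer resource can claim for egress...
    az network public-ip create \
      --resource-group MC_myResourceGroup_myAKSCluster_eastus \
      --name lb-egress-ip \
      --allocation-method Static

    # ...and a second one to reference via loadBalancerIP in the Service manifest
    az network public-ip create \
      --resource-group MC_myResourceGroup_myAKSCluster_eastus \
      --name my-app-ip \
      --allocation-method Static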
We are in the process of configuring and hosting our services on Google Cloud. We are using a few GCE instances, and we also have a network load balancer. What I want to do is block all direct HTTP/S requests to the individual instances and make them reachable only via the network load balancer.
Also, the network load balancer will be mapped to a DNS name.
By default, GCE does not allow any port to be accessible from outside the network unless you create a firewall rule that allows it.
One way is to remove the external IPs from all the instances and use a single server with an external IP as a gateway instance that fronts all the other instances. Make sure you add a firewall rule allowing access to the gateway instance from the intended sources. This way, only the gateway instance is exposed to the intended sources or the external world, as governed by the firewall rule.
Then you can create a network load balancer and add the instances that have no direct external access to it.
This is one way to do it; there are other ways to achieve the same result.
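A rough sketch of those steps with gcloud (instance names, zone, tag, and source range are placeholders; the access-config name is typically "external-nat" or "External NAT" depending on how the instance was created):

    # Remove the external IP from a backend instance
    gcloud compute instances delete-access-config backend-instance-1 \
      --zone us-central1-a \
      --access-config-name "external-nat"

    # Allow HTTP/S to the gateway instance only from the intended source range
    gcloud compute firewall-rules create allow-gateway-http \
      --allow tcp:80,tcp:443 \
      --source-ranges 203.0.113.0/24 \
      --target-tags gateway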
I use CoreOS on Google Compute Engine to launch 3 instances: core1, core2, and core3.
I use Fleet to start a web service container on one of the three instances (e.g., core1).
Because I don't know which machine the container is on, I use a network load balancer to forward requests to the container, with a target pool containing all 3 machines.
The issue is that I cannot curl the load balancer IP from the core2 or core3 machines.
I expect the load balancer to forward the request to the web service container on core1. How do I make that work?
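For reference, a target-pool setup like the one described above would typically be created roughly like this (region and zone are assumptions):

    # Target pool containing the three CoreOS instances
    gcloud compute target-pools create web-pool --region us-central1
    gcloud compute target-pools add-instances web-pool \
      --instances core1,core2,core3 \
      --instances-zone us-central1-a

    # Forwarding rule (the load balancer IP) sending TCP traffic to the pool
    gcloud compute forwarding-rules create web-rule \
      --region us-central1 \
      --ports 80 \
      --target-pool web-pool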
I have certain doubts about load balancing VMs in Azure.
When I add a VM to an existing deployment (an already running VM), it assigns the same DNS name to the added VM, but there is no virtual IP assigned to the new VM. However, the port shows it as a load-balanced VM.
I believe what you're saying is that VMs 2 through N appear to come online with the same DNS name as VM 1.
This is expected behavior. The VIP is the virtual IP address, which is what the shared DNS name resolves to at the outside edge of the load balancer. It is the same for every load-balanced machine behind the load balancer. The internal IP of each instance is known as the DIP, or dedicated IP address.
The job of the load balancer is to take requests against the VIP and redistribute those calls to each of the DIPs. Once the request is serviced by an individual instance, it is routed back to the caller via the load balancer.
Is that what you're seeing, or am I misunderstanding your question?