GCE load balancer cannot be called from inside - load-balancing

I use CoreOS on Google Compute Engine to launch 3 instances: core1, core2, core3.
I use Fleet to start a web service container on one of the three instances (e.g. core1).
Because I don't know which machine the container lands on, I use Network Load Balancing to forward requests to the container, with a target pool containing all 3 machines.
The issue is that I cannot curl the load balancer IP from the core2 or core3 machines.
I expect the load balancer to forward the request to the web service container on core1. How do I do that?
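For reference, the setup described above corresponds roughly to the following gcloud commands (a sketch; the pool name, region, and zone are placeholders, and the flags follow current gcloud syntax):

    # target pool containing the three CoreOS machines
    gcloud compute target-pools create web-pool --region europe-west1
    gcloud compute target-pools add-instances web-pool \
        --instances core1,core2,core3 --instances-zone europe-west1-b
    # forwarding rule that gives the pool its external load balancer IP
    gcloud compute forwarding-rules create web-rule \
        --region europe-west1 --ports 80 --target-pool web-pool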

Related

Cannot add VM to Standard Azure Load Balancer

I have an Azure standard internal load balancer inside a VNET that contains several virtual machines. Two of the VMs are not listed as options when I want to add them to a backend pool of the load balancer. They were created under ARM and are not included in any other load balancer pool. They are also in the same VNET that is associated with the backend pool.
If I create a basic load balancer, I can see them and successfully add them to the pool. Is there documentation on the VM requirements that must be met before you can add a VM to a pool within a standard load balancer?
When you add the backend pool, you will see the message "Only VMs in the same region with standard SKU public IP or no public IP can be attached to this load balancer."
In this case, you can simply dissociate the public IP address from the virtual machine: Network interface -> IP configurations -> ipconfig1 -> set Public IP address to Disabled -> Save. Then you can add the desired VMs to the backend pool again.
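The CLI equivalent of that portal path is roughly the following (resource group, NIC, and ipconfig names are placeholders for your own):

    # dissociate the public IP from the VM's NIC so the standard LB will accept the VM
    az network nic ip-config update \
        --resource-group myResourceGroup \
        --nic-name myVMNic \
        --name ipconfig1 \
        --remove publicIpAddress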

Load balancer PublicIPReferencedByMultipleIPConfigs error on restart

Following along with the Use a static IP address with the Azure Container Service (AKS) load balancer documentation, I created a static IP and assigned it to the load balancer. This worked fine on the initial run, but now I am getting the following error and the external IP for my load balancer is stuck at <pending> (personal info omitted):
Failed to ensure load balancer for service default/[...]: network.LoadBalancersClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="PublicIPReferencedByMultipleIPConfigs" Message="Public ip address /subscriptions/[...]/providers/Microsoft.Network/publicIPAddresses/[PublicIPName] is referenced by multiple ipconfigs in resource
As far as I can tell, the IP isn't referenced by multiple configs - just by the load balancer service I'm trying to run. Removing the loadBalancerIP option from my yaml file allows this to work, but then I don't think the server address is static, which is not ideal for the applications trying to communicate with this container.
Is this supposed to be happening? Is there a way to configure this so that the same IP can be reused after the container restarts?
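For context, the service in question looks roughly like this (a minimal sketch; the name, ports, selector, and IP are placeholders for the omitted personal info):

    # LoadBalancer service pinned to the pre-provisioned static IP
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      loadBalancerIP: 40.112.0.10   # the static IP created beforehand
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: my-app
    EOF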
Seeing as this issue appears to still be present: for anyone else stumbling upon it, it seems that the Azure load balancer resource itself may be taking the first configured static IP address.
GitHub issue response:
the first public IP address created is used for egress traffic
Microsoft Docs:
Once a Kubernetes service of type LoadBalancer is created, agent nodes are added to an Azure Load Balancer pool. For outbound flow, Azure translates it to the first public IP address configured on the load balancer.
As far as I can tell, once you provision an IP address and configure an AKS load balancer to use it, that IP gets picked up by the provisioned load balancer resource in Azure. My best guess is that when Kubernetes attempts to provision a new load balancer with the same IP address, if the previous Azure load balancer still exists the IP config will fail as it's still in use.
The workaround was to provision an extra static IP (one specifically for the Azure load balancer resource, and one for the actual AKS LoadBalancer service) to avoid the conflict. It's obviously not ideal, but it solves the issue...
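Creating the extra address is a one-liner; the resource group shown is the AKS node resource group (MC_...), and both names are placeholders:

    # second static IP: one address stays with the Azure LB for egress,
    # the other goes into the service's loadBalancerIP field
    az network public-ip create \
        --resource-group MC_myResourceGroup_myAKSCluster_eastus \
        --name myAKSLoadBalancerIP \
        --allocation-method static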

Azure Container Services Port Load Balancer

While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the load balancer part of Azure.
Locally I have a container instance of HAProxy running on the Swarm master and multiple web containers running.
The web containers just expose their ports; the ports are not mapped onto the machines they run on.
The HAProxy container has its port mapped on the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service I see that the Azure load balancer will only talk to mapped ports, which means I can either run only 1 container per agent, or keep an internal load balancer in my containers, implying that users go through 2 load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement "Everything works the same in Azure containers" goes for a toss?
What are the solutions available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical. Use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master, you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if you had a DDoS attack and couldn't reach your masters, for example? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same, have the Azure LB provide a public connection to the proxy just as you currently do.
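A sketch of what that looks like against a classic Swarm endpoint, as ACS used at the time (the manager host, node name, and config path are assumptions):

    # point the Docker client at the Swarm manager; the constraint asks Swarm
    # to place HAProxy on any node except the master
    docker -H tcp://swarm-master.example.com:2375 run -d \
        --name haproxy -p 80:80 \
        -e constraint:node!=swarm-master \
        -v /etc/haproxy:/usr/local/etc/haproxy:ro \
        haproxy:1.7

The Azure LB then forwards port 80 to the agent Swarm scheduled the proxy onto, just as it would to a proxy on the master.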

Google compute engine load balancing not routing properly

I am new to Google Compute Engine and I am trying to set up network load balancing with 2 VMs serving web pages.
For example, I have 2 VMs - app1 and app2 - both running an Apache server and serving a simple web page.
Both VMs are running with Red Hat Enterprise Linux Server release 7.0 (Maipo)
I am able to access both web pages through their IPs in a browser.
I created a network load balancing setup, and both apps show green in the target pool, which means the load balancer can reach both VMs.
But when I hit the IP of the load balancer, it renders the page from only one server. If I manually stop the server on that VM, the load balancer IP redirects to the other app. So the load balancer is able to check the health of both VMs and redirect accordingly.
But it is not balancing the traffic. Can anyone help me solve this issue?
I think the network load balancer doesn't forward traffic on a round-robin basis. I was able to verify this with the load balancer setup that I have. As per the documentation:
By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port.
HTTP(S) load balancing, by contrast, proxies requests in a round-robin fashion: https://cloud.google.com/compute/docs/load-balancing/http/
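A quick way to see this hashing behaviour (assuming, hypothetically, that each backend's page includes its own hostname):

    # each curl invocation opens a new connection with a new source port, so over
    # enough requests the hash should spread traffic across app1 and app2; a browser
    # reusing a single keep-alive connection will keep hitting the same backend
    for i in $(seq 1 20); do
        curl -s http://LB_IP/ | grep -i hostname
    done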

GlassFish 3.1 load balancing setup

I am very new to server setup. I have a GlassFish cluster with 2 instances:
instance1:28081
instance2:28082
I am running GlassFish on an Amazon Linux EC2 instance. What are the options for creating a load balancer setup that directs traffic to these instances when I access my EC2 instance on HTTP port 80?
1) Do I need a webserver to direct traffic to these instances?
2) Are there any options in GlassFish that can handle load balancing without a webserver on these instances? I couldn't find a load balancing configuration in my admin console.
3) Is there a way to use Amazon load balancing to distribute traffic to these cluster instances, which reside in a single EC2 instance?
If someone can provide step-by-step instructions or a reference link, that would be helpful.
I did a write-up on load balancing, failover, and proxy options for GlassFish.
Have a look: http://blog.eisele.net/2012/01/throwing-light-on-glassfish-webserver.html
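As a concrete illustration of the webserver option from question 1), here is a minimal nginx sketch using the instance ports from the question (nginx chosen arbitrarily; Apache with mod_proxy or mod_jk works the same way):

    # round-robin proxy across the two GlassFish instances on the same host
    sudo tee /etc/nginx/conf.d/glassfish.conf > /dev/null <<'EOF'
    upstream glassfish_cluster {
        server 127.0.0.1:28081;
        server 127.0.0.1:28082;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://glassfish_cluster;
        }
    }
    EOF
    sudo nginx -s reload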