I am trying to test an API with JMeter through an AWS load balancer, and it throws a 504. The instance CPU reaches 100% and does not drop until the server is restarted.
The same API tested with Postman through the AWS load balancer returns the expected output.
The same API tested with JMeter hitting the instance directly (bypassing the load balancer) returns the expected output.
So I am facing the issue only when hitting via the load balancer through JMeter.
I am not facing the issue:
while hitting via the load balancer through Postman;
while hitting the instance directly through JMeter, without the load balancer.
How do I overcome this issue with JMeter and the load balancer?
Try adding a DNS Cache Manager to your Test Plan; it might be the case that JMeter hits only one node behind the load balancer and the remaining nodes are not touched by the load test.
References:
Disable DNS caching
The DNS Cache Manager: The Right Way To Test Load Balanced Apps
Other causes could be connected with the load-balancing mechanism and/or algorithm: the load balancer may route requests based on cookies (in which case you will need to play with the HTTP Cookie Manager), on the source IP address, or on something else.
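If you prefer editing the .jmx file directly rather than using the GUI, the DNS Cache Manager is roughly the following element under your Test Plan (a sketch based on a standard JMeter test plan; property names may vary slightly between JMeter versions):

    <DNSCacheManager guiclass="DNSCachePanel" testclass="DNSCacheManager" testname="DNS Cache Manager" enabled="true">
      <collectionProp name="DNSCacheManager.servers"/>
      <!-- re-resolve the load balancer's hostname on every iteration -->
      <boolProp name="DNSCacheManager.clearEachIteration">true</boolProp>
      <boolProp name="DNSCacheManager.isCustomResolver">false</boolProp>
    </DNSCacheManager>

With "clear cache each iteration" enabled, each thread resolves the load balancer's DNS name again on every iteration, so requests have a chance to spread across all nodes behind it.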
Related
Following along from the Use a static IP address with the Azure Container Service (AKS) load balancer documentation, I have created a static IP and assigned it to the load balancer. This worked fine on the initial run, but now I am getting the following error, and the external IP for my load balancer is stuck at <pending> (personal info omitted):
Failed to ensure load balancer for service default/[...]: network.LoadBalancersClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="PublicIPReferencedByMultipleIPConfigs" Message="Public ip address /subscriptions/[...]/providers/Microsoft.Network/publicIPAddresses/[PublicIPName] is referenced by multiple ipconfigs in resource
As far as I can tell, it isn't referenced by multiple configs - just by the load balancer service that I'm trying to run. Removing the loadBalancerIP option from my yaml file allows this to work, but then I don't think the server address is static - which is not ideal for the applications trying to communicate with this container.
Is this supposed to be happening? Is there a way to configure this so that the same IP can be reused after the container restarts?
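For reference, the kind of manifest this question is about looks roughly like the following (the name, ports, and IP are placeholders, not taken from the actual setup):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                  # hypothetical service name
    spec:
      type: LoadBalancer
      loadBalancerIP: 52.0.0.10     # the pre-provisioned static public IP
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: my-app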
Seeing as this problem appears to still be present, for anyone else stumbling upon it: the Azure load balancer resource itself may be taking the first configured static IP address.
GitHub issue response:
the first public IP address created is used for egress traffic
Microsoft Docs:
Once a Kubernetes service of type LoadBalancer is created, agent nodes are added to an Azure Load Balancer pool. For outbound flow, Azure translates it to the first public IP address configured on the load balancer.
As far as I can tell, once you provision an IP address and configure an AKS load balancer to use it, that IP gets picked up by the provisioned load balancer resource in Azure. My best guess is that when Kubernetes attempts to provision a new load balancer with the same IP address, if the previous Azure load balancer still exists the IP config will fail as it's still in use.
My workaround was to provision an extra static IP (one specifically for the Azure load balancer resource, and one for the actual AKS load balancer service) to avoid conflicts. It's obviously not ideal, but it solves the issue...
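A minimal sketch of that workaround with the Azure CLI (resource group and IP names are placeholders; at the time, the IPs had to live in the node resource group, usually the MC_* group that AKS creates):

    az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name egress-ip --allocation-method Static
    az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name aks-service-ip --allocation-method Static

You would then put the second IP's address in the service's loadBalancerIP field and leave the first one for the Azure load balancer resource itself.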
I'm totally new to clustering and load balancing.
What I'm trying to do is: deploy an application on a cluster that contains 2 managed servers, such that if one of the managed servers goes down, requests are redirected to the server that is still up.
For Example:
I've 2 managed servers (M1:7021 and M2:7022).
And I've a cluster C1 containing M1 and M2.
And I've an application App1 deployed on C1, and a Data Source deployed on C1.
Application App1 is working fine.
The way I'm accessing the application is:
http://10.184.111.11:7021/App1/
AND
http://10.184.111.11:7022/App1/
Now suppose M1 (7021) goes down and a request comes in to
http://10.184.111.11:7021/App1/
Then it should be redirected to http://10.184.111.11:7022/App1/
Any help is highly appreciated. Thanks!
I believe you will need a load balancer (or a software equivalent) to sit above the WebLogic servers and direct traffic down to them.
The idea is that you access your application on http://loadBalancer.com/App and the load balancer then forwards your request to either one of the WebLogic servers. Meanwhile, in the background, the load balancer continually performs health checks on the two WebLogic servers to see if they are running.
In the event that one of the WebLogic servers goes down, the load balancer will mark it as inactive and forward all traffic to the WebLogic server still running. Once the failed WebLogic server has come back online, the load balancer will begin routing traffic back through it.
@Garreth Well, in fact WebLogic DOES provide an internal load balancer. You are supposed to use OHS or Apache for load balancing in production environments, but for development, HttpClusterServlet works great.
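For anyone curious, a rough sketch of how HttpClusterServlet is wired up: you deploy a small proxy web application on a server that is not part of the cluster, and its web.xml declares the servlet and lists the cluster members. Something like this (addresses taken from the example above; a real setup also needs the matching weblogic.xml and extra mappings for static content):

    <servlet>
      <servlet-name>HttpClusterServlet</servlet-name>
      <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
      <init-param>
        <param-name>WebLogicCluster</param-name>
        <!-- pipe-separated list of the managed servers in cluster C1 -->
        <param-value>10.184.111.11:7021|10.184.111.11:7022</param-value>
      </init-param>
    </servlet>
    <servlet-mapping>
      <servlet-name>HttpClusterServlet</servlet-name>
      <url-pattern>/</url-pattern>
    </servlet-mapping>

Requests sent to the proxy application are then distributed across M1 and M2, and a member that stops responding is skipped until it comes back.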
I am new to Google Compute Engine and I am trying to set up network load balancing with 2 VMs serving web pages.
For example, I have 2 VMs - app1 and app2 - both running an Apache server and serving a simple web page.
Both VMs are running Red Hat Enterprise Linux Server release 7.0 (Maipo).
I am able to access both web pages through their IPs in a browser.
I created a network load balancing setup, and both apps show green in the target pool, which means the load balancer is able to connect to both VMs.
But when I hit the IP of the load balancer, it renders the page from only one server. If I manually stop the server on that VM, the load balancer redirects to the other app. So I believe the load balancer is able to check the health of both VMs and redirect accordingly.
But it is not balancing the traffic. Can anyone help me solve this issue?
I think the network load balancer doesn't forward traffic on a round-robin basis. I was able to verify this with my own load balancer setup. As per the documentation:
By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port.
HTTP(S) load balancing, on the other hand, proxies requests in a round-robin fashion: https://cloud.google.com/compute/docs/load-balancing/http/
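To see why a single test client tends to stick to one VM, here is a toy illustration of tuple hashing (not Google's actual algorithm, just the mechanism the quoted documentation describes):

    import hashlib

    backends = ["app1", "app2"]

    def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
        """Pick a backend from a hash of the connection tuple."""
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        digest = int(hashlib.sha256(key).hexdigest(), 16)
        return backends[digest % len(backends)]

    # The same source talking to the same destination always lands on one VM:
    print(pick_backend("203.0.113.7", 54321, "198.51.100.10", 80))  # deterministic

As long as the hash inputs don't change, every request maps to the same instance, which matches the behavior you're seeing.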
I have a question regarding the response from a cluster of managed servers (WebLogic) behind a load balancer.
If a server crashes while processing a request (made through the load balancer), does the response go back to the client, OR does the load balancer re-route the request to another running server in that cluster (stateful request)?
So the workflow is something like this:
Client request -> server1 -> crashes in the middle of processing the request -> response back to the app -> sent back to the server -> load balancing -> to another running server.
Does the crash response go to the client,
OR
does the load balancer handle the response, see that a crash response was received, and re-issue the request (without the client even knowing about the whole crashing scene)?
It depends on the load balancer, the vendor, and the module and method - your question is vague. But routing around a failed server is what a load balancer should be configured to do. It comes built in for some load balancers, like Apache, and might have to be specifically configured for others.
The load balancer should probe your application page. For example, if you have an application running on http://example.com:9101/index.html, then every 15 seconds (say) it will probe that page to ensure the application is up, and send traffic to it. If not, it will send traffic to the other members defined. So if the server crashes, the load balancer will know and stop sending it requests. Most commercial and free load balancers have the option to be configured this way, as does the WebLogic load balancer.
Without more information, this is the most general answer I can provide.
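As a concrete sketch, here is roughly what such a health-checked setup looks like in Apache 2.4 with mod_proxy_balancer and mod_proxy_hcheck (hostnames are placeholders; the port and the 15-second interval are the example values from above):

    ProxyHCExpr ok {%{REQUEST_STATUS} =~ /^[23]/}
    <Proxy "balancer://appcluster">
        # Probe /index.html every 15s; a member failing the check is skipped.
        BalancerMember "http://server1:9101" hcmethod=GET hcuri=/index.html hcinterval=15 hcexpr=ok
        BalancerMember "http://server2:9101" hcmethod=GET hcuri=/index.html hcinterval=15 hcexpr=ok
    </Proxy>
    ProxyPass "/" "balancer://appcluster/"

Note that this handles routing around a server already marked down; a request that dies mid-flight is typically only retried automatically if the balancer is explicitly configured to retry, and retrying non-idempotent requests is risky.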
I'm using JMeter to load test my web application. I have two web servers and we are using HAProxy for load balancing. All my tests run fine and are configured correctly. I have three JMeter remote clients so I can run my tests distributed. The problem I'm facing is that ALL my JMeter requests are being processed by only one of the web servers. For some reason it's not balancing, and I'm getting many timeouts and huge response times. I've looked around a lot for a way to get these requests balanced, but I've had no luck so far. Does anyone know what could be the cause of this behavior? Please let me know if you need to know anything about my environment and I will provide the answers.
Check your HAProxy configuration:
What is its load-balancing policy? If it's not round-robin, is it based on source IP or some other attribute that might be common to your 3 remote JMeter machines? (See the sketch after this list.)
Are you sure load balancing is working right? Try testing with a browser first; if you can, add some information identifying the web server to the response, to help debug.
Check your test plan:
Are you sure you don't have a hardcoded session id somewhere in your requests?
How many threads did you configure?
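For comparison, a minimal haproxy.cfg backend (server names and addresses are placeholders): with balance source, each client IP is pinned to one server, so a handful of JMeter machines can easily all land on a single backend, whereas balance roundrobin spreads requests out.

    backend web_servers
        balance roundrobin          # try this instead of "balance source"
        option httpchk GET /
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check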
In your JMeter script, the HTTP Request "Use KeepAlive" option is checked by default.
Keep-Alive is a header that maintains a persistent connection between client and server, preventing a connection from breaking intermittently. Also known as HTTP keep-alive, it can be defined as a method to allow the same TCP connection for HTTP communication instead of opening a new connection for each new request.
This may cause all requests to go to the same server. Just uncheck the option, save, stop your script, and re-run.
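If you'd rather flip the flag in the .jmx file directly, it lives on each HTTP Request sampler; the element looks something like this (trimmed to the relevant properties; the domain and path are placeholders):

    <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request">
      <stringProp name="HTTPSampler.domain">example.com</stringProp>
      <stringProp name="HTTPSampler.path">/</stringProp>
      <stringProp name="HTTPSampler.method">GET</stringProp>
      <!-- false = "Use KeepAlive" unchecked -->
      <boolProp name="HTTPSampler.use_keepalive">false</boolProp>
    </HTTPSamplerProxy>

With keep-alive off, each request opens a fresh TCP connection, giving the load balancer a new chance to pick a server.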