I am implementing a load-balancing application in OpenDaylight, so I need the CPU utilization of hosts created using Mininet. I want the hosts to send their CPU usage info to the controller at regular intervals.
What is the best way to achieve this?
Thanks!
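One common approach is a small reporting agent run on each Mininet host. Note that Mininet hosts share the host machine's kernel, so truly per-host CPU numbers need cgroup accounting; the sketch below reads the aggregate /proc/stat as a simplification. The REST endpoint URL and JSON shape are assumptions -- OpenDaylight has no built-in endpoint for this, so you would expose one from your own northbound application:

```python
import time
import json
import urllib.request

def read_cpu_times(stat_line):
    """Parse the aggregate 'cpu' line of /proc/stat into (idle, total)."""
    fields = [int(v) for v in stat_line.split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    total = sum(fields)
    return idle, total

def cpu_percent(prev, curr):
    """CPU utilization (%) between two (idle, total) samples."""
    idle_delta = curr[0] - prev[0]
    total_delta = curr[1] - prev[1]
    if total_delta == 0:
        return 0.0
    return 100.0 * (1.0 - idle_delta / total_delta)

def sample():
    with open('/proc/stat') as f:
        return read_cpu_times(f.readline())

def report_loop(controller_url, host_id, interval=5):
    """POST CPU usage to the controller at regular intervals.
    controller_url is a hypothetical endpoint you would implement."""
    prev = sample()
    while True:
        time.sleep(interval)
        curr = sample()
        payload = json.dumps({'host': host_id,
                              'cpu': cpu_percent(prev, curr)}).encode()
        req = urllib.request.Request(controller_url, data=payload,
                                     headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(req)
        prev = curr
```

You could start this agent on each host from your Mininet script, e.g. `host.cmd('python agent.py &')`.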
I am doing a load test to tune my Apache httpd to serve the maximum number of concurrent HTTPS requests. Below are the details of my test.
System
I dockerized my httpd and deployed it in OpenShift with a pod configuration of 4 CPUs and 8 GB RAM.
I am running load from JMeter with 200 threads, a 600-second ramp-up time, and an infinite loop; the duration is a long run (JMeter runs in the same network, on a VM with 16 CPUs and 32 GB RAM).
I compiled httpd with the worker MPM and deployed it in OpenShift.
Issue
1. httpd is not scaling beyond 90 TPS, even after trying multiple MPM worker configurations (no difference between the default and higher settings).
2. After 90 TPS, the average response time increases and TPS drops.
Please let me know what the issue could be; I can provide further information if required, and any suggestions are welcome.
I don't have the answer, but I do have questions.
1/ What does your Dockerfile look like?
2/ What does your OpenShift cluster look like? How many nodes? Separate control plane and workers? What version?
2b/ Specifically, how is traffic entering the pod? (If you are going in via a route, you'll want to look at your load balancer; if you want to exclude OpenShift from the equation, then for the short term expose a NodePort and have JMeter hit that directly.)
3/ Do I read correctly that your single pod was assigned an 8 GB RAM limit? Did you mean the worker node has 8 GB RAM?
4/ How did you deploy the app -- raw pod, deployment config? Any CPU/memory limits set, or assumed? Assuming a deployment, how many pods does it spawn? What happens if you double it? Doubled TPS or not -- that'll help point to whether the problem is inside httpd or inside the ingress route.
5/ What's the nature of the test request? Does it make use of any files stored on the network, or "local" files provisioned in a network PV?
And,
6/ What are you looking to achieve? Maximum concurrent requests in one container, or maximum requests in the cluster? If you haven't already, look to divide and conquer -- more pods on more nodes.
Most likely you have run into a bottleneck/limitation at the SUT (system under test). See the following post for a detailed answer:
JMeter load is not increasing when we increase the threads count
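To see why adding threads stops helping, Little's law is useful: concurrency = throughput × response time, so with a fixed thread count the achievable TPS is capped by the average response time. A minimal sketch (the numbers below are illustrative, not taken from the question):

```python
def achievable_tps(threads, avg_response_s):
    """Little's law: N = X * R, so throughput X = N / R.
    Once the SUT saturates, adding threads only inflates R (response
    time), leaving X (the TPS) flat or falling -- exactly the symptom
    described in the question."""
    return threads / avg_response_s
```

For example, 200 JMeter threads each observing roughly a 2.2 s average response time can deliver only about 90 TPS, so a rising average time with flat TPS points at the server, not the load generator.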
I'm trying to optimize Docker Swarm load balancing so that it first routes requests to services by the following priority:
1. Same machine
2. Same DC
3. Anywhere else
Given the following setup:
DataCenter-I
    Server-I
        Nginx:80
    Server-II
        Nginx:80
        Worker
DataCenter-II
    Server-I
        Nginx:80
        Worker
In case DataCenter-I::Server-II::Worker issues an API request over port 80, the desired behavior is:
1. Check whether there are any tasks (containers) mapped to port 80 on the local server (DataCenter-I::Server-II)
2. Fall back and check in the local DataCenter (i.e. DataCenter-I::Server-I)
3. Fall back and check in all clusters (i.e. DataCenter-II::Server-I)
This is very useful when using workers, where response time doesn't matter but bandwidth does.
Please advise,
Thanks!
According to this question I asked before, Docker Swarm currently only uses round-robin, with no indication yet that it will become pluggable.
However, Nginx Plus supports the least_time load-balancing method; there is likely a similar open-source module, and it is close to what you need with perhaps the least effort.
PS: Don't run Nginx under Docker Swarm. Instead, run Nginx with plain Docker or docker-compose in the same Docker network as your app. You don't want Docker Swarm to load-balance your load balancer.
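To illustrate the least_time idea (a toy model, not Nginx's actual implementation): keep a running average response time per backend and route each request to the fastest one, which naturally favors same-machine and same-DC backends over remote ones.

```python
import random

class LeastTimeBalancer:
    """Toy sketch of the least_time idea: prefer the backend with the
    lowest observed average response time (ties broken at random)."""

    def __init__(self, backends):
        self.avg = {b: 0.0 for b in backends}    # running mean response time
        self.count = {b: 0 for b in backends}    # completed requests per backend

    def pick(self):
        """Choose the backend with the lowest average response time."""
        best = min(self.avg.values())
        candidates = [b for b, t in self.avg.items() if t == best]
        return random.choice(candidates)

    def record(self, backend, response_time):
        """Update the running average after a request completes."""
        self.count[backend] += 1
        n = self.count[backend]
        self.avg[backend] += (response_time - self.avg[backend]) / n
```

Under this policy a local backend answering in 10 ms will be preferred over a cross-DC backend answering in 250 ms, approximating the machine → DC → anywhere priority in the question.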
What algorithm is used to balance HTTP load among several instances running on Bluemix? It seems I can use the Auto-Scaling service to scale horizontally, and I want to know what algorithm is used when balancing the load.
Cloud Foundry uses round-robin load balancing to distribute requests across the running instances of your app.
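Round-robin simply hands each request to the next instance in turn and wraps around. A minimal sketch of that behavior, with illustrative instance names (this is a model of the policy, not the Cloud Foundry router's code):

```python
from itertools import cycle

# Each incoming request is routed to the next app instance in turn,
# wrapping back to the first after the last.
instances = ['app-instance-0', 'app-instance-1', 'app-instance-2']
next_instance = cycle(instances)

def route_request():
    """Return the instance that should receive the next request."""
    return next(next_instance)
```

This spreads requests evenly by count, regardless of how busy each instance currently is.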
I have a controller and a topology created using Mininet. I need to generate traffic among the hosts of the topology via iperf, so that the controller becomes overloaded and cannot cope. Is there a command to generate a huge number of packets at a time, or a large amount of traffic, with iperf?
What does "the controller is loaded and it can not handle" mean? Do you mean finding a way to generate huge traffic to saturate the CPU? The network bandwidth? Or some other resource?
Just found this tool, which is also useful for generating multi-threaded traffic on Linux: https://github.com/Microsoft/ntttcp-for-linux
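For iperf itself, UDP mode with parallel streams and small datagrams maximizes packets per second; pairing up many different hosts also forces the controller to install new flow rules, which is what actually loads it. The helper below only builds iperf 2 command strings; in a Mininet script you would execute them with `server.cmd(...)` / `client.cmd(...)` (the defaults are illustrative):

```python
def iperf_flood_cmds(server_ip, streams=10, rate='100M',
                     datagram_len=64, duration=60):
    """Build an iperf server/client command pair (iperf 2 syntax).
    UDP (-u) at a fixed rate (-b) with small datagrams (-l) maximizes
    packets per second; -P runs parallel streams; & backgrounds both
    commands so a Mininet script can launch many pairs at once."""
    server = 'iperf -s -u &'
    client = ('iperf -c {ip} -u -b {bw} -l {l} -P {p} -t {t} &'
              .format(ip=server_ip, bw=rate, l=datagram_len,
                      p=streams, t=duration))
    return server, client
```

For example, in a Mininet script: `h2.cmd(server)` then `h1.cmd(client)` for every (h1, h2) pair you want to flood.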
I'm using the Apache MINA server to process my workflow.
But when too many processes are launched, the MINA server occupies much of the JVM heap and I can't progress further.
One instance of "org.apache.mina.transport.socket.nio.NioSocketSession" loaded by
"org.jboss.classloader.spi.base.BaseClassLoader # 0xb9b10d58" occupies 685,361,840 (68.96%) bytes.
The memory is accumulated in one instance of "java.lang.Object[]" loaded by "<system class loader>".
1. So is there any alternative to MINA?
2. How can I handle my human task without MINA?
Kindly suggest a solution...
There are two alternatives to Apache MINA currently supported in jBPM 5.2:
- LocalTaskService: runs locally, next to your process engine
- HornetQ: uses HornetQ messages for communication between the client and the server
Kris