I'm trying to write a Taurus YAML configuration file that will split 200 threads between two workers. This is the part of the config file that sets this up:
execution:
- distributed:
  - 172.17.0.2:1099
  - 172.17.0.3:1099
  scenario: scenario
  concurrency: 200

scenarios:
  scenario:
    properties:
      PERFUSER: 400
    script: /scenarios/scipt.jmx
But with this config every worker gets 200 threads. How can I write the config file such that what I specify for concurrency gets distributed equally to all workers (in this case 100 threads for each worker)?
Taurus generates a JMeter .jmx script and kicks off the JMeter master process.
The JMeter master sends the .jmx script to the slaves.
Each slave independently executes the .jmx script and reports the results back to the master.
Therefore, if you define concurrency as 200, each slave will execute the script with a concurrency of 200: with 2 slaves you will have 400 users, with 3 slaves 600 users, and so on.
So you need to manually reduce the concurrency in proportion to the number of slave machines.
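With two slaves, you would therefore set concurrency to 100 to end up with 200 users in total. A corrected sketch of the config from the question (keeping the question's own addresses, property and script path) could look like this:

execution:
- distributed:
  - 172.17.0.2:1099
  - 172.17.0.3:1099
  scenario: scenario
  concurrency: 100   # 100 threads per slave x 2 slaves = 200 users in total

scenarios:
  scenario:
    properties:
      PERFUSER: 400
    script: /scenarios/scipt.jmx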
More information:
Apache JMeter Distributed Testing Step-by-step
Remote Testing
How to Perform Distributed Testing in JMeter
OS: Windows
Machines: VDI, remote machines
I'm getting a connection timeout error in JMeter distributed testing when clicking Remote Start All, but the scripts work fine when run normally.
If you're getting connection timeouts between the JMeter master and slaves, check your JMeter instances' RMI configuration:
The JMeter master should be able to connect to the slaves to transfer the .jmx script to them.
The JMeter slaves should be able to connect to the master to transfer the test results back.
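If the machines can reach each other but the connections still time out, a common cause is that the RMI ports are dynamic and blocked by a firewall. The keys below are standard JMeter RMI properties (normally set in user.properties / jmeter.properties on the relevant machines); they are shown here as a Taurus-style sketch to match the config at the top of this page, and the port numbers are just examples:

modules:
  jmeter:
    properties:
      server_port: 1099              # RMI registry port each slave listens on (as in the config above)
      server.rmi.localport: 50000    # port the slave's RMI server uses; open it on the slave machines
      client.rmi.localport: 60000    # port on the master that slaves send results back to; open it on the master
      server.rmi.ssl.disable: true   # JMeter 4.0+: either disable RMI SSL or distribute rmi_keystore.jks to every node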
If you're getting connection timeouts as the result of your Samplers, it may be that your application under test is getting overloaded: if you have 100 users in the Test Plan and 1 slave, it will kick off 100 users, and each additional slave adds 100 more, so with 10 slaves you get 1000 users. Check the Active Threads Over Time listener (it can be installed using the JMeter Plugins Manager) to see the actual load you generate.
Please also invest some effort into the way you're asking for help; How do I ask a good question? is a very good place to start.
I am doing a load test to tune my Apache httpd to serve the maximum number of concurrent HTTPS requests. Below are the details of my test.
System
I dockerized my httpd and deployed it in OpenShift; the pod configuration is 4 CPU, 8 GB RAM.
I am running load from JMeter with 200 threads, a 600-second ramp-up time, an infinite loop count, and a long run duration (JMeter is running in the same network on a VM with 16 CPU, 32 GB RAM).
I compiled httpd with the worker MPM and deployed it in OpenShift.
Issue
1. httpd is not scaling beyond 90 TPS, even after trying multiple mpm_worker configurations (no difference between the default and higher settings).
2. Beyond 90 TPS, the average response time increases and TPS drops.
Please let me know what the issue could be; if any further information is required, I can provide it, and suggestions are welcome.
I don't have the answer, but I do have questions.
1/ What does your Dockerfile look like?
2/ What does your OpenShift cluster look like? How many nodes? Separate control plane and workers? What version?
2b/ Specifically, how is traffic entering the pod? (If you are going in via a route, you'll want to look at your load balancer; if you want to exclude OpenShift from the equation then, for the short term, expose a NodePort and have JMeter hit that directly.)
3/ Do I read correctly that your single pod was assigned 8G ram limit? Did you mean the worker node has 8G ram?
4/ How did you deploy the app -- raw pod, deployment config? Any cpu/memory limits set, or assumed? Assuming a deployment, how many pods does it spawn? What happens if you double it? Doubled TPS or not - that'll help point to whether the problem is inside httpd or inside the ingress route.
5/ What's the nature of the test request? Does it make use of any files stored on the network, or "local" files provisioned in a network PV?
And,
6/ What are you looking to achieve? Maximum concurrent requests in one container, or maximum requests in the cluster? If you've not already, look to divide and conquer -- more pods on more nodes (a rough Deployment sketch follows below).
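For points 4 and 6, here is a minimal sketch of what explicit resource limits and a replica count look like in a Deployment; the name, image and values below are placeholders, not taken from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-loadtest            # placeholder name
spec:
  replicas: 2                     # doubling this is the quick "divide and conquer" check from point 6
  selector:
    matchLabels:
      app: httpd-loadtest
  template:
    metadata:
      labels:
        app: httpd-loadtest
    spec:
      containers:
      - name: httpd
        image: image-registry.example.com/httpd:latest   # placeholder image
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
          limits:
            cpu: "4"              # matches the 4 CPU / 8 GB figures mentioned in the question
            memory: 8Gi

If doubling the replicas roughly doubles TPS, the limit is inside each httpd pod; if it does not, look at the route/ingress path instead.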
Most likely you have run into a bottleneck/limitation at the SUT. See the following post for a detailed answer:
JMeter load is not increasing when we increase the threads count
I have a batch-based microservice that runs at a particular interval through a Chronos job, and I have to performance test it. This microservice doesn't return any response; it downloads zip files from Amazon S3, extracts them and uploads the individual files from the zip back to Amazon S3. I use JMeter to performance test web applications. Can I use JMeter for perf testing this batch-based microservice? If yes, what would I have to do?
Yes, you can use JMeter for this; take a look at:
HTTP Request sampler - to mimic downloads and uploads
Save Responses to a file listener - to store downloaded zip files
OS Process Sampler - to unzip the downloaded files
Check our Performance Testing: Upload and Download Scenarios with Apache JMeter article for detailed information on JMeter configuration for file operations.
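If you end up driving the test from a Taurus config like the one at the top of this page, a rough sketch of the HTTP half of that flow is below. All URLs, parameters and paths are placeholders; storing the downloaded zip to disk and unzipping it would still need the "Save Responses to a file" listener and the OS Process Sampler described above, which the simple YAML DSL does not cover:

execution:
- scenario: batch-sim
  concurrency: 1
  iterations: 1

scenarios:
  batch-sim:
    requests:
    - url: https://downloads.example.com/input.zip      # placeholder: download the zip
      method: GET
    - url: https://uploads.example.com/files            # placeholder: upload an extracted file
      method: POST
      upload-files:
      - param: file
        path: /tmp/extracted/file1.csv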
You can't load test this service, because it has no particular HTTP endpoint. There is no load on this service since it's not being hit by any user.
You should use production monitoring instead to track any performance issue while the service is running.
I'm struggling with using JMeter to test my .NET web app. I'm running JMeter locally against a staging environment in Azure for my app. Hitting some endpoints, I get:
java.net.ConnectException: Connection timed out: connect
Which tells me it's something happening on my end, not caused by my app. My app shows no errors and is serving requests at this magnitude with ease.
In this particular test, I have 300 threads with a ramp-up of 10 seconds, repeated 3 times.
What can I do to diagnose further? Is there some kind of limit being imposed client-side?
JMeter's default configuration is not suitable for producing high loads; you can use it for test development and debugging only. When it comes to running the load test, you need to at least increase the Java heap space allocated to JMeter (it is only 512 MB by default; the browser you're using to read this page probably consumes twice as much).
You should also be running your test in non-GUI mode, as the JMeter GUI is not designed to correctly display information while any substantial load is being generated; it is usable up to ~50 threads at most.
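JMeter's heap is normally raised via the HEAP / JVM_ARGS environment variables before launching it in non-GUI mode (jmeter -n -t test.jmx -l results.jtl). If you happen to drive JMeter through Taurus as in the first question on this page, a sketch of the same settings in the config would be (4G is an arbitrary example value):

modules:
  jmeter:
    memory-xmx: 4G   # raise JMeter's maximum heap; the default is far too small for a real load test
    gui: false       # keep JMeter in non-GUI mode (this is also the Taurus default)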
More information:
JMeter Best Practices
9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
I would also suggest keeping an eye on your load generator's health during the test and collecting information on CPU, RAM, swap, disk and network usage, as well as some JVM metrics like heap usage, garbage collections, etc., so you can see whether your JMeter instance is configured well enough and whether there is enough headroom in terms of hardware resources. Of course, doing the same on the server side is a must. You can use the PerfMon JMeter Plugin to collect this information and plot the results along with other test metrics, so you can correlate the values and identify the cause. See How to Monitor Your Server Health & Performance During a JMeter Load Test for plugin configuration and usage instructions.
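As a sketch of that idea in Taurus form, assuming the PerfMon ServerAgent is already running on the machines you want to watch on its default port 4444 (the host name below is a placeholder):

services:
- module: monitoring
  server-agent:
  - address: load-generator.example.com:4444   # machine running ServerAgent
    metrics:
    - cpu
    - memory
    - disks
    - network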
From the Storm docs:
supervisor.slots.ports: "For each worker machine, you configure how many workers run on that machine with this config. Each worker uses a single port for receiving messages, and this setting defines which ports are open for use. If you define five ports here, then Storm will allocate up to five workers to run on this machine."
And from the Storm concepts:
Workers: Topologies execute across one or more worker processes. Each worker process is a physical JVM and executes a subset of all the tasks for the topology.
My storm.yaml defines:
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
And then I run a topology with topology.workers set to 3 (kafka-spout-parallelism set to 1 and solr-bolt-parallelism set to 2, no other bolts).
The Storm UI also shows that my topology is running fine with 1 spout and 2 bolts.
But when I log in to the Storm machines and run ps -aef | grep storm or jps -l, I do not see the JVM processes for the workers anywhere. The only processes I see are:
Machine 1:
jps -l
30675 backtype.storm.daemon.supervisor
30583 backtype.storm.daemon.logviewer
Machine 2:
jps -l
6818 backtype.storm.ui.core
6995 backtype.storm.daemon.supervisor
6739 backtype.storm.daemon.nimbus
6904 backtype.storm.daemon.logviewer
Does Storm not create one physical JVM per worker? And does that not translate to one JVM per port listed in supervisor.slots.ports?
One worker slot equates to one worker JVM, and slots are only taken up when a topology is deployed.
It's possible you checked the processes before the workers actually started. Check the Storm UI to make sure the topology is up, running, and processing data. If it's not, then use the log viewer to look for errors. It is possible the workers are crashing due to uncaught exceptions.