Is there a way to synchronize "thread group runs" across slaves in JMeter's distributed testing?
For example, suppose my test plan has 2 thread groups and I run JMeter with 2 slaves. If one of the slaves finishes the first thread group first, I want that particular slave to wait until the other slave is done with the first thread group as well. Then I want them to proceed with the second one together!
Please help with this problem.
I do not think we have a straightforward method for this in JMeter. I assume you run your thread groups consecutively.
In that case, add one more thread group in between, with a thread count of 1. Create a Beanshell Sampler in it and do your own sync operation there (create a file in a common location, then wait until the total number of files equals the total number of slaves), as sketched below. That will make your sync reasonably accurate.
I think this approach will help!
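A rough Beanshell sketch of that file-based sync, assuming a shared directory every slave can see; the path, the slave count and the use of the hostname for the marker file name are placeholders you would adapt:

    import java.io.File;
    import java.net.InetAddress;

    String syncDir = "/mnt/shared/jmeter-sync";  // common location visible to all slaves (assumption)
    int totalSlaves = 2;                         // total number of slaves in the test (assumption)

    // Each slave drops a marker file once it has finished the first thread group.
    File marker = new File(syncDir, InetAddress.getLocalHost().getHostName() + ".done");
    marker.createNewFile();

    // Block until every slave has written its marker file, then let this thread finish
    // so the next thread group can start.
    while (new File(syncDir).listFiles().length < totalSlaves) {
        Thread.sleep(1000);
    }

Remember to clean the marker files out between runs, otherwise stale files will make the check pass too early.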
OR
You can have 2 tests: the first thread group in one JMX file and the second thread group in a second JMX file. Just run the second test once the first test is complete.
I'm wondering if there is a way to get the parent execution of an execution in Camunda. What I'm trying to achieve is basically the following:
This is a simple process involving a parallel gateway. Each of the flows is composed of a service task (external) and a user task.
In each "Pre: Task X" service task, I want to set some variables that I will use afterward in their respective user tasks. I want each execution flow of the parallel gateway to have their own variables and non-accessible from the other flows. How can I achieve this?
I was doing some tests and I found the following:
When the process is instantiated, I instantly get 5 execution instances.
What I understand is that one belongs to the process, the next two belong to each flow of the parallel gateway, and the last two belong to each of the service tasks.
If I call "complete" for one of the service tasks on the REST API with localVariables, they will instantly disappear and no further be available because they will be tied to the execution associated to the external task, which is terminated after the task completion.
Is there a way in which I can get the parent execution of the task, which in this case would be the parallel execution flow. So I can set localVariables at this level?
Thanks in advance for the valuable help
Regards
First of all, 5 executions doesn't mean they are all active. In your case there should only be 2 active executions when you start a new instance of the process. You can set your variables in the respective executions as return values of the respective service tasks.
You can set variables for the process instance, but keep in mind that you have 2 executions and 1 process instance: you cannot set the same variable separately for multiple executions.
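As a rough sketch of that suggestion with the Camunda external task client for Java (the topic name, variable name and REST endpoint below are placeholders; one subscription per "Pre: Task X" topic is assumed):

    import java.util.HashMap;
    import java.util.Map;
    import org.camunda.bpm.client.ExternalTaskClient;

    public class PreTaskWorker {
        public static void main(String[] args) {
            ExternalTaskClient client = ExternalTaskClient.create()
                    .baseUrl("http://localhost:8080/engine-rest")  // engine REST endpoint (assumption)
                    .build();

            // One subscription per "Pre: Task X" topic; "pre-task-a" is a placeholder.
            client.subscribe("pre-task-a")
                    .handler((externalTask, externalTaskService) -> {
                        Map<String, Object> variables = new HashMap<>();
                        // Use a branch-specific variable name, since the same name cannot be
                        // kept separate for the two executions at process instance level.
                        variables.put("preTaskAResult", "value computed for this branch");

                        // Completing with these (non-local) variables hands them back to the
                        // engine instead of binding them to the external-task execution that
                        // ends right after completion.
                        externalTaskService.complete(externalTask, variables);
                    })
                    .open();
        }
    }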
Hello and thank you for reviewing this question!
I'm working on an SGE cluster with 16 available worker nodes. Each has 32 cores.
I have a rule that defines a process of which only one instance may run per worker node. This means I could in theory run 16 jobs at a time. It's fine if other things are happening on each worker node; there just can't be two jobs from this specific rule running at the same time. Is there a way to ensure this?
I have tried setting memory resources. But setting, for example,
    resources:
        mem_mb=10000
and running
    snakemake --resources mem_mb=10000
will only allow one job to run at a time in total, not one job per node. Is there a way to set each individual node's memory limit? Or some other way to achieve one job per node for only a specific rule?
Thank you,
Eric
In JMeter
I have a test plan with 100 virtual users. If I set the ramp-up time to 100, the whole test takes 100 seconds to complete the whole set. That means each thread takes 1 second per virtual user, i.e. the threads are carried out step by step, each one after the completion of the previous one.
Problem: I need 100 users accessing the website at the same time, concurrently and simultaneously. I read about CSV, but that still acts step-wise, doesn't it? Or maybe I am not clear about it. Please enlighten me.
You're running into the "classic" situation described in the "Max Users is Lower than Expected" article.
JMeter acts as follows:
Threads are started according to the ramp-up time. If you put 1 there, the threads will be started almost immediately. If you put 100 threads and a 100-second ramp-up time, 1 thread will start initially and the next thread will be kicked off each second.
Threads start executing samplers from top to bottom (or according to the Logic Controllers).
When a thread doesn't have any more samplers to execute or loops to iterate, it is shut down.
So I would suggest adding more loops at the Thread Group level so that threads kicked off earlier keep looping while the others are still starting; that way you finally have 100 threads working at the same time. You can configure the test execution time either in the Thread Group's "Scheduler" section or via a Runtime Controller.
Another good option is the Ultimate Thread Group, available via JMeter Plugins, which provides an easy way of configuring your load scenario.
Can we run two thread groups in parallel within a single test plan in JMeter?
Example:
I have to add 2 test cases to a test plan, and they have to be executed in parallel. Also, can we combine this test plan with any other test plan so that they are executed simultaneously?
JMeter supports running more than one scenario in parallel as part of the same test plan.
Each scenario is managed in its own Thread Group element.
So for your case, add a new Thread Group to the test plan, and set the steps for the second scenario there. When you have more than 1 Thread Group, you can configure the test plan to start them at the same time (or one after the other).
There is no guarantee that the requests will be sent at exactly the same time, but both Thread Groups will start simultaneously.
Hope it helps :)
I'm trying to keep an eye on how long an application runs. To do this, I capture every process's ID as it starts, and when that process is shut down, I log the time. However, Google's Chrome starts and stops like 6 processes when you start it up and shut it down, meaning each execution of Chrome gets logged multiple times.
Is there a better way to track the execution of an application than by process ID? Or is there, perhaps, a technique for getting around this particular problem? I'd considered not adding a process ID if a process with the same ID was added within a second or so, but that seems exploitable.
Any ideas?
I am not 100% sure, but I would assume that one of the Chrome processes must be the parent. Try eliminating processes from your list if their parent (PPID) is also one of them (and not init = PID 1).
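For example, a minimal sketch of that filtering with Java's ProcessHandle API (Java 9+), assuming the application is matched by its executable name and "chrome" is just a placeholder:

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class ParentFilter {
        public static void main(String[] args) {
            // Collect all processes whose command path contains the application name.
            List<ProcessHandle> appProcs = ProcessHandle.allProcesses()
                    .filter(ph -> ph.info().command()
                            .map(cmd -> cmd.contains("chrome"))
                            .orElse(false))
                    .collect(Collectors.toList());

            Set<Long> appPids = appProcs.stream()
                    .map(ProcessHandle::pid)
                    .collect(Collectors.toSet());

            // Keep only the "root" processes: those whose parent is not itself one of
            // the application's processes (or whose parent is unknown/already gone).
            List<ProcessHandle> roots = appProcs.stream()
                    .filter(ph -> ph.parent()
                            .map(parent -> !appPids.contains(parent.pid()))
                            .orElse(true))
                    .collect(Collectors.toList());

            roots.forEach(ph -> System.out.println("Tracking PID " + ph.pid()));
        }
    }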
I ended up just checking if I was adding a duplicate. Not very efficient, but easy and effective. It will serve for now.