I have a Test plan in which there are multiple Thread Groups.
I want to run all of the Thread Groups sequentially.
Thread Groups are as below:
Thread Group1
Thread Group2
Thread Group3
. . .
Thread GroupN
I've read different blogs and articles on the internet where people claim that the Thread Groups will run in the order they are defined, but apparently they do not in my case: Thread Group4 runs before Thread Group1. Thread Group4 generates a report, which is wrong because it runs before Thread Group1.
How do I ensure the ordering of my Thread Groups?
Also, I need to implement the following scenarios:
Run a single request multiple times as a single user (a single user should create 1000 accounts from a single HTTP Request).
Run multiple requests multiple times by multiple users simultaneously (multiple users should create 1000 accounts simultaneously from a single HTTP Request).
How to do so?
PS: Please read and understand the query carefully before replying.
Concerning consecutive execution of Thread Groups in the Test Plan: simply check the Run Thread Groups consecutively check-box on the Test Plan configuration screen.
For the first scenario (a single user running one request many times), use e.g. a Loop Controller:
Thread Group
    Number of Threads = 1
    Loop Count = 1
    ...
    Loop Controller
        Loop Count = N
        HTTP Request
    ...
or even a scheme without the Loop Controller, though it is not as flexible:
Thread Group
    Number of Threads = 1
    Loop Count = N
    ...
    HTTP Request
    ...
For the second scenario (multiple simultaneous users), use the Number of Threads property of the standard Thread Group together with the Ramp-Up Period property:
Thread Group
    Number of Threads = N
    Ramp-Up Period = 0
    Loop Count = 1
    ...
    HTTP Request
    ...
This will start N simultaneous threads executing the same scenario.
Check the Run Thread Groups consecutively (i.e. run groups one at a time) check-box in the Test Plan.
Refer to this link; it asks you to check the following check-box in the main Test Plan:
Run Thread Groups consecutively (i.e. one at a time)
http://www.mahsumakbas.net/run-jmeter-thread-groups-consecutively/
Just add more Thread Groups to your Test Plan.
In the Test Plan properties, tick Run Thread Groups consecutively for step-by-step execution of the Thread Groups.
Here is the simple solution I found for running multiple Thread Groups in a particular order:
Check the option "Run Thread Groups consecutively" under "Test Plan"
Order your "Thread Group/s" in the order you want to be executed using drag and drop approach
Regarding Alies Belik's answer, there is another way than running the Thread Groups consecutively, which is to use a setUp Thread Group for the first part.
Regarding the ramp-up period, it is better to set a value > 0, which is more realistic: thread startup is spread across the ramp-up period, so the more threads you have, the longer it takes for all of them to start. For example, 100 threads with a 10-second ramp-up means roughly one new thread every 0.1 seconds.
Related
I am processing records from one DB to another DB. The batch job is called multiple times in a single request (the process API URL is triggered only one time).
How can I add up the total records processed (given by the payload in the On Complete phase) for one complete request?
For example, I ran the process and the batch job executed three times, so I want the sum of the records across all 3 batch jobs.
That's not possible because of how the Batch scope works:
In the On Complete phase, none of these variables (not even the original ones) are visible. Only the final result is available in this phase. Moreover, since the Batch Job Instance executes asynchronously from the rest of the flow, no variable set in either a Batch Step or the On Complete phase will be visible outside the Batch Scope.
source: https://docs.mulesoft.com/mule-runtime/4.3/batch-processing-concept#variable-propagation
What you could do is store the results in a persistent repository, for example in your database.
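For illustration, here is a minimal sketch of what that could look like with a database table. The table name, columns, and the correlation_id used to tie together all batch executions of one request are assumptions, and the Database connector configuration that would execute these statements from the flow is not shown.

-- Hypothetical table; each On Complete phase inserts its own record count.
CREATE TABLE batch_run_totals (
  correlation_id    VARCHAR2(64),   -- shared id for all batch runs of one request
  records_processed NUMBER,
  completed_at      TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- Run from each On Complete phase (e.g. via the Database connector):
INSERT INTO batch_run_totals (correlation_id, records_processed)
VALUES (:correlationId, :totalRecords);

-- Run by the flow once all batch jobs have finished, to get the grand total:
SELECT SUM(records_processed) AS total_records
  FROM batch_run_totals
 WHERE correlation_id = :correlationId;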
The current situation is that I have an application that scales horizontally with one SQL database. Periodically, a background process is run, but I only want one invocation of this background process running at a time. I have tried to accomplish this by using a database row and locking, but I am stuck. The requirement is that only one batch job should complete successfully per day.
Currently I have a table called lock which has three columns: timestamp, lock_id, status. Status is an enum with three values: 0 = not running, 1 = running, 2 = completed.
The issue is that if a batch job fails and status is equal to 0, how can I make sure that only one background process will retry? How do I guarantee that only one background process is running in the retry scenario?
In an ideal world, I would like to do a SELECT statement that checks the STATUS in the locking table; if status = 0 (not running), then start the background job and change status to 1 (running). However, if all horizontally scaled processes do this at the same time, is it guaranteed that only one is executed?
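For reference, here is a minimal sketch of the check-and-set described above, written as a single UPDATE so that the database's row locking decides which process wins. The table and column names are taken from the question (note that lock and timestamp are reserved words in many databases and may need quoting), and :job_lock_id is an assumed bind variable identifying the lock row.

-- One atomic statement instead of SELECT followed by UPDATE:
UPDATE lock
   SET status = 1, timestamp = CURRENT_TIMESTAMP
 WHERE lock_id = :job_lock_id
   AND status = 0;

-- If the statement reports 1 row updated, this process owns the lock and may run
-- the batch job; if it reports 0 rows, another process claimed it first.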
Thanks!
I have a specific requirement for the 1-to-n problem. If I have groups, and each group contains multiple items, then I can give these groups to a multi-instance subprocess.
If a group does not complete because of an issue with one item, then I want to remove that item from the group and finish processing that group. Is this addressed by the 1-to-n problem? If yes, what is the solution?
I am a developer on a web app that uses an Oracle database. However, often the UI will trigger database operations that take a while to process. As a result, the client would like a progress bar when these situations occur.
I recently discovered that I can query V$SESSION_LONGOPS from a second connection, and this is great, but it only works for operations that take longer than 6 seconds. This means that I can't update the progress bar in the UI until 6 seconds have passed.
I've done research on wait times in V$SESSION, but as far as I've seen, that doesn't include the wait for the query itself.
Is there a way to get the progress of the currently running query of a session? Or should I just hide the progress bar until 6 seconds has passed?
Are these operations PL/SQL calls or just long-running SQL?
For PL/SQL operations we can write messages with SET_SESSION_LONGOPS() in the DBMS_APPLICATION_INFO package, and monitor these messages in V$SESSION_LONGOPS.
For this to work you need to be able to quantify the operation in units of work. These must be iterations of something concrete and numeric, not time. So if the operation is to insert 10,000 rows, you could split that up into 10 batches. The totalwork parameter is the number of batches (i.e. 10), and you call SET_SESSION_LONGOPS() after every 1000 rows to increment the sofar parameter. This allows you to render a thermometer of ten blocks.
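As a rough sketch of that approach (target_table and the 10 x 1000 batching are placeholders from the example above, and l_run_id is an assumed unique id passed in the context parameter):

DECLARE
  l_rindex  BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno    BINARY_INTEGER;
  l_batches CONSTANT PLS_INTEGER := 10;   -- 10 batches of 1000 rows each
  l_run_id  CONSTANT NUMBER := 42;        -- assumed unique id for this run
BEGIN
  FOR i IN 1 .. l_batches LOOP
    -- placeholder for the real work: insert the next 1000 rows
    INSERT INTO target_table (id)
      SELECT level + (i - 1) * 1000 FROM dual CONNECT BY level <= 1000;

    -- publish progress: sofar/totalwork drives the thermometer
    dbms_application_info.set_session_longops(
      rindex      => l_rindex,
      slno        => l_slno,
      op_name     => 'Bulk insert',
      context     => l_run_id,
      sofar       => i,
      totalwork   => l_batches,
      target_desc => 'target_table',
      units       => 'batches');
  END LOOP;
  COMMIT;
END;
/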
These messages are session-based but there's no automatic way of distinguishing the current message from previous messages from the same session & SID. However if you assign a UID to the context parameter you can then use that value to filter the view.
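The monitoring session can then pick out just this run, for example (with :run_id being the same value passed as context in the sketch above):

SELECT opname, target_desc, sofar, totalwork,
       ROUND(100 * sofar / totalwork) AS pct_done
  FROM v$session_longops
 WHERE context = :run_id
   AND totalwork > 0;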
This won't work for a single long running query, because there's no way for us to divide it into chunks.
I found this very useful:
dbms_application_info.set_module('MY Program', 'Kicking off ...');
..
dbms_application_info.set_action('Extracting data ...');
..
dbms_application_info.set_action('Transforming data ...');
..
You can monitor the progress using:
select module, action from v$session where sid = :yoursessionid
I've done quite a lot of web development with Oracle over the years and found that most users prefer an indeterminate progress bar to a determinate bar that is inaccurate (à la pretty much any of Microsoft's progress bars, which annoy me no end), and unfortunately there is no infallible way of accurately determining query progress.
Whilst your research into the long ops capability is admirable and would definitely help make the progress of the database query more reliable, it can't take into account the myriad of other variables that may/will affect the web operation's transactional progress (network load, database load, application server load, client-side data parsing, the user clicking a submit button 1,000 times, and so on).
I'd stick to the indeterminate progress method using JavaScript callbacks. It's much easier to implement and it will manage your users' expectations appropriately.
Using V$SESSION_LONGOPS requires setting TIMED_STATISTICS=true or SQL_TRACE=true. Your database schema must be granted the ALTER SESSION system privilege to do so.
I once tried using V$SESSION_LONGOPS with a complex and long-running query. But it turned out that V$SESSION_LONGOPS only shows the progress of parts of the query, like full table scans, join operations, and the like.
See also: http://www.dba-oracle.com/t_v_dollar_session_longops.htm
What you can do is simply show the user that the query is still running. I implemented a <DIV> nested in a <TD> that gets longer with every status request sent by the browser. Status requests are initiated by window.setTimeout (every 3 seconds) and are AJAX calls to a server-side procedure. The status report returned by the server-side procedure simply says "we are still running". The progress bar's width (i.e. the <DIV>'s width) increments by 5% of the <TD>'s width every time and is reset to 5% after reaching 100%.
For long-running queries you might track the time they took in a separate table, possibly with individual entries for varying WHERE clauses. You could use this to display the average time plus the time that has elapsed so far in the client-side dialog.
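A minimal sketch of such a timing table (all names are illustrative):

-- One row per execution of a known long-running query.
CREATE TABLE query_timings (
  query_tag    VARCHAR2(100),   -- identifies the query / WHERE-clause variant
  elapsed_secs NUMBER,
  run_at       TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- Average duration to show next to the elapsed time in the client-side dialog:
SELECT AVG(elapsed_secs) FROM query_timings WHERE query_tag = :tag;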
If you have a long-running PL/SQL procedure or the like on the server side doing several steps, try this (a sketch of the status table and logging procedure follows the list):
create a table for status messages
use a unique key for any process the user starts. Suggestion: the client side's JavaScript date in milliseconds plus the session ID.
in case the long running procedure is to be started by a link in a browser window, create a job using DBMS_JOB.SUBMIT to run the procedure instead of running the procedure directly
write a short procedure that updates the status table, using PRAGMA AUTONOMOUS_TRANSACTION. This pragma allows you to commit updates to the status table without committing your main procedure's updates. Each major step of your main procedure should have an entry of its own in this status table.
write a procedure to query the status table to be called by the browser
write a procedure that is called by an AJAX call if the user clicks "Cancel" or closes the window
write a procedure that is called by the main procedure after completion of each step: it queries the status table and raises an exception with a number in the -20000 range if the cancel flag was set or the browser has not queried the status for, say, 60 seconds. In the main procedure's exception handler, look for this error, do a rollback, and update the status table.
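A minimal sketch of the status table and the autonomous-transaction logger from the list above (all names are illustrative):

CREATE TABLE process_status (
  process_key VARCHAR2(100) PRIMARY KEY,  -- e.g. JavaScript ms timestamp + session id
  step_name   VARCHAR2(200),
  cancel_flag CHAR(1) DEFAULT 'N',
  updated_at  TIMESTAMP
);

CREATE OR REPLACE PROCEDURE log_status (
  p_process_key IN VARCHAR2,
  p_step_name   IN VARCHAR2
) AS
  PRAGMA AUTONOMOUS_TRANSACTION;   -- commits independently of the main procedure
BEGIN
  UPDATE process_status
     SET step_name = p_step_name, updated_at = SYSTIMESTAMP
   WHERE process_key = p_process_key;
  IF SQL%ROWCOUNT = 0 THEN
    INSERT INTO process_status (process_key, step_name, updated_at)
    VALUES (p_process_key, p_step_name, SYSTIMESTAMP);
  END IF;
  COMMIT;   -- only the status row is committed here
END log_status;
/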
Can we run two scenarios at the same time with LoadRunner?
E.g. suppose there are 50 users and I have to generate a script such that 25 users access the login and order modules while the other 25 users just browse the site.
Is it possible to generate such scenario?
Running multiple scenarios at the same time (assuming a standalone Controller) is not possible on THE SAME Controller at THE SAME time.
From your description of the problem I assume you are looking for multiple scripts (groups) running in the same scenario - if so then the answer is YES.
In the Controller you add more Groups (scripts are called groups in the Controller) and define the number of Vusers or the % of total Vusers (depending on the scenario type and Controller version) for each group. I have not seen any limit on the number of groups per scenario; I've never needed more than 15 groups in a single scenario, though.
Rohit,
I have a direct question for you: are you using an evaluation license?
Generally you would just use a 50-user license with two groups of 25 users apiece. The only time I have gotten this question over the past 15 years is when someone is trying to combine result sets from two 25-user evaluation-license copies of a LoadRunner Controller.
Rohit, you can create two separate scripts - one for your login transactions and the other for the browse-site transactions. In the Controller scenario, you can select Group mode, add these two scripts as groups, and assign 25 Vusers to each.