Running Rational Performance Tester on a schedule - rational-performance-test

Is it possible to run Rational Performance Tester once every hour and generate a report that contains the response times for every hour for all pages? Like this:
hour 1: hello.html min avg max time
hour 2: hello.html min avg max time
hour 3: hello.html min avg max time
If you use an ordinary schedule and let it iterate once every hour, all response times get lumped together in the report like this:
hello.html min avg max count=24
Would it be possible to start RPT from a script, run a specific project/schedule, and then let cron run that script every hour?

To run Rational Performance Tester tests automatically, you can use the command-line feature built into the tool. If you create a Windows scheduled task that runs a .bat file (or a Unix crontab entry that runs a shell script) containing the following command, that solves the first bit: calling the RPT test automatically.
cmdline -workspace "workspace" -project "testproj" -schedule "schedule_or_test"
For more details on the above command, refer to the link below:
Executing Rational Performance Tester from the command line
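For example, assuming cmdline.sh is the command-line launcher in your RPT installation (the install path, workspace path, and log file below are placeholders), a crontab entry along these lines would kick the schedule off at the top of every hour:
0 * * * * /opt/IBM/SDP/cmdline.sh -workspace "/home/rpt/workspace" -project "testproj" -schedule "schedule_or_test" >> /var/log/rpt_hourly.log 2>&1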
For the second bit, producing the response-time report automatically, there seems to be no easy way (which is a shame), but you can write Java custom code to log page response times to a text file.

For sure, you can schedule that task using Rational Quality Manager, IBM's new centralized QA management tool. In the same tool, you can also start your test plan with Java code, which lets you manage that.
Hope this helps.

Why would you want to do that? It sounds like you are looking for a way to monitor a running website! If so, there are much simpler options, such as adding %D to the Apache LogFormat to write out the time taken to serve each page and processing your web logs every hour instead :-)
If you really want to do this, then don't use RPT - use JMeter or something more command-line friendly; it would be easy then. In fact, if it's just loading a page, then curl on a cron job would do it.
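To make both suggestions concrete (the format name, log paths, and URL are placeholders):
# Apache: %D logs the time taken to serve the request, in microseconds
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed_combined
CustomLog logs/access_timed.log timed_combined
# cron: fetch one page every hour and log curl's total transfer time
# (% must be escaped as \% inside a crontab entry)
0 * * * * curl -o /dev/null -s -w '\%{time_total}\n' http://example.com/hello.html >> /var/log/hello_times.log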

Well, it is not a single page; it is a WebSphere Portal running on a mainframe, so it is not just a matter of opening up an Apache config.
I haven't looked into JMeter, but we have a couple of steps that must be done in the test (log in, do some work, and log out) that we want to measure, and we already have a test flow in RPT that we use, so it would be nice to reuse it even if this is not what RPT is meant for.
//Olme

You can use several stages for the schedule (select the schedule, then add them in the "User Load" tab):
stage 1, duration 1 hour
stage 2, duration 1 hour
stage 3, duration 1 hour
You would get a test result with several stages. Select all the stages, right-click, and choose the "Compare" option. After comparing the stages' results, it looks like:
stage 1: hello.html min avg max time
stage 2: hello.html min avg max time
stage 3: hello.html min avg max time

Related

Python APScheduler stop jobs before starting a new one

I need to start a job every 30 minutes, but before a new job is started I want the old instance of the same job to be terminated. This is to make sure the job always fetches the newest data file, which is constantly being updated.
Right now I'm using the BlockingScheduler paired with my own condition to stop the job (stop the job if it has processed 1k data items, etc.). I was wondering if APScheduler supports this "only one job at a time, and stop the old one before starting the new one" behavior natively.
I've read the docs, but I think the closest is still the default behavior, which equals max_instances=1; this just prevents new jobs from firing before the old job finishes, which is not what I'm looking for.
Any help is appreciated. Thanks!
After further research I came to the conclusion that this is not supported natively in APScheduler, but inspired by
Get number of active instances for BackgroundScheduler jobs
, I modified the answer into a working way of detecting the number of currently running instances of the same job. So when you have an infinite loop/long-running task executing and you want the new instance to replace the old one, you can add something like:
if scheduler._executors['default']._instances['set_an_id_you_like'] > 1:
    # if multiple instances are running, break the loop / return
    return
and this is what it should look like when you start the scheduler:
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

scheduler = BlockingScheduler(timezone='Asia/Taipei')
scheduler.add_job(main, 'cron', minute='*/30', max_instances=3,
                  next_run_time=datetime.now(), id='set_an_id_you_like')
scheduler.start()
But as in the answer at that link, please refrain from doing this if someday there is a native way to do it; currently I'm using APScheduler 3.10.
This method at least doesn't rely on calling time.now() or datetime.datetime.now() in every iteration to check whether the time has passed since the loop started. In my case, since my job runs every 30 minutes, I didn't want to calculate a time delta, so this is what I went for. I hope this hacky method helps someone who has been googling for a few days before ending up here.

AWS Glue metrics to populate Job name, job Status, Start time, End time and Elapsed time

I tried various metrics options using glue.driver.*, but there is no clear way to get the job name, job status, start time, end time, and elapsed time in CloudWatch metrics. This information is already available under the job runs history, but there is no way to get it in Metrics.
I found a few solutions where this can be achieved using a Lambda function, but there should be an easier way.
Please share ideas. Thanks.
We had the same issue. In order to track Glue job runs, we ended up writing a small shell script that transformed the JSON output of https://docs.aws.amazon.com/cli/latest/reference/glue/list-jobs.html into CSV.
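For anyone who would rather do this in Python than shell, here is a rough boto3 sketch (not the poster's script) that pulls the same fields from get_job_runs and writes them to CSV; the output path is a placeholder and pagination is ignored for brevity:
import csv
import boto3

glue = boto3.client('glue')

with open('glue_job_runs.csv', 'w', newline='') as f:  # placeholder output path
    writer = csv.writer(f)
    writer.writerow(['JobName', 'JobRunState', 'StartedOn', 'CompletedOn', 'ExecutionTimeSeconds'])
    for job_name in glue.list_jobs()['JobNames']:
        for run in glue.get_job_runs(JobName=job_name)['JobRuns']:
            writer.writerow([
                run['JobName'],
                run['JobRunState'],
                run['StartedOn'],
                run.get('CompletedOn', ''),  # empty while the run is still in progress
                run['ExecutionTime'],
            ])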

Collect statistics on current traffic with Bro

I want to collect statistics on traffic every 10 seconds, and the only tool that I have found is the connection_state_remove event:
event connection_state_remove(c: connection)
    {
    SumStats::observe("traffic", [$str="all"], [$num=c$orig$num_bytes_ip]);
    }
But how do I deal with connections that have not been removed by the end of the period? How do I get statistics from them?
The events you're processing are independent of the time interval at which the SumStats framework reports statistics. First, you need to define what exactly are the statistics you care about — for example, you may want to count the number of connections for which Bro completes processing in a given time interval. Second, you need to define the time interval (in your case, 10 seconds) and how to process the statistical observations in the SumStats framework. This latter part is missing in your snippet: you're only making an observation but not telling the framework what to do with it.
The examples in the SumStats documentation are very close to what you're looking for.
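For reference, a minimal sketch in the shape of the SumStats documentation example, reusing the "traffic" stream from the observe() call above (check the exact record fields against your Bro version):
event bro_init()
    {
    local r1 = SumStats::Reducer($stream="traffic", $apply=set(SumStats::SUM));
    SumStats::create([$name="traffic-per-10s",
                      $epoch=10sec,
                      $reducers=set(r1),
                      $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                          {
                          # sum of the per-connection byte counts observed in this 10-second epoch
                          print fmt("bytes observed in the last 10 seconds: %.0f", result["traffic"]$sum);
                          }]);
    }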

Measure query execution time excluding start-up cost in postgres

I want to measure the total time taken by Postgres to execute my query, excluding the start-up cost. Earlier I was using \timing, but then I found that \timing includes the start-up cost.
I also tried EXPLAIN ANALYZE, where I found that the actual time is reported in a format like: actual time=12.04..12.09
So does this mean that the time taken to execute the query, excluding start-up time, is 0.05? If not, is there a way to exclude start-up costs and measure query execution time?
What you want is actually quite ill-defined.
"Startup cost" could mean:
network connection overhead and back-end start cost of establishing a new connection. Avoided by re-using the same session.
network round-trip times for sending the query and getting the results. Avoided by measuring the timing server-side with log_min_duration_statement = 0 or (with timing overhead) using EXPLAIN ANALYZE or the auto_explain module.
Query planning time. Avoided by PREPAREing the query, then timing only the subsequent EXECUTE.
Lock acquisition time. There is not currently any way to exclude this.
Note that using EXPLAIN ANALYZE may not be ideal for your purposes: it throws the query result away, and it adds its own costs because of the detailed timing it does. I would set log_min_duration_statement = 0, set client_min_messages appropriately, and capture the timings from the log output.
So it sounds like you want to PREPARE a query and then EXPLAIN ANALYZE EXECUTE it, or just EXECUTE it with log_min_duration_statement set to 0.
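As a small illustration of the PREPARE/EXECUTE route (the table, column, and parameter value are made up):
-- plan the query once, outside the timed region
PREPARE q(int) AS SELECT count(*) FROM orders WHERE customer_id = $1;
-- time only the execution of the already-prepared statement
EXPLAIN ANALYZE EXECUTE q(42);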
For exploring planning costs and executor costs separately, you need to turn on several postgresql.conf parameters:
log_planner_stats = on
log_executor_stats = on
and explore your log file.
Update:
1. Find your config file location by executing:
SHOW config_file;
2. Set the parameters. Don't forget to remove the comment symbol '#'.
3. Restart the PostgreSQL service.
4. Execute your query.
5. Explore your log file.

How to reduce time allotted for a batch of HITs?

Today I created a small batch of 20 categorization HITs named "Grammatical or Ungrammatical" using the web UI. Can you tell me the easiest way to manage this batch so that I can reduce its allotted time from 1 hour to 15 minutes and also remove the Masters categorization requirement? This is a very simple task that's set to auto-approve within 1 hour, and I am fine with that. I just need to make it more lucrative for people to attempt at the penny rate.
You need to register a new HITType with the relevant properties (reduced allotted time and no Masters qualification) and then perform a ChangeHITTypeOfHIT operation on all of the HITs in the batch.
API documentation here: http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_ChangeHITTypeOfHITOperation.html
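For what it's worth, a hedged sketch of the same flow using boto3, whose MTurk client exposes this operation as update_hit_type_of_hit; the reward, title, description, and HIT IDs below are placeholders:
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')

# Register a new HITType: 15-minute assignment duration, 1-hour auto-approval,
# and no Masters qualification requirement.
new_type = mturk.create_hit_type(
    Title='Grammatical or Ungrammatical',
    Description='Categorize a sentence as grammatical or ungrammatical.',
    Keywords='categorization, grammar',
    Reward='0.01',
    AssignmentDurationInSeconds=15 * 60,
    AutoApprovalDelayInSeconds=60 * 60,
    QualificationRequirements=[],  # no Masters (or any other) qualification
)

# Move every HIT in the batch over to the new HITType.
hit_ids = ['HIT_ID_1', 'HIT_ID_2']  # placeholders for the 20 HITs in the batch
for hit_id in hit_ids:
    mturk.update_hit_type_of_hit(HITId=hit_id, HITTypeId=new_type['HITTypeId'])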