Rally: Maximum number of test results for each test case

Is there a limit to the number of test case results we can assign to a test case in Rally? I want to run regression tests and add test case results every day. Would that be OK?
Thanks,
Haris

Unless you have thousands of test cases, and assuming it really is only once a day rather than on every build, you should have no trouble. Rally continues to perform well with large numbers of artifacts, as recently confirmed by a performance study.
Disclosure: I am Director of Analytics for Rally Software.

As such, there is no upper limit.

Is it possible to continue tests sequentially from prior test state with Foundry/Forge?

I'm wondering if there is a way within Forge to continue your tests sequentially, starting from the contract end state of the previous test, without re-pasting the prior test's code as setup. Obviously I could just make one massive test; however, I would lose the gas data and such that I would get from individual tests. Thank you in advance to anyone who can help :)
No, this is not possible. Each test is run in isolation, with the exception that state from setUp is preserved.
You can make a bigger test and retain gas reporting information if you use forge test --gas-report instead of going by the gas reported by plain forge test. The --gas-report flag uses transaction tracing to build a gas usage report of the individual calls rather than of the entire test.

JMeter Performance Testing - Difference in result count between API and MongoDB using JSR223 sampler

I have created a performance test suite using JMeter 4.0. I have multiple test cases which are divided into 2 fragments, and I am calling them from a single thread. The test cases in the 2 fragments are of the following types:
Test Fragment 1: CRUD operations on User
Test Fragment 2: Getting user counts from MongoDB and the APIs and comparing them
The test cases from Test Fragment 1 run first, multiple times based on the thread count, and then the test cases from the second fragment run.
In Test Fragment 2 I have these two test cases:
TC1: Fetching the user count from MongoDB (using a JSR223 Sampler)
TC2: Fetching the user count using the API
When the 2nd Test Fragment runs, the test case that fetches the user count from MongoDB gives a different count than the test case that fetches the count using the API directly. The APIs are taking time to update data in MongoDB, as there may be some layers that take time to write data to the database (I am not sure which layers exist or why it takes time exactly). The scripts work fine when I run them for a single user, so I don't think anything is wrong with the script itself.
Could someone please suggest what approach we can use here to get the same count?
a. Is it a good approach to add timers/delays, or can something else be used?
b. If we use timers/delays, do they affect the performance test report as well? Are those delays going to add up in our performance test reports?
It might be the case that you're facing a race condition, i.e. while you're performing the read operation from the database with one thread, the number has already been updated by another thread.
The options are:
Amend your queries so your DB checks are user-specific (see the sketch below).
Use a Critical Section Controller to ensure that your DB check is executed by only 1 thread at a time.
Use the Inter-Thread Communication plugin in order to implement synchronisation across threads based on certain conditions. The latter can be installed using the JMeter Plugins Manager.
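As an illustration of the first option, here is a minimal JSR223 Sampler sketch (Groovy engine, plain Java syntax, MongoDB Java driver 3.x assumed). The host, database name, collection name, createdBy field and the userId variable are all assumptions; adjust them to your setup:

    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    // Connect to MongoDB; host and port are placeholders.
    MongoClient client = new MongoClient("dbhost", 27017);
    try {
        MongoCollection<Document> users =
            client.getDatabase("appdb").getCollection("users");

        // Count only the documents created by this thread's user, so the
        // check is not disturbed by writes from other threads.
        String userId = vars.get("userId");
        long count = users.count(Filters.eq("createdBy", userId));

        // Expose the count to a later assertion or comparison step.
        vars.put("dbUserCount", String.valueOf(count));
    } finally {
        client.close();
    }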

JMeter test standards

I am using JMeter to test my own web application with HTTP requests. The final result seems okay, but I have one question: are there any details of a testing standard? I am writing a report which needs some data as a reference.
For example, something like: the connect time and loading speed should be lower than XXXX ms, or the sample time should be between XX and XX.
I didn't find any references about this, so does anyone know of something that could be used as reference data?
I doubt you will be able to find "references". Normally, when people invest in performance testing, they either have non-functional requirements to check, or they would rather spend money on performance testing to see if/when/where their system breaks instead of losing it for every minute of unexpected system downtime.
So if you're developing an internal application for your company, users will "have to" wait until it does its job, as they don't have any alternative. On the other hand, they will lose their valuable time, so you will be like "serial programmer John".
If you're running an e-commerce website and it is not responsive enough, users just go to your competitors and never return.
If you still want some reference numbers:
According to a report by Akamai, 49% of respondents expected web pages to load in under 2 seconds, 30% expected a 1-second response, and 18% expected a site to load immediately. 50% of frustrated users will visit another website to accomplish their activity, while 22% will leave and won't return to a website where problems have occurred.
Similarly, a Dynatrace survey last year found that 75 percent of all smartphone and tablet users said they would abandon a retailer's mobile site or app if it was buggy, slow or prone to crashes.
See the Why Performance Testing Matters - Looking Back at Past Black Friday Failures article for more information.
Feng,
There are no standard acceptance criteria for application performance. Most of the time the product owner decides the acceptable response time, but as performance testers we should always recommend keeping the response time within 2 seconds.
If you are running performance testing on your application for the first time, it is good to set the benchmark and baseline of your application; based on those you can run your future tests and make recommendations to the development team.
In performance testing, you can set benchmarks for the following KPIs (a small worked example appears below):
Response time
Throughput
Also, it's recommended to share a detailed performance report with the stakeholders so that they can easily make their decisions. JMeter now provides a Dashboard Report that has all the critical KPIs and performance-related information.
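For illustration only, here is a minimal Java sketch of how those two KPIs could be computed from raw sample data; the numbers and the 90th-percentile choice are made up, and in practice you would take these figures straight from JMeter's Aggregate Report or Dashboard Report:

    import java.util.Arrays;

    public class KpiSketch {
        public static void main(String[] args) {
            // Elapsed times in ms for each sample (made-up data), plus the
            // wall-clock duration of the test.
            long[] elapsedMs = {120, 250, 310, 95, 480, 200, 150, 700, 230, 180};
            double testDurationSec = 60.0;

            // Throughput: completed samples per second over the whole run.
            double throughput = elapsedMs.length / testDurationSec;

            // 90th-percentile response time: sort, then index ceil(0.9 * n) - 1.
            long[] sorted = elapsedMs.clone();
            Arrays.sort(sorted);
            int idx = (int) Math.ceil(0.9 * sorted.length) - 1;

            System.out.printf("Throughput: %.2f requests/s%n", throughput);
            System.out.printf("90th percentile response time: %d ms%n", sorted[idx]);
        }
    }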

Get ALL sql queries executed in the last several hours in Oracle

I need a way to collect all queries executed in an Oracle DB (Oracle Database 11g) in the last several hours, regardless of how fast the queries are (this will be used to calculate SQL coverage after running tests against my application, which has several points where queries are executed).
I cannot use views like V$SQL, since there's no guarantee that a query will remain there long enough. It seems I could use DBA_HIST_SQLTEXT, but I didn't find a way to filter out queries executed before the current test run.
So my question is: which table or view could I use to get absolutely all queries executed in a given period of time (up to 2 hours), and what DB configuration should I adjust to reach my goal?
"I need the query itself since I need to learn which queries out of all queries coded in the application are executed during the test"
The simplest way of capturing all the queries executed would be to enable Fine-Grained Auditing on all your database tables. You could use the data dictionary to generate policies on every application table, as sketched below.
Note that even when writing to an OS file, such a number of policies will have a high impact on the database and will increase the length of time it takes to run the tests. Therefore you should only use these policies to assess your test coverage, and disable them for other test runs.
Find out more.
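A hypothetical Java/JDBC sketch of that idea: walk USER_TABLES and create one DBMS_FGA policy per table, keeping the SQL text in the audit trail. The connection details, the COV_ policy-name prefix, and the DB + EXTENDED trail choice are all assumptions, and 11g limits policy names to 30 characters:

    import java.sql.*;
    import java.util.*;

    public class EnableFgaCoverage {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details - adjust for your environment.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "APP_SCHEMA", "secret")) {

                // Collect every application table from the data dictionary.
                List<String> tables = new ArrayList<>();
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT table_name FROM user_tables")) {
                    while (rs.next()) {
                        tables.add(rs.getString(1));
                    }
                }

                // One FGA policy per table. DB + EXTENDED keeps the full SQL text
                // in DBA_FGA_AUDIT_TRAIL, which is what the coverage check needs.
                String plsql = "BEGIN DBMS_FGA.ADD_POLICY("
                        + "object_schema => USER, "
                        + "object_name => ?, "
                        + "policy_name => 'COV_' || ?, "  // beware the 30-char limit in 11g
                        + "statement_types => 'SELECT,INSERT,UPDATE,DELETE', "
                        + "audit_trail => DBMS_FGA.DB + DBMS_FGA.EXTENDED); END;";
                try (PreparedStatement ps = con.prepareStatement(plsql)) {
                    for (String table : tables) {
                        ps.setString(1, table);
                        ps.setString(2, table);
                        ps.execute();
                    }
                }
            }
        }
    }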
In the end I went with what Justin Cave suggested and instrumented the Oracle JDBC driver to collect every executed query in a Set, then dumped them all into a file after running the tests.
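The shape of that instrumentation might look like the following minimal sketch (class and method names are hypothetical; a real setup would wrap the DataSource or use a proxy driver rather than editing call sites by hand):

    import java.io.IOException;
    import java.nio.file.*;
    import java.sql.*;
    import java.util.*;
    import java.util.concurrent.ConcurrentHashMap;

    public final class SqlRecorder {
        // Thread-safe set of every distinct SQL string seen during the run.
        private static final Set<String> SEEN = ConcurrentHashMap.newKeySet();

        // Route statement creation through this chokepoint instead of
        // calling connection.prepareStatement(...) directly.
        public static PreparedStatement prepare(Connection con, String sql)
                throws SQLException {
            SEEN.add(sql);
            return con.prepareStatement(sql);
        }

        // Dump everything that ran, e.g. from a test-suite shutdown hook;
        // compare this file against the queries coded in the application.
        public static void dump(Path target) throws IOException {
            List<String> sorted = new ArrayList<>(SEEN);
            Collections.sort(sorted);
            Files.write(target, sorted);
        }
    }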

JMeter and end-to-end testing

I have a jmx script that is being used to perform functional and load testing.
The script tests, using 1 user and multiple thread users, a simple order management system that does the following four things:
Load the System
Login
Order Placement (select a product, add to cart, check out, submit the order up to the Order Confirmation page)
Logout
These steps become steps in the jmx script.
When the script is executed, I see no major issues. JMeter does not report any errors as it gathers performance metrics and processing times.
However, post-testing, when we check the database (and the system itself, outside of JMeter), the orders that should have been created when we ran the JMeter test are not there.
I assumed that when JMeter logs in as a dummy user and performs any transactions on the UI, those transactions make their way through to the database. There is a transaction that goes end to end. But it appears that this is not the case here.
Any ideas as to what might be causing this?
Does JMeter actually push the actions on the UI all the way through to the back end?
Any help would be appreciated.
First, JMeter is not a browser; it only reproduces the HTTP traffic exchanged with the server.
Second, are you adding assertions to check that responses are OK and contain what they should? (See the sketch after this list for one way to do this.)
Third, you say you use 1 user and N threads. If by this you mean you only have 1 user that you multithread, then your test is wrong, as it will provoke caching, transaction contention, etc.
I suggest you check your script first with one user and a View Results Tree listener. Then check your users by running them all with a low number of threads.
Finally, run the real load test.
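As an example of the second point, a JSR223 Assertion along these lines (Groovy engine, Java syntax; the "Order Confirmation" marker text is an assumption, use whatever your confirmation page actually contains) would flag failed orders instead of letting the samplers pass silently:

    // JSR223 Assertion attached to the order-submission sampler.
    String body = prev.getResponseDataAsString();
    if (!prev.isSuccessful() || !body.contains("Order Confirmation")) {
        AssertionResult.setFailure(true);
        AssertionResult.setFailureMessage(
            "Order was not confirmed; response code=" + prev.getResponseCode());
    }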