Consider the case where the test process terminates as soon as a defect is detected. We need to take the probability of passing into account when scheduling the test sets so that the expected total test time is minimized.
Example (t = test time, p = probability of passing):
t1 = 91, p1 = 0.805414173
t2 = 79, p2 = 0.73921812
t3 = 61, p3 = 0.940068379
Expected time: t1(1-p1) + (t1+t2)p1(1-p2) + (t1+t2+t3)p1p2(1-p3) + ... , plus a final (t1+...+tn)p1p2...pn term for the case where every test passes.
Can you please help with an algorithm to order the tests so that the expected total time is minimized?
The tests can be ordered by increasing
t1/(1-p1)
This gives the minimum expected test time. It follows from an exchange argument: with just two tests, running test 1 first costs t1 + p1*t2 in expectation, while running test 2 first costs t2 + p2*t1, so test 1 should go first exactly when t1(1-p2) < t2(1-p1), i.e. when t1/(1-p1) < t2/(1-p2).
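A quick sketch in Python, using the three example tests above, that checks the t/(1-p) ordering against a brute-force search over all permutations:

```python
from itertools import permutations

# (time, probability of passing) for each test, from the example above
tests = [(91, 0.805414173), (79, 0.73921812), (61, 0.940068379)]

def expected_time(order):
    """E = sum of t_i * P(test i is reached) = t1 + p1*t2 + p1*p2*t3 + ..."""
    total, prob_reached = 0.0, 1.0
    for t, p in order:
        total += prob_reached * t  # test runs only if all earlier tests passed
        prob_reached *= p
    return total

# Greedy rule: sort ascending by t/(1-p).
greedy = sorted(tests, key=lambda tp: tp[0] / (1 - tp[1]))

# Brute-force check over all 3! orderings confirms the greedy order is optimal.
best = min(permutations(tests), key=expected_time)
assert list(best) == greedy
print(greedy)  # the 79-unit test (highest failure rate per unit time) runs first
```

For this data the ratios are roughly 467.7, 302.9, and 1017.8, so the optimal order is the 79-unit test, then the 91-unit test, then the 61-unit test.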
Do we have an API in qTest that can provide a summary of test cycle execution?
E.g. Passed: 23, Failed: 7, Unexecuted: 10, Running: 2
We need this data to generate a report in our consolidated reporting tool, alongside data from some other sources.
Nothing gives exactly what you're asking for, but you could use the API calls below to build it yourself.
You can get the status of all test runs in a project using
GET /api/v3/projects/{projectId}/test-runs/execution-statuses
Or, to get results from a specific test cycle, first find all the test runs in that cycle using
/api/v3/projects/{projectId}/test-runs?parentId={testCycleID}&parentType=test-cycle
(append &expand=descendants to find test runs in containers under the test cycle)
and then get the results of each run individually using
/api/v3/projects/{projectId}/test-runs/{testRunId}/test-logs/last-run
See https://qtest.dev.tricentis.com/
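As a rough sketch, the two calls above could be combined into a small summary script. This is Python with a placeholder domain, token, and IDs, and it assumes typical qTest response shapes (an `items` array on the paged test-run listing, and a `status.name` field on the test log) — verify those against the actual payloads your instance returns:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own qTest domain, API token, and IDs.
BASE = "https://yourcompany.qtestnet.com/api/v3"
TOKEN = "your-bearer-token"
PROJECT_ID = 1
TEST_CYCLE_ID = 2

def get(path):
    """Issue an authenticated GET against the qTest REST API."""
    req = urllib.request.Request(BASE + path,
                                 headers={"Authorization": "Bearer " + TOKEN})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(statuses):
    """Tally a list of execution-status names into a summary dict."""
    summary = {}
    for s in statuses:
        summary[s] = summary.get(s, 0) + 1
    return summary

if __name__ == "__main__":
    # Find all test runs under the cycle; expand=descendants walks nested containers.
    runs = get(f"/projects/{PROJECT_ID}/test-runs"
               f"?parentId={TEST_CYCLE_ID}&parentType=test-cycle&expand=descendants")
    statuses = []
    for run in runs.get("items", []):
        # Fetch the latest result of each run individually.
        log = get(f"/projects/{PROJECT_ID}/test-runs/{run['id']}/test-logs/last-run")
        statuses.append(log["status"]["name"])
    print(summarize(statuses))  # e.g. {"Passed": 23, "Failed": 7, ...}
```

Keeping the tallying in a separate `summarize` function makes it easy to merge these counts with data from your other sources in the consolidated report.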
How can we fail a test case using the Toleration threshold and the Frustration threshold, and how do we display that result in a summary report in the JMeter dashboard?
1. "How can we fail the test case using the Toleration threshold and Frustration threshold" - as of JMeter 5.4.3 it's not possible to "fail" samplers based on the "thresholds".
2. Given point 1, all the results, even those that exceed the thresholds, will be marked as "passed" as long as the status code is below 400.
3. If you want JMeter to automatically fail a request whose response time exceeds an acceptable threshold, consider adding a Duration Assertion and setting the maximum response time there; if the response time is higher, JMeter will mark the relevant sampler(s) as failed.
More information on JMeter Assertions concept: How to Use JMeter Assertions in Three Easy Steps
At some point during a test run in the IntelliJ test runner, the "total" test count begins to increase (as though it's finding more tests while running), but the summary displayed when the tests finish shows something like this:
Stopped. Tests failed: 5, passed: 3090, ignored: 392 of 3825 tests - 4 m 22 s 836 ms.
I'm not the best at math, but I'm pretty sure 5 + 3090 + 392 (= 3487) != 3825.
I see nothing in the IntelliJ bug reports mentioning anything like this, so I'm wondering if there might just be a setup issue on my part, or something else entirely...
I was looking at the examples found on this website :
http://www.tutorialspoint.com/operating_system/os_process_scheduling_algorithms.htm
And there's something that just doesn't make sense about those examples. Take shortest-job-first, for example. The premise is that you take the process with the least execution time and run it first.
The example runs P1 first and then P0. But why? At t = 0 the only process in the queue is P0. Wouldn't that start running at t = 0, and then P1 start at t = 6?
I've got the same issue with priority based scheduling.
You are right: since process P0 arrived in the queue at 0 s, before P1, it will start executing before P1.
Their answer would only be correct if the processes had no arrival times; in that case all the processes are assumed to reach the queue at the same time, so the process with the shortest execution time is executed by the CPU first.
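The distinction can be made concrete with a small non-preemptive SJF simulation that respects arrival times (the process table here is hypothetical, not the tutorial's exact data):

```python
# Non-preemptive shortest-job-first that respects arrival times:
# at each step, only processes that have already arrived are candidates.
def sjf_schedule(processes):
    """processes: list of (name, arrival, burst). Returns (name, completion) pairs."""
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    time, order = 0, []
    while remaining:
        # Only processes that have already arrived are eligible to run.
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)  # CPU idles until the next arrival
            continue
        # Among the ready processes, pick the one with the shortest burst.
        name, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((name, arrival, burst))
        time += burst
        order.append((name, time))  # record completion time
    return order

# P0 arrives alone at t = 0, so it runs first even though P1's burst is shorter.
print(sjf_schedule([("P0", 0, 6), ("P1", 2, 2), ("P2", 3, 8)]))
# -> [('P0', 6), ('P1', 8), ('P2', 16)]
```

If every process were given arrival time 0 instead, the same function would run P1 first, which is the ordering the tutorial shows.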
Let's say I have a test case with some steps in it. Now, let's say that step 3 needs to be repeated after you complete steps 4 and 5 ... so that when you do step 6 you are in the right place.
Is it good practice to tell the tester to repeat a step? Or would it be better to copy and paste the repeated step into the place where it needs to be repeated?
I am hearing arguments that it is not industry standard to tell the tester to repeat steps and that one might not pass certain certifications if test cases are written in this manner.
Example:
Step 1: Click the View Event Log button; Expected Results: Event Log window appears
Step 2: Close the event log window (X) or OK; Expected Results: The Event Log window disappears
Step 3: Repeat Step 1; Expected Results: Expected Results from Step 1
Step 4: Click the Cancel button; Expected Results: The Event Log window closes and any changes (such as clearing the log) are not applied
Step 5: Repeat Step 1; Expected Results: Expected Results from Step 1
Step 6: Click the Clear button and hit apply; Expected Results: The log is cleared
...
Some people think that I should copy and paste what is in Step 1 each time I need to repeat it, rather than simply saying that the tester should repeat the step. Any input on industry standards, potential downfalls, etc. would be greatly appreciated.
Test case design does not really follow an industry standard, but if you were trying to get certified, telling the tester to repeat a step is a no-no. I personally think that's crap; I see no problem asking a tester to repeat a step. As a believer in agile methodology, I prefer much simpler test cases, so a tester has more time to test scenarios rather than design test cases (or a developer has more time to develop, if you are on a cross-functional team). If you're looking for more input from a larger testing community, try http://www.qaforums.com/
Test cases should be as independent as possible and should not verify two outcomes in a single test case. Test cases should not be designed so that the tester has to repeat any previous step; in that situation a new test case should be written, because it is a new path. The pro of this approach is that at the end of execution you'll have a clear picture of the test coverage and the pass/fail percentage of the requirements, because all the test cases are independent.