How do we create snapshot versions of the test results of a test set in HP ALM Test Lab?
I need this versioning to keep track of the past execution details.
Any suggestions on how to achieve this?
I do not believe you need to copy/archive/version anything to get the results you need. Consider this...
First Testing Effort
Follow these steps the first time you execute your tests.
Create your test set.
Add the tests that will be part of your effort.
All tests will be in the "No Run" status with no run history.
Execute your tests (multiple times as necessary if they do not pass initially) until your effort is complete.
Second Testing Effort
The next time you need to run the tests, use the same test set.
Reset the test set: right-click the test set in Test Lab and select 'Reset Test Set'. This step resets the status of every test instance back to 'No Run' (the run history itself is preserved).
Execute your tests (multiple times as necessary if they do not pass initially) until your effort is complete.
Comparison
At this point, each test has maintained a full run history: all the runs from the First Effort are still available along with those from the Second Effort. Open the Test Instance details of one of the tests in your test set, and then select the 'Runs' option. There you will be able to see every time that test instance was executed, regardless of which testing effort it belonged to.
Alternative
If this doesn't meet your needs, an alternative would be to leave the test set from your first testing effort alone when you are done. When your second effort begins, copy/paste your test set and perform all your new runs from that copy. Copy/paste does not copy run history, so the new test set will be a blank slate. There is a COM-based API (the OTA API) for doing all of this through code if necessary.
I have a build job which takes a parameter (say, which branch to build) and, when it completes, triggers a testing job (actually several jobs) which does things like downloading a bunch of test data and checking that the new version works with that data.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. If I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say, 6 branches and 6 different types of testing mean I need 36 testing jobs, and then when I want to make a change, say to save more builds, I need to update all 36 by hand).
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs created / updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it just that separating the building and testing jobs like this is not recommended, or is there some other method for filtering the test results of a job based on build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't look through all the Jenkins job results for an overview, but would create a web page or something like that instead.
I wouldn't expect the full matrix of builds/tests; the combinations that are actually used will become clear from the use cases.
Is there any way that I can upload the test result to QC from the desktop?
I am using the following code:
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Open "C:\Test"
Set rep = CreateObject("QuickTest.RunResultsOptions")
rep.ResultsLocation = "Root\TestFolder\TestSet\Test"
rep.TDTestSet = "Root\TestFolder\TestSet"
qtApp.Run rep, True
The above code runs the test successfully but does not upload the result to QC.
However, if the script is launched from QC, it stores the result in QC:
qtApp.Open "QC Path"
Is it possible by any chance to run a script from the desktop and store the result in QC?
If you specify a local path, it will be used. If you specify a QC path, that's where the results will show up. So far, so (un)clear.
If you want to upload to QC a run result that has been generated locally in a previous run, then there is no standard functionality for that. You might be lucky if you explore the API and find a way to relocate the local run result, but as far as I remember, the API does not cover enough functionality for that.
If you want the result to be generated locally during the test run and then uploaded to QC afterwards, then just specify a QC path and execute the test. The run result will be created locally and then, in one huge upload phase at the end of the test run, uploaded to QC.
So the standard behavior does exactly what you are looking for. It does not, as one might think, create a run result in QC step by step during the test run; the result is empty until the test run is complete. (This used to be different in older QC/TD versions, where you could see the progress so far by looking at the run result of a currently running test. That doesn't work for QC 10 anymore, as far as I know.)
Generally, for a test to store its run results in QC, the (outermost) test must be part of a QC test set, since run results are always associated with a test set (be it the default test set or an explicitly specified one). For a test to be part of a test set, it must be stored in QC (i.e. in the test plan, or in the resources tree). Thus, it is impossible to store a test locally (like, on the desktop) and send its run results to QC. The run result would be an orphan from QC's data-model perspective; as such it would violate referential integrity in the database, and thus cannot be created.
You can, however, create a QC test that calls a locally stored test, which generates result steps. Since the "outermost" test determines the run results location, running the QC test creates results in QC even though the main processing (and result generation) took place in the locally stored test's script code.
I have created a test case and recorded it. Now I need to modify the test case.
If I modify even a single step, then the next time I want to execute it, MTM asks me to record the complete test case again. Is it possible to modify a single step?
If you are using the Fast Forward feature, then this is the expected behavior. Any changes to the test will require you to record the steps again.
Before every release, Selenium test cases need to be run in Jenkins. Now I need to compare the execution time of each test case in this build against the previous build. Is there any way to store the execution time of each test case in a database, so we can easily compare the times?
As far as I know, there isn't. If you use JUnit, you could add a listener for the start and end events of a test case and save the timings anywhere, in any form you want. I save the start/end times in a database for post-run analysis.
Do a Google search for JUnit + Listener; here is one of the results:
http://howtodoinjava.com/2012/12/12/how-to-add-listner-in-junit-testcases/
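For illustration, here is a minimal sketch of a JUnit 4 RunListener that records per-test execution times; the TimingListener class name, the saveToDatabase helper, and the table layout in the comment are assumptions of mine, not something from the linked article:

import org.junit.runner.Description;
import org.junit.runner.notification.RunListener;
import java.util.HashMap;
import java.util.Map;

// Records how long each test case takes by hooking the start/end events.
public class TimingListener extends RunListener {

    private final Map<String, Long> startTimes = new HashMap<>();

    @Override
    public void testStarted(Description description) {
        startTimes.put(description.getDisplayName(), System.currentTimeMillis());
    }

    @Override
    public void testFinished(Description description) {
        long startMillis = startTimes.remove(description.getDisplayName());
        long durationMs = System.currentTimeMillis() - startMillis;
        saveToDatabase(description.getDisplayName(), durationMs);
    }

    // Hypothetical persistence helper, e.g. INSERT INTO test_timings (test_name, duration_ms, build_id) ...
    private void saveToDatabase(String testName, long durationMs) {
        System.out.printf("%s took %d ms%n", testName, durationMs);
    }
}

You then register the listener with whatever drives your tests, for example: JUnitCore core = new JUnitCore(); core.addListener(new TimingListener()); core.run(MyTestSuite.class); and compare the stored durations build over build.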
My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually, they run as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds & populates the DB. Both tests perform the same Act step, which adds a handful of items to the DB (the same items), but each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run, so that everything stays isolated. However, the EF initialization stuff isn't working when I do that, because it creates a brand-new DB rather than dropping an existing one & replacing it with a new one (the latter is what triggers the seeding).
Any ideas on how to address this? It seems like a question of how I organize my tests... I'm just not sure how best to go about it and was looking for input. I really don't want to have to run each test manually.
Use the test setup and teardown methods provided by your test framework: start a transaction in the test setup and roll it back in the test teardown (example for NUnit). You can even put the setup and teardown methods in a base class for all tests; each test will then run in its own transaction, which is rolled back at the end of the test, returning the database to its initial state.
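The question is about EF/.NET (and the linked example is for NUnit), but the pattern itself is framework-agnostic. Here is a minimal sketch of the same idea in JUnit with plain JDBC, just to show the shape; the in-memory H2 connection string and the TransactionalTestBase name are assumptions for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.After;
import org.junit.Before;

// "Transaction per test": open a transaction in setup, roll it back in teardown,
// so every test leaves the database exactly as it found it.
public abstract class TransactionalTestBase {

    protected Connection connection;

    @Before
    public void openTransaction() throws Exception {
        // Hypothetical test database; replace with your own connection details.
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
        connection.setAutoCommit(false); // start a transaction
    }

    @After
    public void rollbackTransaction() throws Exception {
        connection.rollback();  // undo everything the test did
        connection.close();
    }
}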
In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could create a test that asserts that there is exactly 1 Order in the database at the end of the test.
But you can also write a Delta Assertion by first checking how many Orders there are in the database at the start of the test method. Then, after adding an Order to the SUT, you assert that there are NumberOfOrdersAtStart + 1 Orders in the database.
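As a sketch of what that looks like (again in JUnit/JDBC terms rather than EF, reusing the TransactionalTestBase above; countOrders and addOrderToSut are hypothetical helpers standing in for your own data access and SUT calls):

import static org.junit.Assert.assertEquals;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.Test;

public class OrderServiceDeltaTest extends TransactionalTestBase {

    @Test
    public void addingAnOrderIncreasesTheOrderCountByOne() throws Exception {
        // Delta Assertion: measure the starting state instead of assuming it is empty.
        int ordersAtStart = countOrders();

        addOrderToSut();

        assertEquals(ordersAtStart + 1, countOrders());
    }

    private int countOrders() throws Exception {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
            rs.next();
            return rs.getInt(1);
        }
    }

    private void addOrderToSut() {
        // Placeholder for the real call into the system under test.
    }
}

This way the assertion holds regardless of what the shared setup method already put in the database.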