How to save the results of a QTP script present on the desktop to a test set path in QC?

Is there any way that I can upload the test result to QC from the desktop?
I am using the following code:
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Open "C:\Test"                                  ' open the locally stored test
Set rep = CreateObject("QuickTest.RunResultsOptions")
rep.ResultsLocation = "Root\TestFolder\TestSet\Test"
rep.TDTestSet = "Root\TestFolder\TestSet"
qtApp.Run rep, True                                   ' run and wait for completion
The above code runs the test successfully but does not upload the result to QC.
However, if the script is launched from QC, it stores the result in QC:
qtApp.Open "QC Path"
Is it possible by any chance that we can run a script from the desktop and store the result in QC?

If you specify a local path, it will be used. If you specify a QC path, that's where the results will show up. So far, so (un)clear.
If you want to upload to QC a run result that was generated locally in a previous run, then there is no standard functionality for that. You might be lucky if you explore the API and find a way to relocate the local run result, but as far as I remember, the API does not cover enough functionality for that.
If you want the result to be generated locally during the test run and then uploaded to QC afterwards -- well, then just specify a QC path and execute the test. The run result will be created locally and, in one huge upload phase at the end of the test run, uploaded to QC.
So the standard behavior does exactly what you are looking for. It does not, as one might think, create the run result in QC step by step during the test run; it stays empty until the test run is complete. (This used to be different in older QC/TD versions, where you could watch the progress so far by looking at the run result of a currently running test. That doesn't work in QC 10 anymore, IMHO.)
Generally, for a test to store its run results in QC, the (outermost) test must be part of a QC test set, since run results are always associated with a test set (be it the default test set or an explicitly specified one). For a test to be part of a test set, it must be stored in QC (i.e. in the test plan, or in the resources tree). Thus, it is impossible to store a test locally (say, on the desktop) and send its run results to QC. From the perspective of QC's data model, that run result would be an orphan; it would violate referential integrity in the database and thus cannot be created.
You can, however, create a QC test that calls a locally stored test, which generates result steps. Since the "outermost" test determines the run results location, running the QC test creates results in QC even though the main processing (and result generation) took place in the locally stored test's script code.
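To make that concrete, here is a rough sketch of driving QTP from a local Python script (via pywin32 and the same COM objects used in the question) while opening a test that is stored in QC, so that the run results land in a QC test set. The server URL, domain, project, credentials and paths are placeholders, and the argument order of TDConnection.Connect is from memory, so check it against the QTP automation object model reference:

# Sketch only: drive QTP locally, but open a QC-stored test so its run
# results are written to a QC test set. All names below are placeholders.
import win32com.client

qt_app = win32com.client.Dispatch("QuickTest.Application")
qt_app.Launch()
qt_app.Visible = True

# Connect QTP to the QC/ALM project (assumed argument order).
qt_app.TDConnection.Connect("http://qcserver/qcbin", "MY_DOMAIN", "MY_PROJECT",
                            "user", "password", False)

# Open the test from the QC test plan, not from a local folder.
qt_app.Open("[QualityCenter] Subject\\MyFolder\\MyQcTest", True)

# Point the run results at a test set in the QC test lab.
results = win32com.client.Dispatch("QuickTest.RunResultsOptions")
results.TDTestSet = "Root\\TestFolder\\TestSet"

qt_app.Run(results, True)          # wait until the run (and the upload) finishes
qt_app.TDConnection.Disconnect()
qt_app.Quit()

The QC-stored test opened here can itself just call the locally stored test, as described above.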

Related

Test Results Repository Solutions?

I have been searching for a while now and am surprised that I can't find any solutions out there for test result storage with grouping and searching capabilities.
I'd like a service or self-hosted solution that supports:
storing test results in xunit/junit format, organized by keyword. In other words, I want to keep all my "test process A" test results together and all my "test process B" results together. At a minimum, I want to store failure traces and overall pass/fail.
getting the last run results for a keyword: the last "auth" test results with failure details
getting run history by keyword in some format
search of some sort on test results
I happen to have:
Cypress tests
typescript/mocha tests without cypress
custom test framework tests that will need custom reporters
but I am fine with any test results solution that supports a generic input like xunit.
I am definitely open to suggestions that use any other storage system that can accomplish this even if it isn't strictly a test results tool.
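As an illustration of the storage and query model described above (purely a sketch; the table layout and the "keyword" column are my own invention), parsing junit XML with the standard library and indexing it in SQLite already covers the "store by keyword" and "last run for keyword" requirements:

# Illustrative sketch: index junit/xunit results by an arbitrary keyword in
# SQLite so "last run for keyword X" style queries are possible.
import sqlite3
import time
import xml.etree.ElementTree as ET

db = sqlite3.connect("results.db")
db.execute("""CREATE TABLE IF NOT EXISTS results
              (keyword TEXT, suite TEXT, test TEXT, status TEXT,
               failure TEXT, run_at REAL)""")

def store_junit(xml_path, keyword):
    run_at = time.time()
    root = ET.parse(xml_path).getroot()
    for case in root.iter("testcase"):
        failure = case.find("failure")
        status = "fail" if failure is not None else "pass"
        message = failure.get("message", "") if failure is not None else ""
        db.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?, ?)",
                   (keyword, case.get("classname", ""), case.get("name", ""),
                    status, message, run_at))
    db.commit()

def last_run(keyword):
    # Return the test results of the most recent run stored for this keyword.
    latest = db.execute("SELECT MAX(run_at) FROM results WHERE keyword = ?",
                        (keyword,)).fetchone()[0]
    return db.execute("SELECT test, status, failure FROM results "
                      "WHERE keyword = ? AND run_at = ?",
                      (keyword, latest)).fetchall()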

Is there a way to re-run successful tests, from previous run, with google test?

Is there a gtest flag, or any other way with Google Test, to re-run tests that were previously successful (with no change to any code)?
There doesn't seem to be a specific option for re-running only previously successful tests.
You could run the test collection and generate a report in XML or JSON format. Then you could parse the report and create a --gtest_filter=... argument by joining the names of the successful tests into a colon-separated string.
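For example (a sketch only; the report file name is a placeholder), after a run with --gtest_output=xml:results.xml you could build the filter roughly like this:

# Sketch: read a Google Test XML report and build a --gtest_filter value that
# re-runs only the tests that passed on the previous run.
import xml.etree.ElementTree as ET

passed = []
for case in ET.parse("results.xml").getroot().iter("testcase"):
    # A test case that failed contains one or more <failure> elements.
    if case.find("failure") is None:
        passed.append("{}.{}".format(case.get("classname"), case.get("name")))

gtest_filter = "--gtest_filter=" + ":".join(passed)
print(gtest_filter)   # pass this flag on the next invocation of the test binary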
You could also write a custom test listener. It would need to write the names of successful tests to a file, and once again you could write some simple code to generate the corresponding --gtest_filter argument from that file.

How to extract a weekly report and log from a test that executes for 60 days in Robot Framework

I am running a performance/reliability/stress (P/S/R) testing script on my SUT (system under test) using Robot Framework and some libraries (e.g. s2l, os, bulletin, collection, datetime and some of our own in-house libraries), and it needs to run for 60 days to measure the expected P/S/R parameters. I know that after completing its 60-day execution (if the SUT is not interrupted by any system or networking issues), I will get a log and a report file.
But I have a requirement to get its weekly execution status as a log file or report file.
Is there any way to do this in Robot Framework? I am using Robot Framework only for my testing. Are there any internal/external libraries available (apart from the bulletin library) to do this efficiently?
Or can I include a Python script in the test environment? If so, how can I do this? Any suggestions?
My recommendation would be to rewrite the test so that it runs for one week. Then schedule a job, using Jenkins or a Python or bash script, that runs that test eight times. This gives you the benefit of a weekly report, and at the end you can use rebot to combine all of the reports into a single larger report.
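If you go that route, the whole schedule can also be driven from a small Python script using Robot Framework's public robot.run and robot.rebot entry points; the suite name and output paths below are placeholders:

# Sketch: run the one-week suite eight times, keeping each week's output,
# then merge everything into one combined report with rebot.
from robot import rebot, run

outputs = []
for week in range(1, 9):
    outdir = "results/week{}".format(week)
    run("weekly_suite.robot",          # the test rewritten to run for ~one week
        outputdir=outdir,
        output="output.xml",
        log="log.html",
        report="report.html")
    outputs.append(outdir + "/output.xml")

# Combine the eight weekly outputs into a single larger report at the end.
rebot(*outputs, outputdir="results/combined", name="60 Day Run")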
Another option would be to use the listener interface to stream test results to some other process or file. Then, once a week, you can create your own report from this data. For example, you could set up an Elasticsearch server to store the results and use Kibana to view them.
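A minimal version-2 listener for that second option might look like the sketch below (the output path and record format are just examples):

# Sketch of a Robot Framework listener (API version 2) that appends one line
# per finished test to a file; a weekly job can turn that file into a report.
# Attach it with:  robot --listener ResultStreamer.py  ...
import datetime

class ResultStreamer:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, path="stream.log"):    # path is an example default
        self._path = path

    def end_test(self, name, attributes):
        with open(self._path, "a") as f:
            f.write("{}\t{}\t{}\t{}\n".format(
                datetime.datetime.now().isoformat(),
                name,
                attributes["status"],           # PASS / FAIL
                attributes.get("message", "")))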

HP ALM: Test execution results versioning

How do we create snapshot versions of the test results of a test set in HP ALM Test Lab?
I need this versioning to keep track of the past execution details.
Any suggestions on how to achieve this?
I do not believe you need to copy/archive/version anything to get the results you need. Consider this...
First Testing Effort
Follow these steps for the first time you are executing tests.
Create your test set
Add the tests that will be part of your effort.
All tests will be in the "No Run" status with no run history
Execute your tests (multiple times as necessary if they do not pass initially) until your effort is complete.
Second Testing Effort
The next time you need to run the tests, use the same test set.
Reset the test set. Right-click the test set in Test Lab and select 'Reset Test Set'. This step will reset the status of all your test instances back to 'No Run'.
Execute your tests (multiple times as necessary if they do not pass initially) until your effort is complete.
Comparison
At this point, each test has maintained a full run history: all the runs from the first effort are still available along with those from the second effort. Open the Test Instance Details of one of the tests in your test set and select the 'Runs' option. There you will be able to see every time that test instance was executed, no matter which testing effort it belonged to.
Alternative
If this doesn't meet your needs, an alternative would be to leave the test set from your first testing effort alone when you are done. When your second effort begins, copy/paste your test set and perform all your new runs from that copy. A copy/paste does not copy run history, so the new test set will be a blank slate. There is a COM-based API for doing all of this through code if necessary.
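If you do want to pull the run history out programmatically, that COM-based (OTA) API can be scripted, for instance from Python with pywin32. The sketch below is from memory and untested, so treat the object names, argument order and paths as approximate and check them against the OTA API reference for your ALM version:

# Rough sketch (untested): list the run history of every test instance in a
# test set via the ALM OTA COM API. Server URL, credentials, folder path and
# test set name are placeholders.
import win32com.client

td = win32com.client.Dispatch("TDApiOle80.TDConnection")
td.InitConnectionEx("http://almserver/qcbin")
td.Login("user", "password")
td.Connect("MY_DOMAIN", "MY_PROJECT")

folder = td.TestSetTreeManager.NodeByPath("Root\\MyFolder")
test_set = folder.FindTestSets("My Test Set").Item(1)    # OTA lists are 1-based

instances = test_set.TSTestFactory.NewList("")
for i in range(1, instances.Count + 1):
    ts_test = instances.Item(i)
    print(ts_test.Name)
    runs = ts_test.RunFactory.NewList("")
    for j in range(1, runs.Count + 1):
        # Each Run object is one historical execution of this test instance.
        print("   ", runs.Item(j).Name, runs.Item(j).Status)

td.Disconnect()
td.Logout()
td.ReleaseConnection()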

Cross-browser testing - how to ensure uniqueness of test data?

My team is new to automation and plans to automate our cross-browser testing.
One thing we are not sure about is how to make sure the test data is unique for each browser's test run. The test data needs to be unique due to some business rules.
I have a few options in mind:
1. Run the tests in sequential order and restore the database after each test completes. The testing report for each test will be kept individually; if any error occurs, we have to reproduce it ourselves (the data has been reset).
2. Run the tests concurrently/sequentially. Add a prefix to each piece of test data to uniquely identify the test data for each browser's testing, e.g. FF_User1, IE_User1.
3. Run the tests concurrently/sequentially. Several test nodes will be set up and connected to different databases. Each test node will run the tests using a different browser, and the test data will be stored in a different database.
Can anyone enlighten me as to which is the best approach to use, or offer any other suggestions?
Do you need to run every test in all browsers? Otherwise, mix and match - pick which tests you want to run in which browser. You can organize your test data like in option 2 above.
Depending on which automation tool you're using, the data used during execution can be organized as iterations:
Browser | Username | VerifyText (example)   #headers
FF      | FF_User1 | User FF_User1 successfully logged in
IE      | IE_User1 | User IE_User1 successfully logged in
If you want to randomly pick any data that works for a test and only want to ensure that each browser uses its own data set, then separate the tables/data sources by browser type. The automation tool should have an if clause you can use to select which data set gets picked for that test.
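To make that concrete (purely illustrative; the data and helper name are made up), the per-browser selection can be as simple as keying the data rows by browser and letting each run pick only from its own set:

# Illustrative sketch: test data keyed by browser name, so concurrent FF and
# IE runs never touch the same records.
import random

TEST_DATA = {
    "FF": [{"username": "FF_User1", "verify": "User FF_User1 successfully logged in"},
           {"username": "FF_User2", "verify": "User FF_User2 successfully logged in"}],
    "IE": [{"username": "IE_User1", "verify": "User IE_User1 successfully logged in"},
           {"username": "IE_User2", "verify": "User IE_User2 successfully logged in"}],
}

def pick_data(browser):
    # Pick any row, but only from the data set belonging to this browser.
    return random.choice(TEST_DATA[browser])

row = pick_data("FF")
print(row["username"], "->", row["verify"])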