Tracking execution time of test cases in Jenkins - Selenium

Before every release, our Selenium test cases need to run in Jenkins. I need to compare the execution time of this build against the previous build. Is there any way to store the execution time of each test case in a database, so we can easily compare the times?

As far as I know, there isn't a built-in way. If you use JUnit, you can add a listener for the start and end events of each test case and save the timings anywhere, in any form you want. I save the start/end times in a database for post-analysis.
Search for "JUnit listener"; here is one of the results:
http://howtodoinjava.com/2012/12/12/how-to-add-listner-in-junit-testcases/
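To make the listener idea concrete, here is a minimal sketch of the bookkeeping such a JUnit listener could delegate to: record a start time when a test begins, compute the duration when it finishes, and diff the results against the previous build's stored timings. All class and method names are illustrative, and the JUnit and database wiring is omitted.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper a JUnit RunListener could delegate to:
// testStarted()/testFinished() record wall-clock durations per test,
// and compareTo() reports the delta against a previous build's timings.
class TestTimingRecorder {
    private final Map<String, Long> startNanos = new HashMap<>();
    private final Map<String, Long> durationsMillis = new HashMap<>();

    void testStarted(String testName) {
        startNanos.put(testName, System.nanoTime());
    }

    void testFinished(String testName) {
        Long start = startNanos.remove(testName);
        if (start != null) {
            durationsMillis.put(testName, (System.nanoTime() - start) / 1_000_000);
        }
    }

    // Durations of the current build, keyed by test name; this is what
    // you would persist to the database after the run.
    Map<String, Long> durations() {
        return durationsMillis;
    }

    // Delta in milliseconds vs. a previous build (positive = slower now).
    Map<String, Long> compareTo(Map<String, Long> previousBuild) {
        Map<String, Long> delta = new HashMap<>();
        for (Map.Entry<String, Long> e : durationsMillis.entrySet()) {
            Long prev = previousBuild.get(e.getKey());
            if (prev != null) {
                delta.put(e.getKey(), e.getValue() - prev);
            }
        }
        return delta;
    }
}
```

A real listener would call `testStarted`/`testFinished` from its JUnit callbacks and write `durations()` to your database keyed by build number, so the next build can load them as `previousBuild`.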

Related

Forcing integration tests to run one at a time in a jenkins pipeline

I have a small collection of integration tests in a class that use Selenium. The idea is that these tests run every time there is a merge to the codebase: the merge proceeds through the pipeline, and a series of tests runs against the new code.
The catch is that these Selenium tests have to run one at a time. They use the browser to log into a website, and if more than one session logs into the account at once, the account logs out and the test obviously fails, so I need the tests to run serially. I've tried the @NotThreadSafe annotation, which doesn't seem to have changed anything, and I've searched for some sort of switch or parameter that defines how many tests run at once, with no luck. These tests use JUnit 4.12.
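If the tests are built with Maven, the Surefire plugin is what controls test concurrency, and one way to force serial execution is to disable forking and parallelism explicitly in its configuration. A sketch of the relevant pom.xml fragment, assuming a standard Maven layout:

```xml
<!-- Hedged sketch: forces Surefire to run tests serially.
     Omitting <parallel> disables in-JVM parallelism; forkCount=1
     with reuseForks runs everything in a single forked JVM,
     one test class at a time. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>1</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```

With this in place, no two test methods run concurrently, so only one browser session is logged into the account at a time.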

How to modify a single test case in manual recording using MTM

I have created a test case and recorded it. Now I need to modify the test case.
If I modify even a single step, then the next time I want to execute it, MTM asks me to record the complete test case again. Is it possible to modify a single step?
If you are using the Fast Forward feature, this is the expected behavior. Any change to the test will require you to record the steps again.

HP ALM: Test execution results versioning

How do we create snapshot versions of the test results of a test set in HP ALM Test Lab?
I need this versioning to keep track of past execution details.
Any suggestions on how to achieve this?
I do not believe you need to copy/archive/version anything to get the results you need. Consider this...
First Testing Effort
Follow these steps for the first time you are executing tests.
Create your test set
Add the tests that will be part of your effort.
All tests will be in the "No Run" status with no run history
Execute your tests (multiple times as necessary if they do not pass initially) until your effort is complete.
Second Testing Effort
The next time you need to run the tests, use the same test set.
Reset the test set. Right-click the test set in Test Lab and select 'Reset Test Set'. This step resets the status of all the tests in the set to 'No Run'.
Execute your tests (multiple times as necessary if they do not pass initially) until your effort is complete.
Comparison
At this time, each test has maintained a full run history. All the runs from the First Effort are still available along with those from the Second Effort. Open the Test Instance details of one of the tests in your test set, and then select the 'Runs' option. There you will be able to see each and every time that test instance was executed, no matter which testing effort it was.
Alternative
If this doesn't meet your needs, an alternative is to leave the test set from your first testing effort alone when you are done. When your second effort begins, copy/paste your test set and perform all your new runs from the copy. Copy/paste does not copy run history, so the new test set will be a blank slate. There is a COM-based API for doing all of this through code if necessary.

Display history of a single test result in Jenkins - additional plugin or config issue?

Currently our Jenkins server only displays a history/graph for the overall number of passed/skipped/failed tests - I'm assuming that's the behavior out of the box.
If you select a single test, you'll get information for how long the test was failing (assuming it did fail).
However, what we'd like to see is a history for a single test across the different builds, to identify whether the test has been failing in the past (and when) even though it just passed. If you find a build where it failed, you could click on it and investigate what might have caused the failure; if it passes again, you could check whether something actually fixed the test, or whether it was failing randomly all along.
Is this something that can be done somehow through the config, or do we need an additional plugin for this? If yes, which one?
Not sure if this makes much difference, but we're using Java (Maven) & TestNG (Surefire).
Both the TestNG plugin and the JUnit plugin will actually display history of the test results.
You just need to pick a given result and then:
For JUnit, click on "History" on the left side.
For TestNG, you will see the history in the graph above the result. You can click on the bars to see the older results, and if you click closer to the edge, the scope of the test results will adjust.
The Test Results Analyzer plugin does the job for me. There appears to be other suitable plugins out there as well.
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
Does the Static Code Analysis plugin help?

Is there a way to keep a selenium session persistent accross multiple tests?

I am testing a Django application's frontend with Selenium, and it's the first time I've used it. I have multiple tests that check things after the user has logged in.
I want them to be separate tests, but I want to log in only once. Is that possible? (As opposed to what I do right now: I log in, then execute the testing actions of test1, then log in again and execute the testing actions for test2, etc.)
You could try to run your first test, get the session ID (i.e. selenium.sessionId or similar), and set that session ID after you call selenium = new DefaultSelenium(...) for the second time.
Basically, you want to keep the selenium = new DefaultSelenium(...) call out of your tests and move it into some common setup code. You could make selenium a class member that is initialized only once and then reused by all of the tests in that class.
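A minimal sketch of that pattern in Java, with a placeholder `Session` class standing in for the real DefaultSelenium/WebDriver object (all names are illustrative): the expensive session is lazily created once in a shared static field, and every test fetches the same instance instead of creating and logging in its own.

```java
// Sketch: share one expensive "session" across tests by lazily
// initializing a static field once. `Session` is a stand-in for
// the real browser session object.
class SharedSession {
    static class Session {
        static int created = 0;           // counts real initializations
        Session() { created++; }          // imagine: start browser + log in
        String run(String action) { return "ran " + action; }
    }

    private static Session session;       // reused by all tests

    static synchronized Session get() {
        if (session == null) {
            session = new Session();      // happens only once per JVM
        }
        return session;
    }

    // Each "test" fetches the shared session instead of creating its own.
    static String test1() { return get().run("test1"); }
    static String test2() { return get().run("test2"); }
}
```

In a real suite you would put the lazy initialization in a @BeforeClass-style setup method (or a base test class) so the login happens once per class rather than once per test.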
See: Is there any way to speed up the Selenium Server load time?