How to tell that a test case (TFS work item) has been tested?

I don't see anything in the Test Case work item's State and Reason fields that indicates "tested successfully":
The Design state is for when the test case is being written.
The Ready state is for when the test case is ready to be tested.
The Closed state only has reasons that say the test case is not to be tested (deprecated, different, duplicated).
So how can we mark a test case as "successfully tested"?
It does not seem right that the tester is not required to attest that the case has been tested successfully.

There is an "Outcome" field that will show Passed, Failed, etc.

There is no such built-in state in the Test Case work item. However, you can create your own customized work item state.
For more details, take a look at this tutorial: Add a workflow state.
If you have the latest version of TFS, in the web Test hub you can select
several test cases and update their state in bulk (the screenshots are from
Azure DevOps Services):
Set the new value for the State field.

It seems you're confusing a Test Case with a Test Result. A test, as defined by a Test Case, can be run multiple times to obtain many Test Results. This may not seem very useful if the test is only run once, although it certainly still works; it is more useful when the test is run multiple times, e.g. regression tests.
Also, if I may, your initial premise about the different states a stock Test Case may be in is not correct. Per How workflow states and state categories are used in Backlogs and Boards, section State categories:
The Design state is for when a test case is proposed, i.e. we should come up with a plan to test this case.
The Ready state is for when a test case is in progress, i.e. we are implementing the test plan.
Closed is for when a test case is completed, i.e. we have the test plan in place.
That said, I don't see why you couldn't use your above breakdown if it makes more sense for your team. But either way, the notion of "tested successfully" still belongs to the realm of the Test Result. Hope this helps to clarify things.
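If you want to check this programmatically rather than in the web UI, test outcomes can also be read per test run through the TFS/Azure DevOps REST API. Below is a rough Groovy sketch; the collection URL, project, run id and token are placeholders, and the exact URL pattern, api-version and response field names should be verified against the REST API reference for your TFS version.

import groovy.json.JsonSlurper

// Placeholders - adjust for your server (on-premises TFS uses http://server:8080/tfs/Collection)
def collectionUrl = "https://dev.azure.com/my-org"
def project       = "MyProject"
def runId         = 42
def pat           = System.getenv("AZDO_PAT")   // personal access token

// Assumed endpoint shape: _apis/test/runs/{runId}/results - check the docs for your version
def url  = "${collectionUrl}/${project}/_apis/test/runs/${runId}/results?api-version=5.0"
def conn = new URL(url.toString()).openConnection()
conn.setRequestProperty("Authorization",
        "Basic " + Base64.encoder.encodeToString((":" + pat).getBytes("UTF-8")))

def body = new JsonSlurper().parse(conn.inputStream)
body.value.each { result ->
    // 'outcome' (Passed, Failed, Blocked, ...) is where "tested successfully" actually lives
    println "${result.testCase?.name}: ${result.outcome}"
}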


How do I debug SAP ATC (ABAP Test cockpit) test runs?

I am running ATC (ABAP Test Cockpit) checks from the SE80 ABAP Workbench.
It is not the first time that I do not understand an ATC test result (a SAP standard check).
(I am a developer, without experience as an ATC administrator.)
What are good breakpoints to see how SAP checks ABAP code and/or emits the result to the ATC result list?
I find it hard to start an ATC test in the debugger from SE80, since the right-click to start ATC already invokes the debugger.
I have no rights for transaction ATC.
You need to know which check is giving you trouble. If you know it, you can either find it via the classes (CL_CI_*), which is rather hard, or you can use transaction SCI.
In this transaction go to "Code Inspector" -> "Management of" -> "Tests" and you should get the following screen:
Here you can find every check and category (unfortunately I don't know how the list is ordered). Find the check that is causing you problems, for example "Recognizing Dead Code", and you get the class (in this case CL_CI_TEST_CROSSREF).
Then go into the method called RUN and set a breakpoint there. The RUN method is the one that gets called at the start of the check.
Then, when you check your code with ATC or SCI (SCI is the basis of ATC), you will land in the debugger.
Start with the accepted answer. It is correct and good.
If the debugger doesn't stop where you expect:
In an environment with a central ATC server, the checks are run under two different users: your own user and an RFC user.
To find out which RFC user, open two windows:
- start a longer-running check in the first window
- open transaction SM50 in the second window to see which users are performing the checks
Then set a breakpoint for the RFC user.
In addition, in our system I experience the following:
If a check is performed by the RFC user, then a breakpoint works for exactly one ATC run.
On the second run, it won't stop at the breakpoint. Remove and re-set the breakpoint, then it is good for one more ATC run.

Retry Failed Automation Test Case from a Logical Point for E2E Automation

We are trying to automate E2E test cases for a booking application; each test case involves around 60+ steps. Whenever there is a failure in the final steps, the traditional retry option is very time-consuming, since the test case is executed from step 1 again.

The application has some logical steps that can be marked somehow, and we would like to use them to resume the test case from a logical point just before the failed step. For example, among the 60 steps, say every 10th step is a logical point from which the script can resume instead of retrying from step 1. If the failure is at step 43, then with the help of the booking reference number the test can be resumed from step 41, since validation has been completed up to step 40 (step 40 being a logical closure point).

You may suggest splitting the test case into smaller modules, but that will not work for me, since it is an E2E test case for the application which we want to keep in a single Geb Spec. The framework is built using Geb and Spock for web application automation.

Please share your thoughts on how we can build recovery scenarios for this case. Thanks for your support!
As of now I am not able to find an existing solution for this kind of problem.
Below are a few things which can be done to achieve this, but before we talk about solutions, we should also talk about the issues this will create. You are running E2E test cases, and if they fail at step 10 they should be restarted from scratch, not from step 10, because you can miss important integration defects which occur when you perform the 10th step right after running the first 9 steps. For example, if you create an account and then immediately search for a hotel, your application might throw an error because it is a newly created account; but if you retry from the step where you are just searching for hotel rooms, it might work because of the time spent between the test failure and restarting the test, and you will not notice this issue.
Now, if you must achieve this:
Create a log every time you reach a checkpoint. This can be a simple text file containing the test case name and the checkpoint number. Then use a retry analyzer for running the failed tests; inside the test, look for the text file with the test case name, and if it exists, simply skip ahead to the checkpoint mentioned in the file. This can be used in different ways: for example, if your E2E test goes through 3 applications, the file can contain the test case name and the last passed application name; or, if you use page objects, you can write the last successful page object name to the file and use that for continuing the test.
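As a rough illustration of that idea, here is a minimal Groovy sketch (the CheckpointLog class, file layout and checkpoint numbers are made up for this example; they are not part of Geb, Spock or any retry library):

// Hypothetical helper: records the last checkpoint a test reached, so that a retry
// can skip the steps that have already passed.
class CheckpointLog {
    File file

    CheckpointLog(String testName) {
        file = new File("checkpoints/${testName}.txt")
        file.parentFile.mkdirs()
    }

    int lastCheckpoint() {
        file.exists() ? file.text.trim().toInteger() : 0
    }

    void reached(int checkpoint) {
        file.text = checkpoint.toString()
    }

    void clear() {              // call this after a fully successful run
        file.delete()
    }
}

// Inside the test, guard each logical block with the last recorded checkpoint:
def log = new CheckpointLog("BookingE2E")

if (log.lastCheckpoint() < 1) {
    // steps 1-10 ...
    log.reached(1)
}
if (log.lastCheckpoint() < 2) {
    // steps 11-20, reusing the booking reference created earlier ...
    log.reached(2)
}
// ... and so on; call log.clear() once the whole journey has passed.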
The above solution is just an idea; I don't think there are any existing solutions for this issue.
Hope this gives you an idea of how to start working on this problem.
A possible solution to your problem is to first define the way in which you want to write your tests.
I would recommend considering one test Spec (class) as one E2E test containing multiple features.
Also, it is recommended to use the open-source Spock Retry project available on GitHub. After implementing RetryOnFailure,
your final code should look something like this (extractBookingReference is just a placeholder for however you obtain the booking reference):
import geb.spock.GebReportingSpec
import spock.lang.Shared
// plus the @RetryOnFailure import from the spock-retry library

@RetryOnFailure(times = 2) // 'times' is the number of retry attempts, default = 0
class MyEndtoEndTest1 extends GebReportingSpec {

    @Shared def bookingRefNumber   // @Shared so later features can reuse the value

    def "First feature block, which covers the test up to a logical step"() {
        // your test steps here
        bookingRefNumber = extractBookingReference()   // placeholder helper: assign your booking ref here
    }

    def "Second feature, which covers the next set of logical steps"() {
        // use the bookingRefNumber generated in the first feature block
    }

    def "Third set of logical steps"() {
        // your test steps here
    }

    def "End of the E2E test"() {
        // your final test steps here
    }
}
The passing of all the feature blocks (methods) signifies a successful E2E test execution.
It sounds like your end-to-end test case is too big and too brittle. What's the reasoning behind needing it all in one script?
You've already stated you can use the booking reference to continue at a later step if it fails; this seems like a logical place to split your tests.
Do the first part and output the booking reference to a file. Read the booking reference in the second test and complete the journey; if it fails, a retry won't take anywhere near as long.
If you're using your tests to provide quick feedback after a build and your tests keep failing, then I would look to split the journey into smaller smoke tests, and if required run some overnight end-to-end tests with as many retries as you like.
The fact that it keeps failing suggests your tests, environment or build is brittle.

TFS test execution states - can we add additional statuses?

This question is about whether or not additional test execution statuses can be added to TFS.
Out of the box, when running tests, a test can either be Passed, Failed, Blocked, or N/A. I would like to add a "Caution" status as well - is this possible?
It seems you want to customize the test result values and add a failure type. This is possible.
The test result is associated with MTM (Microsoft Test Manager). If you want to customize the Test Result Failure Type and Resolution Type, please refer to this link on MSDN: Customize and manage the test experience [tcm and Microsoft Test Manager].
For more info, please take a look at this UserVoice suggestion: Provide customization for test plan, test results.
For customizing the Test Result Failure Type and Resolution Type, we added this capability in VS 2012 Update 2. Take a look at:
http://msdn.microsoft.com/en-us/library/ff398070.aspx
http://blogs.msdn.com/b/visualstudioalm/archive/2013/06/05/microsoft-test-manager-customization-of-test-result-fields-and-marking-test-results-as-na.aspx

Run automated tests on a schedule to serve as a health check

I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies and therefore points of failure, e.g. a document repository, Java web services, a SiteMinder policy server, etc.
Management wants us to be the first to know if any point ever fails. Currently we are playing catch-up when a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that cover sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result, but I need to be able to run them in a loop and notify IT when any of them fails.
We have a TFS build server, but I'm not sure if it is the right tool for the job. I don't want to continuously build the tests, just run them. Also, it looks like I can't define a build schedule more frequently than daily.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want to do is called a suite of "smoke tests". Smoke tests are basically very short and sweet, independent tests that exercise various pieces of the app to make sure it is production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide will be useful and transferable.
When you say "I don't want to build the tests, just run them": any CI that you use needs to build them in order to run them. Basically, "building" equates to "compiling". For your CI to actually run the tests, it needs to compile them.
As far as running them goes, if the TFS build system is any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron schedule. For example:
0 0 * * *
means "run at 00:00 every day (midnight)",
or
30 5 * * 1-5
which means "run at 5:30 every weekday".
Since you are writing smoke tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time, for example (a fuller sketch follows this list):
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
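For instance, a couple of short, independent checks written with Spock and Selenium WebDriver (both already used elsewhere in this thread) might look like the sketch below; the URLs, element locators and credentials are placeholders, not from the question.

import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.chrome.ChromeDriver
import spock.lang.Shared
import spock.lang.Specification

class ProductionSmokeSpec extends Specification {

    @Shared WebDriver driver

    def setupSpec()   { driver = new ChromeDriver() }
    def cleanupSpec() { driver.quit() }

    def "testLogin - the sign-in page accepts valid credentials"() {
        when:
        driver.get("https://example.com/login")                          // placeholder URL
        driver.findElement(By.name("username")).sendKeys("smoke-user")   // placeholder locators
        driver.findElement(By.name("password")).sendKeys(System.getenv("SMOKE_PASSWORD") ?: "change-me")
        driver.findElement(By.id("signIn")).click()

        then:
        driver.title.contains("Dashboard")
    }

    def "testRetrieveDocument - a document can be fetched via the document API"() {
        when:
        driver.get("https://example.com/documents/123")                  // placeholder URL

        then:
        driver.pageSource.contains("Document 123")
    }
}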
A web application health check is a very important feature. Smoke tests can be very useful for working out whether your website is running or not, and they can be automated to run at intervals to notify you that something is wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. That is because you are making external calls as a client would, so you cannot see the internals of the application: is the database down, is it a network issue, disk space, is a remote endpoint not functioning correctly?
Now, some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing to tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar, and they do the following (a sketch follows the list):
- Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and not accessible externally.
- Return a JSON response containing:
  - the overall state
  - the host that returned the result (if behind a load balancer)
  - the application version
  - a set of subsystem states (these indicate which component is not performing)
- The service should be resilient: any exception thrown while checking should still end with a health check result being returned.
- Some sort of aggregate that can present a number of health check endpoints in one view.
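For illustration, here is a minimal, framework-agnostic sketch of that aggregation, written in Groovy since that is what the rest of this thread uses. The SubsystemCheck class, check names and version value are made up; in an ASP.NET MVC application the equivalent logic would sit behind a controller action at /api/healthcheck.

import groovy.json.JsonOutput

// Made-up container for a named probe; each probe returns true when its subsystem is healthy.
class SubsystemCheck {
    String name
    Closure<Boolean> probe
}

def checks = [
    new SubsystemCheck(name: "database",            probe: { true /* e.g. run SELECT 1 */ }),
    new SubsystemCheck(name: "document repository", probe: { true /* e.g. HEAD request to the repo */ }),
    new SubsystemCheck(name: "policy server",       probe: { true /* e.g. ping SiteMinder */ })
]

def subsystems = checks.collect { check ->
    def healthy
    try {
        healthy = check.probe.call()
    } catch (Exception ignored) {
        healthy = false            // resilient: an exception becomes an unhealthy result, not a crash
    }
    [name: check.name, state: healthy ? "Healthy" : "Unhealthy"]
}

def response = [
    state     : subsystems.every { it.state == "Healthy" } ? "Healthy" : "Unhealthy",
    host      : InetAddress.localHost.hostName,   // useful behind a load balancer
    version   : "1.2.3",                          // read from build/assembly info in a real app
    subsystems: subsystems
]

// This is the payload the /api/healthcheck endpoint would return.
println JsonOutput.prettyPrint(JsonOutput.toJson(response))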
Here is one I made earlier
After doing this a number of times, I have started a library that extracts the main wiring of the health check and exposes it as a service. Feel free to use it as an example, or use the NuGet packages:
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy

Saba/SCORM 2004 3rd Edition 'Attempt Status' Suspended

A SCORM Question regarding how I invoke Rollup (macro?) in Saba.
I have a SCORM manifest which contains two SCOs.
The second is only available when the first is completed, via a preConditionRule (similar to the SCORM/Rustici golf examples).
When a delegate is successful with the first SCO, I set 'cmi.completion_status' to 'completed' and 'cmi.success_status' to 'passed'.
The second SCO is a test, so I set 'cmi.score.raw' and 'cmi.score.scaled', and set 'cmi.completion_status' to 'completed'.
As I have a 0.8 threshold on the second SCO's primaryObjective, the Saba LMS is able to evaluate a test status of 'passed' or 'failed' on exit of the test.
When I exit this learning, Saba displays that both SCORM items have been passed and completed.
However, the 'Content Attempt Status' is always 'Suspended'. This initially pointed to some of my JS interactions, as I was using 'cmi.suspend_data' to store some info in a SCO. But even with this disabled, and even with a single unrelated SCO, the 'Suspended' status is still set.
So I still don't seem to get a rollup (I have never seen one), and it is somehow related to this 'Suspended' state that I am setting or not resolving.
One question is whether I should be setting 'completion_status' and 'success_status' at objective level ('cmi.objectives.n.success_status') rather than 'cmi.success_status', or both? I've tried both but am not certain which is right. I think it may depend on the manifest.
Has anyone managed to get Saba to roll up (set the learning assignment's 'Completion Status' so it no longer displays 'Not Evaluated') in this way?
My manifest file seems OK; the Saba player's table of contents shows a green light for each completed SCO in the package, but until I resolve the 'Suspended' status I am a little stuck.
I set 'cmi.exit' to 'normal' in both SCOs.
I am using the pipwerks wrapper for this too, but it seems to be OK.
Does this all point to the API, to the manifest, or to something I'm not setting up in Saba?
Thanks.
You're right that setting cmi.exit to 'normal' should not result in a suspended status, unless another call is being made on unload of the SCO, typically in a window unload handler or similar.
I've got a bookmarklet up at http://goo.gl/MXJVNM, but it sounds like with those events happening so fast you won't be able to trap the status without another logging mechanism.
You could always run the test on cloud.scorm.com to see if it behaves the same way, and you'll get a rich set of logs to review.
Mark