How to add TestCase to Defect in Rally?

I have a list of TestCases.
After a few iterations we have some test cases that fail. We have defects for them, but some defects cause several test cases to fail.
So here is my question: how can we connect a TestCase to a Defect other than by creating the Defect from the TestCase?
I want to do that from the web UI.

Related

How will you ensure consistency in Automation Testing?

Let’s assume the following simple test case, which tests the functionality of a banking system that maintains the balances of bank accounts:
Check Account #1234 Balance, which becomes the reference point (Ex: 1000 $)
Perform Deposit of 600 $
Perform Withdraw of 400 $
Check Account #1234 Balance, expecting the balance to be 200 $ over the reference point (Ex: 1200 $)
Given project pressures, you and your colleague are asked to run the test suite in a concurrent fashion (possibly using different browser versions). Since both of you are manipulating the same account, your test is sporadically failing.
In the IP sprint you are tasked with coming up with a solution that brings consistency to the test results regardless of how many members execute the suite concurrently. What options would you consider?
There are different ways to approach your case; I would like to list some:
1 - If concurrency is a must and your Check Account step changes something in a database, then it would be necessary to use different accounts, one per thread of execution; that way each test can run without worrying about what the other tests are doing (a sketch follows after the links below).
2 - If you can push for a non-concurrent solution, then you only need to run your tests serialized and, at the end of each test, revert the account back to the reference point.
3 - Another way to solve this problem is to use mock data. This solution could be a bit more complex and could require more work, but if you still want to know more about it, contact your development team and let them know about your problem so that you can find a solution together.
You can read more about mocking data here:
Cypress Interceptor
Mockserver
Wiremock
Mockoon
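Here is a minimal sketch of option 1 in Python, assuming pytest with the pytest-xdist plugin; the banking helpers get_balance/deposit/withdraw and the account numbers are hypothetical. Each xdist worker gets its own account, so concurrent runs never touch the same balance:

    # Option 1: one account per worker thread/process.
    import os
    import pytest
    from banking_helpers import get_balance, deposit, withdraw   # hypothetical module

    # Hypothetical accounts reserved for automation, one per xdist worker.
    WORKER_ACCOUNTS = {"master": "1234", "gw0": "1234", "gw1": "1235", "gw2": "1236"}

    @pytest.fixture
    def account_id():
        # pytest-xdist exposes the worker id via this environment variable.
        worker = os.environ.get("PYTEST_XDIST_WORKER", "master")
        return WORKER_ACCOUNTS[worker]

    def test_deposit_and_withdraw(account_id):
        reference = get_balance(account_id)              # e.g. 1000 $
        deposit(account_id, 600)
        withdraw(account_id, 400)
        assert get_balance(account_id) == reference + 200

With this isolation in place, any number of colleagues or CI workers can run the suite at the same time without stepping on each other's balances.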
Hope it helps!

Using a single JIRA Task ticket or creating Sub-Tasks

We are using JIRA to work with a team of developers and a QA team. Currently the 'Dev Team Leader' creates a 'Task' ticket and assigns it to a development team member, who works on that ticket and then gives the JIRA ticket number to the QA team, who create a separate QA ticket for testing it. Depending on whether the test passes or fails, they inform the DEV team, who either fix it or change the ticket status to 'In Deploy'.
My question is as follows:
Should they create a single ticket and use it for both development and testing? (i.e. shift the ticket between the DEV team and the QA team)
Or should the DEV team create a parent TASK ticket for development and then assign it to the QA team, who will create a Sub-Task for the testing and link it to the parent development ticket?
Issues:
Which team member worked on the development task?
Which team member worked on the testing?
How much time was spent on development as a whole?
How much time was spent on testing as a whole?
What is the best way of doing this?
You only need one ticket, or an Issue in JIRA terms. Your project should have a workflow with, for example, the following statuses: To Do -> In Development -> In Testing; from there the Issue can go in two directions: back to In Development if QA is not satisfied, or on to Done.
When the Issue is moved to the next step, it will/should be assigned to the proper person, i.e. in To Do it's assigned to your project lead or whoever distributes the tasks, in In Development it's the developer, in In Testing it's the QA, etc.
This is the most widely accepted way to use JIRA as a ticket tracker. Each transition is recorded in the Issue's activity log with the corresponding timestamps, assignees, etc., so you will have access to all the information you asked for.
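To turn that activity log into the numbers you listed (who worked on what, and how long development vs. testing took), here is a minimal sketch using the "jira" Python package; the server address, credentials, issue key and initial status are hypothetical:

    # Walk an issue's changelog and sum how long it spent in each status.
    from datetime import datetime, timedelta
    from jira import JIRA

    jira = JIRA(server="https://yourcompany.atlassian.net",        # hypothetical server
                basic_auth=("user@example.com", "api-token"))      # hypothetical credentials

    issue = jira.issue("PROJ-123", expand="changelog")             # hypothetical issue key

    def parse(ts):
        # JIRA timestamps look like 2023-01-10T14:35:07.000+0000
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

    time_in_status = {}
    current_status = "To Do"                                       # assumed initial status
    previous_time = parse(issue.fields.created)

    for history in sorted(issue.changelog.histories, key=lambda h: h.created):
        for item in history.items:
            if item.field == "status":
                changed_at = parse(history.created)
                time_in_status[current_status] = (
                    time_in_status.get(current_status, timedelta())
                    + (changed_at - previous_time))
                current_status = item.toString
                previous_time = changed_at

    for status, spent in time_in_status.items():
        print(status, spent)

Summing the In Development and In Testing buckets across a sprint's issues gives the per-team totals, and the same changelog entries record which user made each transition.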
It sounds to me like this workflow needs granular tracking of development and testing work, which a single ticket (the suggested idea) doesn't satisfy.
I found the following design useful:
1. Create a USER STORY that has a set of criteria that needs to be met.
2. Sub TASKS can be created as children of the STORY especially if they need to be worked on by different people.
3. Once all tasks are completed, the USER STORY can be moved to TESTING / IN TESTING (whatever the workflow defines).
4. The QA/QE Engineer can then create TESTS / TEST CASES as children of the User Story and execute them accordingly. Similarly, defects can be filed as BUGS, also as children of the story.
Ultimately, in this workflow the story must meet a set of criteria and a level of quality (based on what the business considers acceptable for passing the story) in order to be considered "completed" or ready for release.
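Reporting on that structure is straightforward once the children are in place. Here is a minimal sketch with the same "jira" package (story key and credentials hypothetical) that lists the sub-tasks under a story and the bugs linked to it, i.e. the structure from steps 2 and 4 above:

    # List the development sub-tasks and the linked bugs of a story.
    from jira import JIRA

    jira = JIRA(server="https://yourcompany.atlassian.net",        # hypothetical server
                basic_auth=("user@example.com", "api-token"))      # hypothetical credentials

    story_key = "PROJ-123"                                         # hypothetical story

    subtasks = jira.search_issues(f"parent = {story_key}")
    linked_bugs = jira.search_issues(
        f'issuetype = Bug AND issue in linkedIssues("{story_key}")')

    for issue in list(subtasks) + list(linked_bugs):
        print(issue.key,
              issue.fields.issuetype.name,
              issue.fields.status.name,
              issue.fields.assignee)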

How do I automate a test that targets values that change from one run to another?

Currently I am testing an online shop. I want to automate the checkout process, but each time an item is added to the cart and the checkout completes, that particular item is removed from the list, so the second time the test runs it returns an error because it cannot find the item.
Is there any way I can build a test that completes without failing because it cannot find that particular item?
I am using Selenium with PHP and Selenium IDE.
Please note that I am just a beginner in automation.
Any help would be appreciated.
Best regards,
Radu
Whatever test you run, you should be very confident about the expected initial state of your system under test when the test starts. In your case, if your test needs a specific item to be available, then making it available should be part of your "test setup" section.
Let's say you have 100 tests that need an item to be available.
I can see 2 different strategies to solve your issue:
At the beginning of your test suite, deploy a custom, brand new web site with your 100 products available. Your tests will update/destroy that data, but you don't care because the next time you run the suite you will deploy a brand new set of items.
At the end of each test case, run a custom action that cleans up your system by adding the item back into the shop.
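The second strategy fits naturally into a setup/teardown hook. The question mentions Selenium with PHP, but here is a minimal Python sketch of the same idea, assuming pytest, the selenium package, and a hypothetical shop admin API for seeding the product; the URLs, SKU and element ids are illustrative only:

    # Strategy 2: guarantee the item exists before every test, restore it afterwards.
    import pytest
    import requests
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    ADMIN_API = "https://shop.example.com/admin/api"    # hypothetical admin endpoint
    ITEM_SKU = "TEST-ITEM-001"                          # hypothetical product

    @pytest.fixture
    def test_item():
        # Setup: (re)create the product so the test never depends on leftovers.
        requests.post(f"{ADMIN_API}/products", json={"sku": ITEM_SKU, "stock": 1})
        yield ITEM_SKU
        # Teardown: put the product back for the next run.
        requests.post(f"{ADMIN_API}/products", json={"sku": ITEM_SKU, "stock": 1})

    def test_checkout(test_item):
        driver = webdriver.Chrome()
        try:
            driver.get(f"https://shop.example.com/product/{test_item}")
            driver.find_element(By.ID, "add-to-cart").click()
            driver.find_element(By.ID, "checkout").click()
            assert "Order confirmed" in driver.page_source
        finally:
            driver.quit()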

TestCases field on TestSet object is empty

I wrote a Rally app to do reporting on TestSets and TestCases. Suddenly today, my app was not getting any TestCases in its query.
To simplify this, I will take my app out of the equation and I am just running queries with the web service api: https://rally1.rallydev.com/slm/doc/webservice/index.jsp?version=1.40
If I query a TestSet, the TestCases field, which should contain a list of the TestCases in the TestSet, is coming back empty for TestSets that definitely have TestCases. This was working perfectly until sometime in the last few days (we used the app today and weren't getting any TestCases; the last time we used it we were, and no changes have occurred on our end).
If I look at Test Cases in Track->Iteration Status in Rally and expand the TestSet to see all the TestCases, they show up. So they are there; for some reason the web service API just isn't returning them.
I've spent the last two hours reading the API documentation and searching Google to see if anyone else has had this issue or if anything might have changed that is causing this, but I haven't found anything.
I have confirmed that other objects containing a list of TestCase objects (such as TestFolders) are properly returning a list of TestCases. I have also confirmed that I am able to query the individual TestCases that should be returned in the list. I have also confirmed that I am able to query the TestCaseResult for the particular TestSet and TestCase.
So I am really stumped. It appears as though it's just TestSet.TestCases that isn't working, and I am unable to find any specific cause or any unrelated change that could be behind it.
Any thoughts?
Rally's DevOps team issued a fix for this issue the evening of 10-jan-2013. TestSet queries through WSAPI should be appropriately hydrated with member TestCases again. Contact Rally Support with any questions or concerns.
This is a bug - Rally's engineering team is aware of the problem and is working on a fix. Please file a Case with Rally Support to report/get status updates on the Defect resolution.
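For anyone who wants to check the behaviour in their own workspace, here is a minimal query sketch in Python. It assumes the 1.40 WSAPI URL layout from the documentation linked in the question and Basic Auth; the exact response shape may vary by version, and the credentials and TestSet number are hypothetical:

    # Fetch a TestSet by FormattedID and print its TestCases collection.
    import requests

    auth = ("user@example.com", "password")             # hypothetical credentials
    url = "https://rally1.rallydev.com/slm/webservice/1.40/testset.js"
    params = {
        "query": '(FormattedID = "TS123")',             # hypothetical TestSet
        "fetch": "FormattedID,Name,TestCases",
    }

    result = requests.get(url, params=params, auth=auth).json()["QueryResult"]
    for test_set in result["Results"]:
        print(test_set["FormattedID"], test_set["Name"])
        for test_case in test_set.get("TestCases", []):
            # summary fields on the hydrated collection
            print("  ", test_case.get("_refObjectName"))

If the TestCases list comes back empty for a TestSet that clearly has test cases in the UI, that matches the defect described above.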

Defect status types in HP QC

I'm using HP QC to manage defects (just started) and had a question regarding the various defect status types....
We have:
New - new defect
Open - ??
Active - defect is being investigated
Fixed - dev team have issued a fix-need retest
Ready For Test - self explanatory
I know there are other statuses, but I'm not entirely sure of the meaning of Open in particular...
i.e. in what instances is a defect Open?
In a well-organized defect management system, different user roles are defined.
Each step in the process is maintained by a certain role.
Open means that the defect has been checked by a senior and can be assigned to, or picked up by, a solver.
For example:
A developer/tester logs a defect -> status becomes new.
The developer/tester can't change the status to open/active or whatever. The defect just sits there waiting.
A test manager (or defect coordinator / senior tester) checks the defect (completeness, validity, duplicates, etc.). When the defect is OK, he changes the status to Open. He can also assign a solver.
When a solver starts working on the defect, he changes the status to Active. Everyone can see who is busy working on this defect (investigating or solving).
The solver fixes the defect and changes the status to Fixed.
The test manager collects all fixed defects and assigns testers to them. The status changes to Ready For (re)Test.
etc.
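If you ever need to mirror this lifecycle in a script or a custom report, here is a minimal sketch of the role-gated transitions described above; the status and role names simply follow this answer and are not QC's actual workflow engine:

    # Role-gated defect lifecycle as a transition table (illustrative only).
    ALLOWED_TRANSITIONS = {
        ("New", "Open"): "test manager",              # defect reviewed and accepted
        ("Open", "Active"): "solver",                 # solver starts investigating
        ("Active", "Fixed"): "solver",                # fix delivered
        ("Fixed", "Ready For Test"): "test manager",  # queued for retest
    }

    def can_transition(current, target, role):
        """Return True if the given role may move a defect from current to target."""
        return ALLOWED_TRANSITIONS.get((current, target)) == role

    assert can_transition("New", "Open", "test manager")
    assert not can_transition("New", "Open", "developer")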