How to track time spent on tests in TFS?

TFS (we're using 2012 at the moment) has a functional testing area where people set up test cases and go through them during regression testing or when a feature has been implemented. If something doesn't work, a bug can be created from a test case.
We're looking for an easy way to track the amount of time testers spend going through the test cases before each release, in addition to whether the tests passed or failed. Could a custom "Time Spent" field be added to a test run? Or is there a better way? I'd prefer not to use a separate tool for tracking time.

This feature is built into TFS. When you execute one or more tests as a tester, Microsoft Test Manager (and Web Access) records both the start and end date/time and associates them with the Test Run.
You can see this easily in MTM, but it is not surfaced in Web Access. This is the actual time between starting and ending testing, which makes it easy to calculate a duration. If you have lots of runs you can report on total test effort within a release, as well as potentially ranking PBIs by test time.
You can do this reporting in TFS with the Data Warehouse and Cube, and in VSO using the REST API.
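For example, on VSO (or a recent TFS) you could pull run durations out with a small script against the test runs REST endpoint. The sketch below is a rough illustration only: the account URL, project name and personal access token are placeholders, and the endpoint, api-version and field names (startedDate, completedDate) should be checked against your server's version of the API before relying on it.

    # Rough sketch: sum test-run durations for a project via the VSO/TFS REST API.
    # The account URL, project name and personal access token are placeholders.
    from datetime import datetime

    import requests

    ACCOUNT_URL = "https://youraccount.visualstudio.com/DefaultCollection"  # placeholder
    PROJECT = "YourProject"                                                 # placeholder
    PAT = "your-personal-access-token"                                      # placeholder

    resp = requests.get(
        f"{ACCOUNT_URL}/{PROJECT}/_apis/test/runs",
        params={"api-version": "1.0"},
        auth=("", PAT),  # PAT goes in the password slot with a blank user name
    )
    resp.raise_for_status()

    total_seconds = 0.0
    for run in resp.json().get("value", []):
        started = run.get("startedDate")
        completed = run.get("completedDate")
        if started and completed:
            fmt = "%Y-%m-%dT%H:%M:%S.%fZ"  # adjust if your server returns a different precision
            duration = datetime.strptime(completed, fmt) - datetime.strptime(started, fmt)
            total_seconds += duration.total_seconds()
            print(f"Run {run['id']}: {duration}")

    print(f"Total test effort: {total_seconds / 3600:.1f} hours")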

It is difficult to track the actual time spent on any task all the time. People would have to be really on top of watching the clock whenever they start and finish a task, and of course there are interruptions and distractions.
I flirted with the idea of using the Pomodoro technique, which worked well for me when the team wasn't too big.
There is a Visual Studio extension for a Pomodoro timer available, but I haven't used it personally so I can't vouch for it.

What could be the reasons for automation testing taking more time than manual testing for a particular web page?

There are many possible reasons. For example, while running a particular test case there is a chance it will fail if the page loads late, and then the test case has to be run again from the beginning (this happens in Cypress).
If the test case fails because the page cannot find the element associated with that particular feature, we have to rerun the test case until the page loads, which is time-consuming.
In automation we can only test the scripts we have written, whereas in manual testing we can examine each and every feature in detail and with more accuracy.
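One common way to reduce this kind of flakiness, rather than rerunning the whole test, is an explicit wait. Below is a minimal Selenium sketch in Python, with a made-up URL and locator, that waits for the element to become clickable instead of failing the moment it is missing; Cypress has its own built-in retrying, so this applies mainly to Selenium-style tools.

    # Minimal sketch: wait for a slow page instead of failing immediately.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # placeholder URL
        # Wait up to 30 seconds for the element to be ready before giving up.
        login_button = WebDriverWait(driver, 30).until(
            EC.element_to_be_clickable((By.ID, "login-button"))  # placeholder locator
        )
        login_button.click()
    finally:
        driver.quit()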
In order to answer the question, we first need to understand that both manual and automated testing techniques have their own share of pros and cons.
While manual testing is suitable for exploratory testing and testing of applications that are frequently changing, automated testing provides better ROI when the regression test cases of a stable application are automated.
Now, coming back to your question: it is worth spending more time on automating a particular page when we know that, once automated, the repetitive tasks on that page can be performed very quickly and run any number of times at the click of a button, freeing the tester to focus on exploratory work.
To my knowledge, both manual and automation testing have advantages and disadvantages. It's worth knowing the difference, and when to use one or the other for best results.
Some methods (black-box testing, white-box testing, integration testing, system testing, exploratory testing, usability and ad hoc testing) are better suited to manual testing.
Others are best performed through automation, for example regression testing, load testing and repeated execution.
Even though identifying the locators for a particular element or page takes time, it is a one-time task: once it is set up, execution won't take much time.
Automation is a one-time activity in terms of creating the script, whereas manual testing has to be performed over and over. While automating, we have to be very careful about locators, functions, utilities and compatibility, which consumes time.
The logic should be clear before we implement what we are going to automate. In manual testing, we just pass valid/invalid data and check the output. It also depends on the performance of the website.
A clear understanding of automation is what makes automation fast. Hence manual testing is preferable in terms of saving time if you are new to automation. Identifying the scenarios to automate will take some time as well.
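To illustrate the point about locators being a one-time investment, here is a minimal page-object sketch (Python with Selenium; the page, fields and locators are invented for the example). Because the locators live in one class, every test reuses them and a UI change only has to be fixed in one place.

    from selenium.webdriver.common.by import By


    class LoginPage:
        """Hypothetical page object: locators are defined once and reused by every test."""

        USERNAME = (By.ID, "username")      # invented locators, for illustration only
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

        def __init__(self, driver):
            self.driver = driver

        def login(self, username, password):
            self.driver.find_element(*self.USERNAME).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()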
Every software testing company does both manual and automation testing. There are a few reasons why automation testing can take more time than manual testing:
Someone who is new to automation needs to understand the automation tool and scripting language before starting to automate scripts. Moreover, implementing the logic also takes time.
Actually, automated testing saves time and money. Manually repeating tests is costly and time-consuming. Once created, automated tests can be run over and over again at no additional cost and they are much faster than manual tests. Automated software testing can reduce the time to run repetitive tests from days to hours.

How to integrate activities between developer and tester in scrum sprint

Good day
Any suggestions or opinions about how activities flow between the tester and the developer in a scrum sprint?
Does the tester feed his acceptance tests (ATDD, derived from the acceptance criteria) to the developer so the developer can start coding the user story, and once the developer finishes coding, does the tester take the implemented story and start executing his (ATDD) tests?
Also, what is the main role of the systems analysis team (which, in the waterfall model, generated the SRS from the BRS)?
In our company we are trying to use Agile instead of waterfall, so I would highly appreciate your help.
There are a huge range of approaches to how development and testing are combined in a sprint.
One approach I think works well is to have acceptance tests written in advance of the development.
The steps would be as follows:
Work items are allocated to the next sprint
Analysts, testers and developers work together to identify the acceptance tests for the selected work items
The tests are built and then run, ideally in continuous integration
All the tests fail as no code has yet been written
Development starts on the work items
Development work proceeds until all the tests pass
Ideally all of this is done within the sprint or in the days just preceding the start of the sprint. Some teams find they need a bit more time to do the analysis and preparation of acceptance tests, so they may choose to do this one or two weeks in advance of the sprint start.
You have to be careful not to do preparation too far in advance though, as to follow an agile approach we want to be able to respond to changes in requirements/priorities.
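To make the "all the tests fail as no code has yet been written" step concrete, here is a small, entirely hypothetical pytest-style sketch for a "discount on orders over 100" story; the shop.pricing module and apply_discount function do not exist yet, so the test fails in continuous integration until a developer implements them.

    import pytest

    # Acceptance criterion (hypothetical story): orders over 100 get a 10% discount.
    # Written before development starts, so it fails until apply_discount exists
    # and behaves as specified.
    from shop.pricing import apply_discount  # to be written by the developer


    def test_orders_over_100_get_ten_percent_discount():
        assert apply_discount(order_total=200.0) == pytest.approx(180.0)


    def test_orders_at_or_below_100_are_not_discounted():
        assert apply_discount(order_total=100.0) == pytest.approx(100.0)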

Development/QA/Production Environment

I am the QA Test Lead for a large enterprise software company with a team of over 30 developers and a small team of QA Testers. We currently use SVN to do all our code and schema check in which is then built out each night after hours.
My dilemma is this: all of development's code is promoted daily from their machines to a single branch in the central repository. This branch is our production code for the next software release. Each day when code is checked in, the stable branch is destabilized by that new piece of code until QA can get to testing it, and it can sometimes take weeks for QA to get to a specific piece of code. The worst part is that we identify months ahead of time which code will go into the standard release and which code will be bumped to the next branch, which has us coding almost all the way up to the actual release date.
I'm really starting to see the effects of this process (put in place by my predecessors) and I'm trying to come up with a way, one that won't piss off development, whereby they can promote code to a QA environment without holding up another developer's piece of code. A lot of our code has shared libraries, and as I mentioned before, it can sometimes take QA a while to get to a piece of code to test. I don't want to hold up development in a certain area while that piece of code is waiting to be tested.
My question now is: what is the best methodology to adopt here? Is there software out there that can help with this? All I really want is to ensure QA has enough time to test a release without any new code going in until it's tested. I don't want to end up on the street looking for a new job because "QA is doing a crappy job" according to a lot of people in the organization.
Any suggestions are greatly appreciated and will help with our testing and product.
It's a broad question which takes a broad answer, and I'm not sure if I know all it takes (I've been working as dev lead and architect, not as test manager). I see several problems in the process you describe, each require a solution:
Test team working on intermediate versions
This should be handled by working with the dev guys on splitting their work effort into meaningful iterations (called sprints in agile methodology) and delivering a working version every few weeks. Moreover, it should be established that features are implemented by priority. This has the benefit that it keeps the "test gap" fixed: you always test the latest version, which is a few weeks old, and devs understand that any problem you find there is more important than new features for the next version.
Test team working on non stable versions
There is absolutely no reason why the test team should invest time in versions which are "dead on arrival". Continuous Integration is a methodology by which "breaking the code" is found as soon as possible. This requires some investment in products like Hudson, or a home-grown solution, to make sure build failures are noticed as they occur and some smoke testing is applied to each build.
Your test cycle is long
Invest in automated testing. This is not to say your testers need to learn to program; rather, you should invest in recruiting or growing people with the knowledge and passion to write stable automated tests.
You choose "coding all the way up until almost the actual release date"
That's right; it's a choice made by you and your management, favoring more features over stability and quality. It's a fine choice in some companies with a need to get to market ASAP or have a key customer satisfied; but it's a poor long-term investment. Once you convince your management it's a choice, you can stop taking it when it's not really needed.
Again, it's my two cents.
You need a continuous integration server that can automate the build, testing and deployment. I would look at a combination of Hudson, JUnit (with DbUnit), Selenium and code quality tools like Sonar.
To ensure that the code QA is testing is fixed and not constantly changing, you should make use of tags. A tag is like a branch except that its contents are immutable: once a set of files has been tagged, you cannot change and commit on top of those files. This way QA has a stable version of the code to work with.
Using SVN without branching seems like a wasted resource. They should set up a stable branch and a test branch (i.e. the daily build). When code is tested in the daily build it can then be pushed up to the development release branch.
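As a small illustration of cutting such a tag: in SVN a tag is just a server-side copy into the tags area. The sketch below (Python driving the svn command line, with a made-up repository URL and layout) shows how a nightly job might tag the daily build so QA always tests an immutable snapshot.

    import subprocess
    from datetime import date

    # Made-up repository URL and layout; adjust to your own trunk/tags structure.
    REPO = "https://svn.example.com/repos/product"
    tag_name = f"qa-daily-{date.today():%Y%m%d}"

    # "svn copy" from URL to URL creates the tag directly in the repository,
    # without needing a working copy.
    subprocess.run(
        [
            "svn", "copy",
            f"{REPO}/trunk",
            f"{REPO}/tags/{tag_name}",
            "-m", f"Tag daily build {tag_name} for QA",
        ],
        check=True,
    )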
As Albert mentioned, depending on what your code is, you might also look into automated tests for some of the shared libraries (which, depending on where you are in development, really shouldn't be changing all that much, or your dev team is doing a crappy job of organization, imho).
You might also talk with your dev team leads (or whoever manages them) and discuss where they see QA fitting in and what QA can do to help them best. Ask: does the dev team have a set cut-off time before releases? Do you test every single line of code? Are there places where you might be spending too much time on detailed testing? It shouldn't all fall on QA; QA and dev need to work together to get the product out.

CRUD Web App Automated Testing best practices

G'day,
I'm working with a fairly DB heavy web app and looking at setting up automated testing for it using Selenium. However, as an automated testing newbie, I don't know where to start.
How do you categorize your tests to make sure they're logically sound as well as complete?
How do you deal with the DB when testing? Build a fresh DB before every test and drop the tables after each test? Start with a test-db?
Just looking for some pointers to what best practices are in this regard.
Thanks,
In general...
If your main goal is to test database CRUD operations I would go at least 'one level down' and write some kind of integration tests that do not use the GUI for testing. The tests become a lot more focused on the actual CRUD operations if you take the GUI out.
How to deal with the database...
No matter whether you go with Selenium or integration tests, it is a good idea that the tests do not depend on each other. This means setting up the database before each test and/or tearing it down to a clean/known state after the test. Tests written this way are a lot easier to maintain; for example, you can run a single test by itself.
For both our integration and acceptance tests we use DbUnit to achieve this. Easily setting up and tearing down databases is not a new thing; there should be something available for your technology stack as well. (You did not mention the technologies you are using.)
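Since the stack isn't mentioned, here is the same idea expressed as a Python/pytest sketch, with an in-memory SQLite database standing in for the real one: each test gets a freshly created schema with a minimal dataset, and teardown happens automatically, so tests never depend on each other.

    import sqlite3

    import pytest


    @pytest.fixture
    def db():
        """Give every test a fresh, known database state and clean up afterwards."""
        conn = sqlite3.connect(":memory:")  # stand-in for your real test database
        conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
        conn.execute("INSERT INTO employee (id, name) VALUES (1, 'Alice')")  # minimal dataset
        conn.commit()
        yield conn
        conn.close()  # teardown: the in-memory database disappears with the connection


    def test_delete_employee(db):
        db.execute("DELETE FROM employee WHERE id = 1")
        remaining = db.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
        assert remaining == 0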
How to categorize the tests...
For CRUD operations I would make sure I test one thing and one thing only. For example, say I have an Employee table. You can have a test suite that tests everything to do with an Employee, but a single test should only test one thing. 'Save Employee successfully' should be a different test case from 'Attempt to save an Employee that already exists' or 'Delete Employee'.
EDIT: (Answer to the comment)
We basically kill the database and rebuild it from scratch at the beginning of the test run. (I'm not sure how crucial this part is, but it makes sure our DB is consistent with what the code expects. We are using Hibernate.)
Then for each test we have a different dataset to insert. Let's say again that we are testing Employee. If I want to test deleting an Employee, I would insert the smallest dataset that makes this possible. Smaller datasets are easier to maintain. If you use the same dataset for all of your tests it becomes very hard to change the code or to change or add tests.
We do use the same dataset for things that seem to require the same information. For example, you want to test 'Attempt to save Employee to database' and 'Delete Employee'. You might reuse one dataset for this.
I was wondering if building and tearing down the DB for each test would be expensive, time- and computing-wise?
I would not worry too much about this. Yes, it might add, let's say, 3-4 seconds to every test, but in the big picture, is that really important? It is more important that you have tests that are easy to maintain, because your time as a developer is a lot more valuable than these tests taking 5 minutes to run instead of 3.
I don't know anything about Selenium, but I found a JavaRanch article that was really well-done. In the section titled "Unit Testing Mock Basics" the author shows a good method of creating mock objects to avoid any database calls. Obviously, you'll need some integration tests that involve DB calls, but for plain old unit tests, the outlined method works well.
Remember, running unit tests should be super fast.
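In the same spirit, here is a small Python sketch using unittest.mock; the EmployeeService class and its repository are invented for the example. The repository is replaced with a mock so the unit test exercises only the business logic and never touches a database.

    from unittest.mock import Mock


    # Hypothetical service under test: business logic that depends on a repository/DAO.
    class EmployeeService:
        def __init__(self, repository):
            self.repository = repository

        def is_eligible_for_bonus(self, employee_id):
            employee = self.repository.find_by_id(employee_id)
            return employee["years_of_service"] >= 5


    def test_bonus_eligibility_without_touching_the_database():
        repository = Mock()
        repository.find_by_id.return_value = {"id": 42, "years_of_service": 7}

        service = EmployeeService(repository)

        assert service.is_eligible_for_bonus(42) is True
        repository.find_by_id.assert_called_once_with(42)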

Regression Testing and Deployment Strategy

I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?
The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month.
And the risk that is involved... if a mission-critical application is deployed, that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application?
One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time.
These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment.
Any advice?
The idea behind a framework is that it's supposed to be the "slow moving code". You shouldn't be changing the framework as frequently as the applications it supports. Try getting the framework on a slower development cycle: perhaps a release no more often than every three or six months.
My guess is that you're still working out some of the architectural decisions in this framework. If you think the framework changes really need to be that dynamic, find out what parts of the framework are being changed so often, and try to refactor those out to the applications that need them.
Agile doesn't have to mean unlimited changes to everything. Your architect could place boundaries on what constitutes the framework, and keep people from tweaking it so readily for what are likely application shortcuts. It may take a few iterations to get it settled down, but after that it should be more stable.
I wouldn't call it an Agile approach unless you have (unit) test coverage. One of the key tenets of Agile is that you have robust unit tests that provide a safety net for frequent refactoring and new feature development. There is a lot of risk in your scenario. Deploying twenty to thirty applications a month when 1) most of them don't add any new business value to their users; and 2) there are no tests in place would not qualify as a good idea in my book. And I'm a strong believer in Agile. But you can't pick and choose only the parts of it that are convenient.
If the business application has not changed, I wouldn't release it just to compile in a new framework. Imagine every .NET application needing to be re-released every time the framework changed. Reading into your question, I wonder if the common database is driving the need for this. If your framework is isolating the schema and you're finding you need to rebuild apps whenever the schema changes, then you need to tackle that problem first. Check out Refactoring Databases, by Scott Ambler for some tips.
As another aside, there's a big difference between integration test and unit tests. Your regression tests are integration tests. It's very difficult to automate at that level. I think the breakthroughs that are happening in testing are all about writing highly testable code that makes unit testing more and more of the code base possible.
Here are some tips I can think of:
1. Break the framework into independent parts, so that changing one part requires running only a small portion of the test cases.
2. Employ a test case prioritization technique; that is, rerun only a small portion of each application's test pool, selected by some strategy. Techniques such as additional branch coverage prioritization and adaptive random testing (ART) usually perform better than others, but they require the branch coverage information of each test case (see the sketch after this list).
3. Update the framework less frequently. If an application doesn't need to change, it is fine not to change it, so I guess it's also fine for those applications to keep using the old version of the framework. You could update the framework for them, say, every three months.
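Here is a minimal sketch of the "additional coverage" idea from tip 2, with made-up test names and branch sets: repeatedly pick the test that covers the most branches not yet covered by the tests selected so far.

    def prioritize_by_additional_coverage(coverage):
        """Greedy 'additional branch coverage' ordering.

        coverage: dict mapping test name -> set of branch ids it covers (made-up data below).
        Returns the tests ordered so each next test adds the most not-yet-covered branches.
        """
        remaining = dict(coverage)
        covered = set()
        order = []
        while remaining:
            # Pick the test adding the most branches we have not covered yet.
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            order.append(best)
            covered |= remaining.pop(best)
        return order


    if __name__ == "__main__":
        example = {
            "test_login": {"b1", "b2", "b3"},
            "test_logout": {"b2"},
            "test_report": {"b4", "b5"},
            "test_admin": {"b1", "b5", "b6"},
        }
        print(prioritize_by_additional_coverage(example))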
Regression testing is a way of life. You will need to regression test every application before it is released. However, since time and money are not usually infinite, you should focus your testing on the areas with the most changes. A quick and dirty way to identify these areas is to count the lines of code changed in a given business area; say "accounting" or "user management". Those should get the most testing first along with any areas that you have identified as “mission critical”.
Now I know that lines of code changed is not necessarily the best way to measure change. If you have well defined change requests, it is actually better to evaluate these hot spots by looking at the number and complexity of the change requests. But not everyone has that luxury.
When you are talking about making a change to the framework, you probably don't need to test all the code that uses it. If you're talking about a change to something like the DAL, that would basically amount to everything anyway. You just need to test a large enough sample of the code to be reasonably comfortable that the change is solid. Again, start with the "mission critical" areas and the area most heavily affected.
I find it helpful to divide the project into 3 distinct code streams; Development, QA, and Production. Development is open to all changes, QA is feature locked, and Production is code locked (well, as locked as it gets anyway). If you are releasing to production on a monthly cycle, you probably want to branch a QA build from the Development code at least 1 month before the release. Then you spend that month acceptance testing the new changes and regression testing everything else that you can. You'll probably have to complete testing the changes about a week before the release so that the app can be staged and you can dry run the installation a few times. You won't get to regression test everything, so have a strategy ready for releasing patches to Production. Don't forget to merge those patches back into the QA and Development code streams too.
Automating the regression tests would be a really great thing, theoretically. In practice, you end up spending more time updating the test code than you would spend running the test scripts manually. Besides, you can hire two or three testing monkeys for the price of one really good test script developer. Sad but true.