Is there a right way to implement a continuous improvement (AKA software hardening) process? - testing

Each release it seems that our customers find a few old issues with our software. It makes it look like every release has multiple bugs, when in reality our new code is generally solid.
We have tried to implement some additional testing where we have testers spend several hours doing regression testing on a single app each month, in an effort to stay ahead of small issues. We refer to this process as our Software Hardening process, but it does not seem like we are catching enough of the bugs, and it feels like a very back-burner process since there is always new code to write.
Is there a trick to this kind of testing? Do I need to target one specific feature at a time?

When you develop your testing procedures, you may want to implement these kinds of tests:
unit testing (testing individual components of your project to verify their functionality); these tests are important because they allow you to pinpoint where in the software an error comes from. Basically, in these tests you exercise a single piece of functionality and use mock objects to simulate the behavior and return values of other objects/entities (see the sketch after this list).
regression testing, which you mentioned
characterization testing; one example could be automatically running the program on automatically generated input (simulating user input), storing the results, and comparing every version's results against those stored results.
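As a rough illustration of the unit-testing point above, here is a minimal sketch of such a test using JUnit and Mockito. The framework choice and the ExchangeRateService / PriceCalculator names are invented for the example; the idea is simply that the mock stands in for a collaborator so the test exercises one unit only.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    public class PriceCalculatorTest {

        // Hypothetical collaborator that would normally call a remote service.
        interface ExchangeRateService {
            double rateFor(String currency);
        }

        // Hypothetical unit under test: converts a EUR amount into another currency.
        static class PriceCalculator {
            private final ExchangeRateService rates;
            PriceCalculator(ExchangeRateService rates) { this.rates = rates; }
            double convert(double amountInEur, String currency) {
                return amountInEur * rates.rateFor(currency);
            }
        }

        @Test
        public void convertsUsingTheMockedRate() {
            // The mock replaces the real service, so only PriceCalculator is being tested.
            ExchangeRateService rates = mock(ExchangeRateService.class);
            when(rates.rateFor("USD")).thenReturn(1.10);

            PriceCalculator calculator = new PriceCalculator(rates);

            assertEquals(110.0, calculator.convert(100.0, "USD"), 1e-9);
            verify(rates).rateFor("USD"); // the collaborator was used exactly as expected
        }
    }

If this test fails, the problem is in PriceCalculator and nowhere else, which is exactly the pinpointing benefit described above.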
At the beginning this will be very heavy to put in place, but as more releases ship and more bug fixes are added to the automated non-regression tests, you should start to save time.
It is very important that you do not fall into the trap of designing huge numbers of dumb tests. Testing should make your life easier; if you spend too much time understanding where the tests have broken, you should redesign the tests so that they give you better messages and a better understanding of the problem, letting you locate the issue quickly.
Depending on your environment, these tests can be linked to the development process.
In my environment, we use SVN for versioning; a bot runs the tests against every revision and reports the failing tests and their messages, along with the revision that broke them and the contributor's login.
EDIT:
In my environment, we use a combination of C++ and C# to deliver analytics used in finance. The code was C++ and is quite old, and we are trying to migrate the interfaces toward C# while keeping the core of the analytics in C++ (mainly because of speed requirements).
Most of the C++ tests are hand-written unit tests and regression tests.
On the C# side we are using NUnit for unit testing. We have a couple of general tests.
We have a zero-warnings policy: we explicitly forbid people from committing code which generates warnings unless they can justify why it is useful to bypass the warning for that part of the code. We also have conventions about exception safety, the use of design patterns and many other aspects.
Explicitly setting conventions and best practices is another way to improve the quality of your code.

Is there a trick to this kind of testing?
You said, "we have testers do several hours of monthly regression testing on a single app each month in an effort to stay ahead of small issues."
I guess that by "regression testing" you mean "manually exercising old features".
You ought to decide whether you're looking for old bugs which have never been found before (which means, running tests which you've never run before); or, whether you're repeating previously-run tests to verify that previously-tested functionality is unchanged. These are two opposite things.
"Regression testing" implies to me that you're doing the latter.
If the problem is that "customers find a few old issues with our software" then either your customers are running tests which you've never run before (in which case, to find these problems you need to run new tests of old software), or they're finding bugs which you have previously tested and found, but which you apparently never fixed after you found them.
Do I need to target one specific feature at a time?
What are you trying to do, exactly:
Find bugs before customers find them?
Convince customers that there's little wrong with the new development?
Spend as little time as possible on testing?
Very general advice is that bugs live in families: so when you find a bug, look for its parents and siblings and cousins, for example:
You might have this exact same bug in other modules
This module might be buggier than other modules (written by someone on an off day, perhaps), so look for every other kind of bug in this module
Perhaps this is one of a class of problems (performance problems, or low-memory problems) which suggests a whole area (or whole type of requirement) which needs better test coverage
Other advice is that it's to do with managing customer expectations: you said, "It makes it look like every release has multiple bugs, when in reality our new code is generally solid" as if the real problem is the mistaken perception that the bug is newly-written.
it feels like a very backburner process since there is always new code to write
Software development doesn't happen in the background, on a burner: either someone is working on it, or they're not. Management must decide whether to assign anyone to this task (i.e. look for existing previously-unfound bugs, or fix previously-found-but-not-yet-reported bugs), or whether they prefer to concentrate on new development and let the customers do the bug-detecting.
Edit: It's worth mentioning that testing isn't the only way to find bugs. There's also:
Informal design reviews (35%)
Formal design inspections (55%)
Informal code reviews (25%)
Formal code inspections (60%)
Personal desk checking of code (40%)
Unit test (30%)
Component test (30%)
Integration test (35%)
Regression test (25%)
System test (40%)
Low volume beta test (<10 sites) (35%)
High-volume beta test (>1000 sites) (70%)
The percentage which I put next to each is a measure of the defect-removal rate for each technique (taken from page 243 of McConnell's Software Estimation book). The two most effective techniques seem to be formal code inspection, and high-volume beta tests.
So it might be a good idea to introduce formal code reviews: that might be better at detecting defects than black-box testing is.

As soon as your coding ends, you should first do unit testing. There you will find some bugs which should be fixed, and then you should perform another round of unit testing to see whether new bugs have appeared. After you finish unit testing you should go for functional testing.
You mentioned here that your testers are performing regression testing on a monthly basis and old bugs are still coming out. So it is better to sit with the testers and review the test cases, as I feel that they need to be updated regularly. During the review, also pay attention to which modules or features the bugs are coming from. Stress those areas, add more test cases there, and add them to your regression test cases so that once a new build comes, those test cases are run.
You can try one more thing: if your project is a long-term one, then you should talk with the testers about automating the regression test cases. It will let you run the test cases during off-hours, such as overnight, and get the results the next day. The regression test cases should also be kept up to date; the major problem comes when they are not updated regularly, because by running old regression test cases alongside new progression test cases you end up missing a few modules that are never tested.

There is a lot of talk here about unit testing and I couldn't agree more. I hope that Josh understands that unit testing is a mechanized process. I disagree with PJ: unit tests should be written before coding the app, not after. This is called TDD, or Test Driven Development.
Some people write unit tests that exercise the middle tier code but neglect testing the GUI code. That is imprudent. You should write unit tests for all tiers in your application.
Since unit tests are also code, there is the question of QA for your test suite. Is the code coverage good? Are there false positive/negative errors in the unit tests? Are you testing for the right things? How do you assure the quality of your quality assurance process? Basically, the answer to that comes down to peer review and cultural values. Everyone on the team has to be committed to good testing hygiene.
The earlier a bug is introduced into your system and the longer it stays in the system, the harder and more costly it is to remove. That is why you should look into what is known as continuous integration. When set up correctly, continuous integration means that the project gets compiled and run with the full suite of unit tests shortly after you check in your changes for the day.
If the build or unit tests fail, then the offending coder and the build master get a notification. They work with the team lead to determine the most appropriate course correction. Sometimes it is as simple as fixing the problem and checking the fix in. A build master and team lead need to get involved to identify any overarching patterns that require additional intervention. For example, a family crisis can cause a developer's coding quality to bottom out. Without CI and some managerial oversight, it might take six months of bugs before you realize what is going on and take corrective action.
You didn't mention what your development environment is. If yours were a J2EE shop, then I would suggest that you look into the following.
CruiseControl for continuous integration
Subversion for the source code versioning control because it integrates well with CruiseControl
Spring because DI makes it easier to mechanize the unit testing for continuous integration purposes
JUnit for unit testing the middle tier
HttpUnit for unit testing the GUI (a rough example follows this list)
Apache JMeter for stress testing
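To give a feel for the HttpUnit suggestion, a GUI-level test might look roughly like the sketch below. The URL, form name and field names are invented; the point is that the test drives the page through its HTML the way a browser would.

    import static org.junit.Assert.assertTrue;

    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebForm;
    import com.meterware.httpunit.WebResponse;
    import org.junit.Test;

    public class LoginPageTest {

        @Test
        public void loginFormLeadsToWelcomePage() throws Exception {
            WebConversation conversation = new WebConversation();

            // Fetch the page under test; the URL is a placeholder for your app.
            WebResponse page = conversation.getResponse("http://localhost:8080/myapp/login");

            // Fill in and submit the (hypothetical) login form.
            WebForm form = page.getFormWithName("login");
            form.setParameter("username", "testuser");
            form.setParameter("password", "secret");
            WebResponse result = form.submit();

            // Assert on what the user would actually see.
            assertTrue(result.getText().contains("Welcome"));
        }
    }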

Going back and implementing a testing strategy for (all) existing stuff is a pain. It's long, it's difficult, and no one will want to do it. However, I strongly recommend that as a new bug comes in, a test be developed around that bug. If you don't get a bug report on it, then either (a) it works or (b) the user doesn't care that it doesn't work. Either way, a test is a waste of your time.
As soon as it's identified, write a test that goes red. Right now. Then fix the bug. Confirm that it is fixed. Confirm that the test is now green. Repeat as new bugs come in.
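For example (everything here is invented to illustrate the workflow), a bug report like "the discount is applied twice" can become a permanent, named regression test:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class Bug1234RegressionTest {

        // Hypothetical class in which the bug was reported.
        static class OrderTotal {
            static double total(int quantity, double unitPrice, double discount) {
                double gross = quantity * unitPrice;
                return gross - (gross * discount); // the fix: discount applied exactly once
            }
        }

        /**
         * Regression test for bug #1234 ("discount applied twice").
         * Written first so it goes red, then the fix makes it green,
         * and it stays in the suite so the bug cannot quietly return.
         */
        @Test
        public void bug1234_discountIsOnlyAppliedOnce() {
            assertEquals(90.0, OrderTotal.total(1, 100.0, 0.10), 1e-9);
            assertEquals(0.0, OrderTotal.total(0, 100.0, 0.10), 1e-9);
        }
    }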

Sorry to say it, but maybe you're just not testing enough, or testing too late, or both.


Starting Testing department

I am joining a company; they don't have any formal testing setup. They expect me to start a testing department. I have a good understanding of manual and automated testing, but I'm not sure how to start or which tools to use for document sharing and bug tracking.
Please provide as much guidance as you can.
Thanks.
This is a very broad question and almost impossible to answer without significantly more knowledge of your company's products, quality goals and existing tooling... But I've got some Opinions :tm: that might help, starting with some philosophy (sorry).
What You're For
The function of a testing department isn't to test; the goal is to help the company be confident in its delivery of products. Your customers want to know that your software is accurate and stable. Your Operations team wants to avoid Production going down. Your Developers want to feel confident that their changes work and don't have any negative side effects.
I personally feel that the best way for a testing team to provide that confidence is not by writing tests; It's by editing them. The testing team provides the tooling, guidelines and expertise to help the rest of the Engineering departments make testing an integral part of the process.
It's like cooking. You don't make a well seasoned meal by chopping and sautéing and stirring and then giving it to a head chef to taste. You taste continually while you go because you're the one who knows what the food should be like. The head chef trains you and provides feedback on the final dish so that you learn how to season correctly.
Choosing Tools
Irrelevant. Mostly.
Your tools need to give you what you're after and then get out of your way. At the moment, the company barely knows what it's after, so you could even use a Google Doc to track defects.
You don't want to get in anyone's way to begin with, or they'll start to resent you. Your team needs to provide value and start to earn the social capital to change the Engineering processes to help deliver your goals.
So, use whatever document sharing tools are already in use; Whether that's a Wiki, Google, Dropbox etc. If you're choosing a new one because there's no collaboration, I'm partial to Notion.
If your team already has a collaborative build tool (eg Jenkins, Travis) it's probably best to stick with that, adding in testing steps. Again, the less friction you introduce, the better your initial outcomes.
I wouldn't bother building and maintaining a test grid; Instead, lean on a vendor like Sauce Labs for infrastructure and expertise. That way you've got easy parallelisation, wide platform coverage, test asset collection, insights, as well as their experience in supporting Testing teams. Disclaimer: I'm the Manager of Developer Relations at Sauce Labs, so I'm probably biased ;)
As for testing tools; If you want your engineering teams to collaborate on test production, you need to stick with an ecosystem they can use. This likely means whatever they're already using.
How To Start Testing
Selecting What To Test
Your organisation wants testing so bad they're hiring you. That implies there's a traumatic event that they want to avoid happening again. So, start there. Find out what it is, and create a test for it.
If Black Friday overwhelmed their site, do Load testing. If their build is always breaking, concentrate on unit testing. If functionality doesn't work in Prod, add an integration test.
Test Coverage
There's a trap for new players, and you're likely to hear this from your devs:
We're so far behind on test coverage we'll never catch up
That is absolutely true.... if you never start! Add the tests that prevent the trauma that brought you on board and you're already adding value; You'll catch that problem next time.
Another trap is setting test coverage goals. Test coverage is a great way to monitor your process but a terrible way to improve it. Force your teams to increase test coverage (or not let it slip) and they'll start to resent the process... And write crap tests just to boost the percentage.
Instead, use coverage for feedback. If coverage goes down during a commit, discuss why and talk about how to improve it. If it drops way down you might want to do something, but a little dip while you're getting started is A-OK.
Assuming you've covered the trauma that got you hired, increasing test coverage is best done on an as-worked basis. If a developer is writing new code, it gets tests. If a developer is modifying old code, it gets tests to (at least) prove that the modifications work, and ideally to prove that they don't break the old functionality either.
You may come across old code that literally can't be tested. That's a good time to refactor that code. If people are scared of refactoring because it might break, point out that that's exactly what tests are for. Try to pull out to a level where you can test. If you can't test a unit, test the class. If you can't test the class, test the package. Then, go back in and start re-working. You have to do it some day.
Oh, no, we'll be replacing the Fizzwangle with a new Buzzshooper implementation soon; There's no need to take the risk of refactoring for testability.
This is a lie. Even if they mean it truthfully, it's a lie. Buzzshooper isn't coming any time soon. Refactor that shit.
Tests Are Code, Code Is Tests
Your tests need to be treated like high quality code. Use all the abstractions you use when writing code, like inheritance, polymorphism, modularisation, composability.
Look at techniques like the Page Object Model for front end testing. Your test code should restrict implementation detail knowledge (eg, element locators) to the least number of places, so that changes are easy to implement.
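A minimal sketch of that idea with Selenium WebDriver (the page names, URL and element IDs are all invented): the page object is the only place that knows the locators, so a UI change means editing one class, not every test.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    /** Page object for a hypothetical login page: locators live here and nowhere else. */
    public class LoginPage {

        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public LoginPage open() {
            driver.get("http://localhost:8080/myapp/login"); // placeholder URL
            return this;
        }

        /** Fills the form and submits; returns the page the user lands on. */
        public DashboardPage loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
            return new DashboardPage(driver);
        }
    }

    /** The page reached after a successful login (also hypothetical). */
    class DashboardPage {
        private final WebDriver driver;

        DashboardPage(WebDriver driver) {
            this.driver = driver;
        }

        public String greetingText() {
            return driver.findElement(By.id("greeting")).getText();
        }
    }

A test then reads like the user journey, e.g. new LoginPage(driver).open().loginAs("alice", "secret").greetingText(), and contains no locators at all.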
Oh, and also, your Code is Code. Learn about, then help, your teams write code for testability, and tests for code-ability. Structure your tests and app so you can test in parallel, reliably, as fast as possible (a small data-generator sketch follows this list):
Give HTML elements unique, simple IDs
Write tests that test a single thing
Bypass complicated test setup by doing things like pre-populating databases
Log in once, then use session management to avoid doing it again
Use data generators to create unique test data (including logins)
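On the last point, a data generator can be as small as this sketch (names invented); the only requirement is that two parallel test runs never produce the same login or email.

    import java.util.UUID;

    /** Generates unique, disposable test data so parallel tests never collide. */
    public final class TestData {

        private TestData() {}

        /** e.g. "user-3f2c9b1a@example.test", unique on every call. */
        public static String uniqueEmail() {
            return "user-" + shortId() + "@example.test";
        }

        public static String uniqueUsername(String prefix) {
            return prefix + "-" + shortId();
        }

        private static String shortId() {
            return UUID.randomUUID().toString().substring(0, 8);
        }
    }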
Other Resources
Check out past conference talks like SauceCon Online.
Testing Talks Online has some great discussions and is the closest thing I've found to a real-life meetup during Covid.
There's also a lot of great content over at Ministry of Testing.

What kinds of tests are there?

I've always worked alone and my method of testing is usually to compile very often, make sure the changes I made work well, and fix them if they don't. However, I'm starting to feel that that is not enough, and I'm curious about the standard kinds of tests there are.
Can someone please tell me about the basic tests, a simple example of each, and why it is used/what it tests?
Thanks.
Different people have slightly different ideas about what constitutes what kind of test, but here are a few ideas of what I happen to think each term means. Note that this is heavily biased towards server-side coding, as that's what I tend to do :)
Unit test
A unit test should only test one logical unit of code - typically one class for the whole test case, and a small number of methods within each test. Unit tests are (ideally) small and cheap to run. Interactions with dependencies are usually isolated with a test double such as a mock, fake or stub.
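To make the test-double part concrete, here is an illustrative sketch (the repository and service names are made up): a hand-written in-memory fake replaces the real database dependency, so the test stays small and cheap to run.

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;

    public class GreetingServiceTest {

        // The dependency, expressed as an interface so it can be swapped out.
        interface UserRepository {
            String nameFor(int userId);
        }

        // Hypothetical unit under test.
        static class GreetingService {
            private final UserRepository users;
            GreetingService(UserRepository users) { this.users = users; }
            String greet(int userId) {
                return "Hello, " + users.nameFor(userId) + "!";
            }
        }

        // A hand-written fake: behaves like the real repository, but entirely in memory.
        static class FakeUserRepository implements UserRepository {
            private final Map<Integer, String> data = new HashMap<>();
            FakeUserRepository with(int id, String name) { data.put(id, name); return this; }
            public String nameFor(int userId) { return data.get(userId); }
        }

        @Test
        public void greetsUserByName() {
            GreetingService service =
                    new GreetingService(new FakeUserRepository().with(42, "Ada"));
            assertEquals("Hello, Ada!", service.greet(42));
        }
    }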
Integration test
An integration test will test how different components work together. External services (ones not part of the project scope) may still be faked out to give more control, but all the components within the project itself should be the real thing. An integration test may test the whole system or some subset.
System test
A system test is like an integration test but with real external services as well. If this is automated, typically the system is set up into a known state, and then the test client runs independently, making requests (or whatever) like a real client would, and observing the effects. The external services may be production ones, or ones set up in just a test environment.
Probing test
This is like a system test, but using the production services for everything. These run periodically to keep track of the health of your system.
Acceptance test
This is probably the least well-defined term - at least in my mind; it can vary significantly. It will typically be fairly high level, like a system test or an integration test. Acceptance tests may be specified by an external entity (a standard specification or a customer).
Black box or white box?
Tests can also be "black box" tests, which only ever touch the public API, or "white box" tests which take advantage of some extra knowledge to make testing easier. For example, in a white box test you may know that a particular internal method is used by all the public API methods, but is easier to test. You can test lots of corner cases by calling that method directly, and then do fewer tests with the public API. Of course, if you're designing the public API you should probably design it to be easily testable to start with - but it doesn't always work out that way. Often it's nice to be able to test one small aspect in isolation of the rest of the class.
On the other hand, black box testing is generally less brittle than white box testing: by definition, if you're only testing what the API guarantees in its contracts, then the implementation can change as much as it wants without the tests changing. White box tests, on the other hand, are sensitive to implementation changes: if the internal method changes subtly - or gains an extra parameter, for example - then you'll need to change the tests to reflect that.
It all boils down to balance, in the end - the higher the level of the test, the more likely it is to be black box. Unit tests, on the other hand, may well include an element of white box testing... at least in my experience. There are plenty of people who would refuse to use white box testing at all, only ever testing the public API. That feels more dogmatic than pragmatic to me, but I can see the benefits too.
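To make the black box / white box distinction concrete, here is a made-up example: the first test goes only through the public API, while the second reaches the internal helper it happens to know about, and is therefore more sensitive to implementation changes.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class SlugifierTest {

        // Hypothetical class: a public API plus an internal helper.
        static class Slugifier {
            /** Public API: turn a title into a URL slug. */
            public String slugify(String title) {
                return stripPunctuation(title).trim().toLowerCase().replace(' ', '-');
            }
            /** Package-private helper; only white-box tests know it exists. */
            String stripPunctuation(String s) {
                return s.replaceAll("[^A-Za-z0-9 ]", "");
            }
        }

        @Test
        public void blackBox_onlyUsesThePublicApi() {
            assertEquals("hello-world", new Slugifier().slugify("Hello, World!"));
        }

        @Test
        public void whiteBox_hitsTheInternalHelperDirectly() {
            // Brittle by design: if stripPunctuation changes shape, this test changes too.
            assertEquals("Hello World", new Slugifier().stripPunctuation("Hello, World!"));
        }
    }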
Starting out
Now, as for where you should go next - unit testing is probably the best thing to start with. You may choose to write the tests before you've designed your class (test-driven development) or at roughly the same time, or even months afterwards (not ideal, but there's a lot of code which doesn't have tests but should). You'll find that some of your code is more amenable to testing than others... the two crucial concepts which make testing feasible (IMO) are dependency injection (coding to interfaces and providing dependencies to your class rather than letting them instantiate those dependencies themselves) and test doubles (e.g. mocking frameworks which let you test interaction, or fake implementations which do everything a simple way in memory).
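A small sketch of the dependency injection point (all names invented): the second version asks for its collaborator through the constructor instead of constructing it internally, and that is precisely what makes a test double possible.

    // Hard to test: the dependency is created inside the class, so it cannot be replaced.
    class ReportServiceHardWired {
        private final SmtpMailer mailer = new SmtpMailer("smtp.example.test");
        void send(String report) { mailer.send("ops@example.test", report); }
    }

    // Code to an interface...
    interface Mailer {
        void send(String to, String body);
    }

    class SmtpMailer implements Mailer {
        SmtpMailer(String host) { /* connect to the real mail server */ }
        public void send(String to, String body) { /* real SMTP traffic */ }
    }

    // ...and inject the dependency, so production and tests can supply different Mailers.
    class ReportService {
        private final Mailer mailer;
        ReportService(Mailer mailer) { this.mailer = mailer; }
        void send(String report) { mailer.send("ops@example.test", report); }
    }

    // A test double that simply records what happened, no mail server required.
    class RecordingMailer implements Mailer {
        String lastBody;
        public void send(String to, String body) { lastBody = body; }
    }

A unit test can then construct new ReportService(new RecordingMailer()) and assert on lastBody, with no SMTP server anywhere in sight.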
I would suggest reading at least one book about this, since the domain is quite huge, and books tend to synthesize such concepts better.
E.g. a very good basis might be: Software Testing: Testing Across the Entire Software Development Life Cycle (2007).
I think such a book explains everything better than some out-of-context examples we could post here.
Hi... I would like to add on to Jon Skeet Sir's answer.
Based on white box testing (or structural testing) and black box testing (or functional testing), the following are the other testing techniques under each respective category:
STRUCTURAL TESTING Techniques
Stress Testing
This is used to test bulk volumes of data on the system, more than what a system normally takes. If a system can stand these volumes, it can surely handle normal values well.
E.g.
Maybe you can take system overflow conditions: trying to withdraw more than is available in your bank balance shouldn't work, while withdrawing up to the maximum threshold should work (a small sketch of this follows below).
Used When
This is mainly used when you're unsure about the volumes your system can handle.
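As a small illustration of the overflow-condition example above (the Account class and its rule are invented), the two expectations translate directly into test cases:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class WithdrawalLimitTest {

        // Hypothetical account logic for the withdrawal example.
        static class Account {
            private final double balance;
            Account(double balance) { this.balance = balance; }
            boolean canWithdraw(double amount) {
                return amount > 0 && amount <= balance;
            }
        }

        @Test
        public void withdrawingMoreThanTheBalanceIsRejected() {
            assertFalse(new Account(100.0).canWithdraw(100.01));
        }

        @Test
        public void withdrawingUpToTheMaximumThresholdWorks() {
            assertTrue(new Account(100.0).canWithdraw(100.0));
        }
    }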
Execution Testing
Done in order to check how proficient a system is.
E.g.
To calculate turnaround time for transactions.
Used when:
Early in the development process, to see whether performance criteria are met or not.
Recovery Testing
To see if a system can recover to original form after a failure.
E.g.
A very common everyday example is the System Restore feature present in the Windows OS.
It has restore points used for recovery, as one would well know.
Used when:
When an application critical to the user at that point in time has stopped working and should continue to work, so the user performs a recovery.
Other types of testing which you could find use of include:-
Operations Testing
Compliance Testing
Security Testing
FUNCTIONAL TESTING Techniques include:
Requirements Testing
Regression Testing
Error-Handling Testing
Manual-Support Testing
Intersystem testing
Control Testing
Parallel Testing
There is a very good book titled "Effective Methods for Software Testing" by William Perry of the Quality Assurance Institute (QAI) which I would suggest is a must-read if you want to go in depth w.r.t. software testing.
More on the above mentioned testing types would surely be available in this book.
There are also two other very broad categories of Testing namely
Manual Testing: This is done for user interfaces.
Automated Testing: testing which basically involves white box testing, or testing done through software testing tools like LoadRunner, QTP etc.
Lastly I would like to mention a particular type of testing called
Exhaustive Testing
Here you try to test for every possible condition, hence the name. This is, as one would note, pretty much infeasible, as the number of test conditions could be infinite.
Firstly, there are various tests one can perform. The question is how one organizes them. Testing is a vast and enjoyable process.
Start Testing with
1. Smoke testing. Once it passes, go ahead with functionality testing. This is the backbone of testing: if functionality works fine, then 80% of the testing effort has paid off.
2. Now go on to user interface testing, as at times the user interface is what attracts the client more than functionality. It is the look and feel that a client gets most attracted to.
3. Now it's time to have a look at cosmetic bugs. Generally these bugs are ignored because of time constraints, but they play a major role depending on the page where they are found. A spelling mistake turns out to be major when found on the splash screen, your landing page, or the app name itself. Hence these cannot be overlooked either.
4. Do conduct compatibility testing, i.e. testing on various browsers and browser versions, and maybe devices and OSes for responsive applications.
Happy testing :)

Role of Testers in Agile? [closed]

I work in a team which has been doing the traditional waterfall method of development for many years. Recently, we've been told that future projects are going to be moving towards an agile (particularly Scrum) methodology. It so happens that my project will be one of the first, so we will essentially be guinea pigs for the next few months to iron out what it takes to make the transition.
The project itself is in a very early stage and we would usually be many months away from releasing anything to the testing team, but now we are going to be working directly with them up front. As a result, I'm concerned as to the role of the testers in such a project at this stage. I have several questions/concerns which hopefully some experienced agile developers could answer:
While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
Is the tester now involved in unit testing? Is this done parallel to black box testing?
What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
How do the traditional test team members function in your agile project?
Keeping testers busy tends to get easier as a project matures (there is more to test!), but the following points apply in the early stages too:
Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented. This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.
In my personal experience, testers don't have any involvement in unit testing; they only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests. Unit tests really should be written by the developers, however, as they are written in tandem with the code.
Their workload will vary between sprints, but regression tests still need to be run on these changes...
You may also find that having the testers spend the first couple of days of each sprint testing the tasks from the previous sprint helps; however, I think it's better to get them to nail down the things that the developers are going to be working on by writing their test plans.
Ideally QA and testers should be involved if not from the day one then from very early stages of a software development project, regardless of the process used (waterfall or agile). The test team will need to:
Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.
Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, penetration, conformance, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.
Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; obtain, sanitise and prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing.
Supply all the relevant information to form project schedule, work break down structure and resource plan.
Write test scripts.
Bring themselves up to speed with the problem domain, system AS-IS and proposed solution.
Usually this is not a question of whether a test team may provide any useful input into the project at an early stage, nor of whether such input is beneficial. It is a question, however, of the extent to which an organisation can afford the aforementioned activities. There is always a trade-off between the available time, budget and resources versus the level of known quality of the end result.
Good post. I was in the same situation about 3 years ago and the transition from waterfall to agile was tricky. I encountered many pain points in the move but once I overcame them and my role had changed I realised that this way of working really suits testing.
The common myth that testers are not required is easily dispelled.
1. While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point
In my experience the tester could be working with the customer to fine tune the stories in the sprint.
They are usually working with the developers to fine tune the code that they are delivering. i.e. advising on edge cases, flows, errors etc.
They can often be involved in designing the tests that the coder will write to perform TDD.
If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as Fitnesse or Robot Framework or they could be more advanced ruby tests or even some other programming language. Or in some cases, simple record and playback can often be beneficial for a small number of tests.
They would obviously be writing tests and planning some exploratory testing scenarios or ideas.
The tricky thing to comprehend sometimes for the team is that the story does not have to be complete in order to drop it to the test stack for testing. For example the coders could drop a screen with half of the fields planned on it. The tester could test this half whilst the other half is being coded and hence feedback in with early test results. Testing doesn't have to take place on "finished" stories.
2. Is the tester now involved in unit testing? Is this done parallel to black box testing?
Ideally the coders would be doing TDD. Writing the test and then writing the code to make the test pass. And if the coders want really good TDD then they would be liaising with the tester to think up the tests.
If TDD is not being done then the coders should be writing unit tests at the same time as coding. It probably shouldn't be an afterthought or a follow-up task after the software has been dropped. The whole point of tests is to check that the software is correct, to avoid wasting time later down the line. It's all about instant feedback.
3. What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Ideally the tester would be working with the team and the customer (who, by the way, is part of the team!) to define the planned stories and build in some good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, and helping to document the outcome of the planning.
Ideally each story in the sprint would be testable in some way, shape or form. This doesn't mean it should be by the test team, but should be testable. So the tester could be working with the rest of the team working out how to make sure stories are testable.
I post some agile tips here : http://thesocialtester.posterous.com/
Hope this helps you out
Rob..
Just a few thoughts, definitely incomplete:
While the developer is coding a task, the tester can be examining the specifications (or requests from the customer, if there are no formal specs) and writing the test plan. This can include a conceptual framework for what needs to be tested, but it should also include formally writing test suites (yes, in code) as well. This can be quite a challenge for teams moving to agile, as a lot of testers are hired without programming skills. (In a lot of places, it seems like it's a requirement to not be able to code.)
The tester can be involved in unit testing, or in a slightly higher scope by testing components or libraries that have a clean interface.
The testers should always be executing regression tests, load tests, and any other kinds of tests that he can think of, as well as writing test suites for the next sprint. It's often the case that testers work one sprint ahead of development (in preparing a test environment), as well as one sprint behind development (in testing what developers just produced).
I saw a good talk on this recently. Basically this team started off doing a fairly standard Scrum process, then transitioned to Kanban and Lean. One of the most important things they did was to gradually erode the distinctions between testers and developers. Testers were involved in writing unit tests and code, developers were bringing in more higher level tests early in development. It was a steep learning curve for the testers, but worth it as the team was building in quality from the start. By now the testers call themselves developers because their work is so integrated in the process of writing code.
At my company we use and endorse Agile. Our QA team members are involved in unit test creation, maintaining the regression testing infrastructure and, just like in waterfall, they also test each feature upon completion.
When doing infrastructural changes, they also participate to make sure that the new infrastructure is testable.
So, from my limited experience, I'll try to answer your points:
If there's nothing to test yet, start setting up a regression/testing infrastructure and make sure that whatever is being done will be testable
Yes, he may do both
Maintains the testing infrastructure and hunts whoever breaks the tests
The most natural approach to testing in an agile environment is in my opinion exploratory testing http://en.wikipedia.org/wiki/Exploratory_testing.
Don't words like
According to Cem Kaner & James Bach, exploratory testing is more a [mindset] or "...a way of thinking about testing" than a methodology
or
pair testing
sound familiar to agile developers? Testers can be involved much earlier in the process than in traditional testing.
1) While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
The tester may still create test plans and have a list of what tests will be created. There may also be a need for the tester to get training if the development involves some off-the-shelf software, e.g. if you are doing a CMS project with Sitecore then the tester should know a few things about Sitecore. There can also be some collaboration between the tester, the developer and the end user or BA to know what the requirements and expectations are, so that there isn't the finger pointing that can pop up with vague requirements.
2) Is the tester now involved in unit testing? Is this done parallel to black box testing?
Not in our case. The tester is doing more integration/user acceptance testing rather than the low-level unit testing. In our case, unit tests come before any QA tests as the developers creating the functionality will create a layer of tests.
3) What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Regression testing! In making infrastructural changes, did anything break? How thorough a testing suite can developers run compared to QA? We had this in a sprint not that long ago where most of the sprint work was plumbing rework so there wasn't much to test other than seeing that things that worked before still work afterward.
In our case, we have testing as one level up from our development environment but still a pre-production environment. The idea is to allow QA a sprint to validate the work done and for any critical or high severity bugs to be found and fixed before a release into staging for final user acceptance testing, so if developers are working on sprint X then QA is validating sprint X-1 and production may have sprint X-2 or earlier running depending on the final UAT and deployment schedule as not every sprint will make it into production after QA gives the OK to move into staging. There are pairing exercises that can happen once a developer is done an initial coding of a task to ensure that both a tester and an end user sign off on what was built. This is our third or fourth version of trying to integrate quality control into the project so it is still a work in progress that has evolved a few times over already.
Like a few other respondents have indicated, Testers should be involved from day one. In Sprint zero they should be involved in ensuring that the Stories the Product Owner is producing are testable (e.g. verifiable once coded) and "acceptable" (i.e. when you go though UAT). Once the Product Backlog is initially populated then the Testers can work on test cases for the Stories slated for the current Sprint, and once there is a product for them to test (Ideally somewhere in your first Sprint) then they can start testing.
If it sounds like there will never be anything to test for a few Sprints, you've got your stories wrong. The aim of a Sprint, even an early one, is to have a thin slice of the eventual system. Focus on "aspirin" stories (i.e. if building a drug prescription system, how do you deliver testable functionality in 2-4 weeks? Build the parts for prescribing an aspirin) and "tracer bullet" stories (ones which, when taken in combination, touch all the risky parts of the architecture). You'll be amazed what you can hand over to test early on. If testers do end up with spare time, get them to pair program with the developers. It'll build relationships and mutual respect.
The benefits of this approach are many, but primarily you test out a good deal of the internal people-processes of your development (handovers from requirements, to development, to test, and also the reverse), and secondarily the whole team (all three disciplines mentioned) sees the benefits of rapid feedback as a result of producing executable software.
It sounds impossible, but I've seen it work. Just make sure you don't bite off too big a chunk to begin with. Let yourselves ease into it and you'll be amazed.

Automatic testing for web based projects

Recently I've come up against the question of whether it is worth spending development time to generate automatic unit tests for web-based projects. I mean, it seems useless at some point, because such projects are oriented around interactions with users/clients, so you cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown. Even regression testing can hardly be done. So I'm very eager to know the opinion of other experienced developers.
Selenium has a good web testing framework:
http://seleniumhq.org/
Telerik are also in the process of developing one for web app testing.
http://www.telerik.com/products/web-ui-test-studio.aspx
You cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown.
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
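A "bad monkey" can be as small as the sketch below (Selenium WebDriver, with a placeholder URL and an arbitrary step count): point it at a page and let it click random links and buttons until something blows up.

    import java.util.List;
    import java.util.Random;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    /** A crude "bad monkey": clicks random links and buttons and reports what broke. */
    public class BadMonkey {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            Random random = new Random();
            driver.get("http://localhost:8080/myapp"); // placeholder starting page

            for (int step = 0; step < 200; step++) {
                List<WebElement> clickable = driver.findElements(By.cssSelector("a, button"));
                if (clickable.isEmpty()) {
                    break; // dead end: nothing left to click
                }
                WebElement victim = clickable.get(random.nextInt(clickable.size()));
                try {
                    victim.click();
                } catch (Exception e) {
                    // Stale or hidden elements are expected noise for a monkey.
                    System.out.println("Step " + step + ": " + e.getClass().getSimpleName());
                }
                if (driver.getPageSource().contains("Exception")) {
                    System.out.println("Possible error page after step " + step + ": "
                            + driver.getCurrentUrl());
                }
            }
            driver.quit();
        }
    }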
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them, just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right on the application which automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a web site I am working on. I just recently switched-out the data access layer and set up unit tests for my controllers and repositories, but not the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automating tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of Javascript, there have been a lot of JS testing frameworks that have come around the block recently for unit testing your Javascript.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if you have the UI change a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier - fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Having something like that at the web tier is a nice to have run in a continuous integration process and doesn't require much in the form of maintenance.
Unit tests make sense in a TDD process. They do not have much value if you don't do test-first development. However, acceptance tests are a big thing for the quality of the software. I'd say that acceptance tests are the holy grail of development. Acceptance tests show whether the application satisfies the requirements. How do I know when to stop developing the feature? Only when all my acceptance tests pass. Automation of acceptance testing is a big thing, because then I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run all the tests manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented with xUnit test frameworks, which creates some confusion here. If I create an acceptance test using phpUnit or httpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test. An acceptance test is one that shows whether the feature is working in accordance with the requirements. A unit test shows whether a class (or function) satisfies the developer's implementation idea. A unit test has no value for the client (user). An acceptance test has a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
Good frameworks for acceptance tests are:
Sahi (sahi.co.in)
Selenium
SimpleTest (it's a unit-test framework for PHP, but it includes a browser object that can be used for acceptance testing).
However
You mentioned that a web site is all about user interaction, and thus test automation will not solve the whole problem of usability. For example: the testing framework shows that all tests pass, yet the user cannot see the form or link or other page element due to an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot. And a manual test would fail.
Thus, all web applications need manual testing. Automated tests can reduce the test workload drastically (80%), but manual tests remain significant for the quality of the resulting software.
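One small mitigation, not a replacement for the manual pass described above: where the test framework exposes element visibility, assert on it rather than on mere presence. A sketch with Selenium WebDriver (the URL and element ID are invented):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class SignupFormVisibilityTest {

        @Test
        public void signupFormIsActuallyVisibleToTheUser() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/myapp/signup"); // placeholder URL

                WebElement form = driver.findElement(By.id("signup-form"));

                // Presence alone would pass even with style="display:none";
                // isDisplayed() fails the test the way a human tester would.
                assertTrue("Signup form is in the DOM but hidden from the user",
                        form.isDisplayed());
            } finally {
                driver.quit();
            }
        }
    }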
As for unit testing and TDD: they improve code quality. They are beneficial to the developers and to the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but mind the time it will take. It usually takes about 3-6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (docker + cucumber); it's very basic and simple, so easy to understand and modify / improve. It lies in the web directory;
my solution: https://github.com/gyulaweber/hosting_tests

In agile like development, who should write test cases? [closed]

Our team has a task system where we post small incremental tasks assigned to each developer.
Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.
My question is: Once the task is done, who should define the test cases that should be done on this task?
Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time, or that they simply don't like doing it.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
But likewise, the down part of developers doing the test cases, is that they may leave out things that they think will break. (even subconsciously maybe)
As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.
Suggestions?
EDIT: By test cases I mean the description of the individual QA tasks that should be done to the branch before it should be merged to the trunk. (Black Box)
The Team.
If a defect gets to a customer, it is the team's fault, therefore the team should be writing test cases to assure that defects don't reach the customer.
The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.
The Developer(s) will know the code. You suggest the developer may be best for the task, but that you are looking for black box test cases. Any tests that a developer comes up with are white box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.
Good developers will also be coming to the PM with questions "What should happen when...?" – each of these is a test case. If the answer is complex "If a then x, but if b then y, except on Thursdays" – there are multiple test cases.
The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
I think the Project Manager, or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and test.
That way you ensure no missing gaps between the spec, and what's actually tested and delivered.
The developers should definitely not do it, as they'll just be testing their unit tests.
So it's a waste of time.
In addition these tests will find errors which the developer will never find as they are probably due to a misunderstanding in the spec, or a feature or route through the code not having been thought through and implemented correctly.
If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.
From past experience, we had pretty good luck defining tests at different levels to test slightly different things:
1st tier: At the code/class level, developers should be writing atomic unit tests. The purpose is to test individual classes and methods as much as possible. These tests should be run by developers as they code, presumably before archiving code into source control, and by a continuous-integration server (automated) if one is being used.
2nd tier: At the component integration level, again have developers creating unit tests, but ones that test the integration between components. The purpose is not to test individual classes and components, but to test how they interact with each other. These tests should be run manually by an integration engineer, or automated by a continuous-integration server, if one is in use.
3rd tier: At the application level, have the QA team running their system tests. These test cases should be based off the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers who (presumably) know what the customer wants and how they are expected to use the application.
I feel this provides a pretty good level of coverage. Of course, tiers 1 and 2 above should ideally be run before sending a built application to the QA team.
Of course, you can adapt this to whatever fits your business model, but this worked pretty well at my last job. Our continuous-integration server would also kick out an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code into the source archive.
We experimented with a pairing of the developer with a QA person with pretty good results. They generally 'kept each other honest' and since the developer had unit tests to handle the code, s/he was quite intimate with the changes already. The QA person wasn't but came at it from the black box side. Both were held accountable for completeness. Part of the ongoing review process helped to catch unit test shortcomings and so there weren't too many incidents that I was aware of where anyone was purposely avoiding writing X test because it would likely prove there was a problem.
I like the pairing idea in some instances and think it worked pretty well. Might not always work, but having those players from different areas interact helped to avoid the 'throw it over the wall' mentality that often happens.
Anyhow, hope that is somehow helpful to you.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
Yikes, you need to have more trust in your QA department, or a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."
As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not too many are going to write tests that apply to the actual business case at hand.
Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.
"developers who think it's a waste of their time, or that they simply don't like doing it" Then reward them for it. What social engineering is necessary to get them to create test cases?
Can QA look over the code and test cases and pronounce "Not Enough Coverage -- Need More Cases". If so, then the programmer that has "enough" coverage right away will be the Big Kahuna.
So, my question is: Once the task is done, who should define the goal of "enough" test cases for this task? Once you know "enough", you can make the programmers responsible for filling in "enough" and QA responsible for assuring that "enough" testing is done.
Too hard to define "enough"? Interesting. Probably this is the root cause of the conflict with the programmers in the first place. They might feel it's a waste of their time because they already did "enough" and now someone is saying it isn't "enough".
The QA people, in conjunction with the "customer", should define the test cases for each task [we're really mixing terminology here], and the developer should write them, first!
Select (not just pick randomly) one or two testers, and let them write the test cases. Review. It could also be useful if a developer working with a task looks at the test cases for the task. Encourage testers to suggest improvements and additions to test sets - sometimes people are afraid to fix what the boss did. This way you might find someone who is good at test design.
Let the testers know about the technical details - I think everyone in an agile team should have read access to code, and whatever documentation is available. Most testers I know can read (and write) code, so they might find unit tests useful, possibly even extend them. Make sure the test designers get useful answers from the developers, if they need to know something.
My suggestion would be to having someone else look over the test cases before the code is merged to ensure quality. Granted this may mean that a developer is overlooking another developer's work but that second set of eyes may catch something that wasn't initially caught. The initial test cases can be done by any developer, analyst or manager, not a tester.
QA shouldn't write the test cases, as there may be situations where the expected result hasn't been defined, and by that point it may be hard to have someone referee between QA and development if each side thinks their interpretation is the right one. It is something I have seen many, many times and wish didn't happen as often as it does.
I loosely break my tests down into "developer" tests and "customer" tests, the latter of which would be "acceptance tests". The former are the tests that developers write to verify that their code is performing correctly. The latter are tests that someone other than the developers writes, to ensure that behavior matches the spec. The developers must never write the acceptance tests because their creation of the software they're testing assumes that they did the right thing. Thus, their acceptance tests are probably going to assert what the developer already knew to be true.
The acceptance tests should be driven by the spec and if they're written by the developer, they'll get driven by the code and thus by the current behavior, not the desired behavior.
The Agile canon is that you should have (at least) two layers of tests: developer tests and customer tests.
Developer tests are written by the same people who write the production code, preferably using test driven development. They help coming up with a well decoupled design, and ensure that the code is doing what the developers think it is doing - even after a refactoring.
Customer tests are specified by the customer or customer proxy. They are, in fact, the specification of the system, and should be written in a way that they are both executable (fully automated) and understandable by the business people. Often enough, teams find ways for the customer to even write them, with the help of QA people. This should happen while - or even before - the functionality gets developed.
Ideally, the only tasks for QA to do just before the merge, is pressing a button to run all automated tests, and do some additional exploratory (=unscripted) testing. You'll want to run those tests again after the merge, too, to make sure that integrating the changes didn't break something.
A test case begins first in the story card.
The purpose of testing is to drive defects to the left (earlier in the software development process when they are cheaper and faster to fix).
Each story card should include acceptance criteria. The Product Owner pairs with the Solution Analyst to define the acceptance criteria for each story. These criteria are used to determine whether a story card's purpose has been met.
The story card acceptance criteria will determine what automated unit tests need to be coded by the developers as they do Test Driven Development. They will also drive the automated functional tests implemented by the automation testers (perhaps with developer support if using tools like FIT).
Just as importantly, the acceptance criteria will drive the automated performance tests and can be used when analyzing the profiling of the application by the developers.
Finally, the user acceptance test will be determined by the acceptance criteria in the story cards and should be designed by the business partner and or users. Follow this process and you will likely release with zero defects.
I've rarely heard of or seen Project Managers write test cases, except in smaller teams. Any large, complex software application has to have an analyst who really knows the application. I worked at a mortgage company as a PM - was I to understand sub-prime lending, interest rates, and such? Maybe at a superficial level, but real experts were needed to make sure those things worked. My job was to keep the team healthy, protect the agile principles, and look for new opportunities for work for my team.
The system analyst should review all test cases and their correct relation to the use cases.
Plus, the analyst should perform the final UAT, which could also be based on the test cases.
So the analyst and the quality guy are doing a sort of peer review.
The quality guy reviews the use cases while he is building test cases, and the analyst reviews the test cases after they are written and while he is performing UAT.
Of course the BA is the domain expert, though not from a technical point of view. The BA understands the requirements, and the test cases should be mapped to the requirements. Developers should not be the ones writing the test cases to test against their own code. QA can write detailed test steps per requirement, but the person who writes the requirement should dictate what needs to be tested. I don't care too much who actually writes the test cases, as long as the test cases can be traced back to requirements. I would think it makes sense that the BA guides the testing direction or scope, and QA writes the granular testing plans.
We need to evolve from the "this is how it has been done or should be done" mentality; it is failing, and failing continuously. The best way to resolve the test plan/case writing issue is for test cases to be written against the requirements doc in waterfall, or the user story in agile, as those reqs/user stories are being written. This way there is no question about what needs to be tested, and the QA and UAT teams can execute the test cases and focus their time on actual testing and defect resolution.