Starting a testing department

I am joining a company that doesn't have any formal testing setup, and they expect me to start a testing department. I have a good understanding of manual and automated testing, but I'm not sure how to start or which tools to use for document sharing and bug tracking.
Please share as much guidance as you can.
Thanks

This is a very broad question and almost impossible to answer without significantly more knowledge of your company's products, quality goals and existing tooling... But I've got some Opinions :tm: that might help, starting with some philosophy (sorry).
What You're For
The function of a testing department isn't to test; the goal is to help the company be confident in its delivery of products. Your customers want to know that your software is accurate and stable. Your Operations team wants to avoid Production going down. Your Developers want to feel confident that their changes work and don't have any negative side effects.
I personally feel that the best way for a testing team to provide that confidence is not by writing tests; it's by editing them. The testing team provides the tooling, guidelines and expertise to help the rest of the Engineering departments make testing an integral part of the process.
It's like cooking. You don't make a well seasoned meal by chopping and sautéing and stirring and then giving it to a head chef to taste. You taste continually while you go because you're the one who knows what the food should be like. The head chef trains you and provides feedback on the final dish so that you learn how to season correctly.
Choosing Tools
Irrelevant. Mostly.
Your tools need to give you what you're after and then get out of your way. At the moment, the company barely knows what it's after, so you could even use a Google Doc to track defects.
You don't want to get in anyone's way to begin with, or they'll start to resent you. Your team needs to provide value and start to earn the social capital to change the Engineering processes to help deliver your goals.
So, use whatever document sharing tools are already in use, whether that's a wiki, Google, Dropbox, etc. If you're choosing a new one because there's no collaboration, I'm partial to Notion.
If your team already has a collaborative build tool (eg Jenkins, Travis) it's probably best to stick with that, adding in testing steps. Again, the less friction you introduce, the better your initial outcomes.
I wouldn't bother building and maintaining a test grid; instead, lean on a vendor like Sauce Labs for infrastructure and expertise. That way you've got easy parallelisation, wide platform coverage, test asset collection, and insights, as well as their experience in supporting Testing teams. Disclaimer: I'm the Manager of Developer Relations at Sauce Labs, so I'm probably biased ;)
As for testing tools: if you want your engineering teams to collaborate on test production, you need to stick with an ecosystem they can use. This likely means whatever they're already using.
How To Start Testing
Selecting What To Test
Your organisation wants testing so badly that they're hiring you. That implies there's a traumatic event they want to avoid happening again. So, start there. Find out what it is, and create a test for it.
If Black Friday overwhelmed their site, do Load testing. If their build is always breaking, concentrate on unit testing. If functionality doesn't work in Prod, add an integration test.
Test Coverage
There's a trap for new players, and you're likely to hear this from your devs:
We're so far behind on test coverage we'll never catch up
That is absolutely true.... if you never start! Add the tests that prevent the trauma that brought you on board and you're already adding value; you'll catch that problem next time.
Another trap is setting test coverage goals. Test coverage is a great way to monitor your process but a terrible way to improve it. Force your teams to increase test coverage (or not let it slip) and they'll start to resent the process... And write crap tests just to boost the percentage.
Instead, use coverage for feedback. If coverage goes down during a commit, discuss why and talk about how to improve it. If it drops way down you might want to do something, but a little dip while you're getting started is A-OK.
Assuming you've covered the trauma that got you hired, increasing test coverage is best done on an as-worked basis. If a developer is writing new code, it gets tests. If a developer is modifying old code, it gets tests to (at least) prove that the modifications work, and ideally to prove that they don't break the old functionality either.
You may come across old code that literally can't be tested. That's a good time to refactor that code. If people are scared of refactoring because it might break, point out that that's exactly what tests are for. Try to pull out to a level where you can test. If you can't test a unit, test the class. If you can't test the class, test the package. Then, go back in and start re-working. You have to do it some day.
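As a rough illustration of "pulling out to a level where you can test", here's one common seam-extraction move. The class and method names (DateProvider, LateFeeCalculator) are hypothetical, not from anything above, and the sketch assumes plain Java with no test framework:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical seam extraction: a hard-wired call to LocalDate.now() is the
// kind of thing that makes old code untestable, because the "current date"
// can't be controlled from a test.
interface DateProvider {
    LocalDate today();
}

class SystemDateProvider implements DateProvider {
    @Override
    public LocalDate today() {
        return LocalDate.now();
    }
}

class LateFeeCalculator {
    private final DateProvider dates;

    // Production code passes SystemDateProvider; tests pass a fixed fake.
    LateFeeCalculator(DateProvider dates) {
        this.dates = dates;
    }

    double feeFor(LocalDate dueDate) {
        long daysLate = ChronoUnit.DAYS.between(dueDate, dates.today());
        return daysLate > 0 ? daysLate * 1.50 : 0.0;
    }
}

public class LateFeeCalculatorTest {
    public static void main(String[] args) {
        // A fixed date makes the behaviour repeatable without touching the maths itself.
        DateProvider fixed = () -> LocalDate.of(2021, 1, 10);
        double fee = new LateFeeCalculator(fixed).feeFor(LocalDate.of(2021, 1, 5));
        if (fee != 7.50) {
            throw new AssertionError("expected 7.50 but got " + fee);
        }
        System.out.println("late fee test passed");
    }
}

The production wiring is unchanged (pass a SystemDateProvider), but the calculation is now testable without waiting for a particular calendar date.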
Oh, no, we'll be replacing the Fizzwangle with a new Buzzshooper implementation soon; There's no need to take the risk of refactoring for testability.
This is a lie. Even if they mean it truthfully, it's a lie. Buzzshooper isn't coming any time soon. Refactor that shit.
Tests Are Code, Code Is Tests
Your tests need to be treated like high quality code. Use all the abstractions you use when writing code, like inheritance, polymorphism, modularisation, composability.
Look at techniques like the Page Object Model for front end testing. Your test code should restrict implementation detail knowledge (eg, element locators) to the least number of places, so that changes are easy to implement.
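As a hedged sketch of that idea, assuming Selenium WebDriver is on the classpath and using hypothetical page and element names (LoginPage, "username", "login-button"), a page object might look like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical login page object: all knowledge of element locators lives here,
// so a markup change means editing one class, not every test.
class LoginPage {
    private final WebDriver driver;

    // These locators are assumptions about the app under test, not real IDs.
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By SUBMIT = By.id("login-button");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Page methods describe user intent; tests never touch locators directly.
    DashboardPage logInAs(String user, String password) {
        driver.findElement(USERNAME).sendKeys(user);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(SUBMIT).click();
        return new DashboardPage(driver);
    }
}

class DashboardPage {
    private final WebDriver driver;

    DashboardPage(WebDriver driver) {
        this.driver = driver;
    }

    String welcomeMessage() {
        return driver.findElement(By.id("welcome")).getText();
    }
}

A test then reads as new LoginPage(driver).logInAs("alice", "secret").welcomeMessage(), with no locators in sight; if the markup changes, only the page object needs editing.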
Oh, and also, your Code is Code. Learn about, then help your teams write, code for testability and tests for code-ability. Structure your tests and app so you can test in parallel, reliably, as fast as possible:
Give HTML elements unique, simple IDs
Write tests that test a single thing
Bypass complicated test setup by doing things like pre-populating databases
Log in once, then use session management to avoid doing it again
Use data generators to create unique test data (including logins); see the sketch below
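A minimal sketch of that last point, with made-up naming conventions (nothing here comes from the original post):

import java.util.UUID;

// Hypothetical helper that stamps every test run with unique data, so parallel
// tests never collide on usernames, emails, or other unique-keyed fields.
public final class TestData {

    private TestData() {}

    // A short random suffix is usually enough to guarantee uniqueness per run.
    private static String suffix() {
        return UUID.randomUUID().toString().substring(0, 8);
    }

    public static String uniqueUsername() {
        return "testuser_" + suffix();
    }

    public static String uniqueEmail() {
        return "testuser+" + suffix() + "@example.com";
    }
}

Each test asks for TestData.uniqueUsername() instead of sharing a hard-coded account, which keeps parallel runs from tripping over each other.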
Other Resources
Check out past conference talks like SauceCon Online.
Testing Talks Online has some great discussions and is the closest thing I've found to a real-life meetup during Covid.
There's also a lot of great content over at Ministry of Testing.

Related

What are the common methods for having external companies testing your software?

We are a couple of entrepreneurs who have developed a cross-browser app and a backend administration system for the app. Or actually, we paid a company to develop it. Now we want it tested professionally, but we don't want to use the same company for this purpose.
The tests may involve
Integration Testing
Functional Testing
System Testing
Stress Testing
Performance Testing
Usability Testing
For some of the tests, we think that the actual source code is required. We don't feel comfortable giving our source code away "just like that" to unknown parties, so what are the common methods for having external companies test one's software?
You don't really need to give away the source to perform the tests you mention. You need to provide working environments, or provide binaries and instructions on how to deploy them. That seems sufficient for 1, 2 (I don't know what 3 means), 4 and 5. It's way too late for usability testing; that should have been done during the UI design phase (how do you want to test it right now?).
But those tests are not sufficient. You forgot about penetration testing. And the above tests are black-box testing; they can only show you how the application works from the outside.
But if you have any real plans for this application, you must be sure it's maintainable, and for that you need white-box testing: you have to analyze the code.
You can start with automated analysis to check the overall quality of the code, but in the end you will still need good programmers to perform a code review. You don't really have to give them the code, though: you can invite them to your office and let them review it on your workstations. Unless your idea is so simple and brilliant that one look at the code is enough to reproduce it, in which case you will need an NDA, or to give some shares to the expert who will take care of the quality.

How do I systematically test and think like a real tester

My friend asked me this question today: how would you test a vending machine, and what are its test cases? I was able to give some test cases, but they were just random thoughts. I want to know how to systematically test a product or a piece of software. There are lots of kinds of tests, like unit testing, functional testing, integration testing, stress testing, etc. But how do I systematically test and think like a real tester? Can someone please explain to me how all these kinds of testing are differentiated and which one can be applied in a real scenario, for example testing a file system?
Even long-time, well respected, professional testers will tell you: It is an art more than a science.
My trick to designing new test cases starts with the various types of tests you mention, and it must include all those to be thorough, but I try to find a list of all the ways I can interact with the code/product.
For the vending machine example, there are tons of parts, inside and out.
Simple testing, exercising the product as it is designed to work, gives plenty of cases:
Does it give the correct change
How fast can it process the request
What if an item is out of stock
What if it is overfilled
What if the change drawer is full
What if the items are too big, or badly racked
What if the user puts in too little money
What if it is out of change
Then there are the interesting cases, which normal users wouldn't think about:
What if you try to tip it over
Give it a fake coin
Steal from it
Put a coin in with a string
Give it funny amounts of change
Give it half-ripped bills
Pry it open with a crow-bar
Feed it bad power/brownout
Turn it off in the middle of various operations
The way to think like a tester is to figure out every possible way you can attack it, from all the "funny cases" in usual scenarios, to all the methods that are completely outside of how it should be used. Any point of input, including ones you might think the developers/owners have control over, is fair game.
You can also use many automated test tools, such as pairwise test selection, model-based test toolkits, or for software, various stress/load and security tools.
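As a small, hypothetical illustration of turning one of those questions ("does it give the correct change") into table-driven cases, with the vending machine reduced to a stand-in function purely so the sketch runs on its own:

import java.util.List;

// Hypothetical table-driven sketch: each row is price, amount inserted, expected change.
public class ChangeTest {

    record Case(int priceCents, int paidCents, int expectedChangeCents) {}

    // Stand-in for the real machine logic, only here to make the example self-contained.
    static int change(int priceCents, int paidCents) {
        if (paidCents < priceCents) {
            throw new IllegalArgumentException("too little money");
        }
        return paidCents - priceCents;
    }

    public static void main(String[] args) {
        List<Case> cases = List.of(
                new Case(125, 200, 75),    // ordinary purchase
                new Case(125, 125, 0),     // exact money
                new Case(100, 500, 400));  // large note

        for (Case c : cases) {
            int actual = change(c.priceCents(), c.paidCents());
            if (actual != c.expectedChangeCents()) {
                throw new AssertionError("price=" + c.priceCents() + ", paid=" + c.paidCents()
                        + ": expected " + c.expectedChangeCents() + " but got " + actual);
            }
        }
        System.out.println("all change cases passed");
    }
}

Pairwise and model-based tools essentially help generate the rows of a table like this once the input space is too large to enumerate by hand.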
I feel like this answer was a good start, but I now realize it was only half of the story.
Coming up with every single way you can possibly test the system is important. You need to learn to stretch the limits of your imagination, your problem decomposition skills, your understanding of chains of functionality/failure, and your domain knowledge about the thing you are testing. This is the point I was attempting to make above. With the right mindset, and with enough vigilance, these skills will start to improve very quickly - within a year, or within a few years (depending on the complexity of the domain).
The second level of becoming a very competent tester is to determine which tests you should care about. You will always be able to break every system, in a ton of different ways. Whether those failures are important or not is a more interesting question, and is often much more difficult to answer. The benefit to answering this question, though, is two-fold.
First, if you know why it is important to fix pieces of the system that break (or to skip fixing them!), then you can understand where you should focus your efforts. You know what you can afford to spend less time testing, and what you must spend more time on.
Second, and more importantly, you will help your team expose where they should be focusing their efforts. You will start to uncover things that are called "second-order unknowns". Your team doesn't know what it doesn't know.
The primary trick that helps you accomplish this is to always ask "why?", until whoever you are asking is stumped.
An example:
Q: Why this test?
A: Because I want to exercise all functionality in the system.
Q: Why does this system function this way?
A: Because of the decisions that the programmer made, based on the product specifications.
Q: Why did our product specifications ask for this?
A: Because the company that we are writing the software for had a requirement that the software works this way.
Q: Why did that company we are contracting for add that as a requirement?
A: Because their users need to do :thing:
Q: Why do the users need to do :thing:?
A: Because they are trying to accomplish :xyz:
Q: Why do they need to accomplish :xyz:
A: Because they save money by doing :abc:
Q: Why did they choose :xyz: to solve :abc:?
A: ... good question.
Q: What could they do instead?
A: ... now that I think about it, there's a ton of options! Maybe one of them works better?
With practice, you will start knowing which specific "why" questions to ask, and which to focus on. You will also learn to start deeper down the chain, and be less mechanical in your approach.
This is no longer just about ensuring that the product matches the specifications that the dev, pm, customer, or end user provided. It also helps determine if the solution you are providing is the highest quality solution that your team could provide.
A hidden requirement of this is that you must learn that half your job as a tester is to ask questions all the time. You might think that your team mates will be annoyed at this, but hopefully I've shown that it is both crucial to your development, and the quality of the product you are testing. Smart and curious teammates who care about the product (who aren't busy and frustrated) will love your questions.
#brett:
Suppose you have the system that you want to test. The main thing is to make sure you have a test scenario or test plan. Once you have that, it becomes much clearer how and what to test about the system.
Once you have a test plan, your vision becomes clear regarding what is expected and what is unexpected. For unexpected behaviour you can recheck and file an issue if you think it is not correct. I have given you an answer for the general case; if you have a real-world scenario, it may be really helpful to provide guidelines on that.

Is there a right way to implement a continuous improvement (AKA software hardening) process?

Each release it seems that our customers find a few old issues with our software. It makes it look like every release has multiple bugs, when in reality our new code is generally solid.
We have tried to implement some additional testing where we have testers do several hours of monthly regression testing on a single app each month in an effort to stay ahead of small issues. We refer to this process as our Software Hardening process, but it does not seem like we are catching enough of the bugs and it feels like a very backburner process since there is always new code to write.
Is there a trick to this kind of testing? Do I need to target one specific feature at a time?
When you develop your testing procedures, you may want to implement these kind of tests:
unit testing (testing individual components of your project to verify their functionality): these tests are important because they allow you to pinpoint where in the software an error comes from. Basically, in these tests you test a single piece of functionality and use mock objects to simulate the behaviour and return values of other objects/entities (see the sketch after this list).
regression testing, which you mentioned
characterization testing: one example is automatically running the program on automatically generated input (simulating user input), storing the results, and comparing every new version's results against them.
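As referenced above, here is a minimal sketch of the first bullet, assuming JUnit 5 and Mockito are available; the domain classes (ExchangeRateSource, PriceConverter) are invented for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical example: the collaborator is mocked, so a failure here points
// straight at PriceConverter and cannot be blamed on the real rate service.
interface ExchangeRateSource {
    double rate(String from, String to);
}

class PriceConverter {
    private final ExchangeRateSource rates;

    PriceConverter(ExchangeRateSource rates) {
        this.rates = rates;
    }

    double convert(double amount, String from, String to) {
        return amount * rates.rate(from, to);
    }
}

class PriceConverterTest {

    @Test
    void convertsUsingTheSuppliedRate() {
        // The mock returns a canned value instead of calling any real service.
        ExchangeRateSource rates = mock(ExchangeRateSource.class);
        when(rates.rate("EUR", "USD")).thenReturn(1.10);

        PriceConverter converter = new PriceConverter(rates);

        assertEquals(110.0, converter.convert(100.0, "EUR", "USD"), 1e-9);
    }
}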
At the beginning this will be heavy to put in place, but as more releases and bug fixes are added to the automated non-regression tests, you should start to save time.
It is very important that you do not fall into the trap of designing huge numbers of dumb tests. Testing should make your life easier; if you spend too much time understanding where the tests have broken, you should redesign them so that they give you better messages and a better understanding of the problem, letting you locate the issue quickly.
Depending on your environment, these tests can be linked to the development process.
In my environment, we use SVN for versioning; a bot runs the tests against every revision and reports the failing tests and messages, along with the name of the revision that broke them and the contributor (their login).
EDIT:
In my environment, we use a combination of C++ and C# to deliver analytics used in finance. The core code is C++ and quite old; we are trying to migrate the interfaces toward C# while keeping the core of the analytics in C++ (mainly because of speed requirements).
Most of the C++ tests are hand-written unit tests and regression tests.
On the C# side we are using NUnit for unit testing. We have a couple of general tests.
We have a zero-warnings policy: we explicitly forbid people from committing code that generates warnings unless they can justify why it is useful to bypass the warning for that part of the code. We also have conventions about exception safety, the use of design patterns and many other aspects.
Setting explicit conventions and best practices is another way to improve the quality of your code.
Is there a trick to this kind of testing?
You said, "we have testers do several hours of monthly regression testing on a single app each month in an effort to stay ahead of small issues."
I guess that by "regression testing" you mean "manually exercising old features".
You ought to decide whether you're looking for old bugs which have never been found before (which means, running tests which you've never run before); or, whether you're repeating previously-run tests to verify that previously-tested functionality is unchanged. These are two opposite things.
"Regression testing" implies to me that you're doing the latter.
If the problem is that "customers find a few old issues with our software" then either your customers are running tests which you've never run before (in which case, to find these problems you need to run new tests of old software), or they're finding bugs which you have previous tested and found, but which you apparently never fixed after you found them.
Do I need to target one specific feature at a time?
What are you trying to do, exactly:
Find bugs before customers find them?
Convince customers that there's little wrong with the new development?
Spend as little time as possible on testing?
Very general advice is that bugs live in families: so when you find a bug, look for its parents and siblings and cousins, for example:
You might have this exact same bug in other modules
This module might be buggier than other modules (written by someone on an off day, perhaps), so look for every other kind of bug in this module
Perhaps this is one of a class of problems (performance problems, or low-memory problems) which suggests a whole area (or whole type of requirement) which needs better test coverage
Other advice is that it's to do with managing customer expectations: you said, "It makes it look like every release has multiple bugs, when in reality our new code is generally solid" as if the real problem is the mistaken perception that the bug is newly-written.
it feels like a very backburner process since there is always new code to write
Software development doesn't happen in the background, on a burner: either someone is working on it, or they're not. Management must decide whether to assign anyone to this task (i.e. look for existing previously-unfound bugs, or fix previously-found-but-not-yet-reported bugs), or whether they prefer to concentrate on new development and let the customers do the bug-detecting.
Edit: It's worth mentioning that testing isn't the only way to find bugs. There's also:
Informal design reviews (35%)
Formal design inspections (55%)
Informal code reviews (25%)
Formal code inspections (60%)
Personal desk checking of code (40%)
Unit test (30%)
Component test (30%)
Integration test (35%)
Regression test (25%)
System test (40%)
Low volume beta test (<10 sites) (35%)
High-volume beta test (>1000 sites) (70%)
The percentage which I put next to each is a measure of the defect-removal rate for each technique (taken from page 243 of McConnell's Software Estimation book). The two most effective techniques seem to be formal code inspection and high-volume beta tests.
So it might be a good idea to introduce formal code reviews: that might be better at detecting defects than black-box testing is.
As soon as your coding ends, you should first go for unit testing. There you will find some bugs, which should be fixed, and then you should perform another round of unit testing to see whether new bugs appear. After you finish unit testing you should go for functional testing.
You mentioned that your testers are performing regression testing on a monthly basis and old bugs are still coming out. So it is better to sit with the testers and review the test cases, as I feel they need to be updated regularly. During the review, also note which modules or functionality the bugs are coming from. Stress those areas, add more test cases there, and add them to your regression suite so that they are run on every new build.
You can try one more thing: if your project is a long-term one, talk with the testers about automating the regression test cases. That will let you run the test cases off-hours, such as overnight, and get the results the next day. The regression test cases should also be kept up to date; the major problem comes when they are not updated regularly, because by running only the old regression test cases you are missing modules that never get tested.
There is a lot of talk here about unit testing and I couldn't agree more. I hope that Josh understands that unit testing is a mechanized process. I disagree with PJ: unit tests should be written before coding the app, not after. This is called TDD, or Test Driven Development.
Some people write unit tests that exercise the middle tier code but neglect testing the GUI code. That is imprudent. You should write unit tests for all tiers in your application.
Since unit tests are also code, there is the question of QA for your test suite. Is the code coverage good? Are there false positives/negatives errors in the unit tests? Are you testing for the right things? How do you assure the quality of your quality assurance process? Basically, the answer to that comes down to peer review and cultural values. Everyone on the team has to be committed to good testing hygiene.
The earlier a bug is introduced into your system, the longer it stays in the system, the harder and more costly it is to remove it. That is why you should look into what is known as continuous integration. When set up correctly, continuous integration means that the project gets compiled and run with the full suite of unit tests shortly after you check in your changes for the day.
If the build or unit tests fail, then the offending coder and the build master gets a notification. They work with the team lead to determine what the most appropriate course correction should be. Sometimes it is just as simple as fix the problem and check the fix in. A build master and team lead needs to get involved to identify any overarching patterns that require additional intervention. For example, a family crisis can cause a developer's coding quality to bottom out. Without CI and some managerial oversight, it might take six months of bugs before you realize what is going on and take corrective action.
You didn't mention what your development environment is. If yours were a J2EE shop, then I would suggest that you look into the following.
CruiseControl for continuous integration
Subversion for the source code versioning control because it integrates well with CruiseControl
Spring because DI makes it easier to mechanize the unit testing for continuous integration purposes
JUnit for unit testing the middle tier
HttpUnit for unit testing the GUI
Apache JMeter for stress testing
Going back and implementing a testing strategy for (all) existing stuff is a pain. It's long, it's difficult, and no one will want to do it. However, I strongly recommend that as each new bug comes in, a test be developed around that bug. If you don't get a bug report on something, then either (a) it works or (b) the user doesn't care that it doesn't work. Either way, a test for it is a waste of your time.
As soon as it's identified, write a test that goes red. Right now. Then fix the bug. Confirm that it is fixed. Confirm that the test is now green. Repeat as new bugs come in.
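A sketch of that rhythm, assuming JUnit 5 and an entirely made-up bug report (a 100% discount producing a negative total); none of these names come from the question:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical illustration of "write a test that goes red, then fix the bug".
class OrderTotalTest {

    // Written from the bug report, before touching the code: it fails (red)
    // against the buggy implementation and stays in the regression suite forever.
    @Test
    void fullDiscountNeverProducesANegativeTotal() {
        double total = OrderTotal.totalAfterDiscount(50.00, 1.00); // 100% discount
        assertEquals(0.00, total, 1e-9);
    }
}

class OrderTotal {
    // Fixed implementation; the hypothetical bug let the total go below zero.
    static double totalAfterDiscount(double amount, double discountRate) {
        double discounted = amount * (1.0 - discountRate);
        return Math.max(0.0, discounted);
    }
}

The test is committed alongside the fix, so the bug can't quietly come back in a later release.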
Sorry to say that but maybe you're just not testing enough, or too late, or both.

In agile like development, who should write test cases? [closed]

Our team has a task system where we post small incremental tasks assigned to each developer.
Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.
My question is: Once the task is done, who should define the test cases that should be done on this task?
Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time, or who simply don't like doing it.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
But likewise, the down part of developers doing the test cases, is that they may leave out things that they think will break. (even subconsciously maybe)
As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.
Suggestions?
EDIT: By test cases I mean the description of the individual QA tasks that should be done to the branch before it should be merged to the trunk. (Black Box)
The Team.
If a defect gets to a customer, it is the team's fault, therefore the team should be writing test cases to assure that defects don't reach the customer.
The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.
The Developer(s) will know the code. You suggest the developer may be best for the task, but that you are looking for black box test cases. Any tests that a developer comes up with are white box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.
Good developers will also be coming to the PM with questions "What should happen when...?" – each of these is a test case. If the answer is complex "If a then x, but if b then y, except on Thursdays" – there are multiple test cases.
The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
I think the Project Manager, or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and test.
That way you ensure no missing gaps between the spec, and what's actually tested and delivered.
The developers should definitely not do it, as they'll just be testing their unit tests.
So it's a waste of time.
In addition these tests will find errors which the developer will never find as they are probably due to a misunderstanding in the spec, or a feature or route through the code not having been thought through and implemented correctly.
If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.
From past experience, we had pretty good luck defining tests at different levels to test slightly different things:
1st tier: At the code/class level, developers should be writing atomic unit tests. The purpose is to test individual classes and methods as much as possible. These tests should be run by developers as they code, presumably before archiving code into source control, and by a continuous-integration server (automated) if one is being used.
2nd tier: At the component integration level, again have developers creating unit tests, but ones that test the integration between components. The purpose is not to test individual classes and components, but to test how they interact with each other. These tests should be run manually by an integration engineer, or automated by a continuous-integration server, if one is in use.
3rd tier: At the application level, have the QA team run their system tests. These test cases should be based on the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers, who (presumably) know what the customer wants and how they are expected to use the application.
I feel this provides a pretty good level of coverage. Of course, tiers 1 and 2 above should ideally be run before sending a built application to the QA team.
Of course, you can adapt this to whatever fits your business model, but this worked pretty well at my last job. Our continuous-integration server would also kick out an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code into the source archive.
We experimented with a pairing of the developer with a QA person with pretty good results. They generally 'kept each other honest' and since the developer had unit tests to handle the code, s/he was quite intimate with the changes already. The QA person wasn't but came at it from the black box side. Both were held accountable for completeness. Part of the ongoing review process helped to catch unit test shortcomings and so there weren't too many incidents that I was aware of where anyone was purposely avoiding writing X test because it would likely prove there was a problem.
I like the pairing idea in some instances and think it worked pretty well. Might not always work, but having those players from different areas interact helped to avoid the 'throw it over the wall' mentality that often happens.
Anyhow, hope that is somehow helpful to you.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
Yikes, you need to have more trust in your QA department, or a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."
As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not too many are going to write tests that apply to the actual business case at hand.
Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.
"developers who think it's a waste of their time, or that they simply don't like doing it" Then reward them for it. What social engineering is necessary to get them to create test cases?
Can QA look over the code and test cases and pronounce "Not Enough Coverage -- Need More Cases". If so, then the programmer that has "enough" coverage right away will be the Big Kahuna.
So, my question is: Once the task is done, who should define the goal of "enough" test cases for this task? Once you know "enough", you can make the programmers responsible for filling in "enough" and QA responsible for assuring that "enough" testing is done.
Too hard to define "enough"? Interesting. Probably this is the root cause of the conflict with the programmers in the first place. They might feel it's a waste of their time because they already did "enough" and now someone is saying it isn't "enough".
The QA people, in conjunction with the "customer", should define the test cases for each task [we're really mixing terminology here], and the developer should write them. First!
Select (not just pick randomly) one or two testers, and let them write the test cases. Review. It could also be useful if a developer working with a task looks at the test cases for the task. Encourage testers to suggest improvements and additions to test sets - sometimes people are afraid to fix what the boss did. This way you might find someone who is good at test design.
Let the testers know about the technical details - I think everyone in an agile team should have read access to code, and whatever documentation is available. Most testers I know can read (and write) code, so they might find unit tests useful, possibly even extend them. Make sure the test designers get useful answers from the developers, if they need to know something.
My suggestion would be to have someone else look over the test cases before the code is merged, to ensure quality. Granted, this may mean that a developer is looking over another developer's work, but that second set of eyes may catch something that wasn't initially caught. The initial test cases can be done by any developer, analyst or manager, not a tester.
QA shouldn't write the test cases, as there may be situations where the expected result hasn't been defined, and by that point it may be hard to have someone referee between QA and development if each side thinks their interpretation is the right one. It is something I have seen many, many times and wish didn't happen as often as it does.
I loosely break my tests down into "developer" tests and "customer" tests, the latter of which would be "acceptance tests". The former are the tests that developers write to verify that their code is performing correctly. The latter are tests that someone other than the developers writes to ensure that behavior matches the spec. The developers must never write the acceptance tests, because their creation of the software they're testing assumes that they did the right thing. Thus, their acceptance tests would probably just assert what the developer already knew to be true.
The acceptance tests should be driven by the spec and if they're written by the developer, they'll get driven by the code and thus by the current behavior, not the desired behavior.
The Agile canon is that you should have (at least) two layers of tests: developer tests and customer tests.
Developer tests are written by the same people who write the production code, preferably using test-driven development. They help in coming up with a well-decoupled design, and ensure that the code is doing what the developers think it is doing, even after a refactoring.
Customer tests are specified by the customer or customer proxy. They are, in fact, the specification of the system, and should be written in a way that they are both executable (fully automated) and understandable by the business people. Often enough, teams find ways for the customer to even write them, with the help of QA people. This should happen while - or even before - the functionality gets developed.
Ideally, the only tasks for QA to do just before the merge, is pressing a button to run all automated tests, and do some additional exploratory (=unscripted) testing. You'll want to run those tests again after the merge, too, to make sure that integrating the changes didn't break something.
A test case begins first in the story card.
The purpose of testing is to drive defects to the left (earlier in the software development process when they are cheaper and faster to fix).
Each story card should include acceptance criteria. The Product Owner pairs with the Solution Analyst to define the acceptance criteria for each story. These criteria are used to determine if a story card's purpose has been met.
The story card acceptance criteria will determine what automated unit tests need to be coded by the developers as they do Test Driven Development. They will also drive the automated functional tests implemented by the automation testers (perhaps with developer support, if using tools like FIT).
Just as importantly, the acceptance criteria will drive the automated performance tests and can be used when analyzing the profiling of the application by the developers.
Finally, the user acceptance test will be determined by the acceptance criteria in the story cards and should be designed by the business partner and/or users. Follow this process and you will likely release with zero defects.
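As a hedged sketch of how one acceptance criterion might end up as an executable check: the story, the criterion and the Cart class are all invented for illustration, and a real team would more likely use FIT or a unit-test framework than a main method.

// Hypothetical story: "As a shopper I can apply a promo code at checkout."
// Acceptance criterion: "Given a cart worth $40, when the code SAVE10 is
// applied, then the total is $36."
public class PromoCodeAcceptanceTest {

    public static void main(String[] args) {
        // Given
        Cart cart = new Cart(40.00);

        // When
        cart.applyPromoCode("SAVE10");

        // Then
        if (Math.abs(cart.total() - 36.00) > 1e-9) {
            throw new AssertionError("expected 36.00 but got " + cart.total());
        }
        System.out.println("acceptance criterion met");
    }
}

// Minimal stand-in for the system under test, only so the sketch is self-contained.
class Cart {
    private double total;

    Cart(double total) {
        this.total = total;
    }

    void applyPromoCode(String code) {
        if ("SAVE10".equals(code)) {
            total = total * 0.90;
        }
    }

    double total() {
        return total;
    }
}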
I've rarely heard of or seen Project Managers write test cases, except on smaller teams. Any large, complex software application has to have an analyst who really knows the application. I worked at a mortgage company as a PM: was I supposed to understand sub-prime lending, interest rates, and such? Maybe at a superficial level, but real experts were needed to make sure those things worked. My job was to keep the team healthy, protect the agile principles, and look for new opportunities for work for my team.
The system analyst should review all test cases and their relation to the use cases.
The analyst should also perform the final UAT, which could itself be based on the test cases.
So the analyst and the QA person are doing a sort of peer review:
The QA person reviews the use cases while building test cases, and the analyst reviews the test cases after they are written and while performing UAT.
Of course the BA is the domain expert, not from a technical point of view. The BA understands the requirements, and the test cases should be mapped to the requirements. Developers should not be the people writing the test cases used to test against their own code. QA can write detailed test steps per requirement, but the person who writes the requirement should dictate what needs to be tested. Who actually writes the test cases I don't care too much about, as long as the test cases can be traced back to requirements. I would think it makes sense that the BA guides the testing direction or scope, and QA writes the granular testing plans.
We need to evolve from the "this is how it has been done or should be done" mentality; it is failing, and failing continuously. The best way to resolve the question of who writes test plans and cases is that test cases should be written against the requirements doc in waterfall, or the user story in agile, as those reqs/user stories are being written. That way there is no question about what needs to be tested, and the QA and UAT teams can execute the test case(s) and focus their time on actual testing and defect resolution.

How much a tester should know about internal details of code?

How useful, if at all, is it for the testers on a product team to know about the internal code details of a product? This does not mean they need to know every line of code, but rather have a good idea of how the code is structured, what the object model is, how the various modules are inter-linked, what the inter-dependencies between various features are, etc. This can arguably help them find related issues or defects once they hit one. On the other side, this can potentially 'bias' their "user-centric" approach towards evaluating and certifying the product, and can affect the testing results in the end.
I have not heard of any specific model for such interaction. (Let's assume a product that users, potentially non-technical, consume, and not a framework or API that the testers are testing; in the latter case the testers may need to understand the code to test it, because the user is another programmer.)
That entirely depends upon the type of testing being done.
For functional system testing, the testers can and probably should be oblivious to the details of the implementation -- if they know the details they may inadvertently account for that in their test strategy and not properly test the product.
For performance and scalability testing it's often helpful for the testers to have some high-level knowledge of the structure of the codebase, as it's beneficial in identifying potential performance hotspots and therefore writing targeted test cases. The reason this is important is that performance testing is generally a broad, open-ended process, so anything that can be done to focus the testing and get results is beneficial to everybody.
This sounds similar to this previous question: Should QA test from a strictly black-box perspective?
I've never seen a circumstance where a tester who knew a lot about the internals of system was disadvantaged.
I would assert that there are self-justifying myths that an uninformed tester is as adequate as, or even better than, a deeply technical one, because:
It allows project managers to use 'random or low quality resources' for testing. The 'as uninformed as the user' myth. If you want this type of testing, get some 'real' users to test your stuff.
Testers are still often seen as cheaper and less valuable than developers. The 'anybody can do black-box testing' myth.
Development can defer proper testing to the test team. Two myths in one: 'we don't need to train testers' and 'only the test team does testing'.
What you are looking at here is the difference between black box (no knowledge of the internals), white box (all knowledge) and grey box (some select knowledge).
The answer really depends on the purpose of the code. For integration-heavy projects, knowing where and how the pieces communicate, even if it is entirely behind the scenes, allows testers to produce appropriate non-functional test cases.
These test cases determine whether or not a component will gracefully handle the lack of availability of a dependency. They can also be used to identify performance-related issues.
For example: as a tester, if I know that the Web UI component defers a request to an orchestration service that does the real work, then I can construct a scenario where the orchestration takes a long time (high load). If the user then performs another request (simulating user impatience), the web service receives a second request while the first is still in progress. If we continually repeat this, the web service will eventually die from the stress. Without knowing the underlying model it would not be easy to find this problem.
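A rough sketch of that "impatient user" probe, using the JDK's built-in HTTP client; the endpoint URL and the request count are placeholders, not details from the post:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical probe: fire a request and, without waiting for it to finish,
// fire several more against the same endpoint.
public class ImpatientUserProbe {

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://staging.example.com/orchestrate/report"))
                .GET()
                .build();

        List<CompletableFuture<HttpResponse<String>>> inFlight = new ArrayList<>();

        // Each iteration "re-clicks" before the previous request has returned,
        // so the orchestration service accumulates overlapping work.
        for (int i = 0; i < 20; i++) {
            inFlight.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
        }

        // Wait for everything and report status codes; errors or timeouts here
        // are exactly the signal this scenario is probing for.
        inFlight.forEach(future ->
                System.out.println(future.join().statusCode()));
    }
}

In practice a load tool (JMeter, or a vendor service) does this with better reporting, but the point stands: the scenario only occurs to a tester who knows the Web UI hands off to an orchestration service.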
In most cases black-box testing is preferred for functional testing, but as soon as you move towards non-functional or system integration testing, understanding the interactions can help ensure appropriate test coverage.
Not all testers are skilled at or comfortable with working with and understanding component interactions or internals, so whether it is appropriate depends on the tester and the system.
In almost all cases we start with black box and head towards white box as the need arises.
A tester does not need to know internal details.
The application should be tested without any knowledge of the internal structure, development problems, or external dependencies.
If you encumber the tester with that additional information, you push them into a certain testing scheme. A tester should never be pushed in a direction; they should just test from a non-coder's point of view.
There are multiple testing methodologies that require code reviewing, and also those that don't.
The advantage of white-box testing (i.e. reading the code) is that you can tailor your testing to the areas that you know (from reading the code) are most likely to fail.
The disadvantages include time taken away from actual testing in order to understand the code.
Black-box testing (i.e. not reading the code) can be just as good (or better?) at finding bugs than white-box.
Normally both types of testing can happen on one project, developers white-box unit testing, and testers black-box integration testing.
I prefer Black Box testing for final test regimes
In an ideal world...
Testers should know nothing about the internals of the code
They should know everything the customer will, i.e. have the documents/help required to use the system/application (this definitely includes the API description/documents if it's some sort of code deliverable).
If the testers can't manage to find the defects with these limitations, you haven't documented your API/application enough.
If they are dedicated testers (Only thing they do) then I think they should know as little about the code as possible that they are attempting to test.
Too often they try to determine why it's failing; that is the responsibility of the developer, not the tester.
That said I think developers make great testers, because we tend to know the edge cases for certain types of functionality.
Here's an example of a bug which you can't find if you don't know the code internals, because you simply can't test all inputs:
long long int increment(long long int l) {
    if (l == 475636294934LL) return 3;
    return l + 1;
}
However, in this case it would be found if the tester had 100% code coverage as a target, and looked at only enough of the internals to write tests to achieve that.
Here's an example of a bug which you quite likely won't find if you do know the code internals, because false confidence is contagious. In particular, it is usually not possible for the author of the code to write a test which catches this bug:
int MyConnect(socket *sock) {
    /* socket must have been bound already, but that's OK */
    return RealConnect(sock);
}
If the documentation of MyConnect fails to mention that the socket must be bound, then something unexpected will happen some day (someone will call it unbound, and presumably the socket implementation will select an arbitrary local address). But a tester who can see the code often doesn't have the mindset of "testing" the documentation. Unless they're really on form, they won't notice that there's an assumption in the code not mentioned in the docs, and will just accept the assumption. In contrast, a tester writing from the docs could easily spot the bug, because they'll think "what possible states can a socket be in? I'll do a test for each". Since no constraints are mentioned, there's no reason they won't try the case that fails.
Answer: do both. One way to do this is to write a test suite before you see/write the code, and then add more tests to cover any special cases you introduce in your implementation. This applies whether or not the tester is the same person as the programmer, although obviously if the programmer writes the second kind of test, then only one person in the organisation has to understand the code. It's arguable whether it's a good long-term strategy to have code only one person has ever understood, but it's widespread, because it certainly saves time getting something out the door.
[Edit: I decline to say how these bugs came about. Maybe the programmer of the first one was clinically insane, and for the second one there are some restrictions on the port used, in order to workaround some weird network setup known to occur, and the socket is supposed to have been created via some de-weirdifying API whose existence is mentioned in the general sockets docs, but they neglect to require its use. Clearly in both these cases the programmer has been very careless. But that doesn't affect the point: the examples don't need to be realistic, since if you don't catch bugs that only a very careless programmer would make, then you won't catch all the actual bugs in your code unless you never have a bad day, make a crazy typo, etc.]
I guess it depends on how good you want the testing to be. If you just want to sanity-check the common scenarios, then by all means, just give the testers / pizza-eaters the application and tell them to go crazy.
However, if you'd like to have a chance at finding edge cases, performance or load issues, or a whole lot of other issues that hide in the depths of your code, you'd probably be better off hiring testers who know how and when to use white box techniques.
Your call.
IMHO, I think the industry view of testers is completely wrong.
Think about it ... you have two plumbers: one is extremely experienced, knows all the rules and the building codes, and can quickly look at something and know whether the work is done right or not. The other plumber is good and gets the job done reliably.
Which one would you want to do the final inspection to make sure you don't come home to a flooded house? In fact, in what other industry do they allow someone who knows hardly anything about the system they are inspecting to actually do the inspection?
I have seen the bar for QA go up over the years, and that makes me happy. In time, QA may become something that devs aspire to be.
In short, not only should they be familiar with the code being tested, but they should have an understanding that rivals the architects of the product, and be able to effectively interface with the product owner(s) / customers to ensure that what is being created is actually what they want. But now I am going into a whole separate conversation ...
Will it happen? Probably sooner than you think. I have been able to reduce the number of people needed to do QA, increase the overall effectiveness of the team, and increase the quality of the product simply by hiring very skilled people with dev / architect backgrounds with a strong aptitude for QA. I have lower operating costs, and since the software going out is higher quality, I end up with lower support costs. FWIW ... I have found that while I can backfill the QA guys effectively into a dev role when needed, the opposite is almost always not true.
If there is time, a tester should definitely go through a developer's code. That way, you can improve your tests to get better coverage.
So if you write your black-box tests by looking at the spec, and you think you'll still have time left after executing all of them, going through the code cannot be a bad idea.
Basically it all depends on how much time you have. Another thing you can do to improve coverage is look at the developers' design documents; those should give you a good idea of what the code is going to look like.
Testers have the advantage of being familiar with both the dev code and the test code!
I would say they don't need to know the internal code details at all. However they do need to know the required functionality and system rules in full detail - like an analyst. Otherwise they won't test all the functionality, or won't realise when the system misbehaves.
For user acceptance testing the tester does not need to know the internal code details of the app. They only need to know the expected functionality and the business rules. When a bug is reported, whoever is fixing it should know the inter-dependencies between the various features.