What is it and why is it used/useful?
A sanity test isn't limited in any way to the context of programming or software engineering. A sanity test is just a casual term to mean that you're testing/confirming/validating something that should follow very clear and simple logic. It's asking someone else to confirm that you are not insane and that what seems to make sense to you also makes sense to them... or did you down way too many energy drinks in the last 4 hours to maintain sanity?
If you're bashing your head on the wall, completely at a loss as to why something very simple isn't working... you would ask someone to do a quick sanity test for you. Have them make sure you didn't overlook that semicolon at the end of your for loop the last 15 times you looked it over. Extremely simple example, really shouldn't happen, but sometimes you're too close to something to step back and see the whole picture. A different perspective sometimes helps to make sure you're not completely insane.
The difference between smoke and sanity, at least as I understand it, is that a smoke test is a quick test to see that, after a build, the application is good enough for testing. Then, you do a sanity test, which would tell you if a particular functional area is good enough that it actually makes sense to proceed with tests in that area.
Example:
Smoke Test: I can launch the application and navigate through all the screens and application does not crash.
- If the application crashes or I cannot access all screens, this build has something really wrong; there is "a fire" that needs to be extinguished ASAP, and the version is not good for testing.
Sanity Test (For Users Management screen): I can get to Users Management screen, create a user and delete it.
So, the application passed the smoke test, and now I proceed to sanity tests for different areas. If I cannot rely on the application to create a user and delete it, there is no point in testing more advanced functionality like user expiration, logins, etc. However, if the sanity test has passed, I can go on with testing that area.
A good example is a sanity check for a database connection:

    SELECT 1 FROM DUAL

It's a simple query to test the connection (see the question "SELECT 1 from DUAL: MySQL"). It doesn't test deep functionality, only that the connection is OK to proceed with.
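The same idea in application code might look like this; a minimal sketch using Python's built-in sqlite3 purely for illustration (any driver with a similar cursor API would do):

    import sqlite3

    def connection_is_sane(conn):
        """Sanity check: run a trivial query and confirm the expected result."""
        try:
            row = conn.execute("SELECT 1").fetchone()
            return row is not None and row[0] == 1
        except sqlite3.Error:
            return False

    conn = sqlite3.connect(":memory:")
    print(connection_is_sane(conn))  # True if the connection can serve queries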
"A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true." (http://en.wikipedia.org/wiki/Sanity_testing)
A smoke test is a quick test of a new build for stability.
A sanity test is a test of a newly deployed environment.
The basic concept behind a sanity check is making sure that the results of running your code line up with the expected results. Other than being something that gets used far less often than it should, a proper sanity check helps ensure that what you're doing doesn't go completely out of bounds and do something it shouldn't as a result. The most common use for a sanity check is to debug code that's misbehaving, but even a final product can benefit from having a few in place to prevent unwanted bugs from emerging as a result of GIGO (garbage in, garbage out).
Relatedly, never underestimate the ability of your users to do something you didn't expect anyone would actually do. This is a lesson that many programmers never learn, no matter how many times it's taught, and sanity checks are an excellent tool to help you come to terms with it. "I'd never do that" is not a valid excuse for why your code didn't handle a problem, and good sanity checks can help prevent you from ever having to make that excuse.
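As a hedged sketch of what such a check can look like in code (the function and the plausibility bounds are hypothetical):

    def average_age(ages):
        # Sanity checks: fail loudly on garbage input rather than propagating it.
        if not ages:
            raise ValueError("expected at least one age")
        if any(not 0 <= a <= 150 for a in ages):
            raise ValueError("age out of plausible range -- garbage in?")
        return sum(ages) / len(ages)

    print(average_age([30, 41, 27]))  # fine; average_age([30, -5]) raises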
For a software application, a sanity test is a set of tests that makes a software version releasable to the public after the integration of new features and bug fixes. Passing a sanity test means that while many issues may remain, the critical ones (those which could, for example, make someone lose money or data, or crash the program) have been fixed. If no critical issues remain, the version passes the sanity test. This is usually the last test done before release.
It is a basic test to make sure that something is simply working.
For example: connecting to a database. Or pinging a website/server to see if it is up or down.
The act of checking a piece of code (or anything else, e.g., a Usenet posting) for completely stupid mistakes.
Implies that the check is to make sure the author was sane when it was written;
e.g., if a piece of scientific software relied on a particular formula and was giving unexpected results, one might first look at the nesting of parentheses or the coding of the formula, as a sanity check, before looking at the more complex I/O or data structure manipulation routines, much less the algorithm itself.
Related
I have been thinking about this and know what a smoke test is by definition but I am still not able to wrap my head around this.
I would really appreciate it if someone can help me with this and maybe give a real use case or example.
I am confused about the part where people say a smoke test checks the major functionalities. What are major functionalities?
So if I have about 100 test cases (e2e) and 5 of them relate to component X, how do I decide how many to run as a smoke test against component X (its major functionalities)? It seems like everything is a major functionality to me. Or can I just run all 5 tests?
Then isn't every individual test a smoke test?
This question might be better asked on SQA or even Software Engineering, but...
"You're sniffing for smoke to detect if there is a fire"
The short answer is: smoke testing is whatever tests you feel you need to run to initially move forward with further testing. In my experience it's going to be different for everyone.
Here is an example: in my case, I know I always need to check this one specific aspect of our app because it breaks a lot and it affects 75% of the application so I always run a few tests against that feature first before running the whole regression.
To directly answer your question about "major functionalities": a good example of that would be login. My app requires all users to log in with a username and password. I would consider that "major functionality", since everything else a user does depends on logging in working properly. There is very little point in me testing some other feature if login is broken.
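One way to encode that selection is with test markers; here's a minimal sketch using pytest (the marker name and the login stand-in are hypothetical):

    import pytest

    # Hypothetical stand-in for the real application's login call.
    def login(username, password):
        return username == "alice" and password == "correct-password"

    @pytest.mark.smoke
    def test_login_with_valid_credentials():
        # "Major functionality": everything else depends on login working,
        # so it belongs in the smoke subset that runs first.
        assert login("alice", "correct-password")

    def test_login_rejects_bad_password():
        # Still important, but can wait until the smoke subset has passed.
        assert not login("alice", "wrong-password")

    # Run only the smoke subset first:  pytest -m smoke
    # (declare the marker in pytest.ini:  markers = smoke: quick go/no-go tests)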
I am joining a company; they don't have any formal testing setup. They expect me to start a testing department. I have a good understanding of manual and automated testing, but I'm not sure how to start or which tools to use for document sharing and bug tracking.
Please provide as much guidance as you can.
Thanks
This is a very broad question and almost impossible to answer without significantly more knowledge of your company's products, quality goals and existing tooling... But I've got some Opinions :tm: that might help, starting with some philosophy (sorry).
What You're For
The function of a testing department isn't to test; the goal is to help the company be confident in its delivery of products. Your customers want to know that your software is accurate and stable. Your Operations team wants to avoid Production going down. Your Developers want to feel confident that their changes work and don't have any negative side effects.
I personally feel that the best way for a testing team to provide that confidence is not by writing tests; it's by editing them. The testing team provides the tooling, guidelines and expertise to help the rest of the Engineering departments make testing an integral part of the process.
It's like cooking. You don't make a well seasoned meal by chopping and sautéing and stirring and then giving it to a head chef to taste. You taste continually while you go because you're the one who knows what the food should be like. The head chef trains you and provides feedback on the final dish so that you learn how to season correctly.
Choosing Tools
Irrelevant. Mostly.
Your tools need to give you what you're after and then get out of your way. At the moment, the company barely knows what it's after, so you could even use a Google Doc to track defects.
You don't want to get in anyone's way to begin with, or they'll start to resent you. Your team needs to provide value and start to earn the social capital to change the Engineering processes to help deliver your goals.
So, use whatever document-sharing tools are already in use, whether that's a wiki, Google Docs, Dropbox, etc. If you're choosing a new one because there's no collaboration, I'm partial to Notion.
If your team already has a collaborative build tool (e.g. Jenkins, Travis) it's probably best to stick with that, adding in testing steps. Again, the less friction you introduce, the better your initial outcomes.
I wouldn't bother building and maintaining a test grid; instead, lean on a vendor like Sauce Labs for infrastructure and expertise. That way you've got easy parallelisation, wide platform coverage, test asset collection, and insights, as well as their experience in supporting testing teams. Disclaimer: I'm the Manager of Developer Relations at Sauce Labs, so I'm probably biased ;)
As for testing tools; If you want your engineering teams to collaborate on test production, you need to stick with an ecosystem they can use. This likely means whatever they're already using.
How To Start Testing
Selecting What To Test
Your organisation wants testing so badly that they're hiring you. That implies there's been a traumatic event that they want to avoid happening again. So, start there. Find out what it is, and create a test for it.
If Black Friday overwhelmed their site, do Load testing. If their build is always breaking, concentrate on unit testing. If functionality doesn't work in Prod, add an integration test.
Test Coverage
There's a trap for new players, and you're likely to hear this from your devs:
We're so far behind on test coverage we'll never catch up
That is absolutely true... if you never start! Add the tests that prevent the trauma that brought you on board and you're already adding value; you'll catch that problem next time.
Another trap is setting test coverage goals. Test coverage is a great way to monitor your process but a terrible way to improve it. Force your teams to increase test coverage (or not let it slip) and they'll start to resent the process... And write crap tests just to boost the percentage.
Instead, use coverage for feedback. If coverage goes down during a commit, discuss why and talk about how to improve it. If it drops way down you might want to do something, but a little dip while you're getting started is A-OK.
Assuming you've covered the trauma that got you hired, increasing test coverage is best done on an as-worked basis. If a developer is writing new code, it gets tests. If a developer is modifying old code, it gets tests to (at least) prove that the modifications work, and ideally to prove that they don't break the old functionality either.
You may come across old code that literally can't be tested. That's a good time to refactor that code. If people are scared of refactoring because it might break, point out that that's exactly what tests are for. Try to pull out to a level where you can test. If you can't test a unit, test the class. If you can't test the class, test the package. Then, go back in and start re-working. You have to do it some day.
Oh, no, we'll be replacing the Fizzwangle with a new Buzzshooper implementation soon; There's no need to take the risk of refactoring for testability.
This is a lie. Even if they mean it truthfully, it's a lie. Buzzshooper isn't coming any time soon. Refactor that shit.
Tests Are Code, Code Is Tests
Your tests need to be treated like high quality code. Use all the abstractions you use when writing code, like inheritance, polymorphism, modularisation, composability.
Look at techniques like the Page Object Model for front-end testing. Your test code should restrict implementation-detail knowledge (e.g., element locators) to the smallest number of places, so that changes are easy to implement.
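A minimal sketch of that idea, assuming Python and Selenium (the page and element names are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        # All implementation details (locators) are confined to this one class.
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "login-button")

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # Usage (requires a browser driver):
    #   LoginPage(webdriver.Chrome()).log_in("alice", "s3cret")

If the username field's ID changes, only LoginPage changes; every test that logs in stays untouched.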
Oh, and also: your code is code. Learn about, then help your teams with, writing code for testability and tests for code-ability. Structure your tests and app so you can test in parallel, reliably, as fast as possible:
Give HTML elements unique, simple IDs
Write tests that test a single thing
Bypass complicated test setup by doing things like pre-populating databases
Log in once, then use session management to avoid doing it again
Use data generators to create unique test data, including logins (see the sketch below this list)
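A tiny sketch of such a generator, assuming usernames only need to be unique rather than meaningful:

    import uuid

    def unique_user():
        """Generate collision-free test data so parallel runs never clash."""
        token = uuid.uuid4().hex[:8]
        return {"username": f"testuser_{token}",
                "email": f"testuser_{token}@example.com"}

    print(unique_user())  # e.g. {'username': 'testuser_1a2b3c4d', ...}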
Other Resources
Check out past conference talks like SauceCon Online.
Testing Talks Online has some great discussions and is the closest thing I've found to a real-life meetup during Covid.
There's also a lot of great content over at Ministry of Testing.
One of our developers is continually writing code and putting it into version control without testing it. The quality of our code is suffering as a result.
Besides getting rid of the developer, how can I solve this problem?
EDIT
I have talked to him about it a number of times and even given him a written warning
If you can do code reviews -- that's a perfect place to catch it.
We require reviews prior to merging to iteration trunk, so typically everything is caught then.
If you systematically perform code reviews before allowing a developer to commit the code, well, your problem is mostly solved. But this doesn't seem to be your case, so this is what I recommend:
Talk to the developer. Discuss the consequences for others on the team. Most developers want to be recognized by their peers, so this might be enough. Also point out that it is much easier to fix bugs in code that's fresh in your mind than in weeks-old code. This part makes sense if you have some form of code ownership in place.
If this doesn't work after some time, try to put in place a policy that will make committing buggy code unpleasant for the author. One popular way is to make the person who broke the build responsible for the chores of creating the next one. If your build process is fully automated, look for another menial task to take care of instead. This approach has the added benefit of not pinpointing anyone in particular, making it more acceptable for everybody.
Use disciplinary measures. Depending on the size of your team and of your company, those can take many forms.
Fire the developer. There is a cost associated with keeping bad apples. When you get this far, the developer doesn't care about his fellow developers, and you've got a people problem on your hands already. If the work environment becomes poisoned, you might lose far more - productivity-wise and people-wise - than this single bad developer.
As a developer who rarely tests his own code, I can tell you the one thing that's made me slowly shift my behavior...
Visibility
If the environment allows pushing code out, waiting for users to find problems, and then essentially asking "How about now?" after making a change to the code, there's no real incentive to test your own stuff.
Code reviews and collaboration encourage you to work towards making a quality product much more than if you were just delivering 'Widget X' while your coworkers work on 'Widget Y' and 'Widget Z'
The more visible your work is, the more likely you are to care about how well it works.
Code review. Stick all of your devs in a room every Monday morning and ask them to bring their proudest code-based accomplishment from the previous week along with them to the meeting.
Let them take the spotlight and get excited about explaining what they did. Have them bring copies of the code so other devs can see what they're talking about.
We started this process a few months ago, and it's astonishing to see the amount of subconscious quality checking that takes place. After all, if the devs are simply asked to talk about what they're most excited about, they'll be totally stoked to show people their code. Then, other devs will see the quality errors and publicly discuss why they're wrong and how the code should really be written instead.
If this doesn't get your dev to write quality code, he's probably not a good fit for your team.
Make it part of his Annual Review objectives. If he doesn't achieve it, no pay rise.
Sometimes, though, you just have to accept that someone is not right for your team/environment. It should be a last resort and can be tough to handle, but if you have exhausted all other options it may be the best thing in the long run.
Tell the developer you would like to see a change in their practices within 2 weeks or you will begin your company's disciplinary procedure. Offer as much help and assistance as you can, but if you can't change this person, he's not right for your company.
Using Cruise Control or a similar tool, you can make checkins automatically trigger a build and unit tests. You would still need to ensure that there are unit tests for any new functionality he adds, which you can do by looking at his checkins.
However, this is a human problem, so a technical solution can only go so far.
Why not just talk to him? He probably won't actually bite you.
Make him "babysit" the build, and become the build manager. This will give him less time to develop code (thus increasing everyone's performance) and teach him why a good build is so necessary.
Enforce test cases - code cannot be submitted without unit test cases. Modify the build system so that if the test cases don't compile and run correctly, or don't exist, then the entire task checkin is denied.
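As a sketch of one possible mechanism (a hypothetical Git pre-commit hook; the real gate depends on your VCS and build system):

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/pre-commit: refuse the commit if tests fail.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "-q"])
    if result.returncode != 0:
        print("Unit tests failed (or none were found) -- commit rejected.")
        sys.exit(1)  # a non-zero exit aborts the commit
    sys.exit(0)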
-Adam
Publish stats on test code coverage per developer, this would be after talking to him.
Here are some ideas from a sea shanty.
Intro
What shall we do with a drunken sailor, (3×)
Early in the morning?
Chorus
Wey–hey and up she rises, (3×)
Early in the morning!
Verses
Stick him in a bag and beat him senseless, (3×)
Early in the morning!
Put him in the longboat till he’s sober, (3×)
Early in the morning!
etc. Replace "drunken sailor" with a "sloppy developer".
Depending on the type of version control system you are using, you could set up check-in policies that force the code to pass certain requirements before being allowed in. If you are using a system like Team Foundation Server, it gives you the ability to specify code-coverage and unit-testing requirements for check-ins.
You know, this is a perfect opportunity to avoid singling him out (though I agree you need to talk with him) and implement a Test-first process in-house. If the rules aren't clear and the expectations are known to all, I've found that what you describe isn't all that uncommon. I find that doing the test-first development scheme works well for me and improves the code quality.
They may be overly focused on speed rather than quality.
This can tempt some people into rushing through issues to clear their list and see what comes back in bug reports later.
To rectify this balance:
assign only a couple of items at a time in your issue tracking system,
code review and test anything they have "completed" as soon as possible so it will be back with them immediately if there are any problems
talk to them about your expectations about how long an item will take to do properly
Pair programming is another possibility. If he is paired with another skilled developer on the team who does meet quality standards and knows the procedure, then this has a few benefits:
With an experienced developer over his shoulder he will learn what is expected of him and see the difference between his code and code that meets expectations
The other developer can enforce a test first policy: not allowing code to be written until tests have been written for it
Similarly, the other developer can verify that the code is up to standard before it is checked in, reducing the number of bad check-ins
All of this of course requires the company and developers to be receptive to this process which they may not be.
It seems that people have come up with a lot of imaginative and devious answers to this problem. But the fact is that this isn't a game. Devising elaborate peer-pressure systems to "name and shame" him is not going to get to the root of the problem, i.e., why is he not writing tests?
I think you should be direct. I know you say that you've talked to him, but have you tried to find out why he isn't writing tests? Clearly at this point he knows that he should be, so surely there must be some reason why he isn't doing what he's been told to do. Is it laziness? Procrastination? Programmers are famous for their egos and strong opinions - perhaps he's convinced for some reason that testing is a waste of time, or that his code is always perfect and doesn't need testing. If he's an immature programmer, he might not fully understand the implications of his actions. If he's "too mature" he might be too set in his ways. Whatever the reason, address it.
If it does come down to a matter of opinion, you need to make him understand that he needs to set his own personal opinion aside and just follow the rules. Make it clear that if he can't be trusted to follow the rules then he will be replaced. If he still doesn't, do just that.
One last thing - document all of your discussions along with any problems that occur as a result of his changes. If it comes to the worst you may be forced to justify your decisions, in which case, having documentary evidence will surely be invaluable.
Stick him on his own development branch, and only bring his stuff into the trunk when you know it's thoroughly tested. This might be a place where a distributed source control management tool like Git or Mercurial would excel. Although with the increased branching/merging support in SVN, you might not have too much trouble managing it.
EDIT
This is only if you can't get rid of him or get him to change his ways. If you simply can't get this behaviour to stop (by changing or firing), then the best you can do is buffer the rest of the team from the bad effects of his coding.
If you are at a place where you can affect the policies, make some changes. Do code reviews before check ins and make testing part of the development cycle.
It seems pretty simple. Make it a requirement and if he can't do it, replace him. Why would you keep him?
I usually don't advocate this unless all else fails...
Sometimes, a publicly-displayed chart of bug-count-by-developer can apply enough peer pressure to get favorable results.
Try the carrot: make it a fun game.
E.g. the Continuous Integration Game plugin for Hudson:
http://wiki.hudson-ci.org/display/HUDSON/The+Continuous+Integration+Game+plugin
Put your developers on branches of your code, based on some logic like, per feature, per bug fix, per dev team, whatever. Then bad check-ins are isolated to those branches. When it comes time to do a build, merge to a testing branch, find problems, resolve, and then merge your release back to a main branch.
Or remove commit rights for that developer and have them send their code to a younger developer for review and testing before it can be committed. That might motivate a change in procedure.
You could put together a report of errors found in the code, with the name of the programmer responsible for each piece of software.
If he's a reasonable person, discuss the report with him.
If he cares for his "reputation" publish the report regularly and make it available to all his peers.
If he only listens to the "authority", do the report and escalate the issue to his manager.
Anyway, I've often seen that when people are made aware of how bad they look from the outside, they change their behaviour.
Hey this reminds me of something I read on xkcd :)
Are you referring to writing automated unit tests, or manually unit testing prior to check-in?
If your shop does not write automated tests then his checking in of code that does not work is reckless. Is it impacting the team? Do you have a formalized QA department?
If you are all creating automated unit tests then I would suggest that part of your code review process include the unit tests as well. It will become obvious that the code is not acceptable per your standards during your review.
Your question is rather broad but I hope I provided some direction.
I would agree with Phil that the first step is to individually talk to him and explain the importance of quality. Poor quality can often be linked to the culture of the team, department and company.
Make executed test cases one of the deliverables before something is considered "done."
If you don't have executed test cases, then the work is not complete, and if the deadline passes before you have the documented test case execution, then he has not delivered on time, and the consequences would be the same as if he had not completed the development.
If your company's culture would not allow for this, and it values speed over accuracy, then that's probably the root of the problem, and the developer is simply responding to the incentives that are in place -- he is being rewarded for doing a lot of things half-assed rather than fewer things correctly.
Make the person clean latrines. Worked in the Army. And if you work in a group with individuals who eat a lot of Indian food, it won't take long for them to fall in line.
But that's just me...
Every time a developer checks something in that does not compile, put some money in a jar. You'll think twice before checking in then.
Unfortunately, if you have already spoken to him many times and given him written warnings, I would say it is about time to remove him from the team.
You might find some helpful answers here: How to make junior programmers write tests?
I'd be tempted to ask you to elaborate a bit on what you've tried and what results you got, as that may change things a bit, but here are my initial suggestions:
Is it no tests at all, or just tests that aren't comprehensive? Some people code blindly and run zero tests, but this is rather rare, IME. Usually some tests are done, just not enough to cover most of the cases that comprehensive testing would.
Group dynamics may help. I'd assume he is part of a team and that the team's view may be of some help here. In a way this is trying to get peer pressure which is usually a bad thing but sometimes it can be used in good ways.
How well spelled out were the warnings? In a way this can seem childish, but there is a chance that what you think of as testing may not be the same as what he does. Do you want NUnit tests, an Excel spreadsheet, logs from his computer, or something else as proof of the existence and use of tests? From what you've described, there isn't anything to confirm that he understood what you meant, was going to use tests, and would provide evidence of doing so.
Check-in policy question. Some places, such as my current workplace, encourage committing often which can mean that one does commit code without tests. Is there a known, accepted and well-followed policy where you are? That's another aspect here.
How useful, if at all, is it for the testers on a product team to know about the internal code details of a product? This does not mean they need to know every line of code, but rather have a good idea of how the code is structured, what the object model is, how the various modules are inter-linked, what the inter-dependencies between various features are, etc. This can arguably help them in finding related issues or defects once they hit one. On the other side, this can potentially 'bias' their "user-centric" approach towards evaluating and certifying the product, and can affect the testing results in the end.
I have not heard of any specific model for such interaction. (Let's assume a product that potentially non-technical users consume, and not a framework or API that the testers are testing; in the latter case the testers may need to understand the code, because the user is another programmer.)
That entirely depends upon the type of testing being done.
For functional system testing, the testers can and probably should be oblivious to the details of the implementation; if they know the details, they may inadvertently account for that in their test strategy and not properly test the product.
For performance and scalability testing it's often helpful for the testers to have some high-level knowledge of the structure of the codebase, as it's beneficial in identifying potential performance hotspots, and therefore in writing targeted test cases. The reason this is important is that performance testing is generally a broad, open-ended process, so anything that can be done to focus the testing to get results is beneficial to everybody.
This sounds similar to this previous question: Should QA test from a strictly black-box perspective?
I've never seen a circumstance where a tester who knew a lot about the internals of system was disadvantaged.
I would assert that there are self-justifying myths that an uninformed tester is as adequate as, or even better than, a deeply technical one, because:
It allows project managers to use 'random or low quality resources' for testing (the 'as uninformed as the user' myth). If you want this type of testing, get some real users to test your stuff.
Testers are still often seen as cheaper and less valuable than developers (the 'anybody can do black-box testing' myth).
Development can defer proper testing to the test team (two myths in one: 'we don't need to train testers' and 'only the test team does testing').
What you are looking at here is the difference between black box (no knowledge of the internals), white box (all knowledge) and grey box (some select knowledge).
The answer really depends on the purpose of the code. For integration-heavy projects, knowing where and how components communicate, even if it is entirely behind the scenes, allows testers to produce appropriate non-functional test cases.
These test cases determine whether or not a component will gracefully handle the lack of availability of a dependency. They can also be used to identify performance-related issues.
For example: as a tester, if I know that the Web UI component defers a request to an orchestration service that does the real work, I can construct a scenario where the orchestration takes a long time (high load). If the user then performs another request (simulating user impatience), the web service receives a second request while the first is still going. If we continually repeat this, the web service will eventually die from the stress. Without knowing the underlying model, it would not be easy to find the problem.
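A rough sketch of that scenario as a throwaway script (the URL, concurrency, and volumes are hypothetical stand-ins; a real load test would use a proper tool):

    # Hypothetical sketch: overlap requests against a slow endpoint to see
    # whether the service degrades gracefully or falls over under load.
    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    URL = "http://localhost:8080/report"  # stands in for the slow orchestration call

    def impatient_user(_):
        try:
            with urllib.request.urlopen(URL, timeout=30) as resp:
                return resp.status
        except Exception as exc:
            return type(exc).__name__

    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(impatient_user, range(100)))
    print({r: results.count(r) for r in set(results)})  # status/error histogram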
In most cases for functionality testing then black box is preferred, as soon as you move towards non-functional or system integration then understanding the interactions can assist in ensuring appropriate test coverage.
Not all testers are skilled at, or comfortable with, working with and understanding component interactions or internals, so whether it is appropriate is a per-tester, per-system decision.
In almost all cases we start with black box and head towards white as the need arises.
A tester does not need to know internal details.
The application should be tested without any knowledge of its internal structure, development problems, or external dependencies.
If you encumber the tester with that additional info, you push him into a certain testing scheme. A tester should never be pushed in a direction; he should just test from a non-coder's view.
There are multiple testing methodologies that require code reviewing, and also those that don't.
The advantages to white-box testing (i.e. reading the code) is that you can tailor your testing to only test areas that you know (from reading the code) will fail.
Disadvantages include the time taken away from actual testing in order to understand the code.
Black-box testing (i.e. not reading the code) can be just as good (or better?) at finding bugs than white-box.
Normally both types of testing can happen on one project, developers white-box unit testing, and testers black-box integration testing.
I prefer Black Box testing for final test regimes
In an ideal world...
Testers should know nothing about the internals of the code
They should know everything the customer will, i.e. have the documents/help required to use the system/application (this definitely includes the API description/documents if it's some sort of code deliverable).
If the testers can't manage to find the defects with these limitations, you haven't documented your API/application enough.
If they are dedicated testers (Only thing they do) then I think they should know as little about the code as possible that they are attempting to test.
Too often they try to determine why it's failing; that is the responsibility of the developer, not the tester.
That said I think developers make great testers, because we tend to know the edge cases for certain types of functionality.
Here's an example of a bug which you can't find if you don't know the code internals, because you simply can't test all inputs:
    long long int increment(long long int l) {
        /* Planted bug: one magic input silently returns the wrong value. */
        if (l == 475636294934LL) return 3;
        return l + 1;
    }
However, in this case it would be found if the tester had 100% code coverage as a target, and looked at only enough of the internals to write tests to achieve that.
Here's an example of a bug which you quite likely won't find if you do know the code internals, because false confidence is contagious. In particular, it is usually not possible for the author of the code to write a test which catches this bug:
    int MyConnect(socket *sock) {
        /* socket must have been bound already, but that's OK */
        return RealConnect(sock);
    }
If the documentation of MyConnect fails to mention that the socket must be bound, then something unexpected will happen some day (someone will call it unbound, and presumably the socket implementation will select an arbitrary local address). But a tester who can see the code often doesn't have the mindset of "testing" the documentation. Unless they're really on form, they won't notice that there's an assumption in the code not mentioned in the docs, and will just accept the assumption. In contrast, a tester writing from the docs could easily spot the bug, because they'll think "what possible states can a socket be in? I'll do a test for each". Since no constraints are mentioned, there's no reason they won't try the case that fails.
Answer: do both. One way to do this is to write a test suite before you see/write the code, and then add more tests to cover any special cases you introduce in your implementation. This applies whether or not the tester is the same person as the programmer, although obviously if the programmer writes the second kind of test, then only one person in the organisation has to understand the code. It's arguable whether it's a good long-term strategy to have code only one person has ever understood, but it's widespread, because it certainly saves time getting something out the door.
[Edit: I decline to say how these bugs came about. Maybe the programmer of the first one was clinically insane, and for the second one there are some restrictions on the port used, in order to workaround some weird network setup known to occur, and the socket is supposed to have been created via some de-weirdifying API whose existence is mentioned in the general sockets docs, but they neglect to require its use. Clearly in both these cases the programmer has been very careless. But that doesn't affect the point: the examples don't need to be realistic, since if you don't catch bugs that only a very careless programmer would make, then you won't catch all the actual bugs in your code unless you never have a bad day, make a crazy typo, etc.]
I guess it depends on how thorough you want the testing to be. If you just want to sanity check the common scenarios, then by all means, just give the testers / pizza-eaters the application and tell them to go crazy.
However, if you'd like to have a chance at finding edge cases, performance or load issues, or a whole lot of other issues that hide in the depths of your code, you'd probably be better off hiring testers who know how and when to use white box techniques.
Your call.
IMHO, I think the industry view of testers is completely wrong.
Think about it ... you have two plumbers: one is extremely experienced, knows all the rules and the building codes, and can quickly look at something and know if the work is done right or not. The other plumber is good, and gets the job done reliably.
Which one would you want to do the final inspection to make sure you don't come home to a flooded house? In fact, in what other industry do they allow someone who knows hardly anything about the system they are inspecting to actually do the inspection?
I have seen the bar for QA go up over the years, and that makes me happy. In time, QA may become something that devs aspire to be.
In short, not only should they be familiar with the code being tested, but they should have an understanding that rivals the architects of the product, as well as be able to effectively interface with the product owner(s) / customers to ensure that what is being created is actually what they want. But now I am going into a whole separate conversation ...
Will it happen? Probably sooner than you think. I have been able to reduce the number of people needed to do QA, increase the overall effectiveness of the team, and increase the quality of the product simply by hiring very skilled people with dev / architect backgrounds with a strong aptitude for QA. I have lower operating costs, and since the software going out is higher quality, I end up with lower support costs. FWIW ... I have found that while I can backfill the QA guys effectively into a dev role when needed, the opposite is almost always not true.
If there is time, a tester should definitely go through a developers code. This way, you can improve your tests to get better coverage.
So if you write your black-box tests by looking at the spec and think you'll still have time left after executing all of those, going through the code cannot be a bad idea.
Basically, it all depends on how much time you have. Another thing you can do to improve coverage is look at the developers' design documents. Those should give you a good idea of what the code is going to look like...
Testers have the advantage of being familiar with both the dev code and the test code!
I would say they don't need to know the internal code details at all. However they do need to know the required functionality and system rules in full detail - like an analyst. Otherwise they won't test all the functionality, or won't realise when the system misbehaves.
For user acceptance testing, the tester does not need to know the internal code details of the app. They only need to know the expected functionality and the business rules. When a bug is reported, whoever is fixing it should know the inter-dependencies between the various features.
As a novice developer who is getting into the rhythm of my first professional project, I'm trying to develop good habits as soon as possible. However, I've found that I often forget to test, put it off, or do a whole bunch of tests at the end of a build instead of one at a time.
My question is: what rhythm do you like to get into when working on large projects, and where does testing fit into it?
Well, if you want to follow the TDD guys, before you start to code ;)
I am very much in the same position as you. I want to get more into testing, but I am currently in a position where we are working to "get the code out" rather than "get the code out right" which scares the crap out of me. So I am slowly trying to integrate testing processes in my development cycle.
Currently, I test as I code, trying to bust the code as I write it. I do find it hard to get into the TDD mindset. It's taking time, but that is the way I would want to work.
EDIT:
I thought I should probably expand on this, this is my basic "working process"...
1. Plan what I want from the code: possible object design, whatever.
2. Create my first class, and add a huge comment to the top outlining what my "vision" for the class is.
3. Outline the basic test scenarios. These will basically become the unit tests.
4. Create my first method, also writing a short comment explaining how it is expected to work.
5. Write an automated test to see if it does what I expect (see the sketch below this list).
6. Repeat steps 4-5 for each method (note: the automated tests sit in a huge list that runs on F5).
7. Create some beefy tests to emulate the class in the working environment, obviously fixing any issues.
8. If any new bugs come to light following this, go back and write the new test in, make sure it fails (this also serves as proof-of-concept for the bug), then fix it.
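A minimal sketch of steps 4-5 using Python's unittest (the class and its "vision" are hypothetical placeholders):

    import unittest

    class Account:
        """Step 2: the class, with a comment outlining its 'vision'."""
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            # Step 4: expected to reject non-positive amounts, else grow balance.
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.balance += amount

    class AccountTests(unittest.TestCase):
        """Step 5: automated tests checking the method does what I expect."""
        def test_deposit_increases_balance(self):
            acct = Account()
            acct.deposit(50)
            self.assertEqual(acct.balance, 50)

        def test_deposit_rejects_negative_amounts(self):
            with self.assertRaises(ValueError):
                Account().deposit(-1)

    if __name__ == "__main__":
        unittest.main()  # the "huge list that runs on F5", in miniature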
I hope that helps.. Open to comments on how to improve this, as I said it is a concern of mine..
Before you check the code in.
First and often.
If I'm creating some new functionality for the system I'll be looking to initially define the interfaces and then write unit tests for those interfaces. To work out what tests to write consider the API of the interface and the functionality it provides, get out a pen and paper and think for a while about potential error conditions or ways to prove that it is doing the correct job. If this is too difficult then it's likely that your API isn't good enough.
In regards to the tests, see if you can avoid writing "integration" tests that exercise more than one specific object, and keep them as "unit" tests.
Then create a default implementation of your interface (that does nothing, returns rubbish values but doesn't throw exceptions), plug it into the tests to make sure that the tests fail (this tests that your tests work! :) ). Then write in the functionality and re-run the tests.
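A small sketch of that sequence, assuming Python (the interface and all names are hypothetical):

    from abc import ABC, abstractmethod

    class RateLimiter(ABC):
        """The interface, defined before any real implementation exists."""
        @abstractmethod
        def allow(self, client_id: str) -> bool: ...

    class NullRateLimiter(RateLimiter):
        """Default implementation: returns rubbish but never throws."""
        def allow(self, client_id: str) -> bool:
            return False  # deliberately useless

    def test_allows_first_request():
        # Must FAIL against NullRateLimiter -- that proves the test itself works.
        assert NullRateLimiter().allow("client-1") is True

Once the failing run confirms the tests work, write the real implementation and re-run.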
This mechanism isn't perfect but will cover a lot of simple coding mistakes and provide you with an opportunity to run your new feature without having to plug it into the entire application.
Following this you then need to test it in the main application with the combination of existing features.
This is where testing is more difficult and, if possible, should be partially outsourced to a good QA tester, as they'll have the knack of breaking things. Although it helps if you have these skills too.
Getting testing right is a knack that you have to pick up to be honest. My own experience comes from my own naive deployments and the subsequent bugs that were reported by the users when they used it in anger.
At first, when this happened to me, I found it irritating that the user was intentionally trying to break my software, and I wanted to mark all the "bugs" down as "training issues". However, after reflecting on it, I realised that it is our role (as developers) to make the application as simple and reliable to use as possible, even by idiots. It is our role to empower idiots, and that's why we get paid the dollar. Idiot handling.
To effectively test like this you have to get into the mindset of trying to break everything. Assume the mantle of a user that bashes the buttons and generally attempts to destroy your application in weird and wonderful ways.
Assume that if you don't find flaws then they will be discovered in production, to your company's serious loss of face. Take full responsibility for all of these issues and curse yourself when a bug you are responsible (or even part responsible) for is discovered in production.
If you do most of the above then you should start to produce much more robust code, however it is a bit of an art form and requires a lot of experience to be good at.
A good key to remember is
"Test early, test often and test again, when you think you are done"
When to test? When it's important that the code works correctly!
When hacking something together for myself, I test at the end. Bad practice, but these are usually small things that I'll use a few times and that's it.
On a larger project, I write tests before I write a class and I run the tests after every change to that class.
I test constantly. After I finish even a loop inside of a function, I run the program and hit a breakpoint at the top of the loop, then run through it. This is all just to make sure that the process is doing exactly what I want it to.
Then, once a function is finished, you test it in its entirety. You probably want to set a breakpoint just before the function is called, and check in your debugger that it works perfectly.
I guess I would say: "Test often."
I've only recently added unit testing to my regular work flow but I write unit tests:
to express the requirements for each new code module (right after I write the interface but before writing the implementation)
every time I think "it had better ... by the time I'm done"
when something breaks, to quantify the bug and prove that I've fixed it
when I write code which explicitly allocates or deallocates memory (I loathe hunting for memory leaks...)
I run the tests on most builds, and always before running the code.
Start with unit testing. Specifically, check out TDD, Test Driven Development. The concept behind TDD is you write the unit tests first, then write your code. If the test fails, you go back and re-work your code. If it passes, you move on to the next one.
I take a hybrid approach to TDD. I don't like to write tests against nothing, so I usually write some of the code first, then put the unit tests in. It's an iterative process, one which you're never really done with. You change the code, you run your tests. If there's any failures, fix and repeat.
The other sort of testing is integration testing, which comes along later in the process and might typically be done by a QA testing team. In any case, integration testing addresses the need to test the pieces as a whole. It's the working product you're concerned with testing. This one is more difficult to deal with because it usually involves automated testing tools (like Robot, for example).
Also, take a look at a product like CruiseControl.NET to do continuous builds. CC.NET is nice because it will run your unit tests with each build, notifying you immediately of any failures.
We don't do TDD here (though some have advocated it), but our rule is that you're supposed to check your unit tests in with your changes. It doesn't always happen, but it's easy to go back and look at a specific changeset and see whether or not tests were written.
I find that if I wait until the end of writing some new feature to test, I forget many of the edge cases that I thought might break the feature. This is ok if you are doing things to learn for yourself, but in a professional environment, I find my flow to be the classic form of: Red, Green, Refactor.
Red: Write your test so that it fails. That way you know the test is asserting against the correct variable.
Green: Make your new test pass in the easiest way possible. If that means hard-coding it, that's ok. This is great for those that just want something to work right away.
Refactor: Now that your test passes, you can go back and change your code with confidence. Your new change broke your test? Great, your change had an implication you didn't realize, now your test is telling you.
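A tiny illustration of that loop, assuming Python, pytest-style asserts, and a hypothetical slugify() function:

    # RED: write the test first; slugify() doesn't exist yet, so this fails.
    def test_slugify_replaces_spaces():
        assert slugify("Hello World") == "hello-world"

    # GREEN: the easiest thing that passes -- hard-coding is allowed for now.
    def slugify(title):
        return "hello-world"

    # REFACTOR: generalise with confidence; the test catches regressions.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")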
This rhythm has sped up my development over time, because I basically have a compiled history of all the things I thought needed to be checked in order for a feature to work! This, in turn, leads to many other benefits that I won't get into here...
Lots of great answers here!
I try to test at the lowest level that makes sense:
If a single computation or conditional is difficult or complex, add test code while you're writing it and ensure each piece works. Comment out the test code when you're done, but leave it there to document how you tested the algorithm.
Test each function.
Exercise each branch at least once.
Exercise the boundary conditions (input values at which the code changes its behavior) to catch "off by one" errors; see the sketch after this list.
Test various combinations of valid and invalid inputs.
Look for situations that might break the code, and test them.
Test each module with the same strategy as above.
Test the body of code as a whole, to ensure the components interact properly. If you've been diligent about lower-level testing, this is essentially a "confidence test" to ensure nothing broke during assembly.
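As referenced in the boundary-conditions item above, a small sketch (the discount rule is hypothetical):

    def bulk_discount(quantity):
        """Hypothetical rule: 10 or more items earn a 10% discount."""
        return 0.10 if quantity >= 10 else 0.0

    def test_boundaries():
        # Probe the exact boundary and both neighbours to catch off-by-one bugs.
        assert bulk_discount(9) == 0.0    # just below the threshold
        assert bulk_discount(10) == 0.10  # exactly at the threshold
        assert bulk_discount(11) == 0.10  # just above the threshold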
Since most of my code is for embedded devices, I pay particular attention to robustness, interaction between various threads, tasks, and components, and unexpected use of resources: memory, CPU, filesystem space, etc.
In general, the earlier you encounter an error, the easier it is to isolate, identify, and fix it--and the more time you get to spend creating, rather than chasing your tail.*
* I know, -1 for the gratuitous buffer-pointer reference!