Sorry for the generic question, but I have a doubt about TDD.
TDD says to create the test case first and write the code afterwards, but I find it difficult to follow these steps. When I create code from scratch, I first do an outline of how the objects will relate to each other, and after that, in theory, I start with the tests. For me it is a little difficult to abstract everything I need to write in a test.
Sorry if this is a dumb question, but what do you do to create the tests first? Do you make a list of all the behavior you will test before you really start writing tests, or something else?
You are probably thinking too large. When you talk about an outline and multiple objects, you're thinking about the big picture, and unit testing - the kind of testing we typically talk about when we talk about TDD - unit testing is about testing tiny elements of functionality. Not even full classes, just individual methods of those classes. And you're not anticipating and writing a set of tests before writing your code - you're writing one. One tiny test, then write the code you need to get that test to pass. Then clean up as needed, then iterate again with the next tiny bit of functionality.
You will have an idea of the objects you'll be writing and an idea of their relationships, but just a hazy one, and you don't need to refine that beforehand. Instead, you refine it as you go, test by test, method by method. And when you recognize a way to improve your design - the concrete, already-written classes you've developed so far - you make that improvement. So your design, instead of being painstakingly figured out ahead of time without the context of real classes and tests, emerges a bit at a time, alongside your code and your tests, through your code and your tests.
Start with a single test. You'll need to create a class to pass that test, and you'll need to add a method to that class. When you're happy with the state of that single method of that single class, then figure out what the next bit of functionality should be, and write a test for it.
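As a concrete illustration (my own hypothetical example, in C# with xUnit-style assertions, not anything from the question), that first tiny test and the code it drives might look like this:

    using Xunit;

    public class ShoppingCartTests
    {
        [Fact]
        public void NewCart_HasZeroTotal()
        {
            // ShoppingCart doesn't exist yet when this is first written --
            // the compiler error is the "failing test" that forces you to create it.
            var cart = new ShoppingCart();

            Assert.Equal(0m, cart.Total);
        }
    }

    // Just enough production code to make that one test pass:
    public class ShoppingCart
    {
        public decimal Total { get; private set; }
    }

The next test (say, that adding an item changes the total) drives the next small piece of code, and the cycle repeats.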
That's TDD. It takes practice and discipline, but it's a great way to write great code. Good luck!
I'm recently hired as part of a team at my work whose focus is on writing acceptance test suites for our company's 3D modeling software. We use an in-house C# framework for writing and running them which essentially amounts to subclassing the TestBase class and overriding the Test() method, where generally all the setup, testing, and teardown is done.
While writing my tests, I've noticed that a lot of my code ends up being boilerplate code I rewrite often. I've been interested in trying to extract that code to be reusable and make my code DRYer, but I've struggled to find a way to do so without overabstracting my tests when they should be largely self-contained. I've tried a number of approaches, but they've run into issues:
Using inheritance: the most naive solution, this works well at first and lets me write tests quickly, but it has run into the usual pitfalls, i.e., test classes becoming too rigid, being unable to share code across cousin subclasses, and logic being obfuscated within the inheritance tree. For instance, I have abstract RotateVolumeTest and TranslateVolumeTest classes that both inherit from ModifyVolumeTest, but I know that relatively soon I'm going to want to rotate and translate a volume, so this is going to be a problem.
Using composition through interfaces: this solves a lot of the problems with the previous approach, letting me reuse code flexibly for my tests, but it leads to a lot of abstraction and seeming 'class bloat' -- now I have IVolumeModifier, ISetsUp, etc., all of which make it less clear what the test is actually doing.
Helper methods in a static utility class: this has been very helpful, especially for using a Factory pattern to quickly generate the complex objects I need for tests (a sketch of this approach appears at the end of this question). However, it's felt 'icky' to put methods in there that I know aren't actually very general and are instead used by a small subset of tests that need to share very specific code.
Using a testing framework like xUnit.net or similar to share code through [SetUp]/[TearDown]-style methods in a test suite, generally all in the same class: I've strongly preferred this approach, as it offers the reusability I've wanted without abstracting away from the test code. But my team isn't interested in it; I've tried to show the potential benefits of adopting a framework like that for our tests, but the consensus seems to be that it's not worth the refactoring effort of rewriting our existing test classes. It's a valid point, and I think it's unlikely I'll be able to convince them further, especially as a relatively new hire, so unless I want to make my test classes vastly different from the rest of the code base, this one's off the table.
Copy and paste code where it's needed: this is the current approach we use, along with #3 and adding methods to TestBase. I know opinions differ on whether copy/paste coding is acceptable for test code when it of course isn't for production, but I feel that this approach is going to make my tests much harder to maintain or change in the long run, as I now have N places where I need to fix logic if a bug shows up (plenty have already, and they only needed to be fixed in one place).
At this point I'm really not sure what other options I have but to opt for #5, as much as it slows down my ability to write tests quickly or robustly, in order to stay consistent with the current code base. Any thoughts or input are very much appreciated.
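For what it's worth, a minimal sketch of approach #3 might look like the following; the Volume type and helper names here are illustrative stand-ins, not the real in-house framework:

    // Hypothetical production type, standing in for whatever the modeling software exposes.
    public class Volume
    {
        public double RotationDegrees { get; private set; }
        public void Rotate(double degrees) => RotationDegrees += degrees;
    }

    // Approach #3: the repeated setup boilerplate lives in one static helper.
    public static class TestVolumes
    {
        public static Volume Default() => new Volume();
    }

    public class RotateVolumeTest // : TestBase in the real framework
    {
        public void Test()
        {
            var volume = TestVolumes.Default();      // shared, reusable setup
            volume.Rotate(90);                       // the behaviour under test stays in the test
            if (volume.RotationDegrees != 90)
                throw new System.Exception("Rotation was not applied");
        }
    }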
I personally believe the most important thing for a successful testing framework is abstraction. Make it as easy as possible to write the test. The key points for me are that you end up with more tests and that the writer focuses more on what they are testing than on how to write the test. Every testing framework I have seen that doesn't focus on abstraction has failed in more ways than one and ended up being a maintainability nightmare.
If the logic is not used anywhere other than a single test class, leave it in that test class, but refactor later if it turns out to be needed in more than one place.
I would opt for any of the options except #5.
In some cases unit testing can be really difficult. Normally people say to only test your public API. But in some cases this is just not possible. If your public API depends on files or databases you can't unit test properly. So what do you do?
Because it's my first time TDD-ing, I'm trying to find "my style" for unit testing, since it seems there is no single right way to do it. I found two approaches to this problem, neither of which is flawless. On the one hand, you could make your assemblies friends (e.g., via InternalsVisibleTo) and test the features that are internal. On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests. The second approach looks quite nice at first, but becomes uglier as you try to transport data using these fakes.
Is there any "good" solution to this problem? Which of those is less flawed? Or is there even a third approach?
I made a couple of false starts with TDD, grappling with this exact same problem. For me the breakthrough came when I realized what my mentor meant when he said: "We don't want to test the framework." (In our case that was the .NET Framework.)
In your case it sounds as if you have some business logic that interfaces with files and databases. What I would do is abstract the file and database logic into the thinnest layers possible. You can then use mocks (or fakes or stubs) to simulate the file and database layers. This will allow you to test scenarios like if-my-database-returns-this-kind-of-information-does-my-business-logic-handle-it-correctly? Likewise, for file access you can test the code that figures out which file in which path to open, and you can test that your logic is able to pull apart the contents of any given file correctly and use it correctly.
If, for example, your file access layer consists of a single function that takes a path name and a file name and returns the contents of the file as a long string, then you don't really need to test it, because essentially you are making a single call to the framework/OS and there is not a lot that can go wrong there.
At the moment I am working on a system that wraps our database as a bunch of functions that return lists of POCOs. Easy to understand for the business layer and easy to simulate via mocks.
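As a rough sketch of what such a thin layer and its fake might look like (the names and the xUnit usage are my own assumptions, not from this answer):

    using Xunit;

    // The thin abstraction over the real file system.
    public interface IFileReader
    {
        string ReadAllText(string path);
    }

    // Production implementation: a single pass-through call, so there is little to test here.
    public class DiskFileReader : IFileReader
    {
        public string ReadAllText(string path) => System.IO.File.ReadAllText(path);
    }

    // Business logic depends only on the abstraction.
    public class GreetingLoader
    {
        private readonly IFileReader _files;
        public GreetingLoader(IFileReader files) => _files = files;

        public string LoadGreeting(string path) => _files.ReadAllText(path).Trim().ToUpperInvariant();
    }

    public class GreetingLoaderTests
    {
        // Hand-rolled fake: no disk access needed in the unit test.
        private class FakeFileReader : IFileReader
        {
            public string Contents = "";
            public string ReadAllText(string path) => Contents;
        }

        [Fact]
        public void LoadGreeting_TrimsAndUppercases()
        {
            var fake = new FakeFileReader { Contents = "  hello  " };
            var loader = new GreetingLoader(fake);

            Assert.Equal("HELLO", loader.LoadGreeting("any-path.txt"));
        }
    }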
Working this way takes some getting used to but it is absolutely byoo-ti-full once it clicks in your mind.
Finally, from your question I guess that you are working with legacy code and trying to do TDD for a new component. This is quite a bit harder than doing TDD on a completely new development. If it is at all possible, try to do your first TDD attempts on new (or well-isolated) systems. Once you have learnt the mechanics, it will be a lot easier to introduce partially TDD'd bits into legacy systems.
If your public API depends on files or databases you can't unit test properly. So what do you do?
There is an abstraction level that can be used:
IFileSystem / IFileStorage (for files)
IRepository / IDataStorage (for databases)
Since this level is very thin, its integration tests will be easy to write and maintain. All other code will be unit-test friendly, because it is easy to mock interaction with the filesystem and the database.
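A minimal sketch of what those interfaces might contain (the member names are just an assumption for illustration):

    using System.Collections.Generic;

    // Thin seams over external storage, kept deliberately small.
    public interface IFileStorage
    {
        string Read(string path);
        void Write(string path, string contents);
    }

    public interface IRepository<T>
    {
        T GetById(int id);
        void Add(T entity);
        IReadOnlyList<T> GetAll();
    }

The concrete implementations just delegate to System.IO or your data-access library and are covered by a few integration tests; everything that depends on these interfaces can be unit tested against in-memory fakes.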
On the one hand, you could try to friend your assemblies and test the features that are internal.
People face this problem when their classes violate the single responsibility principle (SRP) and dependency injection (DI) is not used.
There is a good rule that classes should be tested via their public methods/properties only. If internal methods are used by other classes, then it is acceptable to test them. Private or protected methods should not be made internal just for the sake of testing.
On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests.
Yes, interfaces are easy to mock, partly because of the limitations of mocking frameworks.
If you can create an instance (a fake or stub) of a type directly, then that dependency does not need to implement an interface.
Sometimes people use interfaces for their domain entities, but I do not support that practice.
To simplify working with fakes, there are two patterns in common use:
Object Mother
Test Data Builder
When I started writing unit tests I started with the 'Object Mother' pattern. Now I am using 'Test Data Builders'.
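For illustration, a minimal Test Data Builder might look like this (the Customer type and its defaults are invented for the example):

    // A Test Data Builder: sensible defaults, with fluent overrides for the detail a test cares about.
    public class CustomerBuilder
    {
        private string _name = "Default Name";
        private string _country = "US";
        private bool _isVip = false;

        public CustomerBuilder Named(string name) { _name = name; return this; }
        public CustomerBuilder From(string country) { _country = country; return this; }
        public CustomerBuilder AsVip() { _isVip = true; return this; }

        public Customer Build() => new Customer(_name, _country, _isVip);
    }

    public class Customer
    {
        public string Name { get; }
        public string Country { get; }
        public bool IsVip { get; }
        public Customer(string name, string country, bool isVip)
        {
            Name = name; Country = country; IsVip = isVip;
        }
    }

    // In a test, only the relevant detail is spelled out:
    // var customer = new CustomerBuilder().From("DE").AsVip().Build();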
There are a lot of good ideas that can help you in the book Working Effectively with Legacy Code by Michael Feathers.
Don't let the hard stuff get in your way. If something is inherently hard to test due to database or file integration, just ignore it for the moment. Most likely you can refactor that hard-to-test stuff into easier-to-test stuff using mocks, dependency injection, and so on. Until then, test the easy stuff and get a good unit test suite built up. When you do refactor the hard-to-test stuff, you will have much higher confidence that you're not breaking anything else. And refactoring to make something more easily testable IS a good reason to refactor.
So I'm trying to convert myself to a more test- and behaviour-driven approach to my development. It's good for me, and I've seen good results in the few projects I've used it for so far.
My current project is a FUSE-based filesystem - I want to add some functionality over basic filesystem access so FUSE seemed like a good fit. All I really need to do is implement a set of functions that fit the appropriate interface, wrap it up appropriately, and go.
However, test first, I remind myself. I've already written a set of cucumber features to lay out the basic expectations of how the overall app should work, so now it's time to get down to testing the innards.
Now, I could just write unit tests for each of the functions I need to write for the interface, and then get to coding the interface - but that doesn't seem overly test-driven to me. Sure the tests exist, but the interface is really what's driving things.
Am I going about this wrong? Or am I expecting too much?
Give me a "what-what" in the comments if you think this should be community wiki - I can't even decide if this has a right answer.
Step 1. What is one thing the interface must do? One thing.
Step 2. How will you prove it does that one thing?
Step 3. Write a test to prove the interface really does that one thing.
Step 4. Run the test -- it will fail. You haven't actually written the interface.
Step 5. Code the interface.
Step 6. The test will pass.
Move on to the next thing the interface must do.
This has little to do with the functions you've already designed. It is totally focused on the externally visible features the interface must have. It may turn out that your functions are the right thing; or it may turn out that you over-engineered them, or under-engineered them. The point is to drive your design from the things a component must do and the tests that prove it does them.
Even if it's only focused on Ruby, The RSpec Book has a good introduction to the BDD cycle.
I want to add some functionality over basic filesystem access so FUSE seemed like a good fit
It is hard to develop a FUSE filesystem. The two main problems are that debugging is VERY hard, and multi-threading. I have also had (and still have) problems with testing my filesystem. Maybe inotify will satisfy your requirements.
When you are starting a new project from scratch using DDD, and you still aren't very comfortable with the domain, TDD comes at a price. While you're still understanding the details of the domain, you figure out a lot of stuff you've done wrong, like a method that makes more sense in some other class, or adding/removing parameters from a constructor, and many other changes.
These changes are VERY frequent, especially in the beginning. Every change usually (and hopefully) requires some changes in the unit tests, which increases the cost of change (which, as I said before, is still very frequent).
My question is: Is TDD worth the cost, even in situations where there is still lots of change happening, but there's hope they will get less frequent (once we have better insight of the domain, for instance) soon?
While you're still understanding the details of the domain, you figure out a lot of stuff you've done wrong, like a method that makes more sense in some other class, or adding/removing parameters from a constructor, and many other changes.
The process of understanding a domain is a design process, and it is helped by TDD, which you must understand is a design technique.
That method which makes more sense in some other class - you come to realize this quickly, more quickly, using TDD, because the first thing you do in writing the method is to write a test for it. When you write that test, you'll see that (for instance) you need to pass in a lot of members from the other class, and that will tell you - before you've even written the method - "Hey, this belongs over there!"
Use TDD to reduce the churn you're describing. It won't eliminate it, but it will reduce it, because you're doing design in the micro, on demand, as needed. It's Just-In-Time Design.
If you're at the point where you are still designing your domain model components at a relatively high level, then you shouldn't be writing unit tests yet. You need to have an understanding of the problem domain, and the responsibilities of your classes before you can start writing any code.
If you are at the point where you are fiddling with constructors and parameters in your classes, then TDD should not be thought of as a cost - it is helping you discover and fix problems in your design. It is an investment in a better object model and less maintenance down the road.
Also, if you're using a refactoring tool like ReSharper or CodeRush, I find that most of the early changes are really not that bad - just minor inconveniences.
I believe so. Part of Test Driven Development is that you're only building what you need to build - only what's required to make the tests pass. So, if you need to change something in the code due to a clearer understanding of the domain, you may have less to change with a TDD approach than without because you haven't spent time building unnecessary things.
There are other advantages as well - you know that the part of the code that you didn't change still works, since it's already got tests. Even if you rewrite 50% of the code, you'll know that your other 50% works. Also, you know that the code is testable - you've been designing it to be tested the whole time. It's often a huge pain to add unit tests to code that wasn't designed to have any - it's usually written in a way that's very difficult to test. Code that's made to be tested from the beginning is much better.
TDD should help in this situation, IMHO. Part of the usefulness of the unit tests is that they verify that you didn't break anything during refactoring. Yes, some changes in code will require changes to the tests, but that should help clarify what you're doing, not make it harder.
As stated above, TDD is more about testing and proving your design and less about unit testing. TDD is NOT unit testing, but unit tests can evolve from the tests created during your TDD.
TDD is absolutely helpful in understanding the domain model. TDD can be used to help define the domain model because it is a design technique, as Carl Manaster stated above. TDD will help you recognize when and where to implement a design pattern, if/when an object is being defined in the wrong domain, and so on.
The higher the rate of change, the more useful TDD becomes. A project with requirements set in stone would gain less from doing TDD than a project with a high rate of change.
As a novice developer who is getting into the rhythm of my first professional project, I'm trying to develop good habits as soon as possible. However, I've found that I often forget to test, put it off, or do a whole bunch of tests at the end of a build instead of one at a time.
My question is: what rhythm do you like to get into when working on large projects, and where does testing fit into it?
Well, if you want to follow the TDD guys, before you start to code ;)
I am very much in the same position as you. I want to get more into testing, but I am currently in a position where we are working to "get the code out" rather than "get the code out right", which scares the crap out of me. So I am slowly trying to integrate testing processes into my development cycle.
Currently, I test as I code, trying to bust the code as I write it. I do find it hard to get into the TDD mindset. It's taking time, but that is the way I want to work.
EDIT:
I thought I should probably expand on this. This is my basic "working process":
1. Plan what I want from the code: a possible object design, whatever.
2. Create my first class, adding a huge comment at the top outlining what my "vision" for the class is.
3. Outline the basic test scenarios. These will basically become the unit tests.
4. Create my first method, also writing a short comment explaining how it is expected to work.
5. Write an automated test to see if it does what I expect.
6. Repeat steps 4-6 for each method (note the automated tests are in a huge list that runs on F5).
7. I then create some beefy tests to emulate the class in the working environment, obviously fixing any issues.
8. If any new bugs come to light after this, I go back and write a new test for the bug, make sure it fails (this also serves as a proof of concept for the bug), then fix it (a sketch of such a failing-first regression test follows this list).
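Here is a hypothetical example of step 8: a regression test written for a reported bug, checked to fail first, then made green by the fix (the PriceCalculator and the bug itself are invented for illustration):

    using Xunit;

    public class PriceCalculatorTests
    {
        // Hypothetical bug report: discounts over 100% produced negative prices.
        // Written first, watched to fail, then the fix below is made and it goes green.
        [Fact]
        public void Discount_IsCappedAtOneHundredPercent()
        {
            var calculator = new PriceCalculator();

            var price = calculator.Apply(discountPercent: 150, basePrice: 10m);

            Assert.Equal(0m, price);
        }
    }

    public class PriceCalculator
    {
        public decimal Apply(decimal discountPercent, decimal basePrice)
        {
            var capped = System.Math.Min(discountPercent, 100m);   // the fix
            return basePrice * (1 - capped / 100m);
        }
    }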
I hope that helps. I'm open to comments on how to improve this; as I said, it is a concern of mine.
Before you check the code in.
First and often.
If I'm creating some new functionality for the system, I'll look to define the interfaces first and then write unit tests for those interfaces. To work out what tests to write, consider the API of the interface and the functionality it provides; get out a pen and paper and think for a while about potential error conditions or ways to prove that it is doing the correct job. If this is too difficult, then it's likely that your API isn't good enough.
As regards the tests, see if you can avoid writing "integration" tests that exercise more than one specific object, and keep them as "unit" tests.
Then create a default implementation of your interface (that does nothing, returns rubbish values but doesn't throw exceptions), plug it into the tests to make sure that the tests fail (this tests that your tests work! :) ). Then write in the functionality and re-run the tests.
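A rough sketch of that flow, with an invented interface for illustration (the names and the xUnit usage are assumptions, not something prescribed by this answer):

    using Xunit;

    // 1. The interface comes first.
    public interface ISlugGenerator
    {
        string ToSlug(string title);
    }

    // 2. The first implementation was a deliberately dumb default: it compiled, returned
    //    rubbish (an empty string) and threw nothing, so the test below failed -- proving
    //    that the test actually tests something.
    // 3. Only then is the real logic filled in, as here, and the tests re-run until green.
    public class SlugGenerator : ISlugGenerator
    {
        public string ToSlug(string title) =>
            title.Trim().ToLowerInvariant().Replace(' ', '-');
    }

    public class SlugGeneratorTests
    {
        private readonly ISlugGenerator _slugs = new SlugGenerator();

        [Fact]
        public void LowercasesAndHyphenates()
        {
            Assert.Equal("hello-world", _slugs.ToSlug("Hello World"));
        }
    }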
This mechanism isn't perfect but will cover a lot of simple coding mistakes and provide you with an opportunity to run your new feature without having to plug it into the entire application.
Following this, you then need to test it in the main application, in combination with the existing features.
This is where testing is more difficult, and if possible it should be partially outsourced to a good QA tester, as they'll have the knack of breaking things. Although it helps if you have these skills too.
Getting testing right is a knack that you have to pick up to be honest. My own experience comes from my own naive deployments and the subsequent bugs that were reported by the users when they used it in anger.
At first, when this happened to me, I found it irritating that the user was intentionally trying to break my software, and I wanted to mark all the "bugs" down as "training issues". However, after reflecting on it, I realised that it is our role (as developers) to make the application as simple and reliable to use as possible, even by idiots. It is our role to empower idiots, and that's why we get paid the dollar. Idiot handling.
To effectively test like this you have to get into the mindset of trying to break everything. Assume the mantle of a user that bashes the buttons and generally attempts to destroy your application in weird and wonderful ways.
Assume that if you don't find flaws, they will be discovered in production, to your company's serious loss of face. Take full responsibility for all of these issues, and curse yourself when a bug you are responsible (or even partly responsible) for is discovered in production.
If you do most of the above then you should start to produce much more robust code, however it is a bit of an art form and requires a lot of experience to be good at.
A good key to remember is
"Test early, test often and test again, when you think you are done"
When to test? When it's important that the code works correctly!
When hacking something together for myself, I test at the end. Bad practice, but these are usually small things that I'll use a few times and that's it.
On a larger project, I write tests before I write a class and I run the tests after every change to that class.
I test constantly. After I finish even a loop inside of a function, I run the program and hit a breakpoint at the top of the loop, then run through it. This is all just to make sure that the process is doing exactly what I want it to.
Then, once a function is finished, you test it in its entirety. You probably want to set a breakpoint just before the function is called, and check in your debugger that it works perfectly.
I guess I would say: "Test often."
I've only recently added unit testing to my regular work flow but I write unit tests:
to express the requirements for each new code module (right after I write the interface but before writing the implementation)
every time I think "it had better ... by the time I'm done"
when something breaks, to quantify the bug and prove that I've fixed it
when I write code which explicitly allocates or deallocates memory -- I loathe hunting for memory leaks...
I run the tests on most builds, and always before running the code.
Start with unit testing. Specifically, check out TDD, Test Driven Development. The concept behind TDD is you write the unit tests first, then write your code. If the test fails, you go back and re-work your code. If it passes, you move on to the next one.
I take a hybrid approach to TDD. I don't like to write tests against nothing, so I usually write some of the code first, then put the unit tests in. It's an iterative process, one which you're never really done with. You change the code, you run your tests. If there's any failures, fix and repeat.
The other sort of testing is integration testing, which comes along later in the process and might typically be done by a QA testing team. In any case, integration testing addresses the need to test the pieces as a whole. It's the working product you're concerned with testing. This one is more difficult to deal with because it usually involves automated testing tools (like Robot, for example).
Also, take a look at a product like CruiseControl.NET for continuous builds. CC.NET is nice because it will run your unit tests with each build, notifying you immediately of any failures.
We don't do TDD here (though some have advocated it), but our rule is that you're supposed to check your unit tests in with your changes. It doesn't always happen, but it's easy to go back and look at a specific changeset and see whether or not tests were written.
I find that if I wait until the end of writing some new feature to test, I forget many of the edge cases that I thought might break the feature. This is ok if you are doing things to learn for yourself, but in a professional environment, I find my flow to be the classic form of: Red, Green, Refactor.
Red: Write your test so that it fails. That way you know the test is asserting against the correct variable.
Green: Make your new test pass in the easiest way possible. If that means hard-coding it, that's ok. This is great for those that just want something to work right away.
Refactor: Now that your test passes, you can go back and change your code with confidence. Your new change broke your test? Great: your change had an implication you didn't realize, and now your test is telling you about it.
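As a small, invented illustration of that rhythm (not tied to any particular project):

    using Xunit;

    public class FizzBuzzTests
    {
        // Red: this fails until FizzBuzz exists and handles 3.
        [Fact]
        public void ThreeReturnsFizz()
        {
            Assert.Equal("Fizz", FizzBuzz.Convert(3));
        }
    }

    public static class FizzBuzz
    {
        // Green: the easiest thing that passed may literally have been `return "Fizz";`.
        // Refactor: once more tests exist (6, 9, ...), the hard-coding is replaced with real logic,
        // and the existing tests guard the change.
        public static string Convert(int number)
            => number % 3 == 0 ? "Fizz" : number.ToString();
    }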
This rhythm has sped up my development over time, because I basically have a compiled history of all the things I thought needed to be checked in order for a feature to work! This, in turn, leads to many other benefits that I won't get into here...
Lots of great answers here!
I try to test at the lowest level that makes sense:
If a single computation or conditional is difficult or complex, add test code while you're writing it and ensure each piece works. Comment out the test code when you're done, but leave it there to document how you tested the algorithm.
Test each function.
Exercise each branch at least once.
Exercise the boundary conditions -- input values at which the code changes its behavior -- to catch "off by one" errors (see the sketch after this list).
Test various combinations of valid and invalid inputs.
Look for situations that might break the code, and test them.
Test each module with the same strategy as above.
Test the body of code as a whole, to ensure the components interact properly. If you've been diligent about lower-level testing, this is essentially a "confidence test" to ensure nothing broke during assembly.
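As an invented illustration of boundary-condition testing (using xUnit-style [Theory] data rows; the 18-to-65 rule is made up for the example):

    using Xunit;

    public class AgeValidatorTests
    {
        // Hypothetical rule: ages 18 through 65 inclusive are accepted.
        // The interesting inputs sit right at the edges, where off-by-one bugs live.
        [Theory]
        [InlineData(17, false)]   // just below the lower boundary
        [InlineData(18, true)]    // lower boundary
        [InlineData(65, true)]    // upper boundary
        [InlineData(66, false)]   // just above the upper boundary
        public void AcceptsOnlyWorkingAges(int age, bool expected)
        {
            Assert.Equal(expected, AgeValidator.IsEligible(age));
        }
    }

    public static class AgeValidator
    {
        public static bool IsEligible(int age) => age >= 18 && age <= 65;
    }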
Since most of my code is for embedded devices, I pay particular attention to robustness, interaction between various threads, tasks, and components, and unexpected use of resources: memory, CPU, filesystem space, etc.
In general, the earlier you encounter an error, the easier it is to isolate, identify, and fix it--and the more time you get to spend creating, rather than chasing your tail.*
* I know, -1 for the gratuitous buffer-pointer reference!