We have been developing code using loose coupling and dependency injection.
A lot of "service" style classes have a constructor and one method that implements an interface. Each individual class is very easy to understand in isolation.
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
It's not easy to jump to collaborators using Eclipse because you have to go via the interfaces. If the interface is Runnable, that is no help in finding which class is actually plugged in. Really it's necessary to go back to the DI container definition and try to figure things out from there.
Here's a line of code from a dependency-injected service class:
// myExpiryCutoffDateService was injected,
Date cutoff = myExpiryCutoffDateService.get();
Coupling here is as loose as can be. The expiry date could be implemented in literally any manner.
Here's what it might look like in a more coupled application.
ExpiryDateService expiryDateService = new ExpiryDateService();
Date cutoff = expiryDateService.getCutoffDate(databaseConnection, paymentInstrument);
From the tightly coupled version, I can infer that the cutoff date is somehow determined from the payment instrument using a database connection.
I'm finding code of the first style harder to understand than code of the second style.
You might argue that when reading this class, I don't need to know how the cutoff date is figured out. That's true, but if I'm narrowing in on a bug or working out where an enhancement needs to slot in, that is useful information to know.
Is anyone else experiencing this problem? What solutions do you have? Is this just something to adjust to? Are there any tools that allow visualisation of the way classes are wired together? Should I make the classes bigger or more coupled?
(I've deliberately left this question container-agnostic, as I'm interested in answers for any container.)
While I don't know how to answer this question in a single paragraph, I attempted to answer it in a blog post instead: http://blog.ploeh.dk/2012/02/02/LooseCouplingAndTheBigPicture.aspx
To summarize, I find that the most important points are:
Understanding a loosely coupled code base requires a different mindset. While it's harder to 'jump to collaborators' it should also be more or less irrelevant.
Loose coupling is all about understanding a part without understanding the whole. You should rarely need to understand it all at the same time.
When zeroing in on a bug, you should rely on stack traces rather than the static structure of the code in order to learn about collaborators.
It's the responsibility of the developers writing the code to make sure that it's easy to understand - it's not the responsibility of the developer reading the code.
Some tools are aware of DI frameworks and know how to resolve dependencies, allowing you to navigate your code in a natural way. But when that isn't available, you just have to use whatever features your IDE provides as best you can.
I use Visual Studio and a custom-made framework, so the problem you describe is my life. In Visual Studio, SHIFT+F12 is my friend. It shows all references to the symbol under the cursor. After a while you get used to the necessarily non-linear navigation through your code, and it becomes second-nature to think in terms of "which class implements this interface" and "where is the injection/configuration site so I can see which class is being used to satisfy this interface dependency".
There are also extensions available for VS which provide UI enhancements to help with this, such as Productivity Power Tools. For instance, you can hover over an interface, an info box will pop up, and you can click "Implemented By" to see all the classes in your solution implementing that interface. You can double-click to jump to the definition of any of those classes. (I still usually just use SHIFT+F12 anyway.)
I just had an internal discussion about this, and ended up writing this piece, which I think is too good not to share. I'm copying it here (almost) unedited, but even though it's part of a bigger internal discussion, I think most of it can stand alone.
The discussion is about introduction of a custom interface called IPurchaseReceiptService, and whether or not it should be replaced with use of IObserver<T>.
Well, I can't say that I have strong data points about any of this - it's just some theories that I'm pursuing... However, my theory about cognitive overhead at the moment goes something like this: consider your special IPurchaseReceiptService:
public interface IPurchaseReceiptService
{
    void SendReceipt(string transactionId, string userGuid);
}
If we keep it as the Header Interface it currently is, it only has that single SendReceipt method. That's cool.
What's not so cool is that you had to come up with a name for the interface, and another name for the method. There's a bit of overlap between the two: the word Receipt appears twice. IME, sometimes that overlap can be even more pronounced.
Furthermore, the name of the interface is IPurchaseReceiptService, which isn't particularly helpful either. The Service suffix is essentially the new Manager, and is, IMO, a design smell.
Additionally, not only did you have to name the interface and the method, but you also have to name the variable when you use it:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IPurchaseReceiptService purchaseReceiptService,
    EvoCipher cipher
)
At this point, you've essentially said the same thing thrice. This is, according to my theory, cognitive overhead, and a smell that the design could and should be simpler.
Now, contrast this to use of a well-known interface like IObserver<T>:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IObserver<TransactionInfo> purchaseReceiptService,
    EvoCipher cipher
)
This enables you to get rid of the bureaucracy and reduce the design to the heart of the matter. You still have intention-revealing naming - you only shift the design from a Type Name Role Hint to an Argument Name Role Hint.
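For illustration only, here is a minimal sketch of what a receipt-sending implementation plugged in as IObserver<TransactionInfo> might look like. The TransactionInfo members and the IMailClient dependency are invented for the example, not part of the original discussion:

using System;

// Assumed shape of the event data and of a hypothetical mail dependency.
public class TransactionInfo
{
    public string TransactionId { get; set; }
    public string UserGuid { get; set; }
}

public interface IMailClient
{
    void SendReceipt(string transactionId, string userGuid);
}

// Invented sketch: a receipt sender plugged in behind IObserver<TransactionInfo>.
public class PurchaseReceiptSender : IObserver<TransactionInfo>
{
    private readonly IMailClient mailClient;

    public PurchaseReceiptSender(IMailClient mailClient)
    {
        this.mailClient = mailClient;
    }

    public void OnNext(TransactionInfo value)
    {
        // React to the observed transaction by sending the receipt.
        this.mailClient.SendReceipt(value.TransactionId, value.UserGuid);
    }

    public void OnError(Exception error)
    {
        // Log or swallow, depending on the error-handling policy.
    }

    public void OnCompleted()
    {
        // Nothing to clean up in this sketch.
    }
}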
When it comes to the discussion about 'disconnectedness', I'm under no illusion that use of IObserver<T> will magically make this problem go away, but I have another theory about this.
My theory is that the reason many programmers find programming to interfaces so difficult is exactly because they are used to Visual Studio's Go to definition feature (incidentally, this is yet another example of how tooling rots the mind). These programmers are perpetually in a state of mind where they need to know what's 'on the other side of an interface'. Why is this? Could it be because the abstraction is poor?
This ties back to the RAP, because if you confirm programmers' belief that there's a single, particular implementation behind every interface, it's no wonder they think that interfaces are only in the way.
However, if you apply the RAP, I hope that slowly, programmers will learn that behind a particular interface, there may be any implementation of that interface, and their client code must be able to handle any implementation of that interface without changing the correctness of the system. If this theory holds, we've just introduced the Liskov Substitution Principle into a code base without scaring anyone with high-brow concepts they don't understand :)
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
This is not accurate. For each class, you know exactly what kinds of objects it depends on in order to provide its functionality at runtime, because you know which objects are expected to be injected.
What you don't know is the actual concrete class that will be injected at runtime, i.e. which implementation of the interface or base class your class depends on will be used.
So if you want to see which class is actually injected, you just have to look at the configuration for that class to see the concrete classes that are wired in.
You could also use facilities provided by your IDE.
Since you refer to Eclipse: Spring has a plugin for it, which also has a visual tab displaying the beans you configure. Did you check that? Isn't it what you are looking for?
Also check out the same discussion in the Spring Forum.
UPDATE:
Reading your question again, I don't think that this is a real question.
I mean this in the following manner.
Like all things, loose coupling is not a panacea; it has its own disadvantages.
Most people tend to focus on the benefits, but like any solution it has drawbacks.
What you do in your question is describe one of its main disadvantages: it is indeed not easy to see the big picture, since everything is configurable and pluggable.
There are other drawbacks one could complain about as well, e.g. that it is slower than a tightly coupled application, and that would also be true.
In any case, to reiterate: what you describe in your question is not a problem you have stumbled upon and for which you can find a standard solution (or any solution, for that matter).
It is one of the drawbacks of loose coupling, and you have to decide whether this cost is higher than what you actually gain from it, as in any design trade-off.
It is like asking:
Hey, I am using this pattern named Singleton. It works great, but I can't create new objects! How can I get around this problem, guys????
Well, you can't; but if you need to, perhaps Singleton is not for you...
One thing that helped me is placing multiple closely related classes in the same file. I know this goes against the general advice (of having one class per file), and I generally agree with that advice, but in my application architecture it works very well. Below I will try to explain the case in which it does.
The architecture of my business layer is designed around the concept of business commands. Command classes (simple DTO with only data and no behavior) are defined and for each command there is a 'command handler' that contains the business logic to execute this command. Each command handler implements the generic ICommandHandler<TCommand> interface, where TCommand is the actual business command.
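For reference, the handler abstraction described above is tiny; a sketch of how it might be declared (the exact definition in the real code base may differ slightly):

// Sketch of the generic command handler abstraction described above.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}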
Consumers take a dependency on the ICommandHandler<TCommand> and create new command instances and use the injected handler to execute those commands. This looks like this:
public class Consumer
{
    private ICommandHandler<CustomerMovedCommand> handler;

    public Consumer(ICommandHandler<CustomerMovedCommand> h)
    {
        this.handler = h;
    }

    public void MoveCustomer(int customerId, Address address)
    {
        var command = new CustomerMovedCommand();
        command.CustomerId = customerId;
        command.NewAddress = address;

        this.handler.Handle(command);
    }
}
Now consumers only depend on a specific ICommandHandler<TCommand> and have no notion of the actual implementation (as it should be). However, although the Consumer should know nothing about the implementation, during development I (as a developer) am very much interested in the actual business logic that is executed, simply because development is done in vertical slices; meaning that I'm often working on both the UI and business logic of a simple feature. This means I'm often switching between business logic and UI logic.
So what I did was put the command (in this example the CustomerMovedCommand and the implementation of ICommandHandler<CustomerMovedCommand>) in the same file, with the command first. Because the command itself is concrete (since it's a DTO there is no reason to abstract it), jumping to the class is easy (F12 in Visual Studio). By placing the handler next to the command, jumping to the command also means jumping to the business logic.
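To make that concrete, here is a simplified sketch of how such a file might be laid out, with the command first and its handler directly below it; the handler body is just a placeholder for illustration, not my actual business logic:

// CustomerMovedCommand.cs: the command (a plain DTO) and its handler in one file.
public class CustomerMovedCommand
{
    public int CustomerId { get; set; }
    public Address NewAddress { get; set; }
}

public class CustomerMovedCommandHandler : ICommandHandler<CustomerMovedCommand>
{
    public void Handle(CustomerMovedCommand command)
    {
        // The business logic for moving the customer goes here, e.g. loading the
        // customer, updating its address with command.NewAddress, and saving it.
    }
}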
Of course this only works when it is okay for the command and handler to be living in the same assembly. When your commands need to be deployed separately (for instance when reusing them in a client/server scenario), this will not work.
Of course this is just 45% of my business layer. Another big piece, however (say another 45%), is the queries, and they are designed similarly, using a query class and a query handler. These two classes are also placed in the same file, which -again- allows me to navigate quickly to the business logic.
Because the commands and queries are about 90% of my business layer, I can in most cases move very quickly from presentation layer to business layer and even navigate easily within the business layer.
I must say these are the only two cases in which I place multiple classes in the same file, but it makes navigation a lot easier.
If you want to learn more about how I designed this, I've written two articles about this:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
In my opinion, loosely coupled code can help you much but I agree with you about the readability of it.
The real problem is that the names of methods should also convey valuable information.
That is the Intention-Revealing Interface principle, as stated by Domain-Driven Design (http://domaindrivendesign.org/node/113).
You could rename the get method:
// intention revealing name
Date cutoff = myExpiryCutoffDateService.calculateFromPayment();
I suggest you read thoroughly about DDD principles; your code could become much more readable and thus more manageable.
I have found The Brain to be useful in development as a node-mapping tool. If you write some scripts to parse your source into XML that The Brain accepts, you can browse your system easily.
The secret sauce is to put guids in your code comments on each element you want to track, then the nodes in The Brain can be clicked to take you to that guid in your IDE.
Depending on how many developers are working on the project and whether you want to reuse parts of it in different projects, loose coupling can help you a lot. If your team is big and the project needs to span several years, having loose coupling helps, as work can be assigned to different groups of developers more easily. I use Spring/Java with lots of DI, and Eclipse offers some graphs to display dependencies. Using F3 to open the class under the cursor helps a lot. As stated in previous posts, knowing the shortcuts for your tool will help you.
One other thing to consider is creating custom classes or wrappers as they are more easily tracked than common classes that you already have (like Date).
If you use several modules or layers in the application, it can be a challenge to understand exactly what the project flow is, so you might need to create or use a custom tool to see how everything is related. I have created such a tool for myself, and it helped me understand the project structure more easily.
Documentation!
Yes, you have named the major drawback of loosely coupled code. Even though you have probably already realized that in the end it will pay off, it's true that it will always take longer to find "where" to make your modifications, and you might have to open a few files before finding "the right spot"...
But that's where something really important comes in: documentation. It's weird that no answer explicitly mentioned it; it's a MAJOR requirement in all large-scale development.
API Documentation
An API doc with a good search feature, in which each file and (almost) each method has a clear description.
"Big picture" documentation
I think it's good to have a wiki that explains the big picture. Bob has made a proxy system? How does it work? Does it handle authentication? What kinds of components will use it? Not a whole tutorial, but just a place where you can read for 5 minutes and figure out what components are involved and how they are linked together.
I do agree with all the points of Mark Seemann's answer, but when you get into a project for the first time(s), even if you understand the principles behind decoupling well, you'll either need a lot of guessing or some sort of help to figure out where to implement a specific feature you want to develop.
... Again: API docs and a little developer wiki.
I am astounded that nobody has written about the testability (in terms of unit testing, of course) of loosely coupled code and the non-testability (in the same terms) of a tightly coupled design! It is a no-brainer which design you should choose. Today, with all the mock and coverage frameworks, it is obvious, well, at least for me.
Unless you don't unit test your code, or you think you do but in fact you don't...
Testing in isolation can barely be achieved with tight coupling.
You think you have to navigate through all the dependencies from your IDE? Forget about it! It is the same situation as with compilation and runtime. Hardly any bug can be found during compilation; you cannot be sure whether something works unless you test it, which means executing it. Want to know what is behind the interface? Put a breakpoint and run the goddamn application.
Amen.
...updated after the comment...
Not sure if it is going to serve you but in Eclipse there is something called hierarchy view. It shows you all the implementations of an interface within your project (not sure if the workspace as well). You can just navigate to the interface and press F4. Then it will show you all the concrete and abstract classes implementing the interface.
One of the most frequent arguments I hear for not adhering to the SOLID principles in object-oriented design is YAGNI (although the arguer often doesn't call it that):
"It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity)."
"Yes, I can put all my business logic directly into the GUI code it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to…' argument doesn't count."
What would be your most convincing arguments against such practice? How can I really show that this is an expensive practice, especially to somebody who doesn't have much experience in software development?
Design is the management and balance of trade-offs. YAGNI and SOLID aren't conflicting: the former says when to add features, the latter says how, but they both guide the design process. My responses, below, to each of your specific quotes use principles from both YAGNI and SOLID.
It is three times as difficult to build reusable components as single use components. A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
— Robert Glass' Rules of Three, Facts and Fallacies of Software Engineering
Refactoring into reusable components has the key element of first finding the same purpose in multiple places, and then moving it. In this context, YAGNI applies by inlining that purpose where needed, without worrying about possible duplication, instead of adding generic or reusable features (classes and functions).
The best way, in the initial design, to show when YAGNI doesn't apply is to identify concrete requirements. In other words, do some refactoring before writing code to show that duplication is not merely possible, but already exists: this justifies the extra effort.
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI, and it is highly unlikely that significant new requirements will ever come in.
Is it really the only user interface? Is there a background batch mode planned? Will there ever be a web interface?
What is your testing plan, and will you be testing back-end functionality without a GUI? What will make the GUI easy for you to test, since you usually don't want to be testing outside code (such as platform-generic GUI controls) but instead concentrating on your project?
It is OK that I put both feature X and feature Y into the same class. It is so simple, why bother adding a new class (i.e. complexity)?
Can you point out a common mistake that needs to be avoided? Some things are simple enough, such as squaring a number (x * x vs squared(x)) for an overly-simple example, but if you can point out a concrete mistake someone made—especially in your project or by those on your team—you can show how a common class or function will avoid that in the future.
If, in the unlikely case of new requirements, my code gets too cluttered I still can refactor for the new requirement. So your "What if you later need to..." argument doesn't count.
The problem here is the assumption of "unlikely". Do you agree it's unlikely? If so, you're in agreement with this person. If not, your idea of the design doesn't agree with this person's—resolving that discrepancy will solve the problem, or at least show you where to go next. :)
I like to think about YAGNI in terms of "half, not half-assed", to borrow the phrase from 37signals (https://gettingreal.37signals.com/ch05_Half_Not_Half_Assed.php). It's about limiting your scope so you can focus on doing the most important things well. It's not an excuse to get sloppy.
Business logic in the GUI feels half-assed to me. Unless your system is trivial, I'd be surprised if your business logic and GUI haven't already changed independently, several times over. So you should follow the SRP ("S" in SOLID) and refactor - YAGNI doesn't apply, because you already need it.
The argument about YAGNI and unnecessary complexity absolutely applies if you're doing extra work today to accommodate hypothetical future requirements. When those "what if later we need to..." scenarios fail to materialize, you're stuck with higher maintenance costs from the abstractions that now get in the way of the changes you actually have. In this case, we're talking about simplifying the design by limiting scope -- doing half, rather than being half-assed.
It sounds like you're arguing with a brick wall. I'm a big fan of YAGNI, but at the same time, I also expect that my code will always be used in at least two places: the application, and the tests. That's why things like business logic in UI code don't work; you can't test business logic separate of UI code in that circumstance.
However, from the responses you're describing, it sounds like the person is simply uninterested in doing better work. At that point, no principle is going to help them; they only want to do the minimum possible. I'd go so far as to say that it's not YAGNI driving their actions, but rather laziness, and you alone aren't going to beat laziness (almost nothing can, except a threatening manager or the loss of a job).
There is no answer, or rather, there is an answer neither you nor your interlocutor might like: both YAGNI and SOLID can be wrong approaches.
Attempting to go for SOLID with an inexperienced team, or a team with tight delivery objectives, pretty much guarantees you will end up with an expensive, over-engineered bunch of code... that will NOT be SOLID, just over-engineered (aka welcome to the real world).
Attempting to go YAGNI for a long-term project and hoping you can refactor later only works to an extent (aka welcome to the real world). YAGNI excels at proofs of concept and demonstrators: getting the market/contract and then being able to invest in something more SOLID.
You need both, at different points in time.
The correct application of these principles is often not very obvious and depends very much on experience, which is hard to obtain if you haven't done it yourself. Every programmer should have experienced the consequences of doing it wrong, but of course it should always be "not my" project.
Explain to them what the problem is; if they don't listen and you're not in a position to make them listen, let them make the mistakes. If you're too often the one having to fix the problems, you should polish your resume.
In my experience, it's always a judgment call. Yes, you should not worry about every little detail of your implementation, and sometimes sticking a method into an existing class is an acceptable, though ugly solution.
It's true that you can refactor later. The important point is to actually do the refactoring. So I believe the real problem is not the occasional design compromise, but putting off refactoring once it becomes clear there's a problem. Actually going through with it is the hard part (just like with many things in life... ).
As to your individual points:
It is OK that I put both feature X and feature Y into the same class. It is so simple, why bother adding a new class (i.e. complexity)?
I would point out that having everything in one class is more complex (because the relationship between the methods is more intimate, and harder to understand). Having many small classes is not complex. If you feel the list is getting too long, just organize them into packages, and you'll be fine :-). Personally, I have found that just splitting a class into two or three classes can help a lot with readability, without any further change.
Don't be afraid of small classes, they don't bite ;-).
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI, and it is highly unlikely that significant new requirements will ever come in.
If someone can say "it is highly unlikely that signifcant new requirements will ever come in." with a straight face, I believe that person really, really needs a reality check. Be blunt, but gentle...
If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to ...' argument doesn't count.
That has some merit, but only if they actually do refactor later. So accept it, and hold them to their promise :-).
SOLID principles allow software to adapt to change - in both requirements and technical changes (new components, etc.). Two of your arguments are for unchanging requirements:
"it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements"
Could this really be true?
There is no substitute for experience when it comes to the various expenses of development. For many practitioners I think doing things in the lousy, difficult to maintain way has never resulted in problems for them (hey! job security). Over the long term of a product I think these expenses become clear, but doing something about them ahead of time is someone else's job.
There are some other great answers here.
Understandable, flexible and capable of fixes and improvements are always things that you are going to need. Indeed, YAGNI assumes that you can come back and add new features when they prove necessary with relative ease, because nobody is going to do something crazy like bunging irrelevant functionality in a class (YAGNI in that class!) or pushing business logic to UI logic.
There can be times when what seems crazy now was reasonable in the past - sometimes the boundary lines of UI vs business, or between different sets of responsibilities that should be in different classes, aren't that clear, or even move. There can be times when 3 hours of work is absolutely necessary in 2 hours' time. There are times when people just don't make the right call. For those reasons occasional breaks in this regard will happen, but they are going to get in the way of using the YAGNI principle, not be a cause of it.
Quality unit tests, and I mean unit tests not integration tests, need code that adheres to SOLID. Not necessarily 100%, in fact rarely so, but in your example stuffing two features into one class will make unit testing harder, breaks the single responsibility principle, and makes code maintenance by team newbies much harder (as it is much harder to comprehend).
With the unit tests (assuming good code coverage) you'll be able to refactor feature 1 safe and secure that you won't break feature 2, but without unit tests and with both features in the same class (simply being lazy, as in your example) refactoring is risky at best, disastrous at worst.
Bottom line: follow the KIS principle (keep it simple), or for the intellectual, the KISS principle (keep it simple, stupid). Take each case on merit; there's no global answer, but always consider whether other coders will need to read/maintain the code in the future, and the benefit of unit tests in each scenario.
tldr;
SOLID assumes you understand (somewhat, at least) the future changes to the code, w.r.t. SRP. I would say that is being optimistic about the capability to predict.
YAGNI, on the other hand, assumes that most of the time you don't know the future direction of change, which is pessimistic about the capability to predict.
Hence it follows that SOLID/SRP asks you to form classes for the code such that each will have a single reason for change, e.g. a small GUI change or a service-call change.
YAGNI says (if you want to force-apply it in this scenario) that since you don't know WHAT is going to change, and if a GUI change will cause a GUI + service-call change (and similarly a backend change will cause a GUI + service-call change), just put all that code in a single class.
Long answer :
Read the book 'Agile Software Development, Principles, Patterns, and Practices'
I am putting short excerpt from it about SOLID/SRP :
"If,[...]the application is not changing in ways that cause the two responsibilities to change at different times, there is no need to separate them. Indeed, separating them would smell of needless complexity.
There is a corrolary here. An axis of change is an axis of change only if the changes occur. It is not wise to apply SRP—or any other principle, for that matter—if there is no symptom."
Lately, I've been making a lot of use of the Strategy pattern along with the Factory pattern. And I really mean a lot. I have a lot of "algorithms" for everything and factories that retrieve algorithms based on parameters.
Even though the code seems very extensible, and it is, having N factories seems a bit of an abuse.
I know this is pretty subjective, and we're talking without seeing code, but is this acceptable in real-world code? Would you change something?
OK, ask yourself a question: does/will this algorithm implementation ever change? If not, then remove the strategy.
I am maintenance.
I once was forced (by my pattern-lovin' boss) to write a set of 16 "buffer interpreter Tuxedo services" using an AbstractFactory and a double DAO pattern in C++ (no reflection, no code gen). All up it was something like 20,000 lines of the nastiest code I've ever seen (not least because I didn't really know C++ when I started), and it took about three months.
Since my old boss has moved on, I've rewritten them using good ol' "straight up and down" procedural-style C++, with a couple of funky macros... each service is like 60 lines of code, times 16... all up less than 1000 lines of really SIMPLE code; so simple that even I can follow it.
Cheers. Keith.
Whenever I'm implementing code in this fashion, some questions I ask are:
What components do I need to substitute to test?
What components will I expect users/admins to disable or substitute (e.g. via Spring configs or similar)?
What components do I expect or suspect will not be required in the future due to (possibly) changing requirements?
This all drives how I construct object or components (via factories) and how I implement algorithms. The above is vague, but (of course) the requirements can be similarly difficult to pin down. Without seeing your implementation and your requirements, I can't comment further, but the above should act as some guideline to help you determine whether what you've done is overkill.
If you're using the same design pattern all over the place, perhaps you should either switch to a language that has better support for what you're trying to do or rethink your code to be more idiomatic in your language of choice. After all, that's why we have more than one programming language.
It would depend on the kind of software I'm working on.
Maintenance asks for simple code, and factories are NOT simple code.
But extensibility sometimes asks for factories...
So you have to take both into consideration.
Just keep in mind that most of the time you will have to maintain a source file MANY times a year, and you will NOT have to extend it.
IMO, patterns should only be used when absolutely needed. If you think they might be handy in two years, you are better off using them... in two years.
How complex is the work the factory is handling? Does object creation really need to be abstracted to a different class? A variation of the factory method is having a simple, in-class factory. This really works best if any dependencies have already been injected.
For instance,
public class Customer
{
    public static Customer CreateNewCustomer()
    {
        // handle minimally complex create logic here
        return new Customer();
    }
}
As far as Strategy overuse... again, as @RichardOD explained: will the algorithm ever really change?
Always keep in mind the YAGNI principle. You Aren't Gonna Need It.
Can't you make an AbstractFactory instead of different standalone factories?
An AlgorithmFactory then creates the algorithms you need based on the concrete factory.
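For illustration only, a minimal sketch of what that consolidation might look like; all of the type names and the two algorithm families here are invented, since the original code isn't shown:

// Invented sketch: one abstract factory instead of N standalone factories.
public interface IPricingAlgorithm
{
    decimal Price(decimal baseAmount);
}

public interface IDiscountAlgorithm
{
    decimal Discount(decimal amount);
}

// The abstract factory groups the creation of related algorithms in one place.
public interface IAlgorithmFactory
{
    IPricingAlgorithm CreatePricingAlgorithm();
    IDiscountAlgorithm CreateDiscountAlgorithm();
}

// One concrete factory per consistent family of strategies.
public class RetailAlgorithmFactory : IAlgorithmFactory
{
    public IPricingAlgorithm CreatePricingAlgorithm() { return new FlatPricing(); }
    public IDiscountAlgorithm CreateDiscountAlgorithm() { return new SeasonalDiscount(); }
}

public class FlatPricing : IPricingAlgorithm
{
    public decimal Price(decimal baseAmount) { return baseAmount; } // no markup
}

public class SeasonalDiscount : IDiscountAlgorithm
{
    public decimal Discount(decimal amount) { return amount * 0.10m; } // 10% off
}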
When you are starting a new project from scratch using DDD, and you are still not very comfortable with the domain, TDD comes at a price. While you're still understanding the details of the domain, you figure out a lot of stuff you've done wrong, like a method that makes more sense in some other class, or adding/removing parameters from a constructor, and many other changes.
These changes are VERY frequent, especially in the beginning. Every change usually (and hopefully) requires some changes in the unit tests, which increases the cost of change (which, as I said before, is still very frequent).
My question is: Is TDD worth the cost, even in situations where there is still a lot of change happening, but there's hope the changes will become less frequent soon (once we have better insight into the domain, for instance)?
While you're still understanding the details of the domain, you figure out a lot of stuff you've done wrong, like a method that makes more sense in some other class, or adding/removing parameters from a constructor, and many other changes.
The process of understanding a domain is a design process, and it is helped by TDD, which you must understand is a design technique.
That method which makes more sense in some other class - you come to realize this quickly, more quickly, using TDD, because the first thing you do in writing the method is to write a test for it. When you write that test, you'll see that (for instance) you need to pass in a lot of members from the other class, and that will tell you - before you've even written the method - "Hey, this belongs over there!"
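As a purely invented illustration of that feedback (none of these types come from the question): the test below has to pull three members out of Order just to call the method, which is the hint, before the method is even written, that the calculation probably belongs on Order itself.

using NUnit.Framework;

public class ShippingCalculatorTests
{
    [Test]
    public void CalculateShipping_ForExpressOrder_ReturnsExpressRate()
    {
        var order = new Order { WeightInKg = 2.5m, Destination = "DK", IsExpress = true };
        var calculator = new ShippingCalculator();

        // Handing the calculator three members of Order is the design smell:
        // the test is telling us this behaviour wants to live on Order.
        decimal cost = calculator.CalculateShipping(
            order.WeightInKg, order.Destination, order.IsExpress);

        Assert.AreEqual(50m, cost);
    }
}

public class Order
{
    public decimal WeightInKg { get; set; }
    public string Destination { get; set; }
    public bool IsExpress { get; set; }
}

public class ShippingCalculator
{
    public decimal CalculateShipping(decimal weightInKg, string destination, bool isExpress)
    {
        // Placeholder logic, just enough to make the sketch complete.
        return isExpress ? 50m : 25m;
    }
}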
Use TDD to reduce the churn you're describing. It won't eliminate it, but it will reduce it, because you're doing design in the micro, on demand, as needed. It's Just-In-Time Design.
If you're at the point where you are still designing your domain model components at a relatively high level, then you shouldn't be writing unit tests yet. You need to have an understanding of the problem domain, and the responsibilities of your classes before you can start writing any code.
If you are at the point where you are fiddling with constructors and parameters in your classes, then TDD should not be thought of as a cost - it is helping you discover and fix problems in your design. It is an investment in a better object model and less maintenance down the road.
Also, if you're using a refactoring tool like ReSharper or CodeRush, I find that most of the early changes are really not that bad - just minor inconveniences.
I believe so. Part of Test Driven Development is that you're only building what you need to build - only what's required to make the tests pass. So, if you need to change something in the code due to a clearer understanding of the domain, you may have less to change with a TDD approach than without because you haven't spent time building unnecessary things.
There are other advantages as well - you know that the part of the code that you didn't change still works, since it's already got tests. Even if you rewrite 50% of the code, you'll know that your other 50% works. Also, you know that the code is testable - you've been designing it to be tested the whole time. It's often a huge pain to add unit tests to code that wasn't designed to have any - it's usually written in a way that's very difficult to test. Code that's made to be tested from the beginning is much better.
TDD should help in this situation, IMHO. Part of the usefulness of the unit tests is that they verify that you didn't break anything during refactoring. Yes, some changes in code will require changes to the tests, but that should help clarify what you're doing, not make it harder.
As stated above TDD is more about testing and proving your design and less about Unit Testing. TDD is NOT Unit Testing, but Unit Tests can evolve from tests created in your TDD.
TDD is absolutely helpful in understanding the domain model. TDD can be used to help define the domain model because it is a design technique, as Carl Manaster stated above. TDD will help you recognize when and where to implement a design pattern, if/when an object is being defined in the wrong domain, etc.
The higher the rate of change, the more useful TDD becomes. A project with requirements set in stone would get less from doing TDD than a project with a high rate of change.
I tend to do a lot of projects on short deadlines and with lots of code that will never be used again, so there's always pressure/temptation to cut corners. One rule I always stick to is encapsulation/loose coupling, so I have lots of small classes rather than one giant God class. But what else should I never compromise on?
Update - thanks for the great responses. Lots of people have suggested unit testing, but I don't think that's really appropriate to the kind of UI coding I do. Usability / user acceptance testing seems much more important. To reiterate, I'm talking about the BARE MINIMUM of coding standards for impossible-deadline projects.
Not OOP, but a practice that helps in both the short and long run is DRY, Don't Repeat Yourself. Don't use copy/paste inheritance.
Not an OOP practice, but common sense ;-).
If you are in a hurry and have to write a hack, always add a comment with the reasons, so you can trace it back and implement a good solution later.
If you never get the time to come back, you always have the comment, so you know why that solution was chosen at the time.
Use Source control.
No matter how long it takes to set up (seconds..), it will always make your life easier! (still it's not OOP related).
Naming. Under pressure you'll write horrible code that you won't have time to document or even comment. Naming variables, methods and classes as explicitly as possible takes almost no additional time and will make the mess readable when you must fix it. From an OOP point of view, using nouns for classes and verbs for methods naturally helps encapsulation and modularity.
Unit tests - helps you sleep at night :-)
This is rather obvious (I hope), but at the very least I always make sure my public interface is as correct as possible. The internals of a class can always be refactored later on.
No public classes with mutable public variables (struct-like).
Before you know it, you refer to this public variable all over your code, and the day you decide this field is a computed one and must have some logic in it... the refactoring gets messy.
If that day is before your release date, it gets messier.
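A small illustration of the point, with invented names: start with a property instead of a public field, and the day the value becomes computed or validated, the call sites don't have to change.

// Invented example: a public mutable field invites direct writes everywhere...
public class InvoiceWithField
{
    public decimal Total; // every caller now assigns this directly
}

// ...whereas a property keeps the same call sites when logic is added later.
public class Invoice
{
    private decimal total;

    public decimal Total
    {
        get { return this.total; }
        set
        {
            if (value < 0)
                throw new System.ArgumentOutOfRangeException("value");
            this.total = value;
        }
    }
}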
Think about the people (may even be your future self) who have to read and understand the code at some point.
Application of the single responsibility principle. Effectively applying this principle generates a lot of positive externalities.
Like everyone else: not so much OOP practices as coding practices that apply to OOP.
Unit test, unit test, unit test. Defined unit tests have a habit of keeping people on task and not "wandering" aimlessly between objects.
Define and document all hierarchical information (namespaces, packages, folder structures, etc.) prior to writing production code. This helps to flesh out object relations and expose flaws in assumptions related to relationships of objects.
Define and document all applicable interfaces prior to writing production code. If done by a lead or an architect, this practice can additionally help keep more junior-level developers on task.
There are probably countless other "shoulds", but if I had to pick my top three, that would be the list.
Edit in response to comment:
This is precisely why you need to do these things up front. All of these sorts of practices make continued maintenance easier. As you assume more risk in the kickoff of a project, the more likely it is that you will spend more and more time maintaining the code. Granted, there is a larger upfront cost, but building on a solid foundation pays for itself. Is your obstacle lack of time (i.e. having to maintain other applications) or a decision from higher up? I have had to fight both of those fronts to be able to adopt these kinds of practices, and it isn't a pleasant situation to be in.
Of course everything should be Unit tested, well designed, commented, checked into source control and free of bugs. But life is not like that.
My personal ranking is this:
Use source control and actually write commit comments. This way you have a tiny bit of documentation should you ever wonder "what the heck did I think when I wrote this?"
Write clean code or document. Clean, well-written code should need little documentation, as its meaning can be grasped by reading it. Hacks are a lot different: write why you did it, what it does, and what you'd like to do if you had the time/knowledge/motivation/... to do it right.
Unit test. Yes, it's down at number three. Not because it's unimportant, but because it's useless if you don't have the other two at least halfway complete. Writing unit tests is another level of documentation of what your code should be doing (among other things).
Refactor before you add something. This might sound like a typical "but we don't have time for it" point, but as with many of those points it usually saves more time than it costs, at least if you have some experience with it.
I'm aware that much of this has already been mentioned, but since it's a rather subjective matter, I wanted to add my ranking.
[insert boilerplate not-OOP specific caveat here]
Separation of concerns, unit tests, and that feeling that if something is too complex it's probably not conceptualised quite right yet.
UML sketching: this has clarified things and saved untold wasted effort so many times. Pictures are great, aren't they? :)
Really thinking about is-a's and has-a's. Getting this right first time is so important.
No matter how fast a company wants it, I pretty much always try to write code to the best of my ability.
I don't find it takes any longer and usually saves a lot of time, even in the short-term.
I can't remember ever writing code and never looking at it again; I always make a few passes over it to test and debug it, and even in those few passes, practices like refactoring to keep my code DRY, documentation (to some degree), separation of concerns, and cohesion all seem to save time.
This includes creating many more small classes than most people (one concern per class, please) and often extracting initialization data into external files (or arrays) and writing little parsers for that data... sometimes even writing little GUIs instead of editing data by hand.
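As an illustration of the "little parsers for that data" habit, here is a tiny sketch of the kind of thing I mean; the key=value file format and the class name are made up for the example.

using System;
using System.Collections.Generic;
using System.IO;

// Invented sketch: read simple "key=value" initialization data from a text file
// so it can be edited without recompiling.
public static class SettingsParser
{
    public static Dictionary<string, string> Parse(string path)
    {
        var settings = new Dictionary<string, string>();

        foreach (var rawLine in File.ReadAllLines(path))
        {
            var line = rawLine.Trim();
            if (line.Length == 0 || line.StartsWith("#"))
                continue; // skip blank lines and comments

            var separatorIndex = line.IndexOf('=');
            if (separatorIndex < 0)
                throw new FormatException("Expected 'key=value' but got: " + line);

            var key = line.Substring(0, separatorIndex).Trim();
            var value = line.Substring(separatorIndex + 1).Trim();
            settings[key] = value;
        }

        return settings;
    }
}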
Coding itself is pretty quick and easy, debugging crap someone wrote when they were "Under pressure" is what takes all the time!
At almost a year into my current project I finally set up an automated build that pushes any new commits to the test server, and man, I wish I had done that on day one. The biggest mistake I made early-on was going dark. With every feature, enhancement, bug-fix etc, I had a bad case of the "just one mores" before I would let anyone see the product, and it literally spiraled into a six month cycle. If every reasonable change had been automatically pushed out it would have been harder for me to hide, and I would have been more on-track with regard to the stakeholders' involvement.
Go back to code you wrote a few days/weeks ago and spend 20 minutes reviewing your own code. With the passage of time, you will be able to determine whether your "off-the-cuff" code is organized well enough for future maintenance efforts. While you're in there, look for refactoring and renaming opportunities.
I sometimes find that the name I chose for a function at the outset doesn't perfectly fit the function in its final form. With refactoring tools, you can easily change the name early before it goes into widespread use.
Just like everybody else has suggested these recommendations aren't specific to OOP:
Ensure that you comment your code and use sensibly named variables. If you ever have to look back upon the quick and dirty code you've written, you should be able to understand it easily. A general rule that I follow is; if you deleted all of the code and only had the comments left, you should still be able to understand the program flow.
Hacks usually tend to be convoluted and un-intuitive, so some good commenting is essential.
I'd also recommend that if you usually have to work to tight deadlines, get yourself a code library built up based upon your most common tasks. This will allow you to "join the dots" rather than reinvent the wheel each time you have a project.
Regards,
Docta
An actual OOP practice I always make time for is the Single Responsibility Principle, because it becomes so much harder to properly refactor the code later on when the project is "live".
By sticking to this principle I find that the code I write is easily re-used, replaced or rewritten if it fails to match the functional or non-functional requirements. When you end up with classes that have multiple responsibilities, some of them may fulfill the requirements, some may not, and the whole may be entirely unclear.
These kinds of classes are stressful to maintain because you are never sure what your "fix" will break.
For this special case (short deadlines and lots of code that will never be used again), I suggest you pay attention to embedding a script engine into your OOP code.
Learn to "refactor as-you-go". Mainly from an "extract method" standpoint. When you start to write a block of sequential code, take a few seconds to decide if this block could stand-alone as a reusable method and, if so, make that method immediately. I recommend it even for throw-away projects (especially if you can go back later and compile such methods into your personal toolbox API). It doesn't take long before you do it almost without thinking.
Hopefully you do this already and I'm preaching to the choir.