SOLID vs. YAGNI [closed]

One of the most frequent arguments I hear for not adhering to the SOLID principles in object-oriented design is YAGNI (although the arguer often doesn't call it that):
"It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity)."
"Yes, I can put all my business logic directly into the GUI code it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to…' argument doesn't count."
What would be your most convincing arguments against such practice? How can I really show that this is an expensive practice, especially to somebody who doesn't have much experience in software development?

Design is the management and balance of trade-offs. YAGNI and SOLID aren't conflicting: the former says when to add features, the latter says how, but they both guide the design process. My responses, below, to each of your specific quotes use principles from both YAGNI and SOLID.
It is three times as difficult to build reusable components as single-use components. A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
  — Robert Glass' Rules of Three, Facts and Fallacies of Software Engineering
Refactoring into reusable components hinges on first finding the same purpose in multiple places, and then moving it there. In this context, YAGNI applies by inlining that purpose where needed, without worrying about possible duplication, instead of adding generic or reusable features (classes and functions).
The best way, in the initial design, to show when YAGNI doesn't apply is to identify concrete requirements. In other words, do some refactoring before writing code to show that duplication is not merely possible, but already exists: this justifies the extra effort.
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI, and it is highly unlikely that significant new requirements will ever come in.
Is it really the only user interface? Is there a background batch mode planned? Will there ever be a web interface?
What is your testing plan? Will you be testing back-end functionality without a GUI? What will make the GUI easy for you to test, given that you usually don't want to test outside code (such as platform-generic GUI controls) but to concentrate on your own project?
It is OK that I put both feature X and feature Y into the same class. It is so simple; why bother adding a new class (i.e. complexity)?
Can you point out a common mistake that needs to be avoided? Some things are simple enough, such as squaring a number (x * x vs squared(x)) for an overly-simple example, but if you can point out a concrete mistake someone made—especially in your project or by those on your team—you can show how a common class or function will avoid that in the future.
If, in the unlikely case of new requirements, my code gets too cluttered I still can refactor for the new requirement. So your "What if you later need to..." argument doesn't count.
The problem here is the assumption of "unlikely". Do you agree it's unlikely? If so, you're in agreement with this person. If not, your idea of the design doesn't agree with this person's—resolving that discrepancy will solve the problem, or at least show you where to go next. :)

I like to think about YAGNI in terms of "half, not half-assed", to borrow the phrase from 37signals (https://gettingreal.37signals.com/ch05_Half_Not_Half_Assed.php). It's about limiting your scope so you can focus on doing the most important things well. It's not an excuse to get sloppy.
Business logic in the GUI feels half-assed to me. Unless your system is trivial, I'd be surprised if your business logic and GUI haven't already changed independently, several times over. So you should follow the SRP ("S" in SOLID) and refactor - YAGNI doesn't apply, because you already need it.
The argument about YAGNI and unnecessary complexity absolutely applies if you're doing extra work today to accommodate hypothetical future requirements. When those "what if later we need to..." scenarios fail to materialize, you're stuck with higher maintenance costs from the abstractions that now get in the way of the changes you actually have. In this case, we're talking about simplifying the design by limiting scope -- doing half, rather than being half-assed.
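To make the SRP refactoring above concrete, here is a minimal sketch in Java (all class and method names are invented for illustration): the business rule moves out of the GUI handler into a plain class that can change, and be tested, without touching the GUI.

    // Pure business logic: no GUI types anywhere in this class.
    class DiscountCalculator {
        double discountedTotal(double total, boolean isLoyalCustomer) {
            double rate = isLoyalCustomer ? 0.10 : 0.0;
            return total * (1.0 - rate);
        }
    }

    // The GUI layer shrinks to translation: read widgets, call logic, show result.
    class CheckoutForm {
        private final DiscountCalculator calculator = new DiscountCalculator();

        void onCheckoutClicked(double total, boolean loyal) {
            double result = calculator.discountedTotal(total, loyal);
            showTotal(result); // hypothetical display helper
        }

        void showTotal(double value) { /* render to the screen */ }
    }

Nothing about this split is speculative design for the future; it is the same code, divided so that each half has its own reason to change.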

It sounds like you're arguing with a brick wall. I'm a big fan of YAGNI, but at the same time, I also expect that my code will always be used in at least two places: the application and the tests. That's why things like business logic in UI code don't work; you can't test business logic separately from UI code in that circumstance.
However, from the responses you're describing, it sounds like the person is simply uninterested in doing better work. At that point, no principle is going to help them; they only want to do the minimum possible. I'd go so far as to say that it's not YAGNI driving their actions, but rather laziness, and you alone aren't going to beat laziness (almost nothing can, except a threatening manager or the loss of a job).

There is no answer; or rather, there is an answer that neither you nor your interlocutor may like: both YAGNI and SOLID can be the wrong approach.
Attempting to go for SOLID with an inexperienced team, or a team with tight delivery objectives, pretty much guarantees you will end up with an expensive, over-engineered bunch of code... that will NOT be SOLID, just over-engineered (welcome to the real world).
Attempting to go YAGNI on a long-term project and hoping you can refactor later only works to an extent (again, welcome to the real world). YAGNI excels at proofs of concept and demonstrators: get the market or the contract, and then invest in something more SOLID.
You need both, at different points in time.

The correct application of these principles is often not very obvious and depends very much on experience, which is hard to obtain if you haven't done it yourself. Every programmer should have experienced the consequences of doing it wrong, but of course everyone would prefer that it be on "not my" project.
Explain to them what the problem is; if they don't listen and you're not in a position to make them listen, let them make the mistakes. If you're too often the one having to fix the problem, you should polish your resume.

In my experience, it's always a judgment call. Yes, you should not worry about every little detail of your implementation, and sometimes sticking a method into an existing class is an acceptable, though ugly, solution.
It's true that you can refactor later. The important point is to actually do the refactoring. So I believe the real problem is not the occasional design compromise, but putting off the refactoring once it becomes clear there's a problem. Actually going through with it is the hard part (just like with many things in life...).
As to your individual points:
It is OK that I put both feature X and feature Y into the same class. It is so simple; why bother adding a new class (i.e. complexity)?
I would point out that having everything in one class is more complex (because the relationships between the methods are more intimate and harder to understand). Having many small classes is not complex. If you feel the list is getting too long, just organize them into packages, and you'll be fine :-). Personally, I have found that just splitting a class into two or three classes can help a lot with readability, without any further change.
Don't be afraid of small classes, they don't bite ;-).
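For illustration, a rough sketch of such a split (Java, hypothetical names). The combined class isn't simpler; it just hides two purposes behind one name.

    // Before: two unrelated responsibilities sharing one class.
    class ReportManager {
        String renderHtml(String data) { return "<p>" + data + "</p>"; }
        void sendByEmail(String html, String recipient) { /* SMTP details */ }
    }

    // After: each class has one reason to change, and each name says what it does.
    class ReportRenderer {
        String renderHtml(String data) { return "<p>" + data + "</p>"; }
    }

    class ReportMailer {
        void send(String html, String recipient) { /* SMTP details */ }
    }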
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI, and it is highly unlikely that significant new requirements will ever come in.
If someone can say "it is highly unlikely that significant new requirements will ever come in" with a straight face, I believe that person really, really needs a reality check. Be blunt, but gentle...
If in the unlikely case of new requirements my code gets too cluttered, I still can refactor for the new requirement. So your 'What if you later need to...' argument doesn't count.
That has some merit, but only if they actually do refactor later. So accept it, and hold them to their promise :-).

SOLID principles allow software to adapt to change - in both requirements and technical changes (new components, etc.). Two of your arguments are for unchanging requirements:
"it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements"
Could this really be true?
There is no substitute for experience when it comes to the various expenses of development. For many practitioners, I think doing things in the lousy, difficult-to-maintain way has never resulted in problems for them (hey! job security). Over the long term of a product, these expenses become clear, but doing something about them ahead of time is someone else's job.
There are some other great answers here.

Understandable, flexible, and capable of fixes and improvements are always things that you are going to need. Indeed, YAGNI assumes that you can come back and add new features, when they prove necessary, with relative ease, because nobody is going to do something crazy like bunging irrelevant functionality into a class (you aren't gonna need it in that class!) or pushing business logic into UI logic.
There can be times when what seems crazy now was reasonable in the past - sometimes the boundary lines of UI vs. business, or between different sets of responsibilities that should be in different classes, aren't that clear, or even move. There can be times when three hours of work absolutely must be done in two hours' time. There are times when people just don't make the right call. For those reasons, occasional breaks in this regard will happen, but they get in the way of applying the YAGNI principle; they are not a consequence of it.

Quality unit tests, and I mean unit tests, not integration tests, need code that adheres to SOLID. Not necessarily 100%, in fact rarely so, but in your example, stuffing two features into one class makes unit testing harder, breaks the single responsibility principle, and makes code maintenance by team newbies much harder (as the code is much harder to comprehend).
With unit tests (assuming good code coverage) you'll be able to refactor feature 1 safe and secure that you won't break feature 2; but without unit tests, and with both features in the same class (simply out of laziness, in your example), refactoring is risky at best, disastrous at worst.
Bottom line: follow the KIS principle (keep it simple), or, for the intellectual, the KISS principle (keep it simple, stupid). Take each case on its merits; there's no global answer, but always consider whether other coders will need to read and maintain the code in the future, and the benefit of unit tests in each scenario.
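As a hedged illustration of that testing payoff, reusing the hypothetical ReportRenderer sketch from an earlier answer: once the features are separated, a JUnit-style test of feature 1 never has to construct or stub feature 2, so refactoring the mailer cannot break it.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ReportRendererTest {
        @Test
        void rendersDataAsParagraph() {
            // Only the rendering feature is constructed; the mailing feature
            // never enters the picture, so changes to it cannot fail this test.
            ReportRenderer renderer = new ReportRenderer();
            assertEquals("<p>42</p>", renderer.renderHtml("42"));
        }
    }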

tl;dr:
SOLID assumes you understand (somewhat, at least) the future changes to the code, with respect to the SRP. I would say that is optimistic about the ability to predict.
YAGNI, on the other hand, assumes that most of the time you don't know the future direction of change, which is pessimistic about the ability to predict.
Hence SOLID/SRP asks you to form classes such that each has a single reason to change, e.g. a GUI change or a service-call change.
YAGNI says (if you force-apply it in this scenario) that since you don't know WHAT is going to change, and whether a GUI change will cause a GUI-plus-service-call change (or, similarly, a backend change will cause one), you should just put all that code in a single class.
Long answer:
Read the book 'Agile Software Development, Principles, Patterns, and Practices'
Here is a short excerpt from it about SOLID/SRP:
"If, [...] the application is not changing in ways that cause the two responsibilities to change at different times, there is no need to separate them. Indeed, separating them would smell of needless complexity.
There is a corollary here. An axis of change is an axis of change only if the changes occur. It is not wise to apply SRP—or any other principle, for that matter—if there is no symptom."

Related

How flexible should you make your classes?

I have always wondered if I am thinking ahead too much or too little before I code something. This is especially true when I am not sure what future requirement changes I will be required to account for. I don't know how flexible or abstract I should make my classes. I'll give a quick example.
You want to write a program that plays blackjack against a computer and you're the type of person that likes to experiment. You begin to write the code for the deck, but then you realize blackjack could have 1, 2, 4, or any number of decks. You account for that, but then you realize that maybe the deck will be altered and not have any cards of value ten. You then decide that the deck should be completely versatile to allow any number of suits or ranks. You then decide that the rules for the deck should be able to be altered from the standard number of suits multiplied by the unique ranks to equal the total amount of cards in the deck... You can see where I am going here.
My question is this, are there any guidelines for how flexible a class should be?
Favor minimalism and encapsulation, avoiding functionality you don't need.
It's of course good to design based on needs, but cluttering designs with things you do not use -- or may only possibly use in the future -- should be minimized. It's fine to consider and implement what you are sure you will need.
When you understand and specify a 'future problem' (specifically, at that point in the future), you will often solve it differently from today's solution.
Check out the great paper "On the Criteria to be Used in Decomposing Systems into Modules" by David Parnas, from back in 1972.
Generally speaking, you should try to identify areas of responsibility that can be pushed behind a very simple interface that hides useful functionality and complexity. You should strive to separate the what from the how in areas you feel are most likely to change (i.e. predicting variation).
Flexibility is indeed a requirement for an application or system to be maintainable. Usually I find that a design following the SOLID principles, TDD, and stateless business logic is easier to maintain.
Among all the SOLID principles, I find that the SRP is the one that most makes an application maintainable. Following the SRP, your system will be broken down into smaller pieces with replaceable classes; say, Deck, DeckRule, HitAction, etc.
Interfaces and inheritance will help, since you can easily swap your Deck with a NoTenDeck or SpadeOnlyDeck, and swap the DeckRule for a HardToWinDeckRule or ImpossibleWinDeckRule. Decorators and other design patterns, such as composite, will also help to make your system flexible. And don't forget unit tests; they will help you refactor the code.
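A rough Java sketch of that swap (all type names invented): the game depends only on a small Deck interface, so the variants become one-liners.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.function.IntPredicate;

    record Card(String suit, int rank) {}

    interface Deck {
        Card draw();
        boolean isEmpty();
    }

    // One implementation covers the standard deck and the "no tens" variant
    // from the question, by parameterizing which ranks are included.
    class FilteredDeck implements Deck {
        private final List<Card> cards = new ArrayList<>();

        FilteredDeck(IntPredicate includeRank) {
            for (String suit : List.of("hearts", "diamonds", "clubs", "spades"))
                for (int rank = 1; rank <= 13; rank++)
                    if (includeRank.test(rank)) cards.add(new Card(suit, rank));
            Collections.shuffle(cards);
        }

        public Card draw() { return cards.remove(cards.size() - 1); }
        public boolean isEmpty() { return cards.isEmpty(); }
    }

    class Demo {
        public static void main(String[] args) {
            Deck standard = new FilteredDeck(rank -> true);     // swap freely:
            Deck noTens = new FilteredDeck(rank -> rank != 10); // game code only sees Deck
            System.out.println(standard.draw() + " / " + noTens.draw());
        }
    }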
And sometimes you will also need a breaking change, in which you tear down your current architecture and interfaces and replace them with another design. Sometimes that is needed, but mostly not.
You can find several discussions on Stack Overflow about DI vs. Singleton, and a little about stateful vs. stateless design.
I try to follow the agile principle of YAGNI: You Ain't Gonna Need It.
It isn't worth the effort of coming up with all these possible future requirements. There are an infinite number of possible future requirements. You can't account for all of them. Just do what you need to do to fulfill the requirements you already have.
If in the future you get new requirements, THEN change the system. (You do have good tests to make sure you don't break anything during refactoring, right?)
Overall thoughts on flexibility
From your description of the problem, I don't think your classes should be flexible as in "keep throwing every new aspect of the game rules into the same class". A class with too many responsibilities is fragile, hard to maintain and thus hard to change - ironically, treating a class as if it were flexible will eventually make it rigid! Don't put all your eggs in one basket. A separate concern means a separate class. Your overall design should be flexible, not so much your classes.
On the blackjack problem
Card games, especially complex and/or evolving ones, are generally most peculiar animals and thus probably not a good standard example to start experimenting with when trying to improve your design skills.
If you want real modularity, you'll probably need a pluggable rules engine that allows plugins to hook at different stages of a game, giving you access to relevant resources to alter anything from scores to the sequence of events in a turn to even other rules.
My take on this is
You already know your game will evolve in the future and you're going to need such an engine. To answer the "thinking ahead" part of your question, this means you'll start with a simple standard turn structure, a minimal rules engine and incrementally add to it as you implement each feature in your backlog. The thinking ahead you shouldn't do is trying to forecast every little detail in the engine upfront. In other words you'll use YAGNI as in "I ain't gonna need this rule/type of hook into the game" rather than "I ain't gonna need a rules engine" since you know you've got to have one anyway.
Or,
It's going to be, at least at first, a one-shot fixed-rules game. You'll need less raw flexibility in the game system here. Concentrate on use cases and try to make acceptance tests pass with the simplest possible technical solution. Take one little step at a time. Implement a working solution for just 1 deck with simple rules at first, then expand to more complex areas. This hopefully will lead you to a no-nonsense, well-designed system which may or may not involve some kind of rules engine.
You might also want to have a look at https://gamedev.stackexchange.com/ for game-specific design guidelines.
When writing code I tend to look for things that may be obvious additions later on. "Obvious" is probably a word to define, though. :-) It means things that you are sure will be in a future release. Other than that, I try not to worry about it.

Compromising design & code quality to integrate with existing modules

Greetings!
I inherited a C#.NET application I have been extending and improving for a while now. Overall it was obviously a rush job (or whoever wrote it was seemingly less competent than myself). The app pulls some data from an embedded device and displays and manipulates it. At the core is a communications thread in the main application form which executes a 600+ line method that calls functions all over the place, implementing a state machine - lots of if-state-then-do type code. Interaction with the device is done by setting the state/mode globally and letting the thread do its thing. (This is just one example of the badness of the code - overall it is not very OO-like; it reminds me of the style of the embedded C code the device firmware is written in.)
My problem is that this piece of code is central to the application. The software, communications protocol or device firmware are not documented at all. Obviously to carry on with my work I have to interact with this code.
What I would like some guidance on is whether it is worth scrapping this code and trying to piece together something more reasonable from the information I can reverse engineer. I can't decide! The reason I don't want to refactor is that the code already works, and changing it will surely be a long, laborious, and unpleasant task. On the flip side, not refactoring means I sometimes have to compromise the design of other modules so that I can call my code from this state machine!
I've heard of "If it ain't broke don't fix it!", so I am wondering if it should apply when "it" is influencing the design of future code! Any advice would be appreciated!
Thanks!
Also, the longer you wait, the worse the codebase will smell. My suggestion would be to first create a test suite that you can evaluate your refactoring against. That makes it a lot easier to see if you are refactoring or just plain breaking things :).
I would definitely recommend that you refactor the code if you feel it's junky. Yes, during the process of refactoring you may have some inconsistencies/problems at the start. But that is why we have iterations and testing. Since you are going to build on this core engine in the future, why not make the foundation as stable as possible?
However, be very sure of what you are going to do. Long stretches of code are not necessarily evil; they may even be efficient at run time, and if/else blocks are not bad in themselves, as they branch efficiently from a microprocessor's perspective. So you will have to be judicious and very clear before you touch this.
But once you refactor the code, you will definitely have finer control over it. And don't forget to document it!! Tomorrow, someone might very well say about you what you've just said about the person who wrote that core code.
This depends on the constraints you are facing; it's a decision to be made on practical grounds, not theoretical ones. There are three things to consider:
Time: you need to have enough time to learn it, implement it, and test it, without too many other tasks interrupting you
Boss #1: if you are working for someone, they need to know about and approve, up front, the time and effort required to rebuild your solution
Boss #2: your boss also needs to know that the advantage of having new and clean software will come at the price of possible regressions, and therefore at the beginning of the deployment there may be unexpected bugs
If you have those three, then go ahead and refactor it. It will surely be worth it!
First and foremost, get all the business logic out of the Form. Second, locate all the parts where the code interacts with global state (e.g. accessing the embedded system), and funnel all that access through methods. Then move these methods into a new class and, finally, inject an instance of that class for the form to use.
Following these steps, you can move your embedded-system logic (the "existing module") into a wrapper class you write, so the interface can be nice and clean and more manageable. Then you can better tackle refactoring the monster method, because there is less global state to worry about (only local state).
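Sketched below in Java (the original app is C#, but the shape carries over, and every name here is hypothetical): device access goes behind a small interface, and the form receives it through its constructor, so a fake can stand in during tests.

    // All interaction with the embedded device funnels through one interface.
    interface DeviceChannel {
        void setMode(int mode); // was: writing a global state/mode variable
        byte[] readFrame();     // was: scattered direct reads
    }

    // The form no longer touches globals; it is handed a channel.
    class MainForm {
        private final DeviceChannel device;

        MainForm(DeviceChannel device) { // constructor injection
            this.device = device;
        }

        void refresh() {
            device.setMode(1);
            byte[] frame = device.readFrame();
            // ... display the frame ...
        }
    }

    // In tests, a fake channel stands in for the embedded hardware.
    class FakeChannel implements DeviceChannel {
        public void setMode(int mode) { /* record the call for assertions */ }
        public byte[] readFrame() { return new byte[] { 1, 2, 3 }; }
    }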
If the code works and you can integrate your part with minimal changes to it, then leave the code as it is and do your integration.
If the code is simply a big barrier in your way to add new functionality then it is best for you to refactor it.
Talk with the other people responsible for the project, explain the situation, and give an estimate explaining the benefits gained from refactoring the code; I'm sure (I hope) the best choice will be made. Speak up about what you think; don't keep anything inside, especially if it affects your productivity, motivation, etc.
NOTE: Usually rewriting code is out of the question but depending on situation and amount of code needed to be rewritten the decision may vary.
You say that this is having an impact on the future design of the system. In this case I would say it is broken and does need fixing.
But you do have to take into account the business requirements. Often reality gets in the way!
Would it be possible to wrap this code up in another class whose interface better suits how you want to take the system forward? (See adapter pattern)
This would allow you to move forward with your requirements without the poor design having an impact.
It gives you an interface that you understand and that you can write unit tests for. These tests can be based on what your design requires from this code, ensuring that your assumptions about what it is doing are correct. If you say this code works, then any failing tests may mean your assumptions are incorrect.
Once you have these tests you can safely refactor - one step at a time, and when you have some spare time or when it is needed - as per business requirements.
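For example, a minimal adapter sketch (Java, with hypothetical names): your new code programs against a clean interface, and the adapter translates into the legacy set-state-and-wait protocol without modifying it.

    // Stand-in for the inherited code; in reality this is the 600+ line monster.
    class LegacyComms {
        static final int MODE_READ = 2;
        void setGlobalMode(int mode) { /* ... */ }
        void pump() { /* runs the state machine */ }
    }

    // The interface your new design wants to depend on.
    interface DeviceSession {
        void requestData();
    }

    // Adapter: clean calls in, legacy global-state dance out.
    class LegacyStateMachineAdapter implements DeviceSession {
        private final LegacyComms legacy;

        LegacyStateMachineAdapter(LegacyComms legacy) { this.legacy = legacy; }

        public void requestData() {
            legacy.setGlobalMode(LegacyComms.MODE_READ);
            legacy.pump();
        }
    }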
Quite often I find the best way to truly understand a piece of code is to refactor it.
EDIT
On reflection, as this is one big method with multiple calls to the outside world, you are going to need some kind of inverse Adapter class to wrap this method. If you can inject dependencies into the method (see Dependency Inversion) such that the method calls methods on your classes, then you can route these to the original calls.

Are KISS and YAGNI at odds with the trends towards increasingly more sophisticated patterns and practices like SOA, DDD, IoC, MVC, POCO, MVVM? [closed]

It seems to me that Agile methodologies encourage us to keep things simple and lean, and not to add complexity and sophistication until it's needed. But the pace and volume of technology change encourage the use of increasingly abstract, complex, and sophisticated tools and patterns to solve problems that we may not have yet (and may never encounter), in complex ways with significant learning curves and significant investments of effort.
Are KISS and YAGNI at odds with the trends towards increasingly more sophisticated ...
A car has an accelerator and a brake, and a steering wheel that can turn left and/or right: it's up to the driver to decide which to use when.
I'll keep my answer short and let the experts lay it out better...
I think that KISS applies to everything you listed. You mention increasing abstraction and complexity, which, I think, balance each other.
The systems we are developing today must be complex, because, most of the time, the solution to a complex problem is inherently complex. However, to keep things simple, we use abstraction. Even if our complex system is built with, say, eight layers, we can follow KISS by keeping each layer simple.
For instance, to pick an item or two off your list:
SOA is not complex because we can wrap service calls in a wrapper object (sketched after this list). This object handles the connection and makes calls, which are pretty easy to do because they simply pass parameters on.
MVC is not complex because we clearly separate our logic. We have a simple controller for directing requests and setting up data, a simple model to represent our domain, and a simple view that displays whatever data is passed to it.
However, in both of these cases, the pattern as a whole (or the system, if you will) is complex and non-trivial. It is the fact that we consider small, simple parts one at a time, and then fit them together, that lets us maintain our mental model as we work.
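To make the SOA point concrete, here is a hedged sketch of such a wrapper object (Java, invented names): callers see one simple method, and the connection and wire-format details hide behind it.

    interface Transport { // hypothetical wire layer, injected so it can be faked
        String get(String path);
    }

    record Customer(String id, String name) {
        static Customer fromJson(String json) {
            /* parsing detail hidden here; stubbed for the sketch */
            return new Customer("", "");
        }
    }

    class CustomerService {
        private final Transport transport;

        CustomerService(Transport transport) { this.transport = transport; }

        // The one simple call the rest of the system uses.
        Customer findById(String id) {
            return Customer.fromJson(transport.get("/customers/" + id));
        }
    }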
I agree with ChrisW's answer.
The idea is to stick with KISS and YAGNI as much as possible, but when the need arises and you need a sophisticated or complex solution, stand on the shoulders of giants and use proven patterns to guide you. These patterns and practices are meant to simplify your work; if using them is harder than the hack alternative, you should stick with the hack. Just make sure you take into account maintainability, etc.
As an example, when you build the first version of a website, it may consist of one or two main functionalities and just a few pages. You probably don't need MVC for this (even though it might be nice to start that way).
BUT, after you add a few more features and you have dozens of pages to manage along with how to share functionality between them, it might become apparent that you need MVC to better structure your application.
Similarly, if from the get go you know you will have to deal with something like returning multiple views of a common piece of data, MVC simplifies your problem by laying out a pattern for you to follow.
In summary, YAGNI now, but if you need it later then KISS by using a known pattern / solution.
Sigh.
We must have increasingly sophisticated and abstract components to match the demand for increasingly sophisticated software.
Most of us have limited brain space. We must learn to cope with our limited brains by using more sophisticated abstractions.
The alternative is not using abstractions, limiting ourselves to machine code.
Please read http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
"In spite of all its deficiencies,
mathematical reasoning presents an
outstanding model of how to grasp
extremely complicated structures with
a brain of limited capacity. And it
seems worthwhile to investigate to
what extent these proven methods can
be transplanted to the art of computer
usage. In the design of programming
languages one can let oneself be
guided primarily by considering "what
the machine can do". Considering,
however, that the programming language
is the bridge between the user and the
machine --that it can, in fact, be
regarded as his tool-- it seems just
as important to take into
consideration "what Man can think". It
is in this vein that we shall continue
our investigations."
I'm going to make a subjective answer (so sue me). I think that if you program by acronyms then you are going to run into trouble.
At the end of the day you are trying to make money for a business, or hopefully yourself. As such each decision you make is an engineering decision based on cost, time and benefits. You have to evaluate the use of a technique on the cost of implementation, maintenance etc, and make the best choice.
I think the only fair answer is that the tools and techniques chosen have to match with the desired goal of the engineering.
It's a matter of the right tool for the right job. The problem is when architects and/or developers begin to believe that a particular methodology or technology is a "golden hammer." That is when things become religious, and religion and reason do not play nicely together ;)
Oh and by the way, "agile" does not necessarily mean you don't use some of the acronyms you mention, or some framework that implements them. Those decisions are usually made far in advance of implementing the sorts of things that developers have come to associate with agile, e.g. user stories, sprints, etc.
First off, the list of acronyms doesn't necessarily make sense - there's not really much simpler than POCO, for example...
However, KISS and YAGNI are achieved most effectively, in many circumstances, by using concepts like IoC, MVC, and MVVM - provided you use the patterns correctly.
Patterns aren't complicated, in and of themselves. It may take a bit of learning to understand what the pattern is trying to accomplish, but often, a pattern exists purely to simplify either code, maintenance, or usability - and usually all of the above. This fits in perfectly with keep it simple, for example.
IMHO, you (generally) don't want to start out with a complicated design. Could this be a local method rather than a service? Do I need an IoC container yet? This is particularly relevant when it comes to design patterns.
However, as you test and refactor your code, certain patterns (such as IoC) will help you to achieve goals such as testability and DRY (Don't Repeat Yourself). If you know design patterns well, you can apply them at the appropriate time.
yes
-- this space intentionally left blank --

Test driven development: What if the bug is in the interface?

I read the latest Coding Horror post, and one of the comments touched a nerve for me:
This is the type of situation that test driven design/refactoring are supposed to fix. If (big if) you have tests for the interfaces, rewriting the implementation is risk-free, because you will know whether you caught everything.
Now in theory I like the idea of test driven development, but all the times I've tried to make it work, it hasn't gone particularly well, I get out of the habit, and next thing I know all the tests that I had originally written not only don't pass, but they're no longer a reflection of the design of the system.
It's all well and good if you've been handed a perfect design from on high, straight from the start (which in my experience never actually happens), but what if halfway through the production of a system you notice that there's a critical flaw in the design? Then it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests. A fundamental assumption was wrong, and now you have to change it. Now test driven development is no longer a handy thing, but it just means that there's twice as much work to do everything.
I've tried to ask this question before, both of peers, and online, but I've never heard a very satisfactory answer. ... Oh wait.. what was the question?
How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space? How do you make the TDD practice work for you instead of against you?
Update:
I still don't think I fully understand it all, so I can't really make a decision about which answer to accept. Most of my leaps in understanding have happened in the comments sections, not in the answers. Here's a collection of my favorites so far:
"Anyone who uses terms like "risk-free"
in software development is indeed full
of shit. But don't write off TDD just
because some of its proponents are
hyper-susceptible to hype. I find it
helps me clarify my thinking before
writing a chunk of code, helps me to
reproduce bugs and fix them, and makes
me more confident about refactoring
things when they start to look ugly"
-Kristopher Johnson
"In that case, you rewrite the tests
for just the portions of the interface
that have changed, and consider
yourself lucky to have good test
coverage elsewhere that will tell you
what other objects depend on it."
-rcoder
"In TDD, the reason to write the tests
is to do design. The reason to make
the tests automated is so that you can
reuse them as the design and code
evolve. When a test breaks, it means
you've somehow violated an earlier
design decision. Maybe that's a
decision you want to change, but it's
good to get that feedback as soon as
possible."
-Kristopher Johnson
[about testing interfaces] "A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, check that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation."
- Michael Borgwardt
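That quoted approach maps naturally onto an abstract test class that every implementation inherits; here is a rough sketch (assuming JUnit 5; names invented).

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;
    import java.util.Set;

    // One test class per *interface*; each implementation supplies only a factory.
    abstract class SetContractTest {
        abstract Set<String> newSet();

        @Test
        void sizeTracksInsertions() {
            Set<String> set = newSet();
            set.add("a");
            set.add("b");
            assertEquals(2, set.size());
        }

        @Test
        void containsReflectsWhatWasInserted() {
            Set<String> set = newSet();
            set.add("a");
            assertTrue(set.contains("a"));
            assertFalse(set.contains("z"));
        }

        @Test
        void removeDeletesTheElement() {
            Set<String> set = newSet();
            set.add("a");
            set.remove("a");
            assertFalse(set.contains("a"));
        }
    }

    // Reusing the whole suite for an implementation is one method:
    class HashSetContractTest extends SetContractTest {
        Set<String> newSet() { return new java.util.HashSet<>(); }
    }

When the interface changes, only SetContractTest changes; every implementation's suite follows automatically.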
One of the practices of TDD is the use of baby steps (which can feel very boring in the beginning): really small steps that let you understand your problem space and arrive at a good, satisfactory solution to your problem.
If you already know the design of your application, you aren't doing TDD at all. You should be designing it while writing your tests.
So my suggestion is to concentrate on the baby steps in order to get a proper, testable design.
I don't think any real practitioner of TDD will claim that it completely eliminates the possibility of error or regression.
Remember that TDD is fundamentally about design, not about testing or quality control. Saying "all my tests pass" does not mean "I'm finished."
If your requirements or high-level design change drastically, then you may need to throw away all your tests along with all the code. That's just how things are sometimes. It doesn't mean that TDD isn't helping you.
Properly applied, TDD should actually make your life a lot easier in the face of changing requirements.
In my experience, code that is easy to test is code that is orthogonal to other subsystems, and which has clearly defined interfaces. Given such a starting point, it is much easier to rewrite significant portions of your application, since you can work with confidence knowing that a) your changes will be isolated to a few subsystems, and b) any breakage will quickly show up as failing tests.
If, on the other hand, you're just slapping unit tests on your code after it has been designed, then you may well have problems when requirements change. There's a difference between tests that fail quickly when subsystems change (because they're effectively flagging regressions) and those that are brittle, because they depend on too many unrelated pieces of system state. The former should be fixable by a few lines of code, while the latter may leave you scratching your head for hours trying to unravel them.
The only true answer is "it depends."
There are ways to do TDD wrong, such that it doesn't fit in with your environment and eats effort with minimal benefit. There are ways to do TDD right, such that it both cuts costs and increases quality. And there are ways to do something similar-but-different to TDD, which may or may not get called TDD, and may or may not be more appropriate in your particular situation.
It's a strange quirk of the market for software tools and experts that, to maximise the revenue for those pushing them, they are always written as if they somehow apply to 'all software'.
Truth is, 'software' is every bit as diverse as 'hardware', and nobody would think of buying a book on bridge-making to design an electronic gadget or build a garden shed.
I think you have some misconceptions about TDD. For a good explanation and example of what it is and how to use it, I recommend reading Kent Beck's Test-Driven Development: By Example.
Here are a few further comments that may help you understand what TDD is and why some people swear by it:
"How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space?"
TDD is a technique for exploring a problem space and creating and evolving a design that meets your needs. TDD is not something you do in addition to doing design; it is doing design.
"How do you make the TDD practice work for you instead of against you?"
TDD is not "twice as much work" as not doing TDD. Yes, you'll write a lot of tests, but that doesn't really take much time, and the effort isn't wasted. You have to test your code somehow, right? Running automated tests are a lot quicker than manually testing whenever you change something.
A lot of TDD tutorials present highly detailed tests of every method of every class. In real life, people don't do this. It is silly to write a test for every setter, every getter, and so on. The Beck book does a good job of showing how to use TDD to quickly design and implement something, slowing down to "baby steps" only when things get tricky. See How Deep Are Your Unit Tests for more on this point.
TDD is not about regression testing. TDD is about thinking before you write code. But having regression tests is a nice side benefit. They don't guarantee that code will never break, but they help a lot.
When you make changes that cause tests to break, that's not a bad thing; it's valuable feedback. Designs do change, and your tests aren't written in stone. If your design has changed so much that some tests are no longer valid, then just throw them away. Write the new tests you need to be confident about the new design.
it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests.
A fundamental creed of TDD is to avoid duplication both in the production code AND in the test code. If a single design change means you have to rewrite everything, you weren't doing TDD (or not doing it correctly at all).
Ideally, in a well-designed system with proper separation of concerns, design changes are local, just like implementation changes. While the real world is rarely ideal, you still usually get something in between: you have to change some of the production code and some of the tests, but not everything, and the changes are mostly simple and may even be done automatically by refactoring tools.
Coding something without knowing what will work best in the UI, while at the same time writing unit tests, is very time-consuming. It's better to start out making some prototypes of the GUI to get the interaction right, and then rewrite it with unit tests (if your employer allows you).
Continuous Integration (CI) is one key. If your tests run automatically every time you check in to source control (and everyone else sees it if they fail), it's easier to avoid "stale" tests and stay in the green.
As Mr. Dias mentioned, Baby Steps are important. You make a small refactoring, you run your tests. If tests break, you immediately determine if this is expected (design change) or a failed refactoring. When tests are truly independent (comes with practice), this is seldom very difficult. Evolve your design slowly.
See also http://thought-tracker.blogspot.com/2005/11/notes-on-pragmatic-unit-testing.html - and definitely buy the book!
EDIT: Perhaps I'm looking at this the wrong way. Say you had a legacy codebase that you wanted to redesign. The first thing I would try to do is add tests for the current behavior. Refactoring without tests is risky - you might change behavior. After that, I would start to clean up the design, in small steps, running my unit tests after each step. That would give me confidence that my changes weren't breaking anything.
At some point the API might change. This would be a breaking change - clients would have to be updated. The tests would tell me this - which is good, because I'd have to update any existing clients (including the tests).
Now that's not TDD. But the idea is the same - the tests are specifications of behavior (yes, I'm shading into BDD), and they give me the confidence to refactor the implementation while ensuring that I preserve the behavior (as well as letting me know when I change the interface).
In practice, I've found TDD gives me immediate feedback on poor interface design. I'm my first client - I know when my API is hard to use.
We tend to do much less design up front with TDD, knowing it can change. I have taken projects through huge gyrations (it's a web app; no, it's a RESTful server; no, it's a bot). The tests give me the ability to refactor, restructure, and evolve my code much more easily than untested code. Although it seems contradictory, it is true: even though you have more code, you are able to make major changes and have confidence that nothing has broken in the existing functionality.
I understand your concern that fundamental assumptions changing make you throw out tests. This seems intuitive, but I personally haven't seen it. Some tests go, but most are still valid-- often a major change isn't as major as it seems at first. Plus, as you get better at writing tests, you tend to write less brittle ones, which helps.

What OOP coding practices should you always make time for?

I tend to do a lot of projects on short deadlines and with lots of code that will never be used again, so there's always pressure/temptation to cut corners. One rule I always stick to is encapsulation/loose coupling, so I have lots of small classes rather than one giant God class. But what else should I never compromise on?
Update - thanks for the great response. Lots of people have suggested unit testing, but I don't think that's really appropriate to the kind of UI coding I do. Usability / user acceptance testing seems much more important. To reiterate, I'm talking about the BARE MINIMUM of coding standards for impossible-deadline projects.
Not OOP, but a practice that helps in both the short and long run is DRY, Don't Repeat Yourself. Don't use copy/paste inheritance.
Not an OOP practice, but common sense ;-).
If you are in a hurry and have to write a hack, always add a comment with the reasons, so you can trace it back and produce a good solution later.
If you never have the time to come back, you at least have the comment, so you know why that solution was chosen at the time.
Use Source control.
No matter how long it takes to set up (seconds..), it will always make your life easier! (still it's not OOP related).
Naming. Under pressure you'll write horrible code that you won't have time to document or even comment. Naming variables, methods and classes as explicitly as possible takes almost no additional time and will make the mess readable when you must fix it. From an OOP point of view, using nouns for classes and verbs for methods naturally helps encapsulation and modularity.
Unit tests - helps you sleep at night :-)
This is rather obvious (I hope), but at the very least I always make sure my public interface is as correct as possible. The internals of a class can always be refactored later on.
no public class with mutable public variables (struct-like).
Before you know it, you refer to this public variable all over your code, and the day you decide this field is a computed one and must have some logic in it... the refactoring gets messy.
If that day is before your release date, it gets messier.
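A small sketch of why (Java, hypothetical names): once callers read a field directly, making it a computed value means editing every call site; behind a method, the change stays local.

    // Risky: callers bind directly to storage.
    class OrderV1 {
        public double total; // the day this becomes computed, every caller breaks
    }

    // Safer: the same information behind a method, so the representation can change.
    class OrderV2 {
        private double subtotal;
        private double taxRate;

        public double getTotal() {
            return subtotal * (1 + taxRate); // now computed; no caller had to change
        }
    }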
Think about the people (may even be your future self) who have to read and understand the code at some point.
Application of the single responsibility principle. Effectively applying this principle generates a lot of positive externalities.
Like everyone else's, these are not so much OOP practices as coding practices that apply to OOP:
Unit test, unit test, unit test. Defined unit tests have a habit of keeping people on task and not "wandering" aimlessly between objects.
Define and document all hierarchical information (namespaces, packages, folder structures, etc.) prior to writing production code. This helps to flesh out object relations and expose flaws in assumptions related to relationships of objects.
Define and document all applicable interfaces prior to writing production code. If done by a lead or an architect, this practice can additionally help keep more junior-level developers on task.
There are probably countless other "shoulds", but if I had to pick my top three, that would be the list.
Edit in response to comment:
This is precisely why you need to do these things up front. All of these sorts of practices make continued maintenance easier. As you assume more risk in the kickoff of a project, the more likely it is that you will spend more and more time maintaining the code. Granted, there is a larger upfront cost, but building on a solid foundation pays for itself. Is your obstacle lack of time (i.e. having to maintain other applications) or a decision from higher up? I have had to fight both of those fronts to be able to adopt these kinds of practices, and it isn't a pleasant situation to be in.
Of course everything should be Unit tested, well designed, commented, checked into source control and free of bugs. But life is not like that.
My personal ranking is this:
Use source control and actually write commit comments. This way you have a tiny bit of documentation should you ever wonder "what the heck did I think when I wrote this?"
Write clean code or document. Clean, well-written code should need little documentation, as its meaning can be grasped from reading it. Hacks are a lot different: write why you did it, what you did, and what you'd like to do if you had the time/knowledge/motivation/... to do it right.
Unit test. Yes, it's down at number three. Not because it's unimportant, but because it's useless if you don't have the other two at least halfway in place. Writing unit tests is another level of documentation of what your code should be doing (among other benefits).
Refactor before you add something. This might sound like a typical "but we don't have time for it" point. But as with many of these points, it usually saves more time than it costs, at least once you have some experience with it.
I'm aware that much of this has already been mentioned, but since it's a rather subjective matter, I wanted to add my ranking.
[insert boilerplate not-OOP specific caveat here]
Separation of concerns, unit tests, and that feeling that if something is too complex it's probably not conceptualised quite right yet.
UML sketching: this has clarified things and saved no end of wasted effort so many times. Pictures are great, aren't they? :)
Really thinking about is-a's and has-a's. Getting this right first time is so important.
No matter how fast a company wants it, I pretty much always try to write code to the best of my ability.
I don't find it takes any longer and usually saves a lot of time, even in the short-term.
I can't remember ever writing code and never looking at it again. I always make a few passes over it to test and debug it, and even in those few passes, practices like refactoring to keep my code DRY, documentation (to some degree), separation of concerns, and cohesion all seem to save time.
This includes creating many more small classes than most people (one concern per class, please) and often extracting initialization data into external files (or arrays) and writing little parsers for that data... sometimes even writing little GUIs instead of editing data by hand.
Coding itself is pretty quick and easy, debugging crap someone wrote when they were "Under pressure" is what takes all the time!
At almost a year into my current project I finally set up an automated build that pushes any new commits to the test server, and man, I wish I had done that on day one. The biggest mistake I made early-on was going dark. With every feature, enhancement, bug-fix etc, I had a bad case of the "just one mores" before I would let anyone see the product, and it literally spiraled into a six month cycle. If every reasonable change had been automatically pushed out it would have been harder for me to hide, and I would have been more on-track with regard to the stakeholders' involvement.
Go back to code you wrote a few days/weeks ago and spend 20 minutes reviewing your own code. With the passage of time, you will be able to determine whether your "off-the-cuff" code is organized well enough for future maintenance efforts. While you're in there, look for refactoring and renaming opportunities.
I sometimes find that the name I chose for a function at the outset doesn't perfectly fit the function in its final form. With refactoring tools, you can easily change the name early before it goes into widespread use.
Just like everybody else has suggested these recommendations aren't specific to OOP:
Ensure that you comment your code and use sensibly named variables. If you ever have to look back at the quick-and-dirty code you've written, you should be able to understand it easily. A general rule that I follow is: if you deleted all of the code and only had the comments left, you should still be able to understand the program flow.
Hacks usually tend to be convoluted and un-intuitive, so some good commenting is essential.
I'd also recommend that if you usually have to work to tight deadlines, get yourself a code library built up based upon your most common tasks. This will allow you to "join the dots" rather than reinvent the wheel each time you have a project.
An actual OOP practice I always make time for is the Single Responsibility Principle, because it becomes so much harder to properly refactor the code later on when the project is "live".
By sticking to this principle I find that the code I write is easily re-used, replaced or rewritten if it fails to match the functional or non-functional requirements. When you end up with classes that have multiple responsibilities, some of them may fulfill the requirements, some may not, and the whole may be entirely unclear.
These kinds of classes are stressful to maintain because you are never sure what your "fix" will break.
For this special case (short deadlines and lots of code that will never be used again), I suggest you consider embedding a scripting engine in your OOP code.
Learn to "refactor as-you-go". Mainly from an "extract method" standpoint. When you start to write a block of sequential code, take a few seconds to decide if this block could stand-alone as a reusable method and, if so, make that method immediately. I recommend it even for throw-away projects (especially if you can go back later and compile such methods into your personal toolbox API). It doesn't take long before you do it almost without thinking.
Hopefully you do this already and I'm preaching to the choir.
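For what it's worth, a tiny before/after sketch of the habit (Java, invented example):

    class Importer {
        // Before: validation inlined in the middle of a longer routine.
        void importRecord(String line) {
            if (line == null || line.isBlank() || !line.contains(",")) {
                throw new IllegalArgumentException("bad record: " + line);
            }
            // ... parsing and saving ...
        }

        // After "extract method": the block is named, reusable, and testable alone.
        void importRecordRefactored(String line) {
            requireCsvRecord(line);
            // ... parsing and saving ...
        }

        void requireCsvRecord(String line) {
            if (line == null || line.isBlank() || !line.contains(",")) {
                throw new IllegalArgumentException("bad record: " + line);
            }
        }
    }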