TDD and DDD while still understanding the domain - testing

When you are starting a new project from scratch using DDD and are still not very comfortable with the domain, TDD comes at a price. While you're still working out the details of the domain, you discover a lot of things you've done wrong, like a method that makes more sense in some other class, or parameters that need to be added to or removed from a constructor, and many other changes.
These changes are VERY frequent, especially in the beginning. Every change usually (and hopefully) requires some changes in the unit tests, which increases the cost of change (which, as I said before, is still very frequent).
My question is: Is TDD worth the cost, even in situations where there is still a lot of change happening, but there's hope it will become less frequent (once we have better insight into the domain, for instance) soon?

While you're still understanding the details of the domain, you figure out a lot of stuff you've done wrong, like a method that makes more sense in some other class, or adding/removing parameters from a constructor, and many other changes.
The process of understanding a domain is a design process, and it is helped by TDD, which you must understand is a design technique.
That method which makes more sense in some other class - you come to realize this quickly, more quickly, using TDD, because the first thing you do in writing the method is to write a test for it. When you write that test, you'll see that (for instance) you need to pass in a lot of members from the other class, and that will tell you - before you've even written the method - "Hey, this belongs over there!"
Use TDD to reduce the churn you're describing. It won't eliminate it, but it will reduce it, because you're doing design in the micro, on demand, as needed. It's Just-In-Time Design.
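To make that concrete, here is a minimal, hypothetical sketch (NUnit; Invoice, Customer, and GetDiscount are invented names) of how writing the test first surfaces the "this belongs over there" signal:

// A minimal, hypothetical sketch (NUnit) of the "this belongs over there" signal.
// Invoice, Customer, and GetDiscount are invented names for illustration only.
using NUnit.Framework;

public class Customer
{
    public int YearsActive { get; set; }
    public bool IsPremium { get; set; }
}

public class Invoice
{
    // Note the test-first pressure: the method needs nothing from Invoice itself.
    public decimal GetDiscount(int yearsActive, bool isPremium) =>
        (isPremium ? 0.10m : 0m) + (yearsActive >= 5 ? 0.05m : 0m);
}

[TestFixture]
public class DiscountTests
{
    [Test]
    public void Discount_is_computed_entirely_from_customer_data()
    {
        var customer = new Customer { YearsActive = 5, IsPremium = true };
        var invoice = new Invoice();

        // Having to hand most of Customer's state to Invoice is the smell: before
        // the implementation even exists, the test says "move this to Customer".
        decimal discount = invoice.GetDiscount(customer.YearsActive, customer.IsPremium);

        Assert.AreEqual(0.15m, discount);
    }
}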

If you're at the point where you are still designing your domain model components at a relatively high level, then you shouldn't be writing unit tests yet. You need to have an understanding of the problem domain, and the responsibilities of your classes before you can start writing any code.
If you are at the point where you are fiddling with constructors and parameters in your classes, then TDD should not be thought of as a cost - it is helping you discover and fix problems in your design. It is an investment in a better object model and less maintenance down the road.
Also, if you're using a refactoring tool like ReSharper or CodeRush, I find that most of the early changes are really not that bad - just minor inconveniences.

I believe so. Part of Test Driven Development is that you're only building what you need to build - only what's required to make the tests pass. So, if you need to change something in the code due to a clearer understanding of the domain, you may have less to change with a TDD approach than without because you haven't spent time building unnecessary things.
There are other advantages as well - you know that the part of the code that you didn't change still works, since it's already got tests. Even if you rewrite 50% of the code, you'll know that your other 50% works. Also, you know that the code is testable - you've been designing it to be tested the whole time. It's often a huge pain to add unit tests to code that wasn't designed to have any - it's usually written in a way that's very difficult to test. Code that's made to be tested from the beginning is much better.

TDD should help in this situation, IMHO. Part of the usefulness of the unit tests is that they verify that you didn't break anything during refactoring. Yes, some changes in code will require changes to the tests, but that should help clarify what you're doing, not make it harder.

As stated above, TDD is more about testing and proving your design and less about Unit Testing. TDD is NOT Unit Testing, but Unit Tests can evolve from tests created during your TDD.
TDD is absolutely helpful in understanding the Domain model. TDD can be used to help define the Domain model because it is a design technique, as Carl Manaster stated above. TDD will help you recognize when and where to implement a design pattern, and if/when an object is being defined in the wrong domain, etc.

The higher the rate of change, the more useful TDD becomes. A project with requirements set in stone would gain less from TDD than a project with a high rate of change.


SOLID vs. YAGNI [closed]

One of the most frequent arguments I hear for not adhering to the SOLID principles in object-oriented design is YAGNI (although the arguer often doesn't call it that):
"It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity)."
"Yes, I can put all my business logic directly into the GUI code it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to…' argument doesn't count."
What would be your most convincing arguments against such a practice? How can I really show that this is an expensive practice, especially to somebody who doesn't have much experience in software development?
Design is the management and balance of trade-offs. YAGNI and SOLID aren't conflicting: the former says when to add features, the latter says how, but they both guide the design process. My responses, below, to each of your specific quotes use principles from both YAGNI and SOLID.
It is three times as difficult to build reusable components as single use components. A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
  — Robert Glass' Rules of Three, Facts and Fallacies of Software Engineering
Refactoring into reusable components has the key element of first finding the same purpose in multiple places, and then moving it. In this context, YAGNI applies by inlining that purpose where needed, without worrying about possible duplication, instead of adding generic or reusable features (classes and functions).
The best way, in the initial design, to show when YAGNI doesn't apply is to identify concrete requirements. In other words, do some refactoring before writing code to show that duplication is not merely possible, but already exists: this justifies the extra effort.
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in.
Is it really the only user interface? Is there a background batch mode planned? Will there ever be a web interface?
What is your testing plan, and will you be testing back-end functionality without a GUI? What will make the GUI easy for you to test, given that you usually don't want to be testing outside code (such as platform-generic GUI controls) and would rather concentrate on your project?
It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity).
Can you point out a common mistake that needs to be avoided? Some things are simple enough, such as squaring a number (x * x vs squared(x)) for an overly-simple example, but if you can point out a concrete mistake someone made—especially in your project or by those on your team—you can show how a common class or function will avoid that in the future.
If, in the unlikely case of new requirements, my code gets too cluttered I still can refactor for the new requirement. So your "What if you later need to..." argument doesn't count.
The problem here is the assumption of "unlikely". Do you agree it's unlikely? If so, you're in agreement with this person. If not, your idea of the design doesn't agree with this person's—resolving that discrepancy will solve the problem, or at least show you where to go next. :)
I like to think about YAGNI in terms of "half, not half-assed", to borrow the phrase from 37signals (https://gettingreal.37signals.com/ch05_Half_Not_Half_Assed.php). It's about limiting your scope so you can focus on doing the most important things well. It's not an excuse to get sloppy.
Business logic in the GUI feels half-assed to me. Unless your system is trivial, I'd be surprised if your business logic and GUI haven't already changed independently, several times over. So you should follow the SRP ("S" in SOLID) and refactor - YAGNI doesn't apply, because you already need it.
The argument about YAGNI and unnecessary complexity absolutely applies if you're doing extra work today to accommodate hypothetical future requirements. When those "what if later we need to..." scenarios fail to materialize, you're stuck with higher maintenance costs from the abstractions that now get in the way of the changes you actually have. In this case, we're talking about simplifying the design by limiting scope -- doing half, rather than being half-assed.
It sounds like you're arguing with a brick wall. I'm a big fan of YAGNI, but at the same time, I also expect that my code will always be used in at least two places: the application and the tests. That's why things like business logic in UI code don't work; you can't test business logic separately from the UI code in that circumstance.
However, from the responses you're describing, it sounds like the person is simply uninterested in doing better work. At that point, no principle is going to help them; they only want to do the minimum possible. I'd go so far as to say that it's not YAGNI driving their actions, but rather laziness, and you alone aren't going to beat laziness (almost nothing can, except a threatening manager or the loss of a job).
There is no answer, or rather, there is an answer that neither you nor your interlocutor might like: both YAGNI and SOLID can be the wrong approach.
Attempting to go for SOLID with an inexperienced team, or a team with tight delivery objectives pretty much guarantees you will end up with an expensive, over-engineered bunch of code... that will NOT be SOLID, just over-engineered (aka welcome to the real-world).
Attempting to go YAGNI on a long-term project and hoping you can refactor later only works to an extent (aka welcome to the real world). YAGNI excels at proofs of concept and demonstrators, getting the market/contract and then being able to invest in something more SOLID.
You need both, at different points in time.
The correct application of these principles is often not very obvious and depends very much on experience, which is hard to obtain if you haven't done it yourself. Every programmer should have experienced the consequences of doing it wrong, but of course it should always be on "not my" project.
Explain to them what the problem is; if they don't listen and you're not in a position to make them listen, let them make the mistakes. If you're too often the one having to fix the problems, you should polish your resume.
In my experience, it's always a judgment call. Yes, you should not worry about every little detail of your implementation, and sometimes sticking a method into an existing class is an acceptable, though ugly solution.
It's true that you can refactor later. The important point is to actually do the refactoring. So I believe the real problem is not the occasional design compromise, but putting off refactoring once it becomes clear there's a problem. Actually going through with it is the hard part (just like with many things in life... ).
As to your individual points:
It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity).
I would point out that having everything in one class is more complex (because the relationships between the methods are more intimate and harder to understand). Having many small classes is not complex. If you feel the list is getting too long, just organize them into packages, and you'll be fine :-). Personally, I have found that just splitting a class into two or three classes can help a lot with readability, without any further change.
Don't be afraid of small classes, they don't bite ;-).
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in.
If someone can say "it is highly unlikely that signifcant new requirements will ever come in." with a straight face, I believe that person really, really needs a reality check. Be blunt, but gentle...
If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to ...' argument doesn't count
That has some merit, but only if they actually do refactor later. So accept it, and hold them to their promise :-).
SOLID principles allow software to adapt to change - in both requirements and technical changes (new components, etc.). Two of your arguments assume unchanging requirements:
"it is highly unlikely that signifcant new requirements will ever come in."
"If in the unlikely case of new requirements"
Could this really be true?
There is no substitute for experience when it comes to the various expenses of development. For many practitioners I think doing things in the lousy, difficult to maintain way has never resulted in problems for them (hey! job security). Over the long term of a product I think these expenses become clear, but doing something about them ahead of time is someone else's job.
There are some other great answers here.
Understandable, flexible and capable of fixes and improvements are always things that you are going to need. Indeed, YAGNI assumes that you can come back and add new features when they prove necessary with relative ease, because nobody is going to do something crazy like bunging irrelevant functionality in a class (YAGNI in that class!) or pushing business logic to UI logic.
There can be times when what seems crazy now was reasonable in the past - sometimes the boundary lines between UI and business logic, or between different sets of responsibilities that should be in different classes, aren't that clear, or even move. There can be times when 3 hours of work is absolutely necessary in 2 hours' time. There are times when people just don't make the right call. For those reasons occasional breaks in this regard will happen, but they are going to get in the way of using the YAGNI principle, not be a cause of it.
Quality unit tests, and I mean unit tests not integration tests, need code that adheres to SOLID. Not necessarily 100%, in fact rarely so, but in your example stuffing two features into one class will make unit testing harder, breaks the single responsibility principle, and makes code maintenance by team newbies much harder (as it is much harder to comprehend).
With unit tests (assuming good code coverage) you'll be able to refactor feature 1 safe and secure in the knowledge that you won't break feature 2, but without unit tests and with both features in the same class (simply to be lazy, in your example) refactoring is risky at best and disastrous at worst.
Bottom line: follow the KIS principle (keep it simple), or for the intellectual, the KISS principle (keep it simple, stupid). Take each case on its merits; there's no global answer, but always consider whether other coders will need to read/maintain the code in the future, and the benefit of unit tests in each scenario.
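As a hedged illustration of the point about two features crammed into one class, here is a minimal sketch (NUnit; all the names - ReportFormatter, IEmailSender, ReportMailer - are invented) of what splitting them buys you in testability:

// Hypothetical sketch: the two features pulled apart so each can be unit tested
// (and refactored) in isolation. All names are invented for illustration.
using NUnit.Framework;

public interface IEmailSender                  // feature 2, now behind its own seam
{
    void Send(string to, string body);
}

public class ReportFormatter                   // feature 1, pure and trivially testable
{
    public string Format(string title, int totalCents) => title + ": " + totalCents;
}

public class ReportMailer                      // thin coordinator over the two features
{
    private readonly ReportFormatter _formatter;
    private readonly IEmailSender _sender;

    public ReportMailer(ReportFormatter formatter, IEmailSender sender)
    {
        _formatter = formatter;
        _sender = sender;
    }

    public void SendReport(string to, string title, int totalCents) =>
        _sender.Send(to, _formatter.Format(title, totalCents));
}

[TestFixture]
public class ReportFormatterTests
{
    [Test]
    public void Formats_title_and_total()
    {
        // Refactoring the mailing feature cannot break this test, and vice versa.
        Assert.AreEqual("Q1 sales: 1250", new ReportFormatter().Format("Q1 sales", 1250));
    }
}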
tldr;
SOLID assumes you understand (somewhat, at least) the future changes to the code, w.r.t. the SRP. I would say that is being optimistic about the capability to predict.
YAGNI, on the other hand, assumes that most of the time you don't know the future direction of change, which is pessimistic about the capability to predict.
Hence it follows that SOLID/SRP asks you to form classes such that each has a single reason to change, e.g. a small GUI change or a ServiceCall change.
YAGNI says (if you want to force-apply it in this scenario) that since you don't know WHAT is going to change, and a GUI change may cause a GUI + ServiceCall change (and similarly a backend change may cause a GUI + ServiceCall change), just put all that code in a single class.
Long answer:
Read the book 'Agile Software Development, Principles, Patterns, and Practices'
Here is a short excerpt from it about SOLID/SRP:
"If, [...] the application is not changing in ways that cause the two responsibilities to change at different times, there is no need to separate them. Indeed, separating them would smell of needless complexity.
There is a corollary here. An axis of change is an axis of change only if the changes occur. It is not wise to apply SRP—or any other principle, for that matter—if there is no symptom."

Compromising design & code quality to integrate with existing modules

Greetings!
I inherited a C#.NET application that I have been extending and improving for a while now. Overall it was obviously a rush job (or whoever wrote it was seemingly less competent than me). The app pulls some data from an embedded device and displays and manipulates it. At the core is a communications thread in the main application form which executes a 600+ line method that calls functions all over the place, implementing a state machine - lots of if-state-then-do type code. Interaction with the device is done by setting the state/mode globally and letting the thread do its thing. (This is just one example of the badness of the code - overall it is not very OO-like; it reminds me of the style of embedded C code the device firmware is written in.)
My problem is that this piece of code is central to the application. Neither the software, the communications protocol, nor the device firmware is documented at all. Obviously, to carry on with my work I have to interact with this code.
What I would like some guidance on is whether it is worth scrapping this code and trying to piece together something more reasonable from the information I can reverse engineer. I can't decide! The reason I don't want to refactor is that the code already works, and changing it will surely be a long, laborious and unpleasant task. On the flip side, not refactoring means I sometimes have to compromise the design of other modules so that I can call my code from this state machine!
I've heard of "If it ain't broke don't fix it!", so I am wondering if it should apply when "it" is influencing the design of future code! Any advice would be appreciated!
Thanks!
The longer you wait, the worse the codebase will smell. My suggestion would be to first create a test suite that you can evaluate your refactoring against. This makes it a lot easier to see whether you are refactoring or just plain breaking things :).
I would definitely recommend you refactor the code if you feel it's junky. Yes, during the process of refactoring you may have some inconsistencies/problems at the start, but that is why we have iterations and testing. Since you are going to build on this core engine in the future, why not make the foundation as stable as possible?
However, be very sure about what you are going to do, because long stretches of code are not necessarily evil; they may be very efficient at run time. If/else blocks are not bad if you ask me, as they branch efficiently from a microprocessor's perspective. So you will have to be judicious and very clear before you touch this.
But once you refactor the code, you will definitely have finer control over it. And don't forget to document it!! Tomorrow, someone might very well say about you what you are now saying about the person who wrote that core code.
This depends on the constraints you are facing; it's a decision to be made on practical grounds, not theoretical ones. There are three things to consider.
Time: you need to have enough time to learn it, implement it, and test it, without too many other tasks interrupting you
Boss #1: if you are working for someone, they need to know about and approve the time and effort required to rebuild your solution
Boss #2: your boss also needs to know that the advantage of having new and clean software will come at the price of possible regressions, and therefore at the beginning of the deployment there may be unexpected bugs
If you have those three, then go ahead and refactor it. It will surely be worth it!
First and foremost, get all the business logic out of the Form. Second, locate all the parts where the code interacts with the global state (e.g. accessing the embedded system). Delegate all this access to methods. Then, move these methods into a new class and create an instance in the class's constructor. Finally, inject an instance for the class to use.
Following these steps, you can move your embedded system logic ("existing module") to a wrapper class you write, so the interface can be nice and clean and more manageable. Then you can better tackle refactoring the monster method because there is less global state to worry about (only local state).
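A rough before/after sketch of those steps, with hypothetical names (IDeviceChannel, SerialDeviceChannel, DevicePoller) standing in for whatever globals and calls the real form uses:

// Steps 2-3: every touch of the global device state goes behind one class.
public interface IDeviceChannel
{
    void SetMode(int mode);
    byte[] ReadFrame();
}

public class SerialDeviceChannel : IDeviceChannel
{
    public void SetMode(int mode) { /* formerly: set the global state/mode here */ }
    public byte[] ReadFrame() { /* formerly: inline reads scattered through the form */ return new byte[0]; }
}

// Step 4: logic that used to live in the form's 600-line method now depends only on
// the injected wrapper, so it can be driven from a unit test with a fake channel.
public class DevicePoller
{
    private readonly IDeviceChannel _channel;

    public DevicePoller(IDeviceChannel channel)
    {
        _channel = channel;
    }

    public int PollOnce()
    {
        _channel.SetMode(1);
        return _channel.ReadFrame().Length;
    }
}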
If the code works and you can integrate your part with minimal changes to it, then leave the code as it is and do your integration.
If the code is simply a big barrier in your way to add new functionality then it is best for you to refactor it.
Talk with the other people who are responsible for the project, explain the situation, and give an estimate explaining the benefits gained from refactoring the code, and I'm sure (I hope) that the best choice will be made. It is best to say what you think; don't keep anything inside, especially if it affects your productivity, motivation, etc.
NOTE: Usually rewriting code is out of the question, but depending on the situation and the amount of code that needs to be rewritten, the decision may vary.
You say that this is having an impact on the future design of the system. In this case I would say it is broken and does need fixing.
But you do have to take into account the business requirements. Often reality gets in the way!
Would it be possible to wrap this code up in another class whose interface better suits how you want to take the system forward? (See adapter pattern)
This would allow you to move forward with your requirements without the poor design having an impact.
It gives you an interface that you understand and that you could write some unit tests for. These tests can be based on what your design requires from this code; they ensure that your assumptions about what it is doing are correct. If you say that this code works, then any failing tests may mean that your assumptions are incorrect.
Once you have these tests you can safely refactor - one step at a time, and when you have some spare time or when it is needed - as per business requirements.
Quite often I find the best way to truly understand a piece of code is to refactor it.
EDIT
On reflection, as this is one big method with multiple calls to the outside world, you are going to need some kind of inverse Adapter class to wrap this method. If you can inject dependencies into the method (see Dependency Inversion) such that the method calls methods in your classes, then you can route these to the original calls.
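For illustration only, a sketch of what such a wrapper might look like; every name below (IDeviceSession, LegacyStateMachineAdapter, CommsForm) is invented, and the real interface should be shaped by what your new modules actually need:

public class CommsForm                              // stand-in for the existing form that owns the thread
{
    public string CurrentState { get; private set; } = "IDLE";
    public void SetGlobalState(string state, int channel) { CurrentState = state; }
}

public interface IDeviceSession                     // the interface your new design wants to depend on
{
    void RequestData(int channel);
    bool DataReady { get; }
}

public class LegacyStateMachineAdapter : IDeviceSession
{
    private readonly CommsForm _form;

    public LegacyStateMachineAdapter(CommsForm form) { _form = form; }

    // Translate clean calls into the pokes the legacy state machine expects.
    public void RequestData(int channel) => _form.SetGlobalState("REQUEST", channel);

    public bool DataReady => _form.CurrentState == "IDLE";
}

New modules take an IDeviceSession, so unit tests can substitute a fake, and the adapter's internals can later be refactored or replaced without touching them.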

Do good tests enable sloppy coding?

Let's say you're coding, and you come across an opportunity for simple code reuse (e.g. pulling a common piece of code out to an accessible place like a utility class or base class). You might find yourself thinking, "I know it's good to do this, but I have to get this done now, and if I need to make a change to this code and forget to change it in the other place, my testing framework will let me know."
In other words, you let the awesome tests you (or another developer) have written remind you to change the code in the other places too.
Is this a legitimate problem that we might find in ourselves or other developers?
You're asking whether unit tests encourage you to rely on them as a kind of TODO list? Yes, but I don't think that's sloppy coding. You are, after all, supposed to start with failing unit tests and code to the test; if you refactor some code and then once again code to the test, that isn't sloppy coding -- it's doing what you're supposed to do.
I think the problem with unit tests is simply that you can't cover every corner case in a unit test, and sometimes people assume that a working test means a working app, which isn't true.
In the example you provide, good tests are in fact enabling you to implement a sloppy design; however, in my experience, bad tests wouldn't have discouraged you from doing the same.
The fallacy in your argument centers around the premise that "getting this done now" means you will save time by implementing sloppy design. The truth of the matter is that you are incurring technical debt whether your tests are good or not. Making a change to that code is now a much more complex task, whether you have a good testing framework to remind you of that or not.
Although immature code may work fine and be completely acceptable to the customer, excess quantities will make a program unmasterable, leading to extreme specialization of programmers and finally an inflexible product.
- Ward Cunningham
The strength of good testing practices may be in allowing you to incur that debt with some level of safety. As long as you continue to be aware that this area of the code is now weak, as a result of your choices, then it may be worth the tradeoff -- you ship your product sooner, at the cost of higher debt, with a lower risk of incurring bugs in the short run as a result.
If the tests are good and the code (sloppy or otherwise) passes them, all is good. It would be nice to have good code, but sloppy working code is better than good broken code.
I don't use tests as my first option to finding the code that needs changes. I'll use my IDE's search (or refactoring) functionality and look for all the places that call the method in question.
The tests are just a nice addition in case I was accidentally sloppy or accidentally introduced a bug. Tests don't make me sloppy from the start; they just reassure me once I think I'm done.
I would say that good tests enable you to fix sloppy coding.
You can certainly write incredibly sloppy code with or without tests. Unit testing makes it slightly easier to get away with it, but only in the short run.
If you have a set of logic copied in two places in your code (IMO the worst thing a developer can do), then you probably have inconsistent tests as well.
The most important job any programmer can do is ruthlessly refactor the code, removing ALL duplication. This almost always shows benefits on even a single iteration.
Why would you think that, if you had an error in code copied to two places, your tests would be any better?
It sounds more to me like sloppy developers and sloppy coding practices are what lead to sloppy code in your example. The tests you described would prevent the sloppy code from ever getting too far.

Test driven development: What if the bug is in the interface?

I read the latest coding horror post, and one of the comments touched a nerve for me:
This is the type of situation that test driven design/refactoring are supposed to fix. If (big if) you have tests for the interfaces, rewriting the implementation is risk-free, because you will know whether you caught everything.
Now in theory I like the idea of test driven development, but all the times I've tried to make it work, it hasn't gone particularly well, I get out of the habit, and next thing I know all the tests that I had originally written not only don't pass, but they're no longer a reflection of the design of the system.
It's all well and good if you've been handed a perfect design from on high, straight from the start (which in my experience never actually happens), but what if halfway through the production of a system you notice that there's a critical flaw in the design? Then it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests. A fundamental assumption was wrong, and now you have to change it. Now test driven development is no longer a handy thing, but it just means that there's twice as much work to do everything.
I've tried to ask this question before, both of peers, and online, but I've never heard a very satisfactory answer. ... Oh wait.. what was the question?
How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space? How do you make the TDD practice work for you instead of against you?
Update:
I still don't think I fully understand it all, so I can't really make a decision about which answer to accept. Most of my leaps in understanding have happened in the comments sections, not in the answers. Here's a collection of my favorites so far:
"Anyone who uses terms like "risk-free"
in software development is indeed full
of shit. But don't write off TDD just
because some of its proponents are
hyper-susceptible to hype. I find it
helps me clarify my thinking before
writing a chunk of code, helps me to
reproduce bugs and fix them, and makes
me more confident about refactoring
things when they start to look ugly"
-Kristopher Johnson
"In that case, you rewrite the tests
for just the portions of the interface
that have changed, and consider
yourself lucky to have good test
coverage elsewhere that will tell you
what other objects depend on it."
-rcoder
"In TDD, the reason to write the tests
is to do design. The reason to make
the tests automated is so that you can
reuse them as the design and code
evolve. When a test breaks, it means
you've somehow violated an earlier
design decision. Maybe that's a
decision you want to change, but it's
good to get that feedback as soon as
possible."
-Kristopher Johnson
[about testing interfaces] "A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, checks that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation."
–Michael Borgwardt
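A minimal sketch of what that quoted approach can look like in practice (NUnit; the ISimpleSet interface and both class names are invented): the contract test is written once against the interface, and each implementation inherits it.

using NUnit.Framework;
using System.Collections.Generic;

public interface ISimpleSet<T>
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
}

public class ListBackedSet<T> : ISimpleSet<T>
{
    private readonly List<T> _items = new List<T>();
    public void Add(T item) { if (!_items.Contains(item)) _items.Add(item); }
    public bool Contains(T item) => _items.Contains(item);
    public int Count => _items.Count;
}

// The contract test is written once against the interface...
public abstract class SimpleSetContract
{
    protected abstract ISimpleSet<int> CreateSet();   // each implementation plugs in here

    [Test]
    public void Added_items_are_contained_and_counted()
    {
        var set = CreateSet();
        set.Add(1);
        set.Add(2);
        Assert.IsTrue(set.Contains(1));
        Assert.IsFalse(set.Contains(3));
        Assert.AreEqual(2, set.Count);
    }
}

// ...and each implementation gets the whole suite for one line of glue,
// so an interface change is absorbed in exactly one place.
[TestFixture]
public class ListBackedSetTests : SimpleSetContract
{
    protected override ISimpleSet<int> CreateSet() => new ListBackedSet<int>();
}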
One of the practices of TDD is the use of baby steps (which can be very boring in the beginning), which means taking really small steps in order to understand your problem space and arrive at a good and satisfactory solution to your problem.
If you already know the design of your application, you aren't doing TDD at all. You should design it while writing your tests.
So the suggestion I would give is to concentrate on baby steps in order to get a properly testable design.
I don't think any real practitioner of TDD will claim that it completely eliminates the possibility of error or regression.
Remember that TDD is fundamentally about design, not about testing or quality control. Saying "all my tests pass" does not mean "I'm finished."
If your requirements or high-level design change drastically, then you may need to throw away all your tests along with all the code. That's just how things are sometimes. It doesn't mean that TDD isn't helping you.
Properly applied, TDD should actually make your life a lot easier in the face of changing requirements.
In my experience, code that is easy to test is code that is orthogonal to other subsystems and has clearly defined interfaces. Given such a starting point, it is much easier to rewrite significant portions of your application, since you can work with confidence knowing that a) your changes will be isolated to a few subsystems, and b) any breakage will quickly show up as failing tests.
If, on the other hand, you're just slapping unit tests on your code after it has been designed, then you may well have problems when requirements change. There's a difference between tests that fail quickly when subsystems change (because they're effectively flagging regressions) and those that are brittle, because they depend on too many unrelated pieces of system state. The former should be fixable by a few lines of code, while the latter may leave you scratching your head for hours trying to unravel them.
The only true answer is it depends.
There are ways to do TDD wrong, such that it doesn't fit in with your environment and eats effort with minimal benefit.
There are ways to do TDD right, such that it both cuts costs and increases quality.
There are ways to do something similar-but-different to TDD, which may or may not get called TDD, and may or may not be more appropriate in your particular situation.
It's a strange quirk of the market for software tools and experts that, to maximise the revenue for those pushing them, they are always written as if they somehow apply to 'all software'.
Truth is, 'software' is every bit as diverse as 'hardware', and nobody would think of buying a book on bridge-making to design an electronic gadget or build a garden shed.
I think you have some misconceptions about TDD. For a good explanation and example of what it is and how to use it, I recommend reading Kent Beck's Test-Driven Development: By Example.
Here are a few further comments that may help you understand what TDD is and why some people swear by it:
"How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space?"
TDD is a technique for exploring a problem space and creating and evolving a design that meets your needs. TDD is not something you do in addition to doing design; it is doing design.
"How do you make the TDD practice work for you instead of against you?"
TDD is not "twice as much work" as not doing TDD. Yes, you'll write a lot of tests, but that doesn't really take much time, and the effort isn't wasted. You have to test your code somehow, right? Running automated tests are a lot quicker than manually testing whenever you change something.
A lot of TDD tutorials present highly detailed tests of every method of every class. In real life, people don't do this. It is silly to write a test for every setter, every getter, and so on. The Beck book does a good job of showing how to use TDD to quickly design and implement something, slowing down to "baby steps" only when things get tricky. See How Deep Are Your Unit Tests for more on this point.
TDD is not about regression testing. TDD is about thinking before you write code. But having regression tests is a nice side benefit. They don't guarantee that code will never break, but they help a lot.
When you make changes that cause tests to break, that's not a bad thing; it's valuable feedback. Designs do change, and your tests aren't written in stone. If your design has changed so much that some tests are no longer valid, then just throw them away. Write the new tests you need to be confident about the new design.
it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests.
A fundamental creed of TDD is to avoid duplication in both the production code AND the test code. If a single design change means you have to rewrite everything, you weren't doing TDD (or weren't doing it correctly at all).
Ideally, in a well-designed system with proper separation of concerns, design changes are local, just like implementation changes. While the real world is rarely ideal, you still usually get something in between: you have to change some of the production code and some of the tests, but not everything, and the changes are mostly simple and may even be done automatically by refactoring tools.
Coding something without knowing what will work best in the UI, while at the same time writing unit tests, is very time consuming. It's better to start out by making some prototypes of the GUI to get the interaction right, and then rewrite it with unit tests (if your employer allows you).
Continuous Integration (CI) is one key. If your tests run automatically every time you check in to source control (and everyone else sees it if they fail), it's easier to avoid "stale" tests and stay in the green.
As Mr. Dias mentioned, Baby Steps are important. You make a small refactoring, you run your tests. If tests break, you immediately determine if this is expected (design change) or a failed refactoring. When tests are truly independent (comes with practice), this is seldom very difficult. Evolve your design slowly.
See also http://thought-tracker.blogspot.com/2005/11/notes-on-pragmatic-unit-testing.html - and definitely buy the book!
EDIT: Perhaps I'm looking at this the wrong way. Say you had a legacy codebase that you wanted to redesign. The first thing I would try to do is add tests for the current behavior. Refactoring without tests is risky - you might change behavior. After that, I would start to clean up the design, in small steps, running my unit tests after each step. That would give me confidence that my changes weren't breaking anything.
At some point the API might change. This would be a breaking change - clients would have to be updated. The tests would tell me this - which is good, because I'd have to update any existing clients (including the tests).
Now that's not TDD. But the idea is the same - the tests are specifications of behavior (yes, I'm shading into BDD), and they give me the confidence to refactor the implementation while ensuring that I preserve the behavior (as well as letting me know when I change the interface).
In practice, I've found TDD gives me immediate feedback on poor interface design. I'm my first client - I know when my API is hard to use.
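As a hedged sketch of "add tests for the current behavior first", here is what such characterization tests might look like (NUnit; LegacyPricing and its odd discount rule are invented): they pin down what the inherited code does today, whatever that is, before any refactoring starts.

using NUnit.Framework;

public static class LegacyPricing
{
    // Imagine this is the existing, undocumented code you inherited.
    public static int QuoteInCents(int quantity, int unitCents)
    {
        var total = quantity * unitCents;
        return total > 10000 ? total - (total / 10) : total;   // surprise bulk discount
    }
}

[TestFixture]
public class LegacyPricingCharacterizationTests
{
    // These tests assert what the code *currently* does, not what it "should" do.
    [Test] public void Small_order_is_not_discounted() =>
        Assert.AreEqual(9000, LegacyPricing.QuoteInCents(9, 1000));

    [Test] public void Large_order_gets_ten_percent_off() =>
        Assert.AreEqual(10800, LegacyPricing.QuoteInCents(12, 1000));
}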
We tend to do much less design up front with TDD, knowing it can change. I have taken projects through huge gyrations (it's a web app, no it's a RESTful server, no it's a bot). The tests give me the ability to refactor, restructure and evolve my code much more easily than untested code. Although it seems contradictory, it is true - even though you have more code, you are able to make major changes and have confidence that nothing has broken in the existing functionality.
I understand your concern that changing fundamental assumptions would make you throw out tests. This seems intuitive, but I personally haven't seen it. Some tests go, but most are still valid - often a major change isn't as major as it seems at first. Plus, as you get better at writing tests, you tend to write less brittle ones, which helps.

What OOP coding practices should you always make time for?

I tend to do a lot of projects on short deadlines and with lots of code that will never be used again, so there's always pressure/temptation to cut corners. One rule I always stick to is encapsulation/loose coupling, so I have lots of small classes rather than one giant God class. But what else should I never compromise on?
Update - thanks for the great response. Lots of people have suggested unit testing, but I don't think that's really appropriate to the kind of UI coding I do. Usability / user acceptance testing seems much more important. To reiterate, I'm talking about the BARE MINIMUM of coding standards for impossible-deadline projects.
Not OOP, but a practice that helps in both the short and long run is DRY, Don't Repeat Yourself. Don't use copy/paste inheritance.
Not an OOP practice, but common sense ;-).
If you are in a hurry and have to write a hack, always add a comment explaining why, so you can trace it back and produce a good solution later.
If you never get the time to come back, you at least have the comment, so you know why that solution was chosen at the time.
Use Source control.
No matter how long it takes to set up (seconds..), it will always make your life easier! (still it's not OOP related).
Naming. Under pressure you'll write horrible code that you won't have time to document or even comment. Naming variables, methods and classes as explicitly as possible takes almost no additional time and will make the mess readable when you must fix it. From an OOP point of view, using nouns for classes and verbs for methods naturally helps encapsulation and modularity.
Unit tests - helps you sleep at night :-)
This is rather obvious (I hope), but at the very least I always make sure my public interface is as correct as possible. The internals of a class can always be refactored later on.
No public classes with mutable public variables (struct-like).
Before you know it, you refer to this public variable all over your code, and the day you decide this field should be a computed one and must have some logic in it... the refactoring gets messy.
If that day is before your release date, it gets messier.
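A tiny, hypothetical sketch of that point: a property costs the same one line as a public field today, but leaves room for the value to become computed later without touching every caller. The class names are invented.

// Fragile: callers bind directly to storage.
public class OrderV1
{
    public decimal Total;              // the day Total must be computed, every caller is suspect
}

// Same one-line cost today, but Total can later become derived logic
// (e.g. summing line items) with no change to calling code.
public class OrderV2
{
    public decimal Total { get; set; }
}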
Think about the people (may even be your future self) who have to read and understand the code at some point.
Application of the single responsibility principle. Effectively applying this principle generates a lot of positive externalities.
Like everyone else's, these are not so much OOP practices as general coding practices that apply to OOP.
Unit test, unit test, unit test. Defined unit tests have a habit of keeping people on task and not "wandering" aimlessly between objects.
Define and document all hierarchical information (namespaces, packages, folder structures, etc.) prior to writing production code. This helps to flesh out object relations and expose flaws in assumptions related to relationships of objects.
Define and document all applicable interfaces prior to writing production code. If done by a lead or an architect, this practice can additionally help keep more junior-level developers on task.
There are probably countless other "shoulds", but if I had to pick my top three, that would be the list.
Edit in response to comment:
This is precisely why you need to do these things up front. All of these sorts of practices make continued maintenance easier. As you assume more risk in the kickoff of a project, the more likely it is that you will spend more and more time maintaining the code. Granted, there is a larger upfront cost, but building on a solid foundation pays for itself. Is your obstacle lack of time (i.e. having to maintain other applications) or a decision from higher up? I have had to fight both of those fronts to be able to adopt these kinds of practices, and it isn't a pleasant situation to be in.
Of course everything should be Unit tested, well designed, commented, checked into source control and free of bugs. But life is not like that.
My personal ranking is this:
Use source control and actually write commit comments. This way you have a tiny bit of documentation should you ever wonder "what the heck did I think when I wrote this?"
Write clean code or document. Clean, well-written code should need little documentation, as its meaning can be grasped from reading it. Hacks are a lot different: write why you did it, what it does, and what you'd like to do if you had the time/knowledge/motivation/... to do it right.
Unit test. Yes, it's down at number three. Not because it's unimportant, but because it's useless if you don't have the other two at least halfway in place. Writing unit tests is another level of documentation of what your code should be doing (among other things).
Refactor before you add something. This might sound like a typical "but we don't have time for it" point, but as with many of these points it usually saves more time than it costs - at least if you have some experience with it.
I'm aware that much of this has already been mentioned, but since it's a rather subjective matter, I wanted to add my ranking.
[insert boilerplate not-OOP specific caveat here]
Separation of concerns, unit tests, and that feeling that if something is too complex it's probably not conceptualised quite right yet.
UML sketching: this has clarified and saved any amount of wasted effort so many times. Pictures are great aren't they? :)
Really thinking about is-a's and has-a's. Getting this right first time is so important.
No matter how fast a company wants it, I pretty much always try to write code to the best of my ability.
I don't find it takes any longer and usually saves a lot of time, even in the short-term.
I can't remember ever writing code and never looking at it again; I always make a few passes over it to test and debug it, and even in those few passes, practices like refactoring to keep my code DRY, documentation (to some degree), separation of concerns and cohesion all seem to save time.
This includes creating many more small classes than most people (one concern per class, please) and often extracting initialization data into external files (or arrays) and writing little parsers for that data... sometimes even writing little GUIs instead of editing data by hand.
Coding itself is pretty quick and easy, debugging crap someone wrote when they were "Under pressure" is what takes all the time!
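As a small illustration of "extract initialization data into external files and write little parsers", here is a sketch under assumed conventions (the key=value file format and the SettingsFile name are made up):

using System.Collections.Generic;
using System.IO;

public static class SettingsFile
{
    // Reads lines like "timeout=30", skipping blanks and "#" comments.
    public static Dictionary<string, string> Load(string path)
    {
        var settings = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var trimmed = line.Trim();
            if (trimmed.Length == 0 || trimmed.StartsWith("#")) continue;
            var parts = trimmed.Split(new[] { '=' }, 2);
            if (parts.Length == 2) settings[parts[0].Trim()] = parts[1].Trim();
        }
        return settings;
    }
}
// Usage: var cfg = SettingsFile.Load("app.settings"); var timeout = int.Parse(cfg["timeout"]);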
At almost a year into my current project I finally set up an automated build that pushes any new commits to the test server, and man, I wish I had done that on day one. The biggest mistake I made early-on was going dark. With every feature, enhancement, bug-fix etc, I had a bad case of the "just one mores" before I would let anyone see the product, and it literally spiraled into a six month cycle. If every reasonable change had been automatically pushed out it would have been harder for me to hide, and I would have been more on-track with regard to the stakeholders' involvement.
Go back to code you wrote a few days/weeks ago and spend 20 minutes reviewing your own code. With the passage of time, you will be able to determine whether your "off-the-cuff" code is organized well enough for future maintenance efforts. While you're in there, look for refactoring and renaming opportunities.
I sometimes find that the name I chose for a function at the outset doesn't perfectly fit the function in its final form. With refactoring tools, you can easily change the name early before it goes into widespread use.
Just like everybody else has suggested these recommendations aren't specific to OOP:
Ensure that you comment your code and use sensibly named variables. If you ever have to look back upon the quick and dirty code you've written, you should be able to understand it easily. A general rule that I follow is; if you deleted all of the code and only had the comments left, you should still be able to understand the program flow.
Hacks usually tend to be convoluted and un-intuitive, so some good commenting is essential.
I'd also recommend that if you usually have to work to tight deadlines, get yourself a code library built up based upon your most common tasks. This will allow you to "join the dots" rather than reinvent the wheel each time you have a project.
Regards,
Docta
An actual OOP practice I always make time for is the Single Responsibility Principle, because it becomes so much harder to properly refactor the code later on when the project is "live".
By sticking to this principle I find that the code I write is easily re-used, replaced or rewritten if it fails to match the functional or non-functional requirements. When you end up with classes that have multiple responsibilities, some of them may fulfill the requirements, some may not, and the whole may be entirely unclear.
These kinds of classes are stressful to maintain because you are never sure what your "fix" will break.
For this special case (short deadlines and lots of code that will never be used again), I suggest you pay attention to embedding a scripting engine into your OOP code.
Learn to "refactor as-you-go". Mainly from an "extract method" standpoint. When you start to write a block of sequential code, take a few seconds to decide if this block could stand-alone as a reusable method and, if so, make that method immediately. I recommend it even for throw-away projects (especially if you can go back later and compile such methods into your personal toolbox API). It doesn't take long before you do it almost without thinking.
Hopefully you do this already and I'm preaching to the choir.
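For what it's worth, a tiny hypothetical before/after of that extract-method habit (Registration, IsValidEmail and SaveUser are invented names):

public class Registration
{
    public bool RegisterUser(string email, string password)
    {
        // The validation block was extracted the moment it took shape as a unit.
        if (!IsValidEmail(email)) return false;
        if (password == null || password.Length < 8) return false;

        SaveUser(email, password);
        return true;
    }

    // The extracted method is now reusable (and testable) on its own.
    private bool IsValidEmail(string email) =>
        !string.IsNullOrWhiteSpace(email) && email.Contains("@");

    private void SaveUser(string email, string password) { /* persistence elided */ }
}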