I have always wondered whether I am thinking ahead too much or too little before I code something. This is especially true when I am not sure what future requirement changes I might have to account for. I don't know how flexible or abstracted I should make my classes. I'll give a quick example.
You want to write a program that plays blackjack against a computer, and you're the type of person who likes to experiment. You begin to write the code for the deck, but then you realize blackjack could use 1, 2, 4, or any number of decks. You account for that, but then you realize that maybe the deck will be altered to have no cards of value ten. You then decide that the deck should be completely versatile, allowing any number of suits or ranks. You then decide that even the rule that the number of suits times the number of unique ranks equals the total number of cards in the deck should be alterable... You can see where I am going here.
My question is this: are there any guidelines for how flexible a class should be?
Favor minimalism and encapsulation, avoiding functionalities you don't need.
It's of course good to design based on needs, but cluttering designs with things you do not use -- or could possibly use in the future -- should be minimized. It's fine to consider and implement what you are sure you will need.
When you understand and specify a 'future problem' (specifically, at that point in the future), you will often solve it differently from today's solution.
Check out the great paper "On the Criteria to be Used in Decomposing Systems into Modules" by David Parnas, from back in 1972.
Generally speaking, you should try to identify areas of responsibility that can be pushed behind a very simple interface that hides useful functionality and complexity. You should strive to separate the what from the how in areas you feel are most likely to change (i.e. predicting variation).
Flexibility is indeed a requirement for an application or system to be maintainable. Usually I find that a design following the SOLID design principles, TDD, and stateless business logic is easier to maintain.
Among all the SOLID principles, I find that the SRP is the rule that makes the application maintainable. Following the SRP, your system will be broken down into smaller pieces with replaceable classes. Say it can be broken into Deck, DeckRule, HitAction, etc.
Interfaces or inheritance will help, since you can easily swap your Deck with a NoTenDeck or SpadeOnlyDeck, and you can swap the DeckRule for a HardToWinDeckRule or ImpossibleWinDeckRule. The decorator pattern, or other design patterns such as composite, will also help make your system flexible. Don't forget unit tests; they will help you refactor the code.
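A minimal sketch of what that decomposition might look like. Deck, NoTenDeck, and DeckRule are the names used above; StandardDeck, StandardBustRule, and Game are hypothetical fillers, and the card model is deliberately simplified to integer values:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Each piece has one responsibility and can be swapped behind its interface.
interface Deck {
    List<Integer> deal(int count);   // card values only, to keep the sketch short
}

interface DeckRule {
    boolean isBust(List<Integer> hand);
}

class StandardDeck implements Deck {
    protected final List<Integer> cards = new ArrayList<>();

    StandardDeck() {
        for (int suit = 0; suit < 4; suit++) {
            for (int rank = 1; rank <= 13; rank++) {
                cards.add(Math.min(rank, 10));   // J/Q/K count as 10
            }
        }
        Collections.shuffle(cards);
    }

    @Override
    public List<Integer> deal(int count) {
        List<Integer> hand = new ArrayList<>(cards.subList(0, count));
        cards.subList(0, count).clear();
        return hand;
    }
}

// The "deck with no ten-valued cards" from the question: same interface, different contents.
class NoTenDeck extends StandardDeck {
    NoTenDeck() {
        cards.removeIf(value -> value == 10);
    }
}

class StandardBustRule implements DeckRule {
    @Override
    public boolean isBust(List<Integer> hand) {
        return hand.stream().mapToInt(Integer::intValue).sum() > 21;
    }
}

public class Game {
    public static void main(String[] args) {
        Deck deck = new NoTenDeck();              // swap in any Deck implementation
        DeckRule rule = new StandardBustRule();   // ...and any rule
        List<Integer> hand = deck.deal(2);
        System.out.println(hand + " bust? " + rule.isBust(hand));
    }
}
```

The point is not the particular classes but that the game code only ever talks to Deck and DeckRule, so variations slot in without touching it.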
Also, sometimes you will need a breaking change, in which you tear down your current architecture and interfaces and replace them with another design. Sometimes it is needed, but mostly not.
You can find several discussions in Stack Overflow answers about DI vs. Singleton, and a little about stateful vs. stateless design.
I try to follow the agile principle of YAGNI: You Ain't Gonna Need It!
It isn't worth the effort of coming up with all these possible future requirements. There are an infinite number of possible future requirements. You can't account for all of them. Just do what you need to do to fulfill the requirements you already have.
If in the future you get new requirements, THEN change the system. (You do have good tests to make sure you don't break anything during refactoring, right?)
Overall thoughts on flexibility
From your description of the problem, I don't think your classes should be flexible as in "keep throwing every new aspect of the game rules into the same class". A class with too many responsibilities is fragile, hard to maintain, and thus hard to change; ironically, treating a class as if it were flexible will eventually make it rigid! Don't put all your eggs in one basket. A separate concern means a separate class. Your overall design should be flexible, not so much your classes.
On the blackjack problem
Card games, especially complex and/or evolving ones, are generally most peculiar animals and thus probably not a good standard example to start experimenting with when trying to improve your design skills.
If you want real modularity, you'll probably need a pluggable rules engine that allows plugins to hook at different stages of a game, giving you access to relevant resources to alter anything from scores to the sequence of events in a turn to even other rules.
My take on this is:
You already know your game will evolve in the future and you're going to need such an engine. To answer the "thinking ahead" part of your question, this means you'll start with a simple standard turn structure and a minimal rules engine, and incrementally add to it as you implement each feature in your backlog. The thinking ahead you shouldn't do is trying to forecast every little detail in the engine upfront. In other words you'll use YAGNI as in "I ain't gonna need this rule/type of hook into the game" rather than "I ain't gonna need a rules engine", since you know you've got to have one anyway (a rough sketch of such an engine follows below).
Or,
It's going to be, at least at first, a one-shot fixed-rules game. You'll need less raw flexibility in the game system here. Concentrate on use cases and try to make acceptance tests pass with the simplest possible technical solution. Take one little step at a time. Implement a working solution for just 1 deck with simple rules at first, then expand to more complex areas. This hopefully will lead you to a no-nonsense, well-designed system which may or may not involve some kind of rules engine.
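If you do go the pluggable rules-engine route described above, here is a very rough sketch of the hook idea. All names (Rule, RulesEngine, GameState, the stage strings) are hypothetical and not taken from any particular library:

```java
import java.util.ArrayList;
import java.util.List;

// A rule plugs into named stages of a turn and may alter the game state.
interface Rule {
    void onStage(String stage, GameState state);
}

class GameState {
    int score;
    final List<String> log = new ArrayList<>();
}

// The engine only knows how to run registered rules at each stage;
// new behaviour is added by registering new rules, not by editing the engine.
class RulesEngine {
    private final List<Rule> rules = new ArrayList<>();

    void register(Rule rule) {
        rules.add(rule);
    }

    void fire(String stage, GameState state) {
        for (Rule rule : rules) {
            rule.onStage(stage, state);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        RulesEngine engine = new RulesEngine();
        engine.register((stage, state) -> {
            if (stage.equals("turn-end")) {
                state.score += 1;
                state.log.add("bonus point at turn end");
            }
        });

        GameState state = new GameState();
        engine.fire("turn-start", state);
        engine.fire("turn-end", state);
        System.out.println(state.score + " " + state.log);
    }
}
```

You would add stages and rule types only as concrete features demand them, which is exactly the incremental approach described above.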
You might also want to have a look at https://gamedev.stackexchange.com/ for game-specific design guidelines.
When writing code I tend to look for things that may be obvious additions later on. "Obvious" is probably a word to define, though. :-) It means things that you are sure will be in a future release. Other than that, I try not to worry about it.
One of the most frequent arguments I hear for not adhering to the SOLID principles in object-oriented design is YAGNI (although the arguer often doesn't call it that):
"It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity)."
"Yes, I can put all my business logic directly into the GUI code it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to…' argument doesn't count."
What would be your most convincing arguments against such practice? How can I really show that this is an expensive practice, especially to somebody who doesn't have much experience in software development?
Design is the management and balance of trade-offs. YAGNI and SOLID aren't conflicting: the former says when to add features, the latter says how, but they both guide the design process. My responses, below, to each of your specific quotes use principles from both YAGNI and SOLID.
It is three times as difficult to build reusable components as single-use components. A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
— Robert Glass' Rules of Three, Facts and Fallacies of Software Engineering
Refactoring into reusable components has the key element of first finding the same purpose in multiple places, and then moving it. In this context, YAGNI applies by inlining that purpose where needed, without worrying about possible duplication, instead of adding generic or reusable features (classes and functions).
The best way, in the initial design, to show when YAGNI doesn't apply is to identify concrete requirements. In other words, do some refactoring before writing code to show that duplication is not merely possible, but already exists: this justifies the extra effort.
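As an illustration of "duplication already exists, so extraction is justified", here is a small hypothetical sketch (the class and method names are made up): the direct version is written first, and the shared component is only pulled out once the second copy actually appears.

```java
// Before: the same discount logic written inline in two places, which is fine
// right up until the second copy actually exists.
class OrderService {
    double total(double price, int quantity) {
        double total = price * quantity;
        return total > 100 ? total * 0.9 : total;   // 10% bulk discount
    }
}

class QuoteService {
    double estimate(double price, int quantity) {
        double total = price * quantity;
        return total > 100 ? total * 0.9 : total;   // same rule, duplicated
    }
}

// After: the duplication is real, so extracting a shared policy is justified
// by an existing requirement rather than a speculative one.
class DiscountPolicy {
    static double apply(double total) {
        return total > 100 ? total * 0.9 : total;
    }
}
```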
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI, and it is highly unlikely that significant new requirements will ever come in.
Is it really the only user interface? Is there a background batch mode planned? Will there ever be a web interface?
What is your testing plan, and will you be testing back-end functionality without a GUI? What will make the GUI easy for you to test, since you usually don't want to be testing outside code (such as platform-generic GUI controls) and instead want to concentrate on your project?
It is OK that I put both feature X and feature Y into the same class. It is so simple, why bother adding a new class (i.e. complexity)?
Can you point out a common mistake that needs to be avoided? Some things are simple enough, such as squaring a number (x * x vs squared(x)) for an overly-simple example, but if you can point out a concrete mistake someone made—especially in your project or by those on your team—you can show how a common class or function will avoid that in the future.
If, in the unlikely case of new requirements, my code gets too cluttered, I can still refactor for the new requirement. So your "What if you later need to..." argument doesn't count.
The problem here is the assumption of "unlikely". Do you agree it's unlikely? If so, you're in agreement with this person. If not, your idea of the design doesn't agree with this person's—resolving that discrepancy will solve the problem, or at least show you where to go next. :)
I like to think about YAGNI in terms of "half, not half-assed", to borrow the phrase from 37signals (https://gettingreal.37signals.com/ch05_Half_Not_Half_Assed.php). It's about limiting your scope so you can focus on doing the most important things well. It's not an excuse to get sloppy.
Business logic in the GUI feels half-assed to me. Unless your system is trivial, I'd be surprised if your business logic and GUI haven't already changed independently, several times over. So you should follow the SRP ("S" in SOLID) and refactor - YAGNI doesn't apply, because you already need it.
The argument about YAGNI and unnecessary complexity absolutely applies if you're doing extra work today to accommodate hypothetical future requirements. When those "what if later we need to..." scenarios fail to materialize, you're stuck with higher maintenance costs from the abstractions that now get in the way of the changes you actually have. In this case, we're talking about simplifying the design by limiting scope -- doing half, rather than being half-assed.
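A small sketch of the kind of split the SRP argument above implies (the names InvoiceCalculator and InvoicePanel, and the 20% tax rate, are hypothetical): the business rule lives in a plain class that can change, and be tested, on its own, while the GUI layer only formats and delegates.

```java
// Business rule: no GUI types anywhere, so it can be exercised directly in a test.
class InvoiceCalculator {
    double totalWithTax(double net, double taxRate) {
        return net * (1.0 + taxRate);
    }
}

// GUI layer: knows about display concerns only, and delegates the actual rule.
class InvoicePanel {
    private final InvoiceCalculator calculator = new InvoiceCalculator();

    String renderTotal(double net) {
        double total = calculator.totalWithTax(net, 0.2);   // assumed 20% rate
        return String.format("Total: %.2f", total);
    }
}
```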
It sounds like you're arguing with a brick wall. I'm a big fan of YAGNI, but at the same time, I also expect that my code will always be used in at least two places: the application, and the tests. That's why things like business logic in UI code don't work; you can't test business logic separately from UI code in that circumstance.
However, from the responses you're describing, it sounds like the person is simply uninterested in doing better work. At that point, no principle is going to help them; they only want to do the minimum possible. I'd go so far as to say that it's not YAGNI driving their actions, but rather laziness, and you alone aren't going to beat laziness (almost nothing can, except a threatening manager or the loss of a job).
There is no answer, or rather, there is an answer neither you nor your interlocutor might like: both YAGNI and SOLID can be wrong approaches.
Attempting to go for SOLID with an inexperienced team, or a team with tight delivery objectives, pretty much guarantees you will end up with an expensive, over-engineered bunch of code... that will NOT be SOLID, just over-engineered (aka welcome to the real world).
Attempting to go YAGNI for a long-term project and hoping you can refactor later only works to an extent (aka welcome to the real world). YAGNI excels at proofs of concept and demonstrators, getting the market/contract, and then being able to invest in something more SOLID.
You need both, at different points in time.
The correct application of these principles is often not very obvious and depends very much on experience, which is hard to obtain if you haven't done it yourself. Every programmer should have experienced the consequences of doing it wrong, but of course it should always be on "not my" project.
Explain to them what the problem is; if they don't listen and you're not in a position to make them listen, let them make the mistakes. If you're too often the one having to fix the problem, you should polish your resume.
In my experience, it's always a judgment call. Yes, you should not worry about every little detail of your implementation, and sometimes sticking a method into an existing class is an acceptable, though ugly solution.
It's true that you can refactor later. The important point is to actually do the refactoring. So I believe the real problem is not the occasional design compromise, but putting off refactoring once it becomes clear there's a problem. Actually going through with it is the hard part (just like with many things in life...).
As to your individual points:
It is OK that I put both feature X and feature Y into the same class. It is so simple, why bother adding a new class (i.e. complexity)?
I would point out that having everything in one class is more complex (because the relationships between the methods are more intimate and harder to understand). Having many small classes is not complex. If you feel the list is getting too long, just organize them into packages, and you'll be fine :-). Personally, I have found that just splitting a class into two or three classes can help a lot with readability, without any further change.
Don't be afraid of small classes, they don't bite ;-).
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI, and it is highly unlikely that significant new requirements will ever come in.
If someone can say "it is highly unlikely that significant new requirements will ever come in" with a straight face, I believe that person really, really needs a reality check. Be blunt, but gentle...
If, in the unlikely case of new requirements, my code gets too cluttered, I can still refactor for the new requirement. So your 'What if you later need to...' argument doesn't count.
That has some merit, but only if they actually do refactor later. So accept it, and hold them to their promise :-).
SOLID principles allow software to adapt to change - in both requirements and technical changes (new components, etc.). Two of your arguments assume unchanging requirements:
"it is highly unlikely that signifcant new requirements will ever come in."
"If in the unlikely case of new requirements"
Could this really be true?
There is no substitute for experience when it comes to the various expenses of development. For many practitioners I think doing things in the lousy, difficult to maintain way has never resulted in problems for them (hey! job security). Over the long term of a product I think these expenses become clear, but doing something about them ahead of time is someone else's job.
There are some other great answers here.
Understandable, flexible and capable of fixes and improvements are always things that you are going to need. Indeed, YAGNI assumes that you can come back and add new features when they prove necessary with relative ease, because nobody is going to do something crazy like bunging irrelevant functionality in a class (YAGNI in that class!) or pushing business logic to UI logic.
There can be times when what seems crazy now was reasonable in the past - sometimes the boundary lines between UI and business, or between different sets of responsibilities that should be in different classes, aren't that clear, or even move. There can be times when 3 hours of work is absolutely necessary in 2 hours' time. There are times when people just don't make the right call. For those reasons occasional breaks in this regard will happen, but they will get in the way of using the YAGNI principle, not be a cause of it.
Quality unit tests, and I mean unit tests not integration tests, need code that adheres to SOLID. Not necessarily 100%, in fact rarely so, but in your example stuffing two features into one class will make unit testing harder, break the single responsibility principle, and make code maintenance by team newbies much harder (as it is much harder to comprehend).
With the unit tests (assuming good code coverage) you'll be able to refactor feature 1 safe and secure that you won't break feature 2, but without unit tests, and with both features in the same class (simply to be lazy, in your example), refactoring is risky at best and disastrous at worst.
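As a minimal sketch of why a single-responsibility class is easy to unit test (assuming JUnit 5 is on the classpath; BustRule and the test names are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

// A class with a single responsibility: deciding whether a hand is bust.
class BustRule {
    boolean isBust(List<Integer> hand) {
        return hand.stream().mapToInt(Integer::intValue).sum() > 21;
    }
}

// Because the rule knows nothing about decks, GUIs or game loops,
// the test needs only inputs and assertions.
class BustRuleTest {

    @Test
    void handOverTwentyOneIsBust() {
        assertTrue(new BustRule().isBust(List.of(10, 10, 5)));
    }

    @Test
    void handAtTwentyOneIsNotBust() {
        assertFalse(new BustRule().isBust(List.of(10, 10, 1)));
    }
}
```

If the same class also rendered the GUI or hit a database, the test would need far more setup, which is the practical cost the answer above is pointing at.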
Bottom line: follow the KIS principle (keep it simple), or for the intellectual, the KISS principle (keep it simple, stupid). Take each case on its merits; there's no global answer, but always consider whether other coders will need to read or maintain the code in the future, and the benefit of unit tests in each scenario.
tl;dr:
SOLID assumes you understand (somewhat, at least) the future changes to the code, with respect to the SRP. I would say that is being optimistic about your ability to predict.
YAGNI, on the other hand, assumes that most of the time you don't know the future direction of change, which is being pessimistic about your ability to predict.
Hence it follows that SOLID/SRP asks you to form classes such that each has a single reason to change, e.g. a small GUI change or a ServiceCall change.
YAGNI says (if you want to force-apply it in this scenario) that, since you don't know WHAT is going to change, and a GUI change may cause a GUI+ServiceCall change (and similarly a backend change may cause a GUI+ServiceCall change), you should just put all that code in a single class.
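To make the contrast concrete, here is a minimal sketch (the class names ProfileClient and ProfileView are hypothetical, not from the book): in the SRP version each class has exactly one reason to change, whereas the YAGNI-forced version would merge them.

```java
// SRP version: a GUI change touches only ProfileView, a service change only ProfileClient.
class ProfileClient {
    String fetchDisplayName(int userId) {
        // ...call the backend service here...
        return "user-" + userId;
    }
}

class ProfileView {
    private final ProfileClient client;

    ProfileView(ProfileClient client) {
        this.client = client;
    }

    String render(int userId) {
        return "<h1>" + client.fetchDisplayName(userId) + "</h1>";
    }
}

// The merged alternative would put fetching and rendering into one class,
// so either kind of change forces you to reopen (and retest) both concerns.
```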
Long answer:
Read the book 'Agile Software Development, Principles, Patterns, and Practices'.
Here is a short excerpt from it about SOLID/SRP:
"If,[...]the application is not changing in ways that cause the two responsibilities to change at different times, there is no need to separate them. Indeed, separating them would smell of needless complexity.
There is a corrolary here. An axis of change is an axis of change only if the changes occur. It is not wise to apply SRP—or any other principle, for that matter—if there is no symptom."
It seems to me that Agile methodologies encourage us to keep things simple and lean, and not to add complexity and sophistication until it's needed. But the pace and volume of technology change encourage the use of increasingly abstract, complex, and sophisticated tools and patterns to solve problems that we may not have yet (and may never encounter), in complex ways, with significant learning curves and significant investments of effort.
Are KISS and YAGNI at odds with the trends towards increasingly more sophisticated ...
A car has an accelerator and a brake, and a steering wheel that can turn left and/or right: it's up to the driver to decide which to use when.
I'll keep my answer short and let the experts lay it out better...
I think that KISS applies to everything you listed. You mention increasing abstraction and complexity, which, I think, balance each other.
The systems we are developing today must be complex, because, most of the time, the solution to a complex problem is inherently complex. However, to keep things simple, we use abstraction. Even if our complex system is built with, say, eight layers, we can follow KISS by keeping each layer simple.
For instance, to pick an item or two off your list:
SOA is not complex because we can wrap service calls in a wrapper object (a sketch follows below). This object handles the connection and makes calls, which are pretty easy to do because they simply pass parameters on.
MVC is not complex because we clearly separate our logic. We have a simple controller for directing requests and setting up data, a simple model to represent our domain, and a simple view that displays whatever data is passed to it.
However, in both of these cases, the pattern as a whole (or the system, if you will) is complex and non-trivial. It is the fact that we consider small, simple parts one at a time, and then fit them together, that lets us maintain our mental model as we work.
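As a rough illustration of the SOA wrapper idea above (hypothetical names; the sketch assumes Java 11's built-in HttpClient), the wrapper keeps each caller simple even though the system as a whole is not:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Callers see one simple method; connection details stay inside the wrapper.
class InventoryService {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl;

    InventoryService(String baseUrl) {
        this.baseUrl = baseUrl;   // e.g. "https://inventory.example.com"
    }

    String stockLevel(String sku) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/stock/" + sku))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```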
I agree with ChrisW's answer.
The idea is to stick with KISS and YAGNI as much as possible, but when the need arises and you need a sophisticated / complex solution, stand on the shoulders of giants and use proven patterns to guide you. These patterns and practices are meant to simplify your work, if using them is harder than the hack alternative, you should stick with the hack. Just make sure you take into account maintainability etc.
As an example, when you build the 1st version of a website it may consist of 1 or 2 main functionalities and just a few pages. You probably don't need MVC for this (even though it might be nice to start that way)
BUT, after you add a few more features and you have dozens of pages to manage along with how to share functionality between them, it might become apparent that you need MVC to better structure your application.
Similarly, if from the get go you know you will have to deal with something like returning multiple views of a common piece of data, MVC simplifies your problem by laying out a pattern for you to follow.
In summary, YAGNI now, but if you need it later then KISS by using a known pattern / solution.
Sigh.
We must have increasingly sophisticated and abstract components to match the demand for increasingly sophisticated software.
Most of us have limited brain space. We must learn to cope with our limited brains by using more sophisticated abstractions.
The alternative is not using abstractions, limiting ourselves to machine code.
Please read http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
"In spite of all its deficiencies,
mathematical reasoning presents an
outstanding model of how to grasp
extremely complicated structures with
a brain of limited capacity. And it
seems worthwhile to investigate to
what extent these proven methods can
be transplanted to the art of computer
usage. In the design of programming
languages one can let oneself be
guided primarily by considering "what
the machine can do". Considering,
however, that the programming language
is the bridge between the user and the
machine --that it can, in fact, be
regarded as his tool-- it seems just
as important to take into
consideration "what Man can think". It
is in this vein that we shall continue
our investigations."
I'm going to make a subjective answer (so sue me). I think that if you program by acronyms then you are going to run into trouble.
At the end of the day you are trying to make money for a business, or hopefully yourself. As such each decision you make is an engineering decision based on cost, time and benefits. You have to evaluate the use of a technique on the cost of implementation, maintenance etc, and make the best choice.
I think the only fair answer is that the tools and techniques chosen have to match with the desired goal of the engineering.
It's a matter of the right tool for the right job. The problem is when architects and/or developers begin to believe that a particular methodology or technology is a "golden hammer". That is when things become religious, and religion and reason do not play nicely together ;)
Oh and by the way, "agile" does not necessarily mean you don't use some of the acronyms you mention, or some framework that implements them. Those decisions are usually made far in advance of implementing the sorts of things that developers have come to associate with agile, e.g. user stories, sprints, etc.
First off, the list of acronyms doesn't necessarily make sense - there's not really much simpler than POCO, for example...
However, KISS and YAGNI are achieved most effectively, in many circumstances, by using concepts like IoC, MVC, and MVVM - provided you use the patterns correctly.
Patterns aren't complicated, in and of themselves. It may take a bit of learning to understand what the pattern is trying to accomplish, but often, a pattern exists purely to simplify either code, maintenance, or usability - and usually all of the above. This fits in perfectly with keep it simple, for example.
IMHO, you (generally) don't want to start out with a complicated design. Could this be a local method rather than a service? Do I need an IoC container yet? This is particularly relevant when it comes to design patterns.
However, as you test and refactor your code, certain patterns (such as IoC) will help you achieve goals such as testability and DRY (Don't Repeat Yourself). If you know design patterns well, you can apply them at the appropriate time.
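A tiny example of the IoC idea via constructor injection (hypothetical names): the class receives its collaborator instead of constructing it, so a test can pass in a fake while production code passes in the real thing.

```java
// The report generator depends on an abstraction, not on a concrete clock,
// so tests can inject a fixed time and production code can inject the real one.
interface Clock {
    long now();
}

class ReportGenerator {
    private final Clock clock;

    ReportGenerator(Clock clock) {
        this.clock = clock;
    }

    String header() {
        return "Report generated at " + clock.now();
    }
}

class ReportDemo {
    public static void main(String[] args) {
        ReportGenerator production = new ReportGenerator(System::currentTimeMillis);
        ReportGenerator underTest = new ReportGenerator(() -> 0L);   // deterministic in tests
        System.out.println(production.header());
        System.out.println(underTest.header());
    }
}
```

A full IoC container automates this wiring, but the benefit (testability, DRY) comes from the dependency direction itself.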
yes
-- this space intentionally left blank --
I tend to do a lot of projects on short deadlines and with lots of code that will never be used again, so there's always pressure/temptation to cut corners. One rule I always stick to is encapsulation/loose coupling, so I have lots of small classes rather than one giant God class. But what else should I never compromise on?
Update - thanks for the great responses. Lots of people have suggested unit testing, but I don't think that's really appropriate to the kind of UI coding I do. Usability / user acceptance testing seems much more important. To reiterate, I'm talking about the BARE MINIMUM of coding standards for impossible-deadline projects.
Not OOP, but a practice that helps in both the short and long run is DRY, Don't Repeat Yourself. Don't use copy/paste inheritance.
Not an OOP practice, but common sense ;-).
If you are in a hurry and have to write a hack, always add a comment with the reasons, so you can trace it back and produce a good solution later.
If you never have the time to come back, you at least have the comment, so you know why that solution was chosen at the time.
Use Source control.
No matter how long it takes to set up (seconds...), it will always make your life easier! (Still, it's not OOP-related.)
Naming. Under pressure you'll write horrible code that you won't have time to document or even comment. Naming variables, methods and classes as explicitly as possible takes almost no additional time and will make the mess readable when you must fix it. From an OOP point of view, using nouns for classes and verbs for methods naturally helps encapsulation and modularity.
Unit tests - helps you sleep at night :-)
This is rather obvious (I hope), but at the very least I always make sure my public interface is as correct as possible. The internals of a class can always be refactored later on.
No public classes with mutable public variables (struct-like).
Before you know it, you refer to this public variable all over your code, and the day you decide this field is a computed one and must have some logic in it... the refactoring gets messy.
If that day is before your release date, it gets messier.
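A sketch of the point above (hypothetical names): if callers go through a method from day one, turning a stored value into a computed one later doesn't touch them.

```java
// Day one: the total is a stored value, but it is already behind a method.
class OrderV1 {
    private double total;

    double getTotal() {
        return total;
    }

    void setTotal(double total) {
        this.total = total;
    }
}

// Later: the total becomes computed. Every caller of getTotal() keeps working;
// with a public field, each call site would have needed editing instead.
class OrderV2 {
    private double netAmount;
    private double taxRate;

    OrderV2(double netAmount, double taxRate) {
        this.netAmount = netAmount;
        this.taxRate = taxRate;
    }

    double getTotal() {
        return netAmount * (1.0 + taxRate);
    }
}
```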
Think about the people (may even be your future self) who have to read and understand the code at some point.
Application of the single responsibility principle. Effectively applying this principle generates a lot of positive externalities.
Like everyone else's, these are not so much OOP practices as general coding practices that apply to OOP.
Unit test, unit test, unit test. Defined unit tests have a habit of keeping people on task and not "wandering" aimlessly between objects.
Define and document all hierarchical information (namespaces, packages, folder structures, etc.) prior to writing production code. This helps to flesh out object relations and expose flaws in assumptions related to relationships of objects.
Define and document all applicable interfaces prior to writing production code. If done by a lead or an architect, this practice can additionally help keep more junior-level developers on task.
There are probably countless other "shoulds", but if I had to pick my top three, that would be the list.
Edit in response to comment:
This is precisely why you need to do these things up front. All of these sorts of practices make continued maintenance easier. As you assume more risk in the kickoff of a project, the more likely it is that you will spend more and more time maintaining the code. Granted, there is a larger upfront cost, but building on a solid foundation pays for itself. Is your obstacle lack of time (i.e. having to maintain other applications) or a decision from higher up? I have had to fight both of those fronts to be able to adopt these kinds of practices, and it isn't a pleasant situation to be in.
Of course everything should be Unit tested, well designed, commented, checked into source control and free of bugs. But life is not like that.
My personal ranking is this:
Use source control and actually write commit comments. This way you have a tiny bit of documentation should you ever wonder "what the heck did I think when I wrote this?"
Write clean code or document. Clean, well-written code should need little documentation, as its meaning can be grasped from reading it. Hacks are a lot different: write down why you did it, what you did, and what you'd like to do if you had the time/knowledge/motivation/... to do it right.
Unit test. Yes, it's down at number three. Not because it's unimportant, but because it's useless if you don't have the other two at least halfway complete. Writing unit tests is another level of documentation of what your code should be doing (among other things).
Refactor before you add something. This might sound like a typical "but we don't have time for it" point. But as with many of these points, it usually saves more time than it costs, at least if you have some experience with it.
I'm aware that much of this has already been mentioned, but since it's a rather subjective matter, I wanted to add my ranking.
[insert boilerplate not-OOP specific caveat here]
Separation of concerns, unit tests, and that feeling that if something is too complex it's probably not conceptualised quite right yet.
UML sketching: this has clarified things and saved untold wasted effort so many times. Pictures are great, aren't they? :)
Really thinking about is-a's and has-a's. Getting this right first time is so important.
No matter how fast a company wants it, I pretty much always try to write code to the best of my ability.
I don't find it takes any longer and usually saves a lot of time, even in the short-term.
I can't remember ever writing code and never looking at it again; I always make a few passes over it to test and debug it, and even in those few passes, practices like refactoring to keep my code DRY, documentation (to some degree), separation of concerns, and cohesion all seem to save time.
This includes creating many more small classes than most people (one concern per class, please) and often extracting initialization data into external files (or arrays) and writing little parsers for that data... sometimes even writing little GUIs instead of editing data by hand.
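As a rough illustration of the "data in a file plus a tiny parser" habit (the file format, class name, and field names here are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Reads lines like "name,price" from a plain text file instead of hard-coding the data.
class PriceListParser {

    record Item(String name, double price) {}

    List<Item> parse(Path file) throws IOException {
        List<Item> items = new ArrayList<>();
        for (String line : Files.readAllLines(file)) {
            if (line.isBlank() || line.startsWith("#")) {
                continue;   // skip comments and blank lines
            }
            String[] parts = line.split(",", 2);
            items.add(new Item(parts[0].trim(), Double.parseDouble(parts[1].trim())));
        }
        return items;
    }
}
```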
Coding itself is pretty quick and easy, debugging crap someone wrote when they were "Under pressure" is what takes all the time!
At almost a year into my current project I finally set up an automated build that pushes any new commits to the test server, and man, I wish I had done that on day one. The biggest mistake I made early-on was going dark. With every feature, enhancement, bug-fix etc, I had a bad case of the "just one mores" before I would let anyone see the product, and it literally spiraled into a six month cycle. If every reasonable change had been automatically pushed out it would have been harder for me to hide, and I would have been more on-track with regard to the stakeholders' involvement.
Go back to code you wrote a few days/weeks ago and spend 20 minutes reviewing your own code. With the passage of time, you will be able to determine whether your "off-the-cuff" code is organized well enough for future maintenance efforts. While you're in there, look for refactoring and renaming opportunities.
I sometimes find that the name I chose for a function at the outset doesn't perfectly fit the function in its final form. With refactoring tools, you can easily change the name early before it goes into widespread use.
Just like everybody else has suggested these recommendations aren't specific to OOP:
Ensure that you comment your code and use sensibly named variables. If you ever have to look back upon the quick and dirty code you've written, you should be able to understand it easily. A general rule that I follow is: if you deleted all of the code and only had the comments left, you should still be able to understand the program flow.
Hacks usually tend to be convoluted and un-intuitive, so some good commenting is essential.
I'd also recommend that if you usually have to work to tight deadlines, you build up a code library based on your most common tasks. This will allow you to "join the dots" rather than reinvent the wheel each time you have a project.
An actual OOP practice I always make time for is the Single Responsibility Principle, because it becomes so much harder to properly refactor the code later on when the project is "live".
By sticking to this principle I find that the code I write is easily re-used, replaced or rewritten if it fails to match the functional or non-functional requirements. When you end up with classes that have multiple responsibilities, some of them may fulfill the requirements, some may not, and the whole may be entirely unclear.
These kinds of classes are stressful to maintain because you are never sure what your "fix" will break.
For this special case (short deadlines and lots of code that will never be used again), I suggest you pay attention to embedding a scripting engine into your OOP code.
Learn to "refactor as-you-go". Mainly from an "extract method" standpoint. When you start to write a block of sequential code, take a few seconds to decide if this block could stand-alone as a reusable method and, if so, make that method immediately. I recommend it even for throw-away projects (especially if you can go back later and compile such methods into your personal toolbox API). It doesn't take long before you do it almost without thinking.
Hopefully you do this already and I'm preaching to the choir.
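A small before/after of the extract-method habit being described (the class and method names are hypothetical):

```java
import java.util.List;

class ReceiptPrinter {

    // Before: the formatting steps sit inline in the middle of a longer method.
    void printInline(List<String> items) {
        System.out.println("--- receipt ---");
        for (String item : items) {
            System.out.println(" * " + item);
        }
        System.out.println("---------------");
    }

    // After: each block is pulled out into a named method immediately,
    // while the intent is still fresh, and can be reused elsewhere later.
    void print(List<String> items) {
        printHeader();
        items.forEach(this::printLine);
        printFooter();
    }

    private void printHeader() { System.out.println("--- receipt ---"); }
    private void printLine(String item) { System.out.println(" * " + item); }
    private void printFooter() { System.out.println("---------------"); }
}
```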
When applying the Single Responsibility Principle and looking at a class's reasons to change, how do you determine whether a reason to change is too granular, or not granular enough?
I don't know that there's a good answer to this one other than "apply your judgement, based on your experience." Failing that, get help, which I guess is what you're doing here ;)
Seriously, though, if you find that you're creating a gazillion classes to do what seems like a simple job, then you're probably being too granular. If your classes all seem colossal, then you're probably being too coarse. Please pardon me if that's a statement of the obvious.
I think this is one of those fuzzy, no-hard-and-fast-rules cases that show us why we need human programmers. Just try something, seeking balance, and refactor if you find you're going too far in one direction or the other. And remember: if it's worth doing, it's worth doing badly.
I wouldn't be too worried about granularity initially. I would just go with separation of concerns at a broader level at first. The basic point is that we should avoid over-engineering here, but do just enough. I agree with Lucas that this first step will improve with experience.
As the requirements change, as I start to get the 'smells', and as my understanding of the problem improves, I refactor the design by factoring out the separate concerns as they become obvious. Basically, separation of concerns should also be evolutionary, as with the overall design.