How do you usually go about designing a module? Until now I've focused on how easy it is to use, how intuitive its API is, extensibility, performance and things like that.
But what seems fairly simple and straightforward to me might seem over-complicated to other users. Although it doesn't happen that often, it does happen to all of us sometimes (I hope).
Are there any questions you should ask yourself before designing a class hierarchy/API/whatever and proceeding with coding, other than the issues I already mentioned?
If you believe the question is better suited for a different section on SO please feel free to migrate it, but I'd still like an answer.
Cheers.
Your question is a very good one, and one that does have answers, but it is so broad that the full answer is basically the whole of one's programming experience.
There are general principles for making software, but I think that here, in this short answer, I can give you one concept that you can apply. Software is a representation of a domain (for example, banking software models the financial system, and radar software models the ideas and principles of radar detection). Software, therefore, is like a theory: it fits the current knowledge of your domain and allows inferences and extensions. If more knowledge becomes available, the theory should be extended, polished or made more general to accommodate the new knowledge, while still remaining valid for the previous knowledge.
Hence, all the concepts about theories apply:
Satisfy the requirements imposed by your knowledge within a unified framework that feels homogeneous and well integrated.
Be simple, but look for patterns that you could make more general, and spotlight those patterns for better integration.
Don't be too simple. If your software does not fit the requirements, your theory is too limited and must be extended.
Allow your software to accommodate new requirements; software is not cast in stone. It mutates and evolves, accommodating new requirements or shedding functionality that is no longer needed.
So, software should be minimalistic but not too much, beautiful but practical.
When it comes to putting these directions into practice, I suggest you allow time to learn your domain. You can't model something that you don't understand. Learn the basics, start from something simple, then progressively refine it. You will occasionally notice that some things "feel" like they are in the wrong place. Ask yourself questions such as:
"who is responsible to do this operation?"
"Is this dependency logical and needed for this object to work, or is it just a spurious one due to bad code organization?"
"Is this high-level or low-level functionality?"
"Am I repeating this ?"
"can I change this object/layer/subsystem internally without the code outside knowing ?"
"can I extend this in the future without ruining or invalidating the past ?"
"can I test and probe this functionality easily for correct behavior ?"
"Is it easy and intuitive to understand and use ?"
"can I recombine what I already have easily and without touching to implement new behavior?"
"Is this functionality isolated so that I can show it to the outside world without the bulk of the rest of the code I manipulate ?"
You should consider the SOLID principles.
And for responsibility assignment, apply the GRASP patterns.
Let's say I've made a list of concepts I'll use to draw my Domain Model. Furthermore, I have a couple of Use Cases from which I did a couple of System Sequence Diagrams.
When drawing the Domain Model, I never know where to start from:
Designing the model as I believe the system to be. That is, if I am modelling the human body, I start by adding the class concepts of Heart, Brain, Bowels, Stomach, Eyes, Head, etc.
Starting by designing what the Use Cases need to get done. That is, if I have a Use Case about making the human body swallow something, I'd first draw the class concepts for Mouth, Throat, Stomach, Bowels, etc.
Is the order in which I do things irrelevant? I'd say it's probably best to design from the Use Case concepts, as they are generally what you want to work with, rather than other kinds of concepts that, although they help describe the whole system well, often might not even be needed for the current project. Is there any other approach that I am not taking into consideration here? How do you usually approach this?
Thanks
Whether you use DDD or not, I would recommend starting by determining the ubiquitous language (UL) through interviews with the product owner(s). Establishing communication in a way that has you and the product owners speaking the same language not only aids communication; being able to discuss the project in common terms also tends to help the domain model define itself.
So, my answer is basically to discuss, listen, and learn. Software serves a need. Understanding the model from the viewpoint of the experts will lay the solid groundwork for the application.
I'd start by drawing a class diagram with all the relationships and implement only the classes that are necessary according to the requirements of your application.
You can use an anemic approach (attributes plus getters and setters) to keep things simple and avoid writing the business logic in the same step. With an anemic model, the logic goes into a corresponding Service class. That way you can consider Use Cases later on.
I know some people don't appreciate this way of doing things but it does help with maintenance and avoids some dependency issues.
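As a minimal sketch of that anemic approach (the Order and OrderService names here are invented purely for illustration, not taken from the question):

    // Anemic domain object: state plus getters and setters, no business logic.
    public class Order {
        private long id;
        private double total;

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        public double getTotal() { return total; }
        public void setTotal(double total) { this.total = total; }
    }

    // The business logic lives in a corresponding Service class.
    public class OrderService {
        public void applyDiscount(Order order, double percent) {
            order.setTotal(order.getTotal() * (1 - percent / 100.0));
        }
    }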
Answer to devoured elysium's question below:
In terms of analysis, starting with Use cases (What) and then proceeding to the class diagram (How) sounds like a good rule of thumb. Personally, I'd do the Sequence diagram (When and Who?) afterwards, as you'd need to know between which processes/objects messages need to be sent.
Beyond that, my take on things is that UML is simply a way to model a system/project and not a methodology by itself (unlike Merise, RAD, RUP, Scrum, etc.). There is nothing stopping someone starting off with any diagram as long as they have sufficient information to complete it. In fact, they should be done simultaneously, since each of the diagrams is a different perspective of the same system/project.
So, all in all it depends on how you go about the analysis. During my studies I was taught the rigid waterfall approach, where you do a complete analysis from start to finish before producing some code. However, things can be different in practice, as the imperative might be to produce a working application in the least time possible.
For example, I was introduced to the Scrum methodology recently for an exercise involving the creation of a web site where people can post their fictions. As there was a time constraint and a clear vision of what should be achieved, we started right away with a bare bones class diagram to represent the domain model. The Use cases were then deduced from a series of mock screens we'd produced.
From memory, the classes were Story, Chapter, User and Category. This last class was phased out in favour of a more flexible Tag class. As you'd imagine, the complete class diagram of the existing project would be much more complex due to applying domain driven design and the specificities of the Java programming language.
This approach could be viewed as sloppy. However, a web site like this could easily be made in a couple of weeks using an iterative process and still be well designed. The advantage an iterative process has over the waterfall approach is that you can continually adjust requirements as you go. Frequent requirements changing is a reality, as people will often change their minds and the possibility of producing a working application after each iteration allows one to stay on course so to speak.
Of course, when you're presenting a project to a client, a complete analysis with UML diagrams and some mock screens would be preferable so they have an idea of what you're offering. This is where the UML comes in. Once you've explained some of the visual conventions, an individual should be able to understand the diagrams.
To finish off, if you're in the situation where you're trying to determine what a client wants, it's probably a good idea to gradually build up a questionnaire you can bring with you. Interviewing a person is the only way you can determine what concepts/features are really needed for an application, and you should expect to go back in order to clarify certain aspects. Another tip would be to do some quick research on the web when you're confronted with a subject matter you're unfamiliar with.
In your example, this would be to go through the basics of anatomy. Among other things, this will help you decide what the model should contain and what granularity it should have (What group of organs should be considered? How precise does it need to be? Do only the organs need to be modeled, or should they be decomposed into their constituents such as tissues, cells, chemical composition, etc.?).
I think the place to start would be whatever feels logical and comfortable. It's probably best to start with the use cases, as they give you clear direction and goals, and help you avoid YAGNI situations. Given that you should be trying to develop a strong domain model, it shouldn't really matter, as the whole picture of the domain is important.
I would like to share my experience with this type of situation. I usually start by writing tests and code, and try to cover one end-to-end use case. This gives me a fair idea of the problem, and at the end I also have something working that I can showcase to my client. Most of the time subsequent stories build on top of previous ones, but it also happens that a subsequent story requires changes to the model I came up with earlier. This doesn't hurt me, because I already have good test coverage. In this way I come up with a model that fits the current problem, not a model that maps the real world.
You start with the business requirements, which may or may not be formalized. If they are formalized, you would use Use Case diagrams.
For example here are use case diagrams for an e-commerce app:
http://askuml.com/blog/e-commerce/
http://askuml.com/files/2010/07/e-commerce-use-case.jpg
http://askuml.com/files/2010/07/e-commerce-use-case2b.jpg
From these use cases, you can naturally deduce the business entities: product, category of product, shopping cart, ... That is, you can start to prepare class diagrams.
This is best practice in many methodologies but this is also just common sense and natural.
Short answer
Pick a use case and draw a collaboration diagram (and a class diagram) to identify the domain objects involved. Concentrate only on the objects that participate in accomplishing the use case goal. Write TDD test cases to set the expectations and gradually model your domain classes to meet them. TDD is very helpful for understanding the expected behaviors, and it helps you arrive at a cleaner domain model. You will see your domain evolve gradually along with the TDD expectations.
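For instance, a first expectation for a hypothetical shopping-cart use case might look like the sketch below (JUnit 4; ShoppingCart and Product are invented names, grown only as far as the test requires):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import java.util.ArrayList;
    import java.util.List;

    public class ShoppingCartTest {

        // Minimal domain classes, shaped just far enough to satisfy the expectation.
        static class Product {
            final String name;
            final double price;
            Product(String name, double price) { this.name = name; this.price = price; }
        }

        static class ShoppingCart {
            private final List<Product> items = new ArrayList<>();
            void add(Product p) { items.add(p); }
            double total() { return items.stream().mapToDouble(p -> p.price).sum(); }
        }

        // The expectation is written first; the classes above evolve to meet it.
        @Test
        public void totalIsSumOfItemPrices() {
            ShoppingCart cart = new ShoppingCart();
            cart.add(new Product("book", 10.0));
            cart.add(new Product("pen", 2.5));
            assertEquals(12.5, cart.total(), 0.001);
        }
    }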
Long answer
My personal experience with DDD was not easy. That was because we didn't have the necessary foundations in the first place. Our team had many weak points in different areas: requirements were not captured properly, and we only had a customer representative who was not really helpful (not involved). We didn't have a proper release plan, and the developers lacked grounding in object-oriented concepts, best practices and so on.
The major problem was that we spent so much time trying to understand the domain logic. We sketched many class diagrams and never got the domain model right, so we stopped doing that and looked at what had gone wrong. The problem was that we tried too hard to understand the domain logic, and instead of communicating we made assumptions about the requirements.
We decided to change our approach. We applied TDD: we started writing the expected behavior and coded the domain model to meet the TDD expectations. Sometimes we got stuck writing test cases because we didn't understand the domain, so we talked to the customer representative straight away and tried to get more input. We also changed our release strategy: we applied an agile methodology and released frequently so that we got real feedback from the end users. However, we needed to ensure the end users' expectations were set at the right level. We refactored based on the feedback, and in that way the domain model evolved gradually. Subsequently, we applied design patterns to improve reusability and maintainability.
My point here is that DDD alone cannot survive; we have to build the ecosystem that embraces the domain. Developers must have strong OOP concepts and must appreciate TDD and unit testing. I would say DDD sits on top of all the OOP techniques and practices.
It seems to me that Agile methodologies encourage us to keep things simple and lean, and not to add complexity and sophistication until it's needed. But the pace and volume of technological change encourage the use of increasingly abstract, complex and sophisticated tools and patterns, to solve problems that we may not have yet (and may never encounter), in complex ways with significant learning curves and significant investments of effort.
Are KISS and YAGNI at odds with the trends towards increasingly more sophisticated ...
A car has an accelerator and a brake, and a steering wheel that can turn left and/or right: it's up to the driver to decide which to use when.
I'll keep my answer short and let the experts lay it out better...
I think that KISS applies to everything you listed. You mention increasing abstraction and complexity, which, I think, balance each other.
The systems we are developing today must be complex, because, most of the time, the solution to a complex problem is inherently complex. However, to keep things simple, we use abstraction. Even if our complex system is built with, say, eight layers, we can follow KISS by keeping each layer simple.
For instance, to pick an item or two off your list:
SOA is not complex, because we can wrap service calls in a wrapper object. This object handles the connection and makes the calls, which are pretty easy to do because they simply pass parameters on (see the sketch below).
MVC is not complex because we clearly separate our logic. We have a simple controller for directing requests and setting up data, a simple model to represent our domain, and a simple view that displays whatever data is passed to it.
However, in both of these cases, the pattern as a whole (or the system, if you will) is complex and non-trivial. It is the fact that we consider small, simple parts one at a time, and then fit them together, that lets us maintain our mental model as we work.
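To make the first point concrete, here is a minimal sketch of such a wrapper (the service name, endpoint and methods are invented for illustration, and the remote call itself is stubbed out):

    // Hypothetical wrapper: callers pass parameters in and get a result back,
    // without knowing anything about the underlying connection or protocol.
    public class WeatherServiceClient {
        private final String endpoint;

        public WeatherServiceClient(String endpoint) {
            this.endpoint = endpoint;
        }

        public String getForecast(String city) {
            // Connection handling, serialization and error handling would live here,
            // keeping the calling code simple.
            return callRemoteService(endpoint + "/forecast?city=" + city);
        }

        private String callRemoteService(String url) {
            // Placeholder for the actual HTTP/SOAP call.
            return "sunny";
        }
    }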
I agree with ChrisW's answer.
The idea is to stick with KISS and YAGNI as much as possible, but when the need arises and you need a sophisticated/complex solution, stand on the shoulders of giants and use proven patterns to guide you. These patterns and practices are meant to simplify your work; if using them is harder than the hack alternative, you should stick with the hack. Just make sure you take maintainability etc. into account.
As an example, when you build the first version of a website it may consist of one or two main pieces of functionality and just a few pages. You probably don't need MVC for this (even though it might be nice to start that way).
BUT, after you add a few more features and you have dozens of pages to manage along with how to share functionality between them, it might become apparent that you need MVC to better structure your application.
Similarly, if from the get go you know you will have to deal with something like returning multiple views of a common piece of data, MVC simplifies your problem by laying out a pattern for you to follow.
In summary, YAGNI now, but if you need it later then KISS by using a known pattern / solution.
Sigh.
We must have increasingly sophisticated and abstract components to match the demand for increasingly sophisticated software.
Most of us have limited brain space. We must learn to cope with our limited brains by using more sophisticated abstractions.
The alternative is not using abstractions, limiting ourselves to machine code.
Please read http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
"In spite of all its deficiencies,
mathematical reasoning presents an
outstanding model of how to grasp
extremely complicated structures with
a brain of limited capacity. And it
seems worthwhile to investigate to
what extent these proven methods can
be transplanted to the art of computer
usage. In the design of programming
languages one can let oneself be
guided primarily by considering "what
the machine can do". Considering,
however, that the programming language
is the bridge between the user and the
machine --that it can, in fact, be
regarded as his tool-- it seems just
as important to take into
consideration "what Man can think". It
is in this vein that we shall continue
our investigations."
I'm going to make a subjective answer (so sue me). I think that if you program by acronyms then you are going to run into trouble.
At the end of the day you are trying to make money for a business, or hopefully yourself. As such each decision you make is an engineering decision based on cost, time and benefits. You have to evaluate the use of a technique on the cost of implementation, maintenance etc, and make the best choice.
I think the only fair answer is that the tools and techniques chosen have to match with the desired goal of the engineering.
It's a matter of the right tool for the right job. The problem is when architects and/or developers begin to believe that a particular methodology or technology is a "golden hammer." That is when things become religious, and religion and reason do not play nicely together ;)
Oh and by the way, "agile" does not necessarily mean you don't use some of the acronyms you mention, or some framework that implements them. Those decisions are usually made far in advance of implementing the sorts of things that developers have come to associate with agile, e.g. user stories, sprints, etc.
First off, the list of acronyms doesn't really necessarily make sense - there's not really much simpler than POCO, for example...
However, KISS and YAGNI are achieved most effectively, in many circumstances, by using concepts like IoC, MVC, and MVVM - provided you use the patterns correctly.
Patterns aren't complicated, in and of themselves. It may take a bit of learning to understand what the pattern is trying to accomplish, but often, a pattern exists purely to simplify either code, maintenance, or usability - and usually all of the above. This fits in perfectly with keep it simple, for example.
IMHO, you (generally) don't want to start out with a complicated design. Could this be a local method rather than a service? Do I need an IoC container yet? This is particularly relevant when it comes to design patterns.
However, as you test and refactor your code, certain patterns (such as IoC) will help you to achieve goals such as testability and DRY (Don't Repeat Yourself). If you know design patterns well, you can apply them at the appropriate time.
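As a small illustration of the IoC point (the interface and class names below are hypothetical), constructor injection keeps a class simple and testable:

    // The dependency is expressed as an interface...
    public interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    // ...and injected through the constructor, so a test can pass in a fake
    // implementation instead of a real gateway, and the class stays simple.
    public class CheckoutService {
        private final PaymentGateway gateway;

        public CheckoutService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        public boolean checkout(String account, double amount) {
            return gateway.charge(account, amount);
        }
    }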
yes
-- this space intentionally left blank --
I find that whenever I begin writing an app in Java/C#, things start off good, but over time, as the app becomes more complex, it just gets more and more complicated. I've become aware of the fact that I'm not very good at design and high level architecture. All my classes become fairly strongly coupled and the design isn't "elegant" at all. I'm fairly competent at "low level" programming. That is, I can get just about anything done within a function or a class, but my high level design is weak and I'd really like to improve it. Does anyone have pointers to techniques, books, etc. that would be helpful in making me a better software engineer?
I disagree about starting with a book on design patterns or refactoring.
In my opinion, for a solid OO design, you should first be familiar with the main OO design principles, then understand how your problem can be represented in those basic principles. Then you can start discovering opportunities for applying design patterns and refactoring techniques in order to achieve those fundamental principles.
I would start with this book:
Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
In this book, Robert Martin describes the fundamental principles that make a good OO design, all of them related to encapsulation, coupling and modularity:
The Open/Closed Principle
Liskov Substitution
Dependency Inversion
Granularity
Common Closure
Reuse
No Cyclic Dependency
Stability Of Dependency
Abstraction And Stability
After all, almost every design pattern and refactoring technique I have seen documented in GoF and Fowler is aimed at achieving one or several of these basic principles, depending on their relative priority for a given scenario.
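For example, a rough sketch of the Open/Closed and Dependency Inversion ideas from the list above (the shape classes are invented for illustration): high-level code depends on an abstraction, and new behaviour is added by extension rather than modification:

    // Abstraction that the high-level code depends on.
    public interface Shape {
        double area();
    }

    public class Circle implements Shape {
        private final double radius;
        public Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    public class Square implements Shape {
        private final double side;
        public Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    // Adding a new shape never requires modifying this class (open/closed),
    // and it depends only on the Shape abstraction (dependency inversion).
    public class AreaCalculator {
        public double totalArea(Iterable<Shape> shapes) {
            double total = 0;
            for (Shape s : shapes) total += s.area();
            return total;
        }
    }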
Books:
Code Complete, by Steve McConnell
Design Patterns, by Gamma et al.
I would start by sketching my design. That sketch could be a box and arrow diagram to show relationships between classes or it could be a variation on UML (or perhaps even standard UML). But I find that sketches help me see that a design is good/bad and maybe even how to fix it.
I would also look at a book on design patterns.
Write a large project and let it grow as big as you can. Then study what you can do to improve your code.
Perhaps single large routines can be clean and understandable too, if they are well-structured.
There's no single good answer on good design. It's actually one of those valuable things a programmer can learn.
You can refactor mercilessly to improve the design of existing code.
The main idea is that at some point the code did make sense; when new features are brought into the code, some features or responsibilities will probably have to be moved to other classes, and that's fine. Then you stop developing new features and start refactoring your code.
I would recommend you to read:
Refactoring by Martin Fowler
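A tiny example of the kind of change Fowler catalogues, here an Extract Method applied to invented code (Invoice and LineItem are hypothetical types, shown only to illustrate the before/after shape):

    // Before (sketch): one routine that both formats output and sums the items.
    //   public void printInvoice(Invoice invoice) {
    //       System.out.println("Invoice " + invoice.getId());
    //       double total = 0;
    //       for (LineItem item : invoice.getItems()) {
    //           total += item.getPrice() * item.getQuantity();
    //       }
    //       System.out.println("Total: " + total);
    //   }

    // After Extract Method: the calculation gets a name of its own,
    // and the behaviour stays exactly the same.
    public void printInvoice(Invoice invoice) {
        System.out.println("Invoice " + invoice.getId());
        System.out.println("Total: " + calculateTotal(invoice));
    }

    private double calculateTotal(Invoice invoice) {
        double total = 0;
        for (LineItem item : invoice.getItems()) {
            total += item.getPrice() * item.getQuantity();
        }
        return total;
    }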
Use object-oriented design principles (http://www.surfscranton.com/Architecture/ObjectOrientedDesignPrinciples.htm). Also consider some OO design heuristics (http://www.cs.colorado.edu/~kena/classes/6448/s02/lectures/lecture27.pdf).
Try making program outlines and diagrams before you start, and have someone else review and critique it. Then as the program grows, continually update the outlines and diagrams to include the new functionality. Get it reviewed and critiqued by someone else. Eventually, assuming you are learning from the critiques, you will become better at designing programs.
Books and tutorials can only get you so far. While you do need to learn the tools and methods available, knowledge on its own won't help you here. Practice is what will make you better at design, along with having a mentor coach you from time to time to show you how you can better apply some of the knowledge you've gained from the books.
Read the books by all means, but don't feel bad if you write code that ends up having stupidities in it. Everybody does. The question is, can you refactor what you have to fix it? To be able to do that effectively and often, you need to use TDD and write lots of unit tests.
I would highly recommend you try Test Driven Development (TDD). You will find that to make your code testable, and to avoid constantly reworking your tests, you will need a solid design. What you will find is that when you add/change/remove functionality, a better design requires changes to only a small, specific set of tests. A poor design will wipe out a huge set of tests, because you have tight coupling, objects responsible for multiple concerns, and so on.
I have found that the better I get at TDD, the better my architecture is, and the better the end result is.
Be advised, TDD takes real mental discipline. You should not expect that you use it for 1-2 days and see immediate results. You will need to really want to do it, and really make the effort - otherwise you won't benefit and likely just end up hating it.
HTH ...
There are a couple of things that you can do:
Use tools for high-level and low-level design before you actually start programming. E.g. creating class UML diagrams will help your mind visualize the solution in diagrammatic form rather than code form.
Familiarize yourself with Java design patterns. E.g. using inheritance polymorphically to begin with will warm you up to start using the standard Java and J2EE design patterns.
There are a tonne of books and websites pertaining to both the subjects I just pointed out here.
Browse through good API code. For instance Spring framework code.
Read some good books such as Design Patterns (like everyone else mentioned here) and some other books on good practices. For example in Java, Head First Design, Effective Java series, etc.
C++ - Effective C++ series
I would start with Head First Object-Oriented Analysis and Design, and once you've mastered it, Head First Design Patterns.
Obviously, reading some of the recommended books will help. I think Head First Design Patterns is definitely less abstract than the GoF book.
The primary question I ask is "Is this code doing something very specific that could be re-used anywhere else?" If so, put it in a class in an assembly that allows for re-use.
If you truly are just starting then one thing I used to do was to consider each database table an 'object'. So each database table represents a class. The purists will tell you this is a disaster, but I found it a good way to get myself started thinking in object terms.
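For instance (a hypothetical table, purely to illustrate the mapping), a Customer table with ID, NAME and EMAIL columns would become:

    // One class per database table; one field per column.
    public class Customer {
        private long id;       // CUSTOMER.ID
        private String name;   // CUSTOMER.NAME
        private String email;  // CUSTOMER.EMAIL

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }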
Read Head First Design Patterns.
I recently read an interesting comment on an OOP related question in which one user objected to creating a "Manager" class:
Please remove the word manager from your vocabulary when talking about class names. The name of the class should be descriptive of its purpose. Manager is just another word for dumping ground. Any functionality will fit there. The word has been the cause of many extremely bad designs.
This comment embodies my struggle to become a good object-oriented developer. I have been doing procedural code for a long time at an organization with only procedural coders. It seems like the main strategy behind the relatively little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the left over/generalized bits in a "Manager" class.
How can I break my procedural habits (like the Manager class)? Most OO articles/books, etc. use examples of problems that are inherently easy to transform into object groups (e.g., Vehicle -> Car) and thus do not provide much guidance for breaking down more complex systems.
First of all, I'd stop acting like procedural code is wrong. It's the right tool for some jobs. OO is also the right tool for some jobs. So is functional. Each paradigm is just a different point of view of computation, and exists because it's convenient for certain problems, not because it's the only right way to program. In principle, all three paradigms are mathematically equivalent, so use whichever one best maps to the problem domain. IMHO, if using a multiparadigm language it's even ok to blend paradigms within a module if different subproblems are best modeled by different worldviews.
Secondly, I'd read up on design patterns. It's hard to understand OO without some examples of the real-world problems it's good for solving. Head First Design Patterns is a good read, as it answers a lot of the "why" of OO.
Becoming good at OO takes years of practice and study of good OO code, ideally with a mentor. Remember that OO is just one means to an end. That being said, here are some general guidelines that work for me:
Favor composition over inheritance. Read and re-read the first chapter of the GoF book.
Obey the Law of Demeter ("tell, don't ask"); see the sketch after this list.
Try to use inheritance only to achieve polymorphism. When you extend one class from another, do so with the idea that you'll be invoking the behavior of that class through a reference to the base class. ALL the public methods of the base class should make sense for the subclass.
Don't get hung up on modeling. Build a working prototype to inform your design.
Embrace refactoring. Read the first few chapters of Fowler's book.
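A small sketch of the "tell, don't ask" point from the list above (the Account class is invented): instead of asking an object for its state and deciding on the outside, tell the object what to do and let it guard its own invariants:

    // Asking (procedural style): the caller pulls state out and decides.
    //   if (account.getBalance() >= amount) {
    //       account.setBalance(account.getBalance() - amount);
    //   }

    // Telling (object-oriented style): the object protects its own state.
    public class Account {
        private double balance;

        public void withdraw(double amount) {
            if (amount > balance) {
                throw new IllegalArgumentException("Insufficient funds");
            }
            balance -= amount;
        }
    }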
The single responsibility principle helps me break objects into manageable classes that make sense.
Each object should do one thing, and do it well without exposing how it works internally to other objects that need to use it.
A 'manager' class will often:
Interrogate something's state
Make a decision based on that state
As an antidote or contrast to that, object-oriented design would encourage you to design class APIs where you "tell, don't ask" the class itself to do things (and to encapsulate its own state); there are several articles online that explain "tell, don't ask" in more depth.
It seems like the main strategy behind the little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the left over/generalized bits in a "Manager" class.
That may well be true even at the best of times. Coplien talked about this towards the end of his Advanced C++: Programming Styles and Idioms book: he said that in a system, you tend to have:
Self-contained objects
And, "transactions", which act on other objects
Take, for example, an airplane (and I'm sorry for giving you another vehicular example; I'm paraphrasing him):
The 'objects' might include the ailerons, the rudder, and the thrust
The 'manager' or autopilot would implement various commands or transactions
For example, the "turn right" transaction includes:
flaps.right.up()
flaps.left.down()
rudder.right()
thrust.increase()
So I think it's true that you have transactions, which cut across or use the various relatively-passive 'objects'; in an application, for example, the "whatever" user-command will end up being implemented by (and therefore, invoking) various objects from every layer (e.g. the UI, the middle layer, and the DB layer).
So I think it's true that to a certain extent you will have 'bits left over'; it's a matter of degree though: perhaps you ought to want as much of the code as possible to be self-contained, and encapsulating, and everything ... and the bits left over, which use (or depend on) everything else, should be given/using an API which hides as much as possible and which does as much as possible, and which therefore takes as much responsibility (implementation details) as possible away from the so-called manager.
Unfortunately I've only read of this concept in that one book (Advanced C++) and can't link you to something online for a clearer explanation than this paraphrase of mine.
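As a rough Java sketch of the same idea (all names invented, paraphrasing the example above): the autopilot 'transaction' only sequences the steps, while each part encapsulates how it actually moves:

    // Each control surface manages its own state and movement.
    class Rudder {
        void right() { /* move the rudder to the right */ }
    }

    class Flap {
        void up()   { /* raise this flap */ }
        void down() { /* lower this flap */ }
    }

    class Thrust {
        void increase() { /* increase engine thrust */ }
    }

    // The 'manager'/autopilot only coordinates the transaction; the details
    // of how each part moves stay hidden inside the parts themselves.
    class Autopilot {
        private final Flap leftFlap = new Flap();
        private final Flap rightFlap = new Flap();
        private final Rudder rudder = new Rudder();
        private final Thrust thrust = new Thrust();

        void turnRight() {
            rightFlap.up();
            leftFlap.down();
            rudder.right();
            thrust.increase();
        }
    }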
Reading and then practicing OO principles is what works for me. Head First Object-Oriented Analysis & Design walks you through examples to build a solution that is OO, and then shows ways to make the solution better.
You can learn good object-oriented design principles by studying design patterns. Code Complete 2 is a good book to read on the topic. Naturally, the best way to ingrain good programming principles into your mind is to practice them constantly by applying them to your own coding projects.
How can I break my procedural habits (like the Manager class)?
Make a class for what the manager is managing (for example, if you have a ConnectionManager class, make a class for a Connection). Move everything into that class.
The reason "manager" is a poor name in OOP is that one of the core ideas in OOP is that objects should manage themselves.
Don't be afraid to make small classes. Coming from a procedural background, you may think it isn't worth the effort to make a class unless it's a thousand lines of code and is some core concept in your domain. Think smaller. A ten line class is totally valid. Make little classes where you see they make sense (a Date, a MailingAddress) and then work your way up by composing classes out of those.
As you start to partition little pieces of your codebase into classes, the remaining procedural code soup will shrink. In that shrinking pool, you'll start to see other things that can be classes. Continue until the pool is empty.
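A minimal sketch of that ConnectionManager example (the class and method names are invented): rather than a manager opening and closing connections from the outside, the Connection manages its own lifecycle:

    // Instead of ConnectionManager.open(connection) / ConnectionManager.close(connection),
    // the Connection owns its own lifecycle.
    public class Connection implements AutoCloseable {
        private boolean open;

        public void open() {
            // acquire the underlying resource here
            open = true;
        }

        public boolean isOpen() {
            return open;
        }

        @Override
        public void close() {
            // release the underlying resource here
            open = false;
        }
    }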
How many OOP programmers does it take to change a light bulb?
None, the light bulb changes itself.
;)
You can play around with an OO language that has very bad procedural support like Smalltalk. The message sending paradigm will force you into OO thinking.
I think you should start with a good plan. Planning using class diagrams would be a good start.
You should identify the entities needed in the application, then define each entity's attributes and methods. If there are repeated ones, you can then re-define your entities in a way that allows inheritance, to avoid redundancy.
:D.
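For instance (with invented entities), two entities that share attributes can be re-defined under a common parent to remove the redundancy:

    // Shared attributes and methods live in the parent entity...
    public abstract class Person {
        protected String name;
        protected String email;

        public String getName()  { return name; }
        public String getEmail() { return email; }
    }

    // ...so the sub-entities only add what is specific to them.
    public class Student extends Person {
        private String studentId;
    }

    public class Teacher extends Person {
        private String department;
    }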
I have a three step process, this is one that I have gone through successfully myself. Later I met an ex-teacher turned programmer (now very experienced) who explained to me exactly why this method worked so well, there's some psychology involved but it's essentially all about maintaining control and confidence as you learn. Here it is:
Learn what test driven development (TDD) is. You can comfortably do this with procedural code so you don't need to start working with objects yet if you don't want to. The second step depends on this.
Pick up a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler. It's essentially a catalogue of little changes that you can make to existing code. You can't refactor properly without tests, though. What this allows you to do is to mess with the code without worrying that everything will break. Tests and refactoring take away the paranoia and the sense that you don't know what will happen, which is incredibly liberating. You're left to basically play around. As you get more confident with that, start exploring mocks for testing the interactions between objects.
Now comes the bit that most people mistakenly start with. It's good stuff, but it should really come third. At this point you should be reading about design patterns, code smells (that's a good one to Google) and object-oriented design principles. Also learn about user stories or use cases, as these give you good initial candidate classes when writing new applications, which is a good solution to the "where do I start?" problem when writing apps.
And that's it! Proven goodness! Let me know how it goes.
My eureka moment for understanding object-oriented design was when I read Eric Evans' book "Domain-Driven Design: Tackling Complexity in the Heart of Software". Or the "Domain Driven Design Quickly" mini-book (which is available online as a free PDF) if you are cheap or impatient. :)
Any time you have a "Manager" class or any static singleton instances, you are probably building a procedural design.