I have been programming in object-oriented languages for years now, but secretly I look at some of the things my colleagues do with envy. A lot of them seem to have some inner OO instinct that I don't have - no matter how hard I try. I've read all the good books on OO but still can't seem to crack it. I feel like the guy who gave 110% to be a professional footballer but just didn't have the natural talent to make it. I'm at a loss and thinking of switching careers - what should I do?
I would say focus less on the OO programming and focus more on the OO design. Grab a paper and a pencil (or maybe a UML modelling tool), and get away from the screen.
By practicing how to design a system, you'll start to get a natural feel for object relationships. Code is just a by-product of design. Draw diagrams and model your application in a purely non-code form. What are the relationships? How do your models interact? Don't even think about the code.
Once you've spent time designing... then translate it to code. You'll be surprised at just how quickly the code can be written from a good OO design.
After a lot of design practice, you'll start seeing common areas that can be modularized or abstracted out, and you'll see an improvement in both your designs and your code.
The easiest way is to learn concepts such as SOLID, DRY, FIT, DDD, TDD, MVC, etc. As you look up these acronyms it will lead you down many other rabbit holes and once you are done with your reading you should have a good understanding of what better object-oriented programming is!
SOLID podcasts: http://www.hanselminutes.com/default.aspx?showID=168, http://www.hanselminutes.com/default.aspx?showID=163
SOLID breakdown: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
DRY: http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
FIT: http://www.netwellness.org/question.cfm/38221.htm
DDD: http://dddcommunity.org/
DDD required reading: http://www.infoq.com/minibooks/domain-driven-design-quickly
TDD: http://en.wikipedia.org/wiki/Test-driven_development
MVC: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
And yes, rolling up your sleeves and coding is always a good idea. Make a small project to the best of your current abilities. Then read an article from above. Then refactor your code to meet the needs of what you just read. Repeat until you have refactored the hell out of your code. At the end you should not only know what OO is all about, you should be able to explain why it is important and how to get there the first time. Learning how to refactor is a key to good code too. What is right now is not necessarily right tomorrow.
Too many people think of coding first, objects last.
You can read all the books you want but that's not going to teach you how to think in an object-oriented fashion--that takes practice and a certain methodology.
Here are a few methods that have helped me:

When you're away from work and open-minded, you can practice by looking at everything as an object. Don't look at these objects and wonder how you're going to program them; look at them as properties and functions only, and at how they relate to or inherit from each other. For example, when you see a person, they are an object and therefore would represent a class. They have properties like hair color, skin tone, height, etc. They perform certain functions as well: they walk, talk, sleep, etc. Some of the functions these people perform return results. For example, their working function returns a dollar amount. You can do this with everything you see, because everything is an object: a bicycle, a car, a star, etc.

Before coding a project, design it using post-it notes and a dry-erase board. This makes for good practice until you get the hang of it. Think of your specific objects, functions and properties. Each of those items gets its own post-it note. Place them as a hierarchy on the dry-erase board; functions and properties go under their object. If you have another object, do the same for that one. Then ask yourself whether any of these post-it notes (objects/functions/properties) relate to each other. If two objects use the same function, create a parent object (post-it note), place it above the others with the reusable function under the new note, and draw a line with the dry-erase marker from the two child objects to the parent.

When all this is done, then worry about the internals of how the class works.
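To make the person example concrete, here is a minimal C# sketch; the property and method names (and the fixed dollar amount) are hypothetical illustrations, not anything prescribed above:

public class Person
{
    // Properties: things a person has.
    public string HairColor { get; set; }
    public string SkinTone { get; set; }
    public double HeightInMeters { get; set; }

    // Functions: things a person does.
    public void Walk() { /* ... */ }
    public void Talk() { /* ... */ }
    public void Sleep() { /* ... */ }

    // Some functions return results, e.g. working returns a dollar amount.
    public decimal Work()
    {
        return 100.00m; // placeholder figure
    }
}

Two post-it notes that turn out to share the same function would get a common parent class holding that function, exactly as described above.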
My suggestion would be to learn something different.
Learn functional programming, and apply what you learn from that to OOP. If you know C++, play around with generic programming.
Learn non-object-oriented languages.
Not just because you should use all these things as well (you should), or because they should completely replace OOP (they probably shouldn't), but because you can apply lessons from these to OOP as well.
The secret to OOP is that it doesn't always make sense to use it. Not everything is a class. Not every relationship or piece of behavior should be modeled as a class.
Blindly trying to apply OOP, or striving to write the best OOP code possible tends to lead to huge overengineered messes with far too many levels of abstraction and indirection and very little flexibility.
Don't try to write good OOP code. Try to write good code. And use OOP when it contributes to that goal.
In many fields there's a "eureka" moment where everything kind of comes together.
I remember feeling frustrated in high school geometry. I didn't know which theorem to apply on each step of the proof. But I kept at it. I learned each theorem in detail, and studied how they were applied in different example proofs. As I understood not only the definition of each theorem, but how to use it, I built up a "toolbox" of familiar techniques that I could pull out as needed.
I think it's the same in programming. That's why algorithms, data structures, and design patterns are studied and analyzed. It's not enough to read a book and get the abstract definition of a technique. You have to see it in action too.
So try reading more code, in addition to practicing writing it yourself. That's one beauty of open source, you can download lots of code to study. Not all of that code is good, but studying bad code can be just as educational as studying good code.
Learn a different language! Most developers using only Java (just as an example) have only a limited understanding of OO because they cannot separate language features and concepts. If you don't know it yet, have a look at python. If you know python, learn Ruby. Or choose one of the functional languages.
The answer is in your question ;)
Practice, practice, practice.
Review your own code and learn from the mistakes.
TDD has helped me most in improving my overall skillset including OOP.
The more code you write, the more you will notice the pitfalls of certain programming practices. After enough time, and enough code, you will be able to identify the warning signs of these pitfalls and be able to avoid them. Sometimes when I write code, I will get this itch in the back of my mind telling me that there may be a better way to do this, even though it does what I need it to. One of my greatest programming weaknesses is "over-analyzing" things so much that it starts to dramatically slow down development time. I am trying to prevent these "itches" by spending a little more time on design, which usually results in a lot less time writing code.
...secretly I look at some of the things my colleagues do with envy. A lot of them seem to have some inner OO instinct that I don't have - no matter how hard I try...
I think you have answered your own question here. Reading good code is a good start, and understanding good code is even better, but understanding the steps to get to that good code is the best. When you see some code that you are envious of, perhaps you could ask the author how he/she arrived at that solution. This is entirely dependent on your work environment as well as the relationships with your colleagues. In any event, if anyone asks me the thought process behind any code I write, I don't hesitate to tell them because I know I would want them to do the same for me.
Language designers have interpreted "Object Oriented Programming" in different ways. For instance, see how Alan Kay, the man who first used the term OOP, defined it:
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.
(Quoted from http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en).
It might seem strange that he doesn't consider Java and C++ OOP languages! But as the designer of one of the first and best OOP languages (Smalltalk), he has his own valid reasons for that. Why did Alan Kay consider Lisp an object-oriented language but not Java? That question demands serious consideration by anyone who claims to understand OOP.
Erlang has an altogether different implementation of OOP, and Scheme has another.
It is worth considering all these alternative views. If possible learn all these languages! That will give you a broader outlook, put some new and powerful tools in your hand and make you a better programmer.
I have summarized my experiments with implementing an OOP language, based on ideas borrowed from Smalltalk, Scheme and Erlang in this article.
public void MasteryOfOOP()
{
    while (true)
    {
        // My suggestion is:
        // Find a lot of well-written object-oriented code and read it.
        // Then try to use the insights from it in your own coding. Then do it again.
        // Then have a colleague who is a good OO programmer look at it and comment.
        // Maybe post a chunk of your code on SO and ask how it could be improved.
        // Then read some more of those books. Maybe they make a little more sense now...?
        // Now go back to the top of this loop, and do it again.
        // Repeat forever.
    }
}
If you're lost as to how to design object-oriented systems, start with the data. Figure out what stuff you need to keep track of and what information naturally goes together (for example, all of the specs of a model of car group together nicely).
Each of these kinds of thing you decide to track becomes a class.
Then when you need to be able to execute particular actions (for example, marking a model of car as decommissioned) or ask particular questions (for example, asking how many of a given model of car were sold in a given year), you load that functionality onto the class it interacts with most heavily.
In general, there should always be a pretty natural place for a given bit of code to live in your class structure. If there isn't, that signals that there's a place where the structure needs to be built out.
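As a hedged sketch of that data-first approach (the class and member names below are made up for illustration, not taken from any real system):

using System;
using System.Collections.Generic;
using System.Linq;

public class CarModel
{
    // The information that naturally groups together becomes the class's data.
    public string Name { get; set; }
    public int Year { get; set; }
    public bool IsDecommissioned { get; private set; }

    private readonly List<DateTime> saleDates = new List<DateTime>();

    // An action lands on the class it interacts with most heavily.
    public void Decommission()
    {
        IsDecommissioned = true;
    }

    public void RecordSale(DateTime soldOn)
    {
        saleDates.Add(soldOn);
    }

    // A question about the data has a natural home here as well.
    public int SalesInYear(int year)
    {
        return saleDates.Count(d => d.Year == year);
    }
}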
There's too much information about objects. The most important thing is to master the basics and everything falls into place more easily.
Here's a way to think about objects. Think about data structures in procedural languages. They are a group of fields without behaviour. Think about functions that receive pointers to those data structures and manipulate them. Now, instead of keeping them separate, define the functions inside the definition of the structures and assume each function receives a pointer to the data structure it manipulates. That pointer is called this. In sum, think about objects as the combination of state (data) and behaviour (methods - the fancy name for functions in OOP).
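A small sketch of that idea in C# (the Account name and members are invented for illustration): the procedural version passes the data structure explicitly, while the OO version defines the function inside the structure and receives the instance implicitly as this.

// Procedural style: a bare data structure plus a free function that manipulates it.
struct AccountData
{
    public decimal Balance;
}

static class AccountFunctions
{
    public static void Deposit(ref AccountData account, decimal amount)
    {
        account.Balance += amount;
    }
}

// OO style: the same data and behaviour bundled together.
// Inside Deposit, "this" is the implicit reference to the instance being manipulated.
class Account
{
    private decimal balance;

    public void Deposit(decimal amount)
    {
        this.balance += amount;
    }
}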
This is the absolute basic. There are three more concepts you must absolutely master:
Inheritance - This is all about code reuse.
Encapsulation - This is all about hiding the implementation from the interface. Simply put, everything ought to be private until proven otherwise.
Polymorphism - What matters is not the type of the reference variable but the type of the actual instance when deciding which behaviour (method) is called. Java doesn't make this concept very visible because, by definition, everything is polymorphic. .NET makes it easier to understand, as you decide what is polymorphic and what is not, and so you notice the difference in behaviour. This is achieved by the combination of virtual and override (sketched below).
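Here is a minimal sketch of that virtual/override combination in C# (the Shape and Circle names are only illustrative):

using System;

class Shape
{
    public virtual double Area()
    {
        return 0.0;
    }
}

class Circle : Shape
{
    private readonly double radius;

    public Circle(double radius)
    {
        this.radius = radius;
    }

    public override double Area()
    {
        return Math.PI * radius * radius;
    }
}

class Demo
{
    static void Main()
    {
        Shape s = new Circle(2.0);     // the reference variable's type is Shape
        Console.WriteLine(s.Area());   // but the Circle override runs: about 12.57
    }
}

The reference is declared as Shape, yet the behaviour comes from the actual instance, which is the whole point of the third concept.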
If these concepts are very well understood, you'll be fine.
One last final tip: You mention the best books. Have you read "Thinking in Java" by Bruce Eckel? I recommend this book even to people who are beginning in .Net, as the OOP concepts are clearly laid out.
Become more agile, learn JUnit testing and study Domain Driven Design. I suggest the book Domain-Driven Design: Tackling Complexity in the Heart of Software, although it's a bit tough at some points.
OOP skills come over time. Reading 1, 2 ... 10 books doesn't cut it. Practice writing some code. If you are working in a programming environment, that can be helpful. If not, try getting into one. Offer to develop some application(s) for free. You have to get your hands dirty. Remember: no application is perfect from the ground up. That's why there is refactoring.
Also, don't get carried away with the OOP too much; it comes over time. Worry about developing fully functional applications.
Try some programming in Self, one of the most pure OO languages around. So pure, in fact, that it doesn't even have classes, only objects. It also doesn't have variables, fields, statics, attributes, only methods. Also interesting is the fact that every object in the system is also an object on the screen and vice-versa.
Some of the interesting papers on Self are Prototype-Based Application Construction Using SELF 4.0 (the Self tutorial), Self: The Power of Simplicity and Organizing Programs Without Classes. Also, Self: The Video (Randall B. Smith; Dave Ungar) is terrific, having two of the language's designers explain Self's ideas.
This works for pretty much any concept, actually, at least for me: find the language which most purely embodies the concept you want to learn about and just use it.
OO finally clicked for me after I tried to program a bank-like program that handled transactions, calculated interest, and kept track of it all. I did it while I was learning Java. I would suggest just trying it, completing it, and then when you're done go look at a GOOD solution and see what you could've done better.
I also think OOP skills strengthen mostly with practice. Consider changing your company if you've been there for more than 3 years. Certainly, this is not valid for all jobs, but often a man gets used to the projects and practices at a company and stops advancing as time passes.
Roll up your sleeves and code!
You said the answer yourself: practice. Best solution for this is to develop a game. Use the concepts you learnt in the books there.
Have you read the chapter on OO from the first edition of Scott Meyers' "Effective C++" book? It didn't make it into later editions, but it was a great explanation. The title was basically "say what you mean, mean what you say", about suitable conventions.
Actually, you might like to see my answer to a similar question over here.
HTH
cheers,
OOP is not a thing you can master by reading thousands of books. Rather you have to feel the inner concepts. Read anything but try to feel what you read. Build a concept in the back of your mind and try to match those concepts when you face a new scenario. Verify and Update your concepts as you explore new things.
Good luck!
Plan things out. Ask yourself how you want your objects to relate to each other, and seek out how things can be changed and modularized.
Code things in such a way that if you wanted to change 1 piece of the code, you only have to change that 1 piece of code and not 50 instances of it.
Beer helps. Seriously. Lie out on a couch with an A3-sized scribble pad, a pen and a beer. Lock the dog, cat and wife outside. And think about the problem while relaxed. Don't even dare draw an API on it!
Flowcharts, responsibility cards (CRC) and beer (but not too much) go a long way.
Easiest way to refactor code is to not have to in the first place.
http://misko.hevery.com/code-reviewers-guide/
Those small simple rules will make you a better OO programmer. Follow the rules religiously as you code and you will find your code is better than it would otherwise be.
You'll also want to learn the SOLID principles: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
As much as these principles and ways of programming cause debate, they are the only way to truly write excellent code.
You may already write code this way and not know it-- if so, great. But if you need a goal to strive towards, these are the gold standard.
Give up! Why do you need OOP at all? Just write some usable app. It doesn't matter whether you use an OOP, procedural or functional approach.
Whatever approach you choose, the Python language should be suitable for practicing it.
You're my target audience. Look at Building Skills in OO Design
Perhaps this can help.
The question seems pretty simple, and so is the answer. I am a developer who recently started working. So far I have taken a few bachelor- and master-level courses on OOP, and yet I am not comfortable and confident with OOP concepts. Recently, I was searching for employment opportunities and I found that many employers were keen to know how confident I am with OOP concepts.
I have very strong theoretical knowledge of OOP concepts. Although this theoretical knowledge is helping me clear interviews and get a job, when it comes to implementation I get stuck. If you ask me what reflection is, you will get a perfect answer from me, but if someone asks me why and where we use it, I fumble.
Now I really want to know what I should do when I am not getting an opportunity to implement all or most of the OO concepts in my projects.
Also, I really feel that with all the latest development tools and programming environments, many programmers are getting pampered into using already-built components, frameworks and libraries, and this might create a vacuum of good architects.
I want to become a successful architect and for that I think I must be very strong in this area.
Then I thought of learning NHibernate where you will be dealing with objects entirely.
Now what I need is a few valuable tips that would help me in grasping all or most of the OOP concepts.
It sounds like you're missing real programming experience. Nothing will substitute for that.
Go working, exercise, read, learn from your more experienced colleagues. Eventually you'll get it.
As for very advanced tools, you are correct. They produce code monkeys in ever increasing amounts. If you see it right now you are on a good start. Just keep to the path. Good architects will always be needed and valued.
You want to start looking at design patterns. Knowing the when, how and why of using OOP is more valuable than knowing OOP itself.
Frameworks are great and I don't fault people for using them. But, there is still a lot of room for great architects in this space. Exploit the gap of programmers knowing how to use them, but not why or when. Frameworks quickly become a hammer looking for a nail for many developers. Open source is your friend here - dive into the source code and learn them from the inside out so you really understand what's being done and why.
In my experience, you learn the "conceptual" side of development from school and the "applications" side from real experience. There is no substitute for working on the job; no matter how much schooling I've had it never equates to what I've learned doing the real work. This is why it's also a good idea to get an internship in college if you're able.
As for the value of OOP itself, I find that it's most useful in large projects and in team projects. The whole point is to break down the solution into workable "conceptual" elements which makes intercommunication between team members easier as well as visualizing the solution. Visualization is the other big pro to OOP.
One thing to note about OOP IMHO is that entry level developers tend to overuse a lot of the OOP concepts. Not everything requires inheritance. Design patterns are extremely useful but also shouldn't be over applied. Look at your problem and first try to think of a solution on your own then compare it to known patterns and see if they provide a better answer. Simplicity can't be overrated.
Also, playing with tools like UML editors and mind mappers (such as XMind) is helpful in getting into the right frame of mind.
Check and see if there are any programming groups around you too; I find it's a good way to meet people that you can talk programming with and another advantage of OOP is its much easier to communicate programming ideas with.
Your next stop should be to look into design patterns (Applied OO). For an introductory text, check out Headfirst Design Patterns.
Interesting question. To some extent I've grown up with object programming; I've evolved as the various frameworks have evolved. I'd never before considered how it would feel to come to a landscape where so many sophisticated frameworks already exist. Their very presence tends to inhibit the degree of fumbling and stumbling and generally getting it wrong that leads to deeper understanding.
My perception, though, is that serious development is still a matter of good design; it's not all just fill-in-the-gaps, "hey IoC framework, tell me what to do" programming.
You can enhance your theoretical knowledge by studying the "how" of the frameworks you use. But I guess what you need is practical experience. I can't comment on what's open to you in your place of work, but if you can't get it there you may need to do some "hobbyist" or open source development.
One thing I would recommend is trying to get involved in design discussions, try to get your designs reviewed by experienced developers. With any luck they may even say: "hey why didn't you use reflection there ..."
It is the 'thinking' that is important; in OOP one needs to change one's thinking while going about programming and developing in an OOP environment or using the OOP paradigm.

I have faced this question myself many times: why use an Object Oriented Programming Language when I can develop software in a procedural programming language? Why use the OOP methodology at all? What benefit does it have that non-OOP approaches don't?

I read from many sources (numerous books and articles on the subject) to trace the real reason, to find the fundamental underlying idea or principle for its existence as a paradigm of software engineering. I think what I found is simple, and that's why I suggest bringing about a change in thinking.

If we look around we see things that surround us and the things we interact with, directly or indirectly. We recognize them by the names we gave them. Whatever the things are, whether they exist on the real plane or the conceptual plane, we 'know' them, 'recognize' them and interact with them. And importantly, we 'name' them. This naming is important, because to interact with the 'things', and to hold knowledge of that interaction, we need 'names'.

What have you eaten today? Chocolate and coffee. So you have 'interacted' with chocolate and coffee. Now chocolate and coffee are edibles we (humans) have given names, and by those names we recognize them. And in our knowledge of our interaction with them - let's say keeping a record of our interaction with chocolate and coffee - we know them by name as things we have interacted with.

Interaction is a general term I am using here. Actually, in our example, you have performed an 'action': eating. Through the 'eating' action you have interacted with chocolate and coffee. Now think of it this way: you, chocolate and coffee are entities in the real world that came into interaction through an action. You may say a 'process'.

What course has Alice enrolled in? Computer Science. Computer Science does not have a real existence in the world in the sense that a man exists, or a tree, a house, a coffee cup, or other 'tangible things' exist. It is a subject, a 'conceptual thing'. The study of computer science has some 'topics' to be studied (or to have interaction with through our mental faculties/processes), e.g. discrete mathematics, design and analysis of algorithms, data structures, etc. Together they are named, as a subject of study, 'Computer Science'. Now Alice 'studies' (interaction) Computer Science. What is happening here? OK, if we now think this way, we can say that Alice is a thing, an object. Computer Science is a thing, an object.

Coffee is an object. Chocolate is an object. You, again, are an object. We find that objects interact with objects. Fantastic, one may exclaim, that's the real world scenario! Actually it is a generalization reached through abstraction. It is nothing but - at the surface level at least - naming with meaning. Or you could say 'meaningful naming'. It is a process. It is so natural and obvious to us that we simply overlook it.

In OOP we simply have to bring ourselves to this form of thinking process, knowing and reminding ourselves that "objects interact with objects". Oh, there is more to it than only this. You have to remember that an object may interact with itself! Think of yourself: what are you doing when you are thinking? Yeah! And there is another very important thing I shall come to in due time, though I think it is obvious. But in due time. OK. What do we really do with computers? Actually, we solve problems - particularly those problems which we try to do or solve in our minds. In a broad sense we are simulating mental processes in a machine designed by us. Remember, AI is still a far-off thing in reality, and there are debates, both scientific and philosophical, on whether a computer can become intelligent at all. Another way of putting it is whether a computer can really simulate a real mental process. But that's not for us to take up here. Leave it!

If we want to solve problems in real life through a computing device, we would like to get as close to representing real life as possible. That's where the OOP term 'real life modelling' comes from. It can be seen that in solving real life problems, be it launching a space shuttle or keeping customer and product sale information for processing, we do abstraction and do calculation, which is another form of abstract process; in turn we deal with objects mentally, in our minds. So we represent real-life objects (and conceptual objects such as numbers) in the abstract and deal with them through abstract processes, as in mathematics. In a computer, too, we would like to represent objects, and also to represent processes in the form of objects. So here comes object orientation, so to speak, to software engineering. Now comes that 'due time' to deal with another aspect of OO.

To go back to our example: what did you eat? Eating is an action, a form of interaction, which can be thought of as a process, which again can be thought of as an object - just as a process is thought of and represented as a 'function', 'routine' or 'procedure' in a non-OOPL. In OOP we can represent (abstract away) eating as a process embodied in an object. Similarly, studying is an object. In the same line of thinking, 'things' and 'processes' can both be thought of as objects and be represented on the virtual plane, which is computer memory. Therefore Alice (an object) studies (an object) Computer Science (an object) is valid in OOP parlance, as far as our argument goes.

Can we write a piece of code here? Let's try.
class Alice {
    private String name;
    private String address;
    private String stdID;
    private Course courseOfStudy;
    // ... other code ...

    public void studies(Course sub) {
        courseOfStudy = sub;
    }

    // ...

    public Course getStudyCourse() {
        return courseOfStudy;
    }
}

class Course {
    // code...
}
This is how one can go about writing code in OOP (here Java). I have just given a simplistic piece of coding; one can come up with a better coding and design approach depending on the software to be written. In OOP, design is very important, so the change in thinking which I mentioned at the beginning should be brought in. That's important! I prefer to go this way when it comes to OOPL or OOAD: "everything is an object".

Well, that's what I wanted to say. You may or may not like it, but comment and speak your mind.
I have been asked to begin teaching C# and OO concepts to a group of procedural programmers. I've searched for ideas on where to begin, but am looking for general consensus on topics to lead with in addition to topics to initially avoid.
Edit
I intend to present information in 30 minute installments weekly until it no longer makes sense to meet. These presentations are targeted at coworkers at a variety of skill levels from novice to expert.
The best thing you can do is: Have a ton of Q&A.
Wikipedia's procedural programming (PP) article really hits where you should start:
Whereas procedural programming uses procedures to operate on data structures, object-oriented programming bundles the two together, so an "object" operates on its "own" data structure.
Once this is understood, I think a lot will fall into place.
In general
OOP is one of those things that can take time to "get," and each person takes their own path to get there. When writing in C#, it's not like the code screams, "I am using OO principles!" in every line. It's more of a subtle thing, like a foreach loop, or string concatenation.
Design center
Always use something (repeatedly) before making it.
First, use an object, and demonstrate the basic differences from PP. Like:
static void Main(string[] args)
{
    List<int> myList = new List<int>();
    myList.Add(1);
    myList.Add(7);
    myList.Add(5);

    myList.Sort();

    for (int i = 0; i < myList.Count; i++)
    {
        Console.WriteLine(myList[i]);
    }
}
Using objects (and other OO things) first -- before being forced to create their own -- leads people down the path of, "Ok, I'm making something like what I just used," rather than "WTF am I typing?"
Inheritance (it's a trap!)
I would NOT spend a lot of time on inheritance. I think it is a common pitfall for lessons to make a big deal about this (usually making a cliché animal hierarchy, as others pointed out). I think it's critical to know about inheritance, to understand how to use the .NET Framework, but its nuances aren't that big of a deal.
When I'm using .NET, I'm more likely to "run into inheritance" when I'm using the .NET Framework (i.e. "Does this control have a Content property?" or "I'll just call its ToString() method.") rather than when I'm creating my own class. Very (very (very)) rarely do I feel the need to make something mimicking the taxonomy structure of the animal kingdom.
Interfaces
Coding to an interface is a key mid-level concept. It's used everywhere, and OOP makes it easier. Examples of this are limitless. Building off the example I have above, one could demonstrate the IComparer<int> interface:
// Note: this assumes the containing class implements IComparer<int>,
// e.g. class Program : IComparer<int>
public int Compare(int x, int y)
{
    return y.CompareTo(x);
}
Then, use it to change the sort order of the list, via myList.Sort(this). (After talking about this, of course.)
Best practices
Since there are some experienced developers in the group, one strategy in the mid-level classes would be to show how various best practices work in C#. Like, information hiding, the observer pattern, etc.
Have a ton of Q&A
Again, everyone learns slightly differently. I think the best thing you can do is have a ton of Q&A and encourage others in the group to have a discussion. People generally learn more when they're involved, and you have a good situation where that should be easier.
The leap from procedural to object oriented (even within a language - for four months I programmed procedural C++, and classes were uncomfortable for a while after) can be eased if you emphasize the more basic concepts that people don't emphasize.
For instance, when I first learned OOP, none of the books emphasized that each object has its own set of data members. I was trying to write classes for input validation and the like, not understanding that classes were to operate on data members, not input.
Get started with data structures right away. They make the OOP paradigm seem useful. People teach you how to make a "House" class, but since most beginning programmers want to do something useful right away, that seems like a useless detour.
Avoid polymorphism right away. Inheritance is alright, but teach when it is appropriate (instead of just adding to your base class).
Operator overloading is not essential when you are first learning, and the special ctors (default, dtor, copy ctor, and assignment operator) all have their tricky aspects, so you might want to avoid them until students are grounded in basic class design.
Have them build a Stack or a Linked List. Don't do anything where traversal is tricky, like a binary tree.
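As an example of the kind of exercise that works well here, a bare-bones stack might look like the following sketch (fixed capacity, minimal error handling; the names are arbitrary):

using System;

class IntStack
{
    private readonly int[] items;
    private int count;

    public IntStack(int capacity)
    {
        items = new int[capacity];
    }

    public void Push(int value)
    {
        if (count == items.Length)
            throw new InvalidOperationException("Stack is full");
        items[count++] = value;
    }

    public int Pop()
    {
        if (count == 0)
            throw new InvalidOperationException("Stack is empty");
        return items[--count];
    }
}

It is small enough to finish in one sitting, but it still forces decisions about what is public, what is private, and what the object is responsible for.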
Do it in stages.
High level concepts : Describe what an object is and relate it to real life.
Medium level concepts: Now that they've got what an object is, try compare and contrast. Show them why a global variable is bad compared to an encapsulated value in a class, and what advantage they might get from encapsulating (a small sketch of this contrast follows after this list). Start introducing the tenets of OOP (encapsulation, inheritance).
Low Level concepts: Go in further into polymorphism and abstraction. Show them how they can gain even better design through polymorphism and abstraction.
Advance concepts: SOLID, Interface programming, OO design patterns.
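One small illustration of that medium-level contrast (the names are made up): with a global, nothing enforces the rules about the value; encapsulated in a class, the rules live next to the data.

// A global anyone can set to anything, from anywhere.
static class Globals
{
    public static int OrderCount;
}

// Encapsulated: the invariant (the count only ever increments) is enforced by the class.
class OrderCounter
{
    private int count;

    public int Count
    {
        get { return count; }
    }

    public void RecordOrder()
    {
        count++;
    }
}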
Perhaps you should consider a problem that is work related and start with a procedural implementation of it and then work through (session by session) how to make an OOP implementation of it. I find professionals often grasp concepts better if it is directly related to real examples from their own work place. The junk examples most textbooks use are often horrible for understanding because they leave the student wondering, why on earth would I ever want to do that. Give them a real life reason why they would want to do that and it makes more sense.
I would avoid the "a bicycle is a kind of vehicle" approach and try to apply OO to an environment that is fairly specific and that they are already used to. Try to find a domain of problems that they all recognize.
Exercise the basics in that domain, but try to move towards some "wow!" or "aha!" experience relatively early; I had an experience like that while reading about "Replace Conditional with Polymorphism" in Fowler's Refactoring. That book or similar books could be a good source of ideas. If I recall correctly, Michael Feathers' Working Effectively with Legacy Code contains a chapter about how to transform a procedural program into OO.
Teach Refactoring
Teach the basics, the bare minimum of OO principles, then teach Refactoring hands-on.
Traditional Way: Abstractions > Jargon Cloud > Trivial Implementation > Practical Use
(Can you spot the disconnect here? One of these transitions is harder than the others.)
In my experience most traditional education does not do a good job in getting programmers to actually grok OO principles. Instead they learn a bit of the syntax, some jargon they have a vague understanding of, and a couple canonical design examples that serve as templates for a lot of what they do. This is light years from the sort of thorough understanding of OO design and engineering one would desire competent students to obtain. The result tends to be that code gets broken down into large chunks in what might best be described as object-libraries, and the code is nominally attached to objects and classes but is very, very far from optimal. It's exceedingly common, for example, to see several hundred line methods, which is not very OO at all.
Provide Contrast To Sharpen The Focus on the Value of OO
Teach students by giving them the tools up front to improve the OO design of existing code, through refactoring. Take a big swath of procedural code, use extract method a bunch of times using meaningful method names, determine groups of methods that share a commonality and port them off to their own class. Replace switch/cases with polymorphism. Etc. The advantages of this are many. It gives students experience in reading and working with existing code, a key skill. It gives a more thorough understanding of the details and advantages of OO design. It's difficult to appreciate the merits of a particular OO design pattern in vacuo, but comparing it to a more procedural style or a clumsier OO design puts those merits in sharp contrast.
Build Knowledge Through Mental Models and Expressive Terminology
The language and terminology of refactoring help students in understanding OO design, how to judge the quality of OO designs and implementations through the idea of code smells. It also provides students a framework with which to discuss OO concepts with their peers. Without the models and terminology of, say, an automobile transmission, mechanics would have a difficult time communicating with each other and understanding automobiles. The same applies to OO design and software engineering. Refactoring provides abundant terminology and mental models (design patterns, code smells and corresponding favored specific refactorings, etc.) for the components and techniques of software engineering.
Build an Ethic of Craftsmanship
By teaching students that design is not set in stone you bolster students' confidence in their ability to experiment, learn, and discover. By getting their hands dirty they'll feel more empowered in tackling software engineering problems. This confidence and practical skill will allow them to truly own the design of their work (because they will always have the skills and experience to change that design, if they desire). This ownership will hopefully help foster a sense of responsibility, pride, and craftsmanship.
First, pick a language like C# or Java and have plenty of samples to demonstrate. Always show them the big picture or the big idea before getting into the finer details of OO concepts like abstraction or encapsulation. Be prepared to answer a lot of why questions with sufficient real world examples.
I'm kinda surprised there's any pure procedural programmers left ;-)
But, as someone who started coding back in the early 80s on procedural languages such as COBOL, C and FORTRAN, I remember the thing I had most difficulty with was instantiation. The concept of an object itself wasn't that hard, as basically they are 'structures with attached methods' (looked at from a procedural perspective), but handling how and when I instantiated an object - and, in those days without garbage collection, destroyed it - caused me some trouble.
I think this arises because, in some sense, a procedural programmer can generally point to any variable in his code and say that's where that item of data is directly stored, whereas as soon as you instantiate an object and assign values to it, things become much less directly tangible (using pointers and memory allocation in C is of course similar, which may be a useful starting point if your students have C experience). In essence, I suppose it means that your procedural-to-OOP programmer has to learn to handle another level of abstraction in their code, and getting comfortable with this mental step is harder than it appears. By extension, I'd therefore make sure that your students are completely comfortable with allocating and handling objects before looking at potentially confusing concepts such as static methods.
I'd recommend taking a look at Head First Design Patterns which has really nice and easy to understand examples of object oriented design which should really help. I wouldn't emphasize the 'patterns' aspect too much at this point though.
I'm a VB.NET intermediate programmer, and I'm learning OOP. One of the things I find is that the lecturing about the concepts over and over is unnerving. I think the perfect documentation would be a gradual transition from procedural programming to full-blown OOP, rather than trying to force readers to understand the concepts and then have them write exclusively OOP code using all the concepts. That way they can tinker with little projects like "hello world" without the intimidation of design.
For example (this is for VB.NET beginners not advanced procedural programmers).
I think the first chapters should always be about the general concepts, with just a few examples, but you should not force them to code strictly OOP right away, get them used to the language, so that it's natural for them. When I first started, I had to go back and read the manual over and over to remember HOW to write the code, but I had to wade through pages and pages of lecturing about concepts. Painful!
I just need to remember how to create a ReadOnly Property, or something. What would be real handy would be a section of the book that is a language reference so you can easily look in there to find out HOW to write the code.
Then briefly explain how forms and all the other objects you already use are objects that have methods, show how they behave, and give example code.
Then show them how to create a class, and have them create a class that has properties, methods, and the New constructor. Then have them basically switch from using procedural code in the form or modules to writing methods for classes.
Then you just introduce more advanced code as you would for any programming language.
Show them how inheritance works, etc. Just keep expanding, and let them use their creativity to discover what can be done.
After they get used to writing and using classes, show how their classes could improve, introducing the concepts one by one in the code, modifying the existing projects and making them better. One good idea is to take an example project in procedural code and transform it into a better application in OOP, showing them all the limitations of the procedural version.
Now after that is the advanced part, where you get into some really advanced OOP concepts, so that folks who are already familiar with OOP get some value out of the book.
Define an object first, not using some silly animal, shape, vehicle example, but with something they already know. The C stdio library and the FILE structure. It's used as an opaque data structure with defined functions. Map that from a procedural use to an OO usage and go from there to encapsulation, polymorphism, etc.
If they are good procedural programmers and know what a structure and a pointer to a function are, the hardest part of the job is already done!
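A hedged sketch of that mapping, written with C# stand-ins (in real C the procedural side would be fopen/fgets/fclose operating on an opaque FILE*; the names here are purely illustrative):

// Procedural flavour: free functions that operate on an opaque handle passed in.
class FileHandle { /* opaque to the caller */ }

static class FileOps
{
    public static FileHandle Open(string path) { /* ... */ return new FileHandle(); }
    public static string ReadLine(FileHandle handle) { /* ... */ return ""; }
    public static void Close(FileHandle handle) { /* ... */ }
}

// OO flavour: the same operations become methods on the object that owns the state.
class TextFile
{
    public TextFile(string path) { /* ... */ }
    public string ReadLine() { /* ... */ return ""; }
    public void Close() { /* ... */ }
}

The data structure and the functions already written around it simply move inside the same declaration; that is most of the conceptual leap.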
I think a low level lecture about how Object Oriented Programming can be implemented in procedural languages, or even assembler, could be cool. Then they will appreciate the amount of work that the compiler does for them; and maybe they will find coding patterns that they already knew and have used previously.
Then, you can talk about best practices in good Object Oriented design and introduce a bit of UML.
And a very important thing to keep in mind always is that they're not freshmen, don't spend much time with basic things because they'll get bored.
Show Design Patterns in Examples
There were already plenty of good answers, alright. I also think that you should use good languages and good, skillful examples, but I have an additional suggestion:
I learned what OOP means by studying Design Patterns. Of course, I had learned an OO language before, but until I was working on Design Patterns I did not understand the power of it all.
I also learned a lot from OO gurus like Robert C. Martin and his really great papers (to be found on his company's site).
Edit: I also advocate the use of UML (class diagrams) for teaching OO/Design-Pattern.
The thing that made it click for me was the introduction of refactoring and unit testing. Most of my professional programming career has been in OO languages, but I spent most of it writing procedural code. You call a function on an instance of class X, and it calls a different method on an instance of class Y. I didn't see what the big deal about interfaces was, thought that inheritance was simply a concept of convenience, and saw classes by and large as a way of helping us sort and categorize the massive code. If one were masochistic enough, they could easily have gone through some of my old projects and inlined everything until you got to one massive class. I'm still acutely embarrassed at how bad my code was, how naive my architecture was.
It half-clicked when we went through Martin Fowler's Refactoring book, and then fully clicked when we started going through and writing unit and FitNesse tests for our code, forcing us to refactor. Start pushing refactoring, dependency injection, and separation of the code into distinct MVC models. Either it will sink in, or their heads will explode.
If someone truly doesn't get it, maybe they aren't cut out for working on OO, but I don't think anyone from our team got completely lost, so hopefully you'll have the same luck.
I'm an OO developer professionally, but have had procedural developers on my development team (they were developing Matlab code, so it worked). One of the concepts that I like in OO programming is how objects can relate to your domain (http://en.wikipedia.org/wiki/Domain-driven_design - Eric Evans wrote a book on this, but it is not a beginner's book by any stretch).
With that said, I would start with showing OO concepts at a high level. Try to have them design a car for example. Most people would say a car has a body, engine, wheels, etc. Explain how those can relate to real world objects.
Once they seem to grasp that high level concept, then I would start in on the actual code part of it and concepts like inheritance vs aggregation, polymorphism, etc.
I learned about OOP during my post-secondary education. They did a fairly good job of explaining the concepts, but completely failed in explaining why and when. The way they taught OOP was that absolutely everything had to be an object and procedural programming was evil for some reason. The examples they were giving us seemed like overkill to me, partly because objects didn't seem like the right solution to every problem, and partly because it seemed like a lot of unnecessary overhead. It made me despise OOP.
In the years since then, I've grown to like OOP in situations where it makes sense to me. The best example I can think of this is the most recent web app I wrote. Initially it ran off a single database of its own, but during development I decided to have it hook into another database to import information about new users so that I could have the application set them up automatically (enter employee ID, retrieves name and department). Each database had a collection of functions that retrieved data, and they depended on a database connection. Also, I wanted an obvious distinction which database a function belonged to. To me, it made sense to create an object for each database. The constructors did the preliminary work of setting up the connections.
Within each object, things are pretty much procedural. For example, each class has a function called getEmployeeName() which returns a string. At this point I don't see a need to create an Employee object and retrieve the name as a property. An object might make more sense if I needed to retrieve several pieces of data about an employee, but for the small amount of stuff I needed it didn't seem worth it.
Cost. Explain how, when properly used, the features of the language should allow software to be written and maintained for a lower cost (e.g. Java's Foo.getBar() instead of the foo->bar so often seen in C/C++ code). Otherwise, why are we doing it?
I found the book Concepts, Techniques, and Models of Computer Programming to be very helpful in understanding and giving me a vocabulary to discuss the differences in language paradigms. The book doesn't really cover Java or C# as OO languages, but rather the concepts of different paradigms. If I were teaching OO, I would start by showing the differences in the paradigms, then slowly the differences in the OO languages; the practical stuff they can pick up by themselves doing coursework/projects.
When I moved from procedural to object oriented, the first thing I did was get familiarized with static scope.
Java is a good language to start doing OO in because it attempts to stay true to all the different OO paradigms.
A procedural programmer will look for things like program entry and exit points and once they can conceptualize that static scope on a throwaway class is the most familiar thing to them, the knowledge will blossom out from there.
I remember the lightbulb moment quite vividly. Help them understand the key terms abstract, instance, static, methods and you're probably going to give them the tools to learn better moving forward.
How do I explain loose coupling and information hiding to a new programmer? I have a programmer who I write designs for, but who can't seem to grasp the concepts of loose coupling and information hiding.
I write designs with everything nicely broken down into classes by function (data access is separate, a class for requests, a controller, about 5 classes total). They come back with a modified design where half the classes inherit from the other half (and there is no "is-a" relationship), and with many public variables.
How can I get across the idea that keeping things separate makes it easier to maintain?
Ask him if it's a good idea to let you borrow $10 by giving his wallet to you for a moment and taking the money yourself.
The problem is your expectations, not the developer's lack of skill. You talk about loose coupling and information hiding as if these are simple facts or mechanical techniques - they are not. Software development is a craft, and the only way to get better at a craft is to practice it and slowly and incrementally improve.
You are looking for a shortcut. You want the developer to experience an "aha!" moment and suddenly see the wisdom in your design. I say, don't hold your breath.
Adopt the mindset of a mentor. If you want him to improve his design skills, don't "hand" him a design; let him design it! Then review the design with him. This will give him experience, a deeper sense of ownership and more willingness to listen to your suggestions before he is knee deep in implementation.
An aside - I notice that people look for these shortcuts all the time with abstract skills but not with more "physical" skills. Take tennis, for example. If you were a tennis coach and a new player kept hitting his forehands long, you wouldn't just show him a YouTube video of a Roger Federer forehand and expect him to "get it". A great forehand takes YEARS of experience as you learn the feeling and use it in different scenarios - it's not your muscles learning, it's your brain. It is no different with software design. You get good at it by doing it over and over again. You slowly learn from your mistakes and get better at appreciating the consequences of each individual design decision.
The best way to explain these kinds of concepts is to use analogies. Pick something non-programming related and use that to explain the abstract design concepts.
The following image is pretty good at explaining the need for loose coupling:
Try to come up with stuff like this that will amuse and pertain to your new programmer.
Theory will only get you so far.
Make him try to add new features to code he wrote a while ago. Then rework the old code with him so it's loosely coupled, and ask him to add the same features.
He will certainly understand the advantages of writing good code.
There's nothing like a physical analogy. Walk out to your car and point out how everything complicated, hot and dangerous is pretty well isolated from the fragile human. Sit in the driver's seat and point out some of the important gauges; for example, the coolant temperature, tachometer and speedometer. Note how the gauges are remarkably similar: they all take a scalar value (from somewhere) and represent it by moving a needle to a position between min and max.
However, if you think about what's being measured, the strong motivation to maintain that isolation (aka loose coupling or information hiding) becomes a lot more obvious.
"How would you like to measure the coolant temperature? By looking at a gauge or by sticking your finger into near-boiling liquid?"
"How would you like to measure the engine rotational speed? By looking at a gauge or by letting a multi-thousand RPM crankshaft rip the flesh from your bones as you try to estimate it by hand?"
"How would you like to measure the car's speed? By looking at a gauge or by dragging your foot on the ground as you're roaring down the highway?"
From there, you can build on the concepts of "your coolant temperature gauge is-a gauge. It isn't-a boiling liquid" and so forth to more complicated concepts.
Loose coupling: The parts of a watch may be replaced by others without breaking the whole watch. For instance, you can remove one hand and it will still work.
Information hiding: The clock hands don't know that behind them there's machinery.
Additional concept
High cohesion: All the elements in the watch "module" are strongly related. In this specific scenario, a battery would belong to another module or namespace.
Show him this presentation. Though it's mainly about DI, it's downright brilliant and up to the point.
I would try sitting down with him and working through a couple of pieces of code, with him looking over your shoulder while you explain why you are doing what you are doing as you go along. I've found this is normally the best way to explain things.
Ask him to make a change you know will be hard because of his design, and show him how it would happen using yours.
If he complains, tell him the truth: the business will ask for more bizarre changes; it's only a matter of time before he sees that.
Just don't talk to him. That should teach him about information hiding. ;-)
I like a credit card for an example.
You have a credit card.
A credit card represents your credit history. It has a balance and an APR. It has permissions and an entire financial state. It has an ID, an expiration date, and a security code. The magnetic strip summarizes all of this.
When you go to your local credit-card-accepting establishment, they don't need to know that. They don't want to know that, and it is often very dangerous if they do know that. What they need to "know" is that there is a magnetic strip which will take care of all of this, and (sometimes) that the person holding the card has ID to match the name printed on the card.
Giving them more information is either (in the best case) useless, or (in the worst case) dangerous. Requiring them to know which bank to check with, making sure they know how to calculate your particular APR, or allowing them to check your balance is simply silly at that point.
If he's misinterpreting your designs, perhaps a couple of pair-programming sessions will be enough to get them on track. I do have to agree with @ThomasD and will expand upon it -- the encapsulation going on here could be you. It could be a simple case of misinterpretation instead of them not understanding the concepts.
I think that OO concepts really need to be learnt practically. One really needs to do these things to understand. I go to school (engineering) and most of my peers don't really get the concept. They know in general that loose coupling is 'good' but not why. They also don't know how to achieve loose coupling. I am working on my final year project now and I got through to them by making them part of the design process. (It helped that they really did understand how and why and had an inkling of its importance)
Given your situation, here is what I would suggest:
1. Make them follow your design exactly (at least for a couple of weeks). If they want to deviate, have them discuss what and why with you. [Time constraints may not permit this].
2. Sit with them on whichever part of the design you are doing next and explain some of your design choices to them, with examples. Some things that are obvious to you may need to be pointed out to them.
3. Be on the lookout for examples of both good and bad design, and show them why the good one works better.
The most important task here is delegation. You have to show them what good code looks like, maybe train them for a couple of hours. Then agree on when to review and how you can help them (within your limited free time?) do the task well. The main thing is to get them to identify with and understand good design. Doing these things will help them 'buy in' to the design. Once they feel it is their design, I am sure they will do better work.
Overall, I think you need to put your foot down and get them to code it right, without stifling their creativity.
I don't really have too much experience in the area. I am just giving my opinion on the subject, based on what worked for me. I hope this helps.
Note
I'd like to add that OO concepts can be learnt from books, while their application can be learnt only by practice. I have added this note in response to a comment by Christopher W. Allen-Poole.
Well if you have to explain it to them then I'm forced to ask: are they really a programmer?
Tell them they need a "college do over"?
That's a hard one because it's such a basic concept. Personally I wouldn't want to handle it because it's like someone is getting paid to learn stuff they should already know but life isn't always ideal.
I'd approach it by finding a problem that's simple enough to be solved relatively simply. Public variables are best handled by making him trace the source of a bug when he can't see what's changing the variables; you can design a scenario for that.
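For example, a tiny Java sketch of such a scenario (the names are made up): with a public field, anything anywhere in the program can change the balance and there is nowhere obvious to look; with a private field behind a method, there is exactly one place to put a breakpoint, a log line, or a validation check.

    // Hypothetical debugging scenario: who changed the balance?
    class LeakyAccount {
        public double balance;    // any code, anywhere, can write to this
    }

    class GuardedAccount {
        private double balance;   // only this class can touch it

        void deposit(double amount) {
            // Single choke point: log here, break here, validate here.
            if (amount < 0) throw new IllegalArgumentException("negative deposit");
            balance += amount;
        }

        double balance() { return balance; }
    }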
The over-inheritance may not be their fault. They may have learnt it in a course designed in the 90s that's trapped in the idea that "everything must inherit". I remember the old examples of Manager extends Employee. It's just BAD. The thing is, people still get taught this nonsense.
For C++, Scott Meyers' Effective C++ series is probably worth pointing them to (assuming they can be bothered to read something). For Java, Josh Bloch's Effective Java ("favor composition") is along the same lines. Not sure about C#.
These sorts of books give a better approach to inheritance vs composition and also give some good examples of why inheritance is (as Josh Bloch puts it) an "implementation detail". I've also heard it described as "inheritance breaks encapsulation".
I saw a good example once of inheritance vs composition, extending the capabilities of a List in Java: inheritance required you to know implementation details of the parent to get it right, whereas composition didn't. I'll see if I can find it.
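A from-memory sketch in the same spirit (it's essentially the counting-collection example from Effective Java, using a Set rather than a List, so treat it as an approximation): extending HashSet forces you to know that addAll happens to call add internally, while the composed version works no matter how the wrapped set is implemented.

    import java.util.*;

    // Inheritance: correct only if you know HashSet.addAll() calls add() internally.
    class InstrumentedHashSet<E> extends HashSet<E> {
        private int addCount = 0;

        @Override public boolean add(E e) {
            addCount++;
            return super.add(e);
        }

        @Override public boolean addAll(Collection<? extends E> c) {
            addCount += c.size();        // addAll() also calls add(), so this double counts
            return super.addAll(c);
        }

        int getAddCount() { return addCount; }   // adding three elements via addAll reports 6
    }

    // Composition: forward to a wrapped set; no knowledge of its internals required.
    class InstrumentedSet<E> {
        private final Set<E> inner = new HashSet<>();
        private int addCount = 0;

        boolean add(E e) { addCount++; return inner.add(e); }

        boolean addAll(Collection<? extends E> c) {
            boolean changed = false;
            for (E e : c) changed |= add(e);     // counts correctly, whatever set we wrap
            return changed;
        }

        int getAddCount() { return addCount; }
    }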
If you do Unit Testing, explain it in terms of test-writing. Alternatively, Abstract Classes and Interfaces both use loose coupling and information hiding to great effect. If you explain it to him in terms of other concepts he may already have a handle on, he'll be more likely to appreciate the concept quickly.
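For instance, a small hedged Java sketch of the test-writing angle (the names are hypothetical): because the report depends only on a narrow interface, a test can substitute a cheap in-memory stub for the real database.

    import java.util.List;

    // The report depends on this small interface, not on any concrete database.
    interface OrderSource {
        List<Double> amountsFor(String customerId);
    }

    class RevenueReport {
        private final OrderSource orders;

        RevenueReport(OrderSource orders) { this.orders = orders; }

        double totalFor(String customerId) {
            return orders.amountsFor(customerId)
                         .stream().mapToDouble(Double::doubleValue).sum();
        }
    }

    // Loose coupling lets the test plug in a fake instead of a real database.
    class RevenueReportTest {
        public static void main(String[] args) {
            OrderSource stub = customerId -> List.of(10.0, 25.5);   // lambda as an in-memory stub
            double total = new RevenueReport(stub).totalFor("any-id");
            System.out.println(total == 35.5 ? "test passed" : "test failed: " + total);
        }
    }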
Programs are systems of interacting parts.
For a system of interacting parts to work together requires connections between these parts.
The more connections, the more costly the program.
For a fixed number of parts, a system whose parts are unnecessarily connected is more costly than a system whose parts are necessarily connected.
Unnecessary connections can only be formed in a system whose parts are unnecessarily exposed to connections from other parts.
Minimising unnecessary exposure of parts to connection from other parts is fundamental to cost-effective program development.
Loose coupling and information hiding are the fundamentals of connection-exposure minimisation.
This is not optional knowledge for a programmer.
This is fundamental.
You cannot be a cost-conscious programmer without this knowledge.
Asking how to explain loose coupling and information hiding to a new programmer is like asking how to explain surgery to a new surgeon, architecture to a new architect, or flying to a new pilot.
If your 'new programmers' don't know loose coupling and information hiding, then they are not 'new programmers'; they are potential programmers.
Curiously, it probably won't help to tell them to read the original two papers:
i) Loose coupling: 'Structured Design,' by W.P. Stevens, G.J. Myers and L.L. Constantine.
ii) Information hiding: http://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf
It's just like the move from 16-bit to 32-bit Windows applications, where processes were given their own address space. This stopped any other process from being able to kill your application when it "accidentally" walked over your data.
Moving processes to different address spaces was like treating each process as a class: hiding its memory internally and decoupling the processes by forcing interprocess communication to happen only through an expected interface (e.g. Windows messages).
Loose coupling means that the external code uses an object of a derived, non-abstract class only through its abstract base class. If something changes in the set of classes it depends on, the external code doesn't need to change; that is what it means for the external code to be loosely coupled.
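In other words (a minimal Java sketch with invented names): the external code is written against the abstract base class, so new derived classes can be added, or existing ones reworked internally, without the caller changing at all.

    // External code depends only on the abstract type.
    abstract class Exporter {
        abstract void export(String document);
    }

    class PdfExporter extends Exporter {
        void export(String document) { System.out.println("writing PDF: " + document); }
    }

    class HtmlExporter extends Exporter {
        void export(String document) { System.out.println("writing HTML: " + document); }
    }

    class ReportService {
        // This method never has to change when a new Exporter subclass appears.
        void publish(String document, Exporter exporter) {
            exporter.export(document);
        }
    }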
I've been studying OOP for quite a while now and I have a good grasp of the theory. I read the Head First book on OOP and, while it reinforced a lot of the theory, I found the case studies to be somewhat trivial.
I find that I'm applying OOP principles to my code each day, but I'm not sure if I'm applying them correctly. I need to get to the point where I am able to look at my code and know whether I'm using inheritance appropriately, whether my object is cohesive enough, etc.
Does anyone have any good recommendations (books, online guides, blogs, walk-throughs, etc.) for taking the next step in developing solid OOP skills?
I am working primarily in .NET (visual basic), but I welcome suggestions that incorporate various platforms.
Read Refactoring by Martin Fowler, and apply it to your own work.
It will take you through a catalogue of malodorous characteristics of software code ("code smells") that show how to detect improperly constructed classes and, even more importantly, how to fix them.
Consider looking into Design Patterns. Although they don't seem to be commonly used in enterprise applications (I've seen them more often in APIs and frameworks than embedded in enterprise code), they could be applied to make software simpler or more robust in a lot of situations if only developers knew how to apply them.
The key is to understand the design patterns first, then with experience you'll learn how to apply them.
There is a Head First book on design patterns that teaches the concept pretty simply, although if you want a book that really covers design patterns in detail, check out the Gang of Four design patterns book, which is basically what made design patterns mainstream and is referred to almost every time the topic is brought up.
Design patterns can be applied in pretty much any object-oriented language to some degree or another, although some patterns can be overkill or over engineering in some cases.
EDIT:
I also want to add: you should check out the book Code Complete 2. It's a very influential book in the world of software development. It covers a lot of different concepts and theories, and I learn something new every time I read it. It's such a good book that re-reading it every six months to a year shows me a different perspective and makes me a better programmer each time. No matter how much you might think you know, this book will make you realize just how little you really know. I can't stress enough how much you should own this book.
If you already have the basics, I believe only experience will get you further. You say you are not sure if you are applying the principles correctly, but there is no one correct way. Code you write today, you'll look at in 6 months time, and wonder why you wrote it that way, and probably know of a better, cleaner way of doing it. I also guarantee that after 10 years, you'll still be learning new techniques and tricks. Don't worry too much about it, it will come, just read as much as you can, and try and apply what you read in small chunks.
I am currently half-way through the following book:
http://www.amazon.com/Applying-UML-Patterns-Introduction-Object-Oriented/dp/0131489062
I cannot recommend this book strongly enough in terms of learning a real-life, professional-grade, practical approach to drafting and applying a well-formed and iterative design strategy before diving into code.
I, too, read the "Head First" book and felt that I was much better off for having read it.
After having a few years of working-world experience, I now view the Craig Larman book that I am recommending to be a perfect "next step" for me.
About the Presence of "UML" in this Book Title:
Whether you have positive feelings or negative feelings about UML notation, please do not let that influence your decision to buy the book (ISBN 0131489062) in either direction.
The prominence of "UML" in the title is misleading. While the author does use and explain UML notation, these explanations are extremely well-woven into relevant design discussions, and at no time does this book read like a boring UML spec.
In fact, here is a quote taken directly from the book:
What's important is knowing how to think and design in objects, which is a very different and much more valuable skill than knowing UML notation. While drawing a diagram, we need to answer key questions: What are the responsibilities of the object? Who does it collaborate with? What design patterns should be applied? Far more important than knowing the difference between UML 1.4 and 2.0 !
This book at times seems like it is "speaking to" a lead architect or a project manager. What I mean to say by that is that it assumes that the reader has significant control over the planning and direction of a software project.
Nonetheless, even if you are only responsible for some very small piece of your company's projects and products, I would still recommend this book and encourage you to apply some "scaled down" modifications of the book's advice to your piece of the project.
My OOP epiphany came from Grady Booch's book, way long time ago. Suddenly I realized why objects were good.
While polymorphism is cool, encapsulation is 75% of why objects are cool. It is sort of like an interface: you see the buttons but not the wiring. Before objects, only the most disciplined coders kept their grubby fingers off the internal bits of other people's procedures (it was called "structured programming").
Objects make it easy to Do the Right Thing. Inheritance and polymorphism are little bonuses.
One way to learn about objects is to read other peoples' code. I learned a lot by reading the source code for the Delphi VCL framework. Even just looking at the documentation for Java will help you see what a single object class should do and how it is designed to be used by other objects.
Start a project of your own and pay attention when you want to sub-class your own classes and find that you have to go back and break up some protected methods so you can override just one piece of a process instead of replacing all of it. See how ancestors talk to descendants by calling abstract functions. In other words, go make a lot of mistakes and learn from them.
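That "break up a protected method so you can override just one piece" experience is essentially the template method pattern. A rough Java sketch with invented names:

    // The overall process is fixed; each step is a hook a subclass may override.
    abstract class DataImport {
        // The template method: the ancestor defines the order, descendants fill in the steps.
        public final void run(String path) {
            String raw = readFile(path);
            String cleaned = clean(raw);
            store(cleaned);
        }

        protected String readFile(String path) { return "contents of " + path; }

        protected String clean(String raw) { return raw.trim(); }

        protected abstract void store(String cleaned);   // every import must say where data goes
    }

    class CsvImport extends DataImport {
        @Override protected String clean(String raw) {
            return super.clean(raw).replace("\r\n", "\n");   // override just this one step
        }

        @Override protected void store(String cleaned) {
            System.out.println("inserting rows: " + cleaned);
        }
    }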
Enjoy!
Frankly, re-reading old David Parnas papers on information hiding helps me get in the right state of mind. The case studies may not be directly applicable but you should be able to get some useful generalizations out of them.
My epiphany happened when I tried to implement a very OO problem (dynamically and recursively building SQL statements) in VB6. The best way to understand polymorphism or inheritance is to need it and not be able to use it.
One thing that will definitely help you is working on a well-known, respected open source project. Either dig through the source code and see how things are done or try to make some additions / modifications. You'll find that there isn't one style or one right answer for most problems, but by looking at several projects, you'll be able to get a wide view of how things can be done. From there, you'll begin to develop your own style and will hopefully make some contributions to open source in the process.
I think you have to attempt and fail at implementing OO solutions. That's how I did it anyway. What I mean by fail is that you end up writing smelly code while successfully delivering a working solution. After it's written you'll get a feel for where things didn't quite feel right. You may have some epiphanies, and/or you may go and hunt for a slicker solution from other programmers. Undoubtedly you'll implement some variation of standard design patterns by accident. In hindsight, a light will click on (oh! so that's what a visitor is for), and then understanding will accelerate.
As others have said, I think tooling through some good OO open source code is a good idea. So is working with more experienced programmers who would be willing to critique your work. However understanding comes through doing.
You might want to try reading (and writing) some Smalltalk for a while. Squeak is a free implementation that can show you the power of a fully object-oriented environment (unlike Java or .NET). All library source code is included. The language itself is incredibly simple. You'll find that Java and C# are slowly adding features that have been well known in Smalltalk since 1980.
TortoiseHg is an extraordinarily well-designed piece of OO open source software (written in Python).
If you already understand the basics, building something from scratch in a fully object oriented language will be a good step in fully understanding OOP software architecture. If you don't know Python, Python Essential Reference will take you through the language in full in a few days to a week.
After you understand the language take a look through the software above and you'll have all sorts of epiphanies.
To understand basically anything thoroughly, you need to have a decent knowledge of at least one abstraction level above and one level below it. In the case of OO, others have mentioned design patterns as the layer above OO. This helps a lot to illustrate why OO is useful.
As far as the layer below OO, try to play around with higher-order functions/late binding for a while and get a feel for how these relatively simple constructs are used. Also, try to understand how OO is implemented under the hood (vtables, etc.) and how it can be done in pure C. Once you grok the value of using higher order functions and late binding, you'll quickly realize that OO is just a convenient syntax for passing around a set of related functions and the data they operate on.
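A small Java sketch of that idea (names invented, and it needs Java 16+ for records): an "object" behaves much like a record of data plus a bundle of functions that close over it, which is roughly what a vtable gives you for free.

    import java.util.function.Supplier;

    // Hypothetical sketch: an "object" is just data plus a bundle of functions that
    // close over it -- a hand-rolled version of what a vtable does under the hood.
    public class LateBindingSketch {
        record Shape(Supplier<Double> area, Supplier<String> describe) {}

        static Shape circle(double r) {
            return new Shape(() -> Math.PI * r * r, () -> "circle of radius " + r);
        }

        static Shape square(double side) {
            return new Shape(() -> side * side, () -> "square of side " + side);
        }

        public static void main(String[] args) {
            // "Late binding": the caller has no idea which concrete shape it holds;
            // it just invokes whichever function the record carries.
            for (Shape s : new Shape[] { circle(1.0), square(2.0) }) {
                System.out.println(s.describe().get() + " has area " + s.area().get());
            }
        }
    }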
Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 9 years ago.
I recently read an interesting comment on an OOP related question in which one user objected to creating a "Manager" class:
Please remove the word manager from your vocabulary when talking about class names. The name of the class should be descriptive of its purpose. Manager is just another word for dumping ground. Any functionality will fit there. The word has been the cause of many extremely bad designs.
This comment embodies my struggle to become a good object-oriented developer. I have been doing procedural code for a long time at an organization with only procedural coders. It seems like the main strategy behind the relatively little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the left over/generalized bits in a "Manager" class.
How can I break my procedural habits (like the Manager class)? Most OO articles/books, etc. use examples of problems that are inherently easy to transform into object groups (e.g., Vehicle -> Car) and thus do not provide much guidance for breaking down more complex systems.
First of all, I'd stop acting like procedural code is wrong. It's the right tool for some jobs. OO is also the right tool for some jobs. So is functional. Each paradigm is just a different point of view of computation, and exists because it's convenient for certain problems, not because it's the only right way to program. In principle, all three paradigms are mathematically equivalent, so use whichever one best maps to the problem domain. IMHO, if using a multiparadigm language it's even ok to blend paradigms within a module if different subproblems are best modeled by different worldviews.
Secondly, I'd read up on design patterns. It's hard to understand OO without some examples of the real-world problems it's good for solving. Head First Design Patterns is a good read, as it answers a lot of the "why" of OO.
Becoming good at OO takes years of practice and study of good OO code, ideally with a mentor. Remember that OO is just one means to an end. That being said, here are some general guidelines that work for me:
Favor composition over inheritance. Read and re-read the first chapter of the GoF book.
Obey the Law of Demeter ("tell, don't ask")
Try to use inheritance only to achieve polymorphism. When you extend one class from another, do so with the idea that you'll be invoking the behavior of that class through a reference to the base class. ALL the public methods of the base class should make sense for the subclass.
Don't get hung up on modeling. Build a working prototype to inform your design.
Embrace refactoring. Read the first few chapters of Fowler's book.
The single responsibility principle helps me break objects into manageable classes that make sense.
Each object should do one thing, and do it well without exposing how it works internally to other objects that need to use it.
A 'manager' class will often:
Interrogate something's state
Make a decision based on that state
As an antidote or contrast to that, object-oriented design would encourage you to design class APIs where you "tell, don't ask": you tell the class to do things itself (and to encapsulate its own state) rather than asking it for its state and acting on it from outside. There are plenty of articles on "tell, don't ask" worth searching for (and maybe someone has a better explanation than the first couple that Google turned up for me).
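A short, hedged Java illustration with invented names: instead of interrogating the account's state and making the decision outside it, the caller tells the account to do the work, and the state stays encapsulated.

    // "Ask" style (avoid): the caller pulls state out and makes the decision itself.
    //   if (account.getBalance() >= amount) { account.setBalance(account.getBalance() - amount); }

    // "Tell" style: the object owns both the rule and the state.
    class Account {
        private double balance;

        Account(double openingBalance) { this.balance = openingBalance; }

        void withdraw(double amount) {
            if (amount > balance) {
                throw new IllegalStateException("insufficient funds");
            }
            balance -= amount;   // the rule and the state live in one place
        }
    }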
It seems like the main strategy behind the little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the left over/generalized bits in a "Manager" class.
That may well be true even at the best of times. Coplien talked about this towards the end of his Advanced C++: Programming Styles and Idioms book: he said that in a system, you tend to have:
Self-contained objects
And, "transactions", which act on other objects
Take, for example, an airplane (and I'm sorry for giving you another vehicular example; I'm paraphrasing him):
The 'objects' might include the ailerons, the rudder, and the thrust
The 'manager' or autopilot would implement various commands or transactions
For example, the "turn right" transaction includes:
flaps.right.up()
flaps.left.down()
rudder.right()
thrust.increase()
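In code, the 'transaction' might be nothing more than a small class that tells the passive parts what to do; this is a paraphrased sketch with invented names, not Coplien's actual example:

    // Hypothetical autopilot sketch: the 'transaction' coordinates the passive objects.
    interface Flap   { void up(); void down(); }
    interface Rudder { void right(); }
    interface Thrust { void increase(); }

    class Autopilot {
        private final Flap leftFlap, rightFlap;
        private final Rudder rudder;
        private final Thrust thrust;

        Autopilot(Flap left, Flap right, Rudder rudder, Thrust thrust) {
            this.leftFlap = left; this.rightFlap = right;
            this.rudder = rudder; this.thrust = thrust;
        }

        // The "turn right" transaction cuts across several self-contained objects.
        void turnRight() {
            rightFlap.up();
            leftFlap.down();
            rudder.right();
            thrust.increase();
        }
    }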
So I think it's true that you have transactions, which cut across or use the various relatively-passive 'objects'; in an application, for example, the "whatever" user-command will end up being implemented by (and therefore, invoking) various objects from every layer (e.g. the UI, the middle layer, and the DB layer).
So I think it's true that to a certain extent you will have 'bits left over'; it's a matter of degree, though. You should want as much of the code as possible to be self-contained and encapsulated, and the bits left over, which use (or depend on) everything else, should be given an API which hides as much as possible and does as much as possible, and which therefore takes as much responsibility (implementation detail) as possible away from the so-called manager.
Unfortunately I've only read of this concept in that one book (Advanced C++) and can't link you to something online for a clearer explanation than this paraphrase of mine.
Reading and then practicing OO principles is what works for me. Head First Object-Oriented Analysis & Design works you through examples to make a solution that is OO and then ways to make the solution better.
You can learn good object-oriented design principles by studying design patterns. Code Complete 2 is a good book to read on the topic. Naturally, the best way to ingrain good programming principles into your mind is to practice them constantly by applying them to your own coding projects.
How can I break my procedural habits (like the Manager class)?
Make a class for what the manager is managing (for example, if you have a ConnectionManager class, make a class for a Connection). Move everything into that class.
The reason "manager" is a poor name in OOP is that one of the core ideas in OOP is that objects should manage themselves.
Don't be afraid to make small classes. Coming from a procedural background, you may think it isn't worth the effort to make a class unless it's a thousand lines of code and is some core concept in your domain. Think smaller. A ten line class is totally valid. Make little classes where you see they make sense (a Date, a MailingAddress) and then work your way up by composing classes out of those.
As you start to partition little pieces of your codebase into classes, the remaining procedural code soup will shrink. In that shrinking pool, you'll start to see other things that can be classes. Continue until the pool is empty.
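A minimal sketch of that ConnectionManager-to-Connection move in Java (the names and details are hypothetical): the behaviour that used to live in the manager moves onto Connection, which now manages itself.

    // Before (the procedural habit): a manager pokes at a passive data holder, e.g.
    //   connectionManager.open(connection); connectionManager.send(connection, bytes);

    // After: the Connection manages itself.
    class Connection {
        private final String host;
        private boolean open;

        Connection(String host) { this.host = host; }

        void open() {
            // ... establish the socket, perform the handshake, etc.
            open = true;
        }

        void send(byte[] payload) {
            if (!open) throw new IllegalStateException("connection to " + host + " is not open");
            // ... write the payload
        }

        void close() { open = false; }
    }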
How many OOP programmers does it take to change a light bulb?
None, the light bulb changes itself.
;)
You can play around with an OO language that has very bad procedural support like Smalltalk. The message sending paradigm will force you into OO thinking.
I think you should start with a good plan. Planning with CLASS diagrams would be a good start. Identify the ENTITIES needed in the application, then define each entity's ATTRIBUTES and METHODS. If there are repeated ones, you can then redefine your entities so that inheritance can be used to avoid redundancy. :D
I have a three step process, this is one that I have gone through successfully myself. Later I met an ex-teacher turned programmer (now very experienced) who explained to me exactly why this method worked so well, there's some psychology involved but it's essentially all about maintaining control and confidence as you learn. Here it is:
Learn what test driven development (TDD) is. You can comfortably do this with procedural code so you don't need to start working with objects yet if you don't want to. The second step depends on this.
Pick up a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler. It's essentially a catalogue of little changes that you can make to existing code. You can't refactor properly without tests, though. What this allows you to do is mess with the code without worrying that everything will break. Tests and refactoring take away the paranoia and the sense that you don't know what will happen, which is incredibly liberating. You're left to basically play around. As you get more confident with that, start exploring mocks for testing the interactions between objects.
Now comes the bit that most people mistakenly start with. It's good stuff, but it should really come third. At this point you should start reading about design patterns, code smells (that's a good one to Google) and object-oriented design principles. Also learn about user stories or use cases, as these give you good initial candidate classes when writing new applications; that's a good solution to the "where do I start?" problem when writing apps.
And that's it! Proven goodness! Let me know how it goes.
My eureka moment for understanding object-oriented design was when I read Eric Evans' book "Domain-Driven Design: Tackling Complexity in the Heart of Software". Or the "Domain Driven Design Quickly" mini-book (which is available online as a free PDF) if you are cheap or impatient. :)
Any time you have a "Manager" class or any static singleton instances, you are probably building a procedural design.