Database Guy Asks: Object-Oriented Design Theory?

I've worked with designing databases for a loooong time, and these days I'm working in C# too. OO makes sense to me, but I don't feel that I have a good grounding in the deep theory of OO design.
In database land, there's a lot of theory around how to design the structure of a database, the main notion being normalisation. Normalisation directly steers the structure of a database and to some extent dictates how to arrange entities in a database.
Are there any similar concepts behind how to design the structure of an Object-Oriented program?
What I'm reaching for is one or more underlying theoretical principles which naturally guide the developer into the "correct" design for the solution to a given problem.
Where can I look to find out more?
Is there a go-to work I should read?
Update:
Thanks to everyone for their answers.
What I'm reading seems to say that there is no "Grand Theory of OO Design", but there are a bunch of important principles - which are largely exemplified by design patterns.
Thanks again for your answers :)

Be careful with some of the design patterns literature.
There are several broad species of class definition. Classes for persistent objects (which are like rows in relational tables) and collections (which are like the tables themselves) are one thing.
Some of the "Gang of Four" design patterns are more applicable to active, application objects, and less applicable to persistent objects. While you wrestle through something like Abstract Factory, you'll be missing some key points of OO design as it applies to persistent objects.
The Object Mentor "What is Object-Oriented Design?" page has much of what you really need to know to transition from relational design to OO design.
Normalization, BTW, isn't a blanket design principle that always applies to relational databases. Normalization applies when you have update transactions, to prevent update anomalies. It's a hack because relational databases are passive things; you either have to add processing (like methods in a class) or you have to pass a bunch of rules (normalization). In the data warehouse world, where updates are rare (or non-existent), the standard normalization rules aren't as relevant.
Consequently, there's no "normalize like this" for object data models.
In OO Design, perhaps the most important rule for designing persistent objects is the Single Responsibility Principle.
If you design your classes to have good fidelity to real-world objects, and you allocate responsibilities to those classes in a very focused way, you'll be happy with your object model. You'll be able to map it to a relational database with relatively few complexities.
It turns out that when you look at things from a responsibility point of view, you find that the 2NF and 3NF rules fit with sound responsibility assignment. Unique keys still matter. And derived data becomes the responsibility of a method, not a persistent attribute.
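To make that concrete, here's a minimal C# sketch (the class and property names are made up for illustration):

public class OrderLine
{
    // Stored attributes: these map directly to 2NF/3NF columns.
    public int OrderLineId { get; set; }    // unique keys still matter
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }

    // Derived data: computed by a method, never persisted as a column.
    public decimal ExtendedPrice()
    {
        return UnitPrice * Quantity;
    }
}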

The book "Design Patterns" is your next step.
http://www.amazon.com/Design-Patterns-Object-Oriented-Addison-Wesley-Professional/dp/0201633612
But, you don't have to use an OO approach to everything. Don't be religious about it. If a more procedural approach feels more straightforward, then go with that. People new to OO tend to overdo it for a while.

I think Agile Software Development, Principles, Patterns, and Practices is quite good.
It provides a lot of in-depth discussion of the OO principles listed here (the first of which is sketched in code just after the list):
The principles of Object Oriented Design and Dependency Management
SRP — The Single Responsibility Principle
OCP — The Open Closed Principle
LSP — The Liskov Substitution Principle
DIP — The Dependency Inversion Principle
ISP — The Interface Segregation Principle
REP — The Reuse/Release Equivalence Principle
CCP — The Common Closure Principle
CRP — The Common Reuse Principle
ADP — The Acyclic Dependencies Principle
SDP — The Stable Dependencies Principle
SAP — The Stable Abstractions Principle
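To give a flavor of the first one, here is a minimal C# sketch of SRP (hypothetical classes, not taken from the book):

// Violates SRP: two reasons to change (report content and report persistence).
public class Report
{
    public string BuildBody() { return "..."; }
    public void SaveToDisk(string path) { System.IO.File.WriteAllText(path, BuildBody()); }
}

// SRP-friendly split: one responsibility per class.
public class ReportBuilder
{
    public string BuildBody() { return "..."; }
}

public class ReportWriter
{
    public void SaveToDisk(string body, string path)
    {
        System.IO.File.WriteAllText(path, body);
    }
}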

If you're used to building normalized databases, then Object Oriented design should come naturally to you. Your class structures will end up looking a lot like your data structure, with the obvious exception that association tables turn into lists and lookup tables turn into enums within your classes.
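For example, a rough C# sketch of that mapping (table and type names invented for illustration):

using System.Collections.Generic;

// A lookup table (OrderStatus) becomes an enum...
public enum OrderStatus { Open, Shipped, Cancelled }

// ...and an association table (CustomerOrders) becomes a list on one side.
public class Customer
{
    public int CustomerId { get; set; }
    public List<Order> Orders { get; set; } = new List<Order>();
}

public class Order
{
    public int OrderId { get; set; }
    public OrderStatus Status { get; set; }
}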
All together, I'd say you're a lot better off coming into OO design with a background in Relational Databases than you would be going the other direction.

If you want to really get to grips with OO, go play with Smalltalk. ST is a pure OO language, and quite in-your-face about it. Once you get over the paradigm hump, you'll have learned OO, since you can't really do Smalltalk without it. This is how I first learned OO.

Check the results of this. Learn from each question.

I really liked Head First Design Patterns, which is very approachable, and the excellent Object-Oriented Design Heuristics by Arthur J. Riel.

This site lists 101 titles covering design patterns, refactoring, and other topics. Have a look at it; it will be a good starting point.

Go for Object Thinking by David West. An interesting read.
You're from the dark side though, as per the book ;) Database thinking has been the curse of OO programmers all over. They're opposite ends of a spectrum. For instance:
Database thinking values the data attributes over everything else: normalization, and creating types based on how they fit into the DB schema or the ER diagram. OO thinking creates types based on behavior and collaboration, and does not treat the data attributes as all-important.
Databases come from the scientific people who value formalization and method over everything else. OO comes from the people who use heuristics and rules of thumb and value individuality and social interaction over a hard and fast process.
The point being: a COBOL programmer can write COBOL programs even after moving on to an OO language. Check out any book like Thinking in Java, whose first section invariably details the tenets of OO (apprentice). Follow it up with Object Thinking (journeyman) and, in due time, you'll be a master.

Model your objects by keeping real world objects in mind.
We are currently developing automation software for machines. One of those machines has two load ports for feeding it raw material, while all others have only one. In all modules so far, we had the port information (current settings, lot number currently assigned to it, etc.) as members in the class representing the machine.
We decided to create a new class that holds the information of the ports, and add two LoadPort members to this MachineXY class. If we had thought about it before, we would have done the same for all those single port machines...
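In C#, the refactoring looked roughly like this (a sketch with made-up member names):

// Before, the port details lived directly on the machine class.
// After, they are factored into their own LoadPort class.
public class LoadPort
{
    public string CurrentSettings { get; set; }
    public string AssignedLotNumber { get; set; }
}

public class MachineXY
{
    // This machine type has two ports; single-port machines would hold just one.
    public LoadPort Port1 { get; } = new LoadPort();
    public LoadPort Port2 { get; } = new LoadPort();
}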

You should look at UML, a modeling language around which an entire OOD process has been built.
I'd recommend getting a book (or a couple), because the theory is quite large, most people pick and choose the techniques most appropriate for the project at hand.

Start reading about design patterns, from say Martin Fowler. :)
They are the most practical use of OOP.

I am guessing you mean OO in the database world.
Object-oriented databases, which store objects directly, never really caught on, so you are currently looking at mapping objects to a relational database. ORM, or object-relational mapping, is the term used to describe the software that does this mapping. Ideally this gives you the best of both worlds: developers can interact with the objects, while in the database everything is stored in relational tables where standard tuning can take place.

In DBA slang: object-oriented design is nothing but properly normalized data behind safe operation interfaces - "safe" meaning you look at the operations, not at the data directly.

Related

Why is the object oriented model so occupying/monopolizing? [closed]

Don't get me wrong - OOP currently is the best thing to structure large code bases.
But why do people try to stuff anything into an OO view?
For example: each text book about OOP contains an "introducing example" that tries to express a small view of our real world in an OO inheritance and composition and aggregation construct. And - meanwhile - we all know that it never results in the almighty OO construct that the OO model itself promised! The authors just created an illusion.
My personal opinion is, that OO is nice to structure code, but it is not suited to represent real world data and its relations. IMHO the relational model is superior, probably any other model is superior.
In OO design it became practice to recommend composition over inheritance - whenever possible. So that mighty looking model of an all-inheritance-based-world-of-objects thing that first class books suggest is just an illusion. So, OO itself may be an illusion? And current composition-centric OO models are nothing more than plain data structures with some standardized syntactic sugar - that's not much different than in pre-OOP approaches.
Another example: imagine a really f***ing complex model of our real-world. Besides anything else, there are stone blocks and humans. In an OO model, humans are mammals are animals are organic lifeforms and so on (the strictly rigid inheritance hierarchy OO imposes, you know..). The stone blocks are non-organic things, maybe they are rigid bodies or whatever, it doesn't matter.
If you are an artist and you have to find a stone block that makes a good "template" (?) for a statue of a human with given width, height and thickness, then you have to write a bunch of special-case OO code to retrieve these attributes from the human model and from the stone block model. Or alternatively, your whole world model was built to support geometric queries - then it would be easy! But that leads to the conclusion that OOP sucks at representing data in a way that allows us to use it in different use cases. OOP just allows us to represent data for exactly those use cases that we have designed beforehand. Not much more. Any use besides those predetermined cases can only be done with much fiddling. The relational model at least tries to represent data in a re-useable way. (re-useable: OOP once even occupied that word)
Why all that hate?
I work on a project that uses an ORM - and it just sucks. It started when modeling the database (because of ORM limitations), then came the time to learn the ins and outs of the ORM (and its bugs and further limitations), then came the fear of implicitly happening stuff (new thing(); thing->save() creates a new row, but where is "thing" rooted? why do people try to make objects as "independent" as possible but in the back create much more deeply rooted dependencies on freaking per-table-singletons, that communicate with connection-singletons.. oh my god .. I digress).
So many things that could have been done in a few lines of SQL and a sweet tiny query API were done in hundreds or maybe thousands of lines of "business logic" code (of course in the application layer, not in the database where the data is, and where aggregation functions like count() or sum() would be cheap). I think the people just feel better when they can work in OOP. But that is just stupid.
And the creators of ORM just want to keep the users away from the "dirty stuff". But exactly those people should not write ORM - the perfect example: I strongly believe that the ORM-creator type of people do not even know that a database table can contain compound primary keys! ;-)
So, why is OOP so occupying? It is just a half-baked abstraction, but people swear on it for everything, if you ask some, they may even tell you that OOP will create world peace.
Why is OOP so f***ing occupying/monopolizing?
It seems to me that if your business logic implementation is driving the design of your database, then somebody put the cart before the horse. I thought the idea was to develop a rational (no, I didn't say relational) data model and then implement whatever logic queries and updates the data.
My experience has been that although relational databases are great for storing and querying data from the user's perspective, trying to extend the relational model into a structured programming or OOP paradigm is difficult in the extreme. There's always a translation layer. Today everybody thinks that ORM is the solution. And although technically any translation layer that sits between a relational data store and an object oriented data access layer is ORM, when I see people talking about ORM these days they seem to be talking about some automated way of generating that ORM layer.
I'm not convinced that a generalized ORM solution exists. Every one I've seen is fraught with peril. It's a pain in the neck, but the only reliable ORM layers I've ever seen were hand coded. Granted, I haven't worked much with database stuff in the last five years or so, so things may have changed.
I'll agree with you that OOP is largely just a lot of syntactic sugar around a solid structured design. However, it's good syntactic sugar. It formalizes a lot of things that were considered "best practices" in structured programming, and adds some things (inheritance, interfaces, polymorphism, among others) that were very difficult or impossible to express in purely structured languages. We certainly could have added some or all of those features to structured languages without going all the way to OOP, but why? OOP was the obvious next step in the evolution of procedural programming languages.
OOP is just another tool in the toolbox, but keep in mind that OO has been the focus of mainstream programming languages for over 20 years now - starting with C++ and moving on to Java and C#. This probably has more to do with why the model is currently "so occupying" than anything else.
The point of OOP isn't necessarily to represent every single facet of every single object in the world. The point is to represent the stuff you care about. For instance, assume there's a house. Someone doing real estate stuff would care about the location, selling price, etc. A builder might care about the blueprint ID or something, i dunno. The point is, you model the important stuff and ignore the rest. Add to it later if you find you need more info.
Yes, this makes a "House" class tailored to the app being built, and possibly unsuitable for others. OOP's overall goal isn't to reuse classes, though sometimes that happens. The point is to bundle data with the actions that can affect that data, and to thus reduce a problem conceptually from hundreds of variables and functions to a few objects with known and tested interfaces and related behaviors.
The Human and StoneBlock should both inherit from MaterialObject, which has height, width and depth attributes and even implements a biggerThan() method when given another MaterialObject.
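Something like this C# sketch (the attribute types and the comparison rule are assumptions):

public abstract class MaterialObject
{
    public double Height { get; set; }
    public double Width { get; set; }
    public double Depth { get; set; }

    // One plausible rule: bigger in every dimension.
    public bool BiggerThan(MaterialObject other)
    {
        return Height > other.Height && Width > other.Width && Depth > other.Depth;
    }
}

public class Human : MaterialObject { }
public class StoneBlock : MaterialObject { }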
OO is "occupying" because it has shown to be an excellent choice for many programming problems. Similarly, the relational model has shown to be an excellent choice for many data storage and retrieval problems. When I say "many" I mean "so many that all other pale in comparison". In fact, both coupled together are an excellent combo, but there is complexity in mapping where these two paradigms meet, thus ORM.
I almost thought your question was insincere but then decided it was lack of experience (not intended as an insult, just guessing from the questions/assertions). You will find that there are problems so complex that OO is the only feasible way to model them. Not everything is a database-backed web site or reporting tool. Many systems are mostly "business logic" where the best solution is an OO solution (example from my experience: controlling and monitoring robotic aircraft and their various payloads). That being said, many of the popular database-backed web frameworks are OO+RDBMS (Rails, Grails, Java+Spring+Hibernate, etc.) because the combination is so powerful.
While there are certainly fads and sticky but outmoded paradigms, I suggest that when there are many choices (OO, functional programming, RDBMS-centric, etc.) people almost always choose what is the most productive. For at least 10 years, that has been OO for a large portion of software problems.
Because at the end of the day it's a good approximation of the way we ourselves model things. The counter that is often raised to this is that computers have no concept of objects, it's all 1s and 0s, but that analysis is as empty as saying all human thought is nothing more than neurons and electrical impulses (which it probably is, but it's just not a useful way of looking at things).
So you don't like inheritance? Nor do I. Inheritance of behaviour is the poor man's code reuse. Inheritance of interface, on the other hand, is great as it gives us polymorphism.
You don't like ORMs? No one is forcing you to use them. There is a conflict between OOP and RDBMS that I don't think is easily fixed, and most ORMs' attempts to resolve it are quite naive. The limitations of ORM are not a flaw in OOP.

Are ORMs counterproductive to OO design?

In OOD, design of an object is said to be characterized by its identity and behavior.
Having used ORMs in the past, their primary purpose, in my opinion, revolves around the ability to store/retrieve data. That is to say, ORM objects are designed not by behavior, but by data (i.e. database tables). Case in point: many ORM tools come with a point-at-a-database-table-and-click object generator.
If objects are no longer characterized by behavior this will, in my opinion, muddy the identity and responsibility of the objects. Subsequently, if objects are not defined by a responsibility this could lend a hand to having tightly coupled classes and overall poor design.
Furthermore, I would think that in an application setting, you would be heading towards scalability issues.
So, my question is: do you think that ORMs are counterproductive to OO design? Perhaps the underlying question would be whether or not they are counterproductive to application development.
There's a well-known and oft-ignored impedance mismatch between the requirements of good database design and the requirements of good OO design. Most developers (in my experience) either do not understand this impedance mismatch or do not care. Since it's more common to start with the database and generate the objects from it (rather than the reverse), then yes, you'll end up with objects that are great as a persistence layer but sub-optimal from an OO perspective. (The reverse, generating the database from the object model, makes me want to stab my eyes out.)
Why are they sub-optimal from an OO perspective? Because the objects produced by an ORM are not business objects, even with partial classes and the like. Business objects model behavior. ORM objects model persistence. I'm not going to spend ten paragraphs arguing this distinction. It's something Rocky Lhotka has covered quite well in his books on Business Objects and his CSLA framework. Whether or not you like or use CSLA, I think his arguments are solid ones.
If objects are no longer characterized by behavior this will, in my opinion, muddy the identity and responsibility of the objects.
The objects in question do have database reading and writing as defined behavior. They just don't have much other than that.
The reality of the situation is pretty simple: object orientation isn't an end in itself, it's a means to an end -- but in some cases it just doesn't do much to improve the end result. A lot of uses for ORM form a case in point -- they are thousands of variations of CRUD applications that don't need or want to attach any real behavior to most of the data they process.
The application as a whole gains flexibility by not encoding much (if any) of the data's "behavior" into the code of the application itself. Instead, it's often better off with that as "dumb" data that it simply passes through from UI to database and back out to reports and such. With a bit of care, this can allow a substantial level of user customization that's almost impossible to match when you try to treat the data as real objects with real behavior encoded into the application proper.
Of course, there's another side to that: it can make it substantially more difficult to ensure the integrity of the data or that the data is only used appropriately -- I've seen code that accidentally used the wrong field in a calculation, so they were averaging the office numbers instead of office sizes in square feet. Both were user-defined fields that just said the contents should be numeric. The application had no way to know that one made perfect sense, and the other didn't at all.
Case in point: Many OR/M tools come with a point-at-a-database-table-and-click object generator.
Yes, but there are equal, if not greater, numbers of ORM solutions that base themselves on your objects and generate your database tables.
If you start with data and then tramp down forcing auto-generated objects to have object behaviours, yes, you might get confused... But if you start with the object and generate the database as a secondary layer, you end up with something a lot more usable, even if the database isn't perhaps as optimised as it could be.
If you're looking for an excuse not to use ORM, don't use it. I personally find it saves me thousands of lines of code doing trivial things that the ORM does just great.
I don't believe ORMs are counterproductive to OO design, unless you want to insist that persistence is an integral part of behavior.
I'd separate persistence from business behavior. You're certainly free to add business behavior to any object that an ORM generates for you.
I've also seen ORM systems which tend to go from the OO model and generate the database.
If anything, I would say ORMs are more biased towards producing good OO code than they are to producing good database code.
Ideally, a successful ORM bridges the two worlds and your application code would be great from a business domain problem-solving and implementation perspective and your database code and model would be great from a normalization, performance and ETL/reporting/replication whatever perspective.
On systems where it separates your data from behavior it's absolutely counterproductive.
ORM systems tend to analyze existing database tables to create stupid "Objects" in your language. These things are not true objects because they do not contain business behavior, but since people have these structures, they tend to want to use them.
Ruby on Rails (Active Record) actually binds your data to a "Live" class--this is much better.
I've never seen a system I really liked--ActiveRecord is close, but it makes a few Ruby-esque assumptions that I'm not quite comfortable with--the biggest being supplying public setters and getters by default.
But to sum up--I've seen a lot of good OO programmers write screwed up code because of ORM.
As with most questions of this type, it depends on your usage.
The main ways to use ORM tools are:
Define object by data, use this object throughout application code (BAD BAD BAD)
Use ORM objects only for data access, define your own objects with an interpretive layer between (Much Better)
If starting from scratch, a third method is to design the data from your object model. (Best if possible)
So yes, if you define the object by the data tables and use that throughout your code you will not be using OOD and introducing very poor design and maintenance issues.
But if you only use the ORM objects as a data access tool (replacing ADO), then you are free to use good OOD and ORM together. Sure, more code is required to build the interpretation layer, but it enables much better practices, with not much more code required than old ADO code.
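A minimal sketch of that second approach in C# (the ORM entity and mapper names are hypothetical):

// Generated by the ORM; mirrors the table, used for data access only.
public class EmployeeRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hand-designed domain object, defined by behavior rather than by the table.
public class Employee
{
    private readonly string name;
    public Employee(string name) { this.name = name; }
    public string FormalGreeting() { return "Dear " + name; }
}

// The interpretation layer between the two.
public static class EmployeeMapper
{
    public static Employee ToDomain(EmployeeRow row)
    {
        return new Employee(row.Name);
    }
}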
I'm answering this from a C# perspective, since that is where I do the majority of my development....
I see your point, but I think with the ability to create partial classes you can still create objects with any behavior you like and still get the power that an OR/M brings to the table for data retrieval.
From what I've seen of ORMs they do not go against OO principles - fairly orthogonal to them in fact. (for info - pretty new to ORM technology, Java perspective)
My reasoning is that ORMs help you store the data members of a class to a persistent store without having to couple to that store and write that code yourself. You still decide on the data members and write the behaviours of a class.
I guess you could abuse ORMs to break OO principles, but then you can do that with anything. You might use tooling to create skeletal data classes from a pre-existing table, but you would still create methods etc.

How to teach object oriented programming to procedural programmers? [closed]

I have been asked to begin teaching C# and OO concepts to a group of procedural programmers. I've searched for ideas on where to begin, but am looking for general consensus on topics to lead with in addition to topics to initially avoid.
Edit
I intend to present information in 30 minute installments weekly until it no longer makes sense to meet. These presentations are targeted at coworkers at a variety of skill levels from novice to expert.
The best thing you can do is: Have a ton of Q&A.
Wikipedia's procedural programming (PP) article really hits where you should start:
Whereas procedural programming uses procedures to operate on data structures, object-oriented programming bundles the two together, so an "object" operates on its "own" data structure.
Once this is understood, I think a lot will fall into place.
In general
OOP is one of those things that can take time to "get," and each person takes their own path to get there. When writing in C#, it's not like the code screams, "I am using OO principles!" in every line. It's more of a subtle thing, like a foreach loop, or string concatenation.
Design center
Always use something (repeatedly) before making it.
First, use an object, and demonstrate the basic differences from PP. Like:
static void Main(string[] args)
{
    List<int> myList = new List<int>();
    myList.Add(1);
    myList.Add(7);
    myList.Add(5);
    myList.Sort();
    for (int i = 0; i < myList.Count; i++)
    {
        Console.WriteLine(myList[i]);
    }
}
Using objects (and other OO things) first -- before being forced to create their own -- leads people down the path of, "Ok, I'm making something like what I just used," rather than "WTF am I typing?"
Inheritance (it's a trap!)
I would NOT spend a lot of time on inheritance. I think it is a common pitfall for lessons to make a big deal about this (usually making a cliché animal hierarchy, as others pointed out). I think it's critical to know about inheritance, to understand how to use the .NET Framework, but its nuances aren't that big of a deal.
When I'm using .NET, I'm more likely to "run into inheritance" when I'm using the .NET Framework (i.e. "Does this control have a Content property?" or "I'll just call its ToString() method.") rather than when I'm creating my own class. Very (very (very)) rarely do I feel the need to make something mimicking the taxonomy structure of the animal kingdom.
Interfaces
Coding to an interface is a key mid-level concept. It's used everywhere, and OOP makes it easier. Examples of this are limitless. Building off the example I have above, one could demonstrate the IComparer<int> interface:
// In a class that implements IComparer<int>:
public int Compare(int x, int y)
{
    // Comparing y to x (rather than x to y) reverses the natural order.
    return y.CompareTo(x);
}
Then, use it to change the sort order of the list, via myList.Sort(this). (After talking about this, of course.)
Best practices
Since there are some experienced developers in the group, one strategy in the mid-level classes would be to show how various best practices work in C#. Like, information hiding, the observer pattern, etc.
Have a ton of Q&A
Again, everyone learns slightly differently. I think the best thing you can do is have a ton of Q&A and encourage others in the group to have a discussion. People generally learn more when they're involved, and you have a good situation where that should be easier.
The leap from procedural to object oriented (even within a language - for four months I programmed procedural C++, and classes were uncomfortable for a while after) can be eased if you emphasize the more basic concepts that people don't emphasize.
For instance, when I first learned OOP, none of the books emphasized that each object has its own set of data members. I was trying to write classes for input validation and the like, not understanding that classes were to operate on data members, not input.
Get started with data structures right away. They make the OOP paradigm seem useful. People teach you how to make a "House" class, but since most beginning programmers want to do something useful right away, that seems like a useless detour.
Avoid polymorphism right away. Inheritance is alright, but teach when it is appropriate (instead of just adding to your base class).
Operator overloading is not essential when you are first learning, and the special member functions (default ctor, dtor, copy ctor, and assignment operator) all have their tricky aspects, so you might want to avoid them until students are grounded in basic class design.
Have them build a Stack or a Linked List. Don't do anything where traversal is tricky, like a binary tree.
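A minimal version of that exercise in C# might look like this (a linked-node stack; the design shown is only one of many possible):

using System;

public class IntStack
{
    private class Node
    {
        public int Value;
        public Node Next;
    }

    private Node top;

    public void Push(int value)
    {
        // Each push wraps the old top in a new node.
        top = new Node { Value = value, Next = top };
    }

    public int Pop()
    {
        if (top == null) throw new InvalidOperationException("Stack is empty");
        int value = top.Value;
        top = top.Next;
        return value;
    }
}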
Do it in stages.
High level concepts : Describe what an object is and relate it to real life.
Medium level concepts: Now that they've got what an object is, compare and contrast. Show them why a global variable is bad compared to an encapsulated value in a class, and what advantage they might get from encapsulating. Start introducing the tenets of OOP (encapsulation, inheritance).
Low level concepts: Go further into polymorphism and abstraction. Show them how they can get an even better design through polymorphism and abstraction.
Advanced concepts: SOLID, interface programming, OO design patterns.
Perhaps you should consider a problem that is work related and start with a procedural implementation of it and then work through (session by session) how to make an OOP implementation of it. I find professionals often grasp concepts better if it is directly related to real examples from their own work place. The junk examples most textbooks use are often horrible for understanding because they leave the student wondering, why on earth would I ever want to do that. Give them a real life reason why they would want to do that and it makes more sense.
I would avoid the "a bicycle is a kind of vehicle" approach and try to apply OO to an environment that is fairly specific and that they are already used to. Try to find a domain of problems that they all recognize.
Exercise the basics in that domain, but try to move towards some "wow!" or "aha!" experience relatively early; I had an experience like that while reading about "Replace Conditional with Polymorphism" in Fowler's Refactoring - that or similar books could be a good source of ideas. If I recall correctly, Michael Feathers' Working Effectively with Legacy Code contains a chapter about how to transform a procedural program into OO.
Teach Refactoring
Teach the basics, the bare minimum of OO principles, then teach Refactoring hands-on.
Traditional Way: Abstractions > Jargon Cloud > Trivial Implementation > Practical Use
(Can you spot the disconnect here? One of these transitions is harder than the others.)
In my experience most traditional education does not do a good job in getting programmers to actually grok OO principles. Instead they learn a bit of the syntax, some jargon they have a vague understanding of, and a couple canonical design examples that serve as templates for a lot of what they do. This is light years from the sort of thorough understanding of OO design and engineering one would desire competent students to obtain. The result tends to be that code gets broken down into large chunks in what might best be described as object-libraries, and the code is nominally attached to objects and classes but is very, very far from optimal. It's exceedingly common, for example, to see several hundred line methods, which is not very OO at all.
Provide Contrast To Sharpen The Focus on the Value of OO
Teach students by giving them the tools up front to improve the OO design of existing code, through refactoring. Take a big swath of procedural code, use extract method a bunch of times using meaningful method names, determine groups of methods that share a commonality and port them off to their own class. Replace switch/cases with polymorphism. Etc. The advantages of this are many. It gives students experience in reading and working with existing code, a key skill. It gives a more thorough understanding of the details and advantages of OO design. It's difficult to appreciate the merits of a particular OO design pattern in vacuo, but comparing it to a more procedural style or a clumsier OO design puts those merits in sharp contrast.
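For instance, a before/after sketch in C# of "replace switch/case with polymorphism" (the shape types are invented for illustration):

using System;

// Before: a procedural switch on a type code.
public static class ProceduralGeometry
{
    public static double Area(string shape, double a, double b)
    {
        switch (shape)
        {
            case "rectangle": return a * b;
            case "triangle": return a * b / 2;
            default: throw new ArgumentException("Unknown shape");
        }
    }
}

// After: each case becomes a class, and the switch disappears.
public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    private readonly double width, height;
    public Rectangle(double width, double height) { this.width = width; this.height = height; }
    public override double Area() { return width * height; }
}

public class Triangle : Shape
{
    private readonly double baseLength, height;
    public Triangle(double baseLength, double height) { this.baseLength = baseLength; this.height = height; }
    public override double Area() { return baseLength * height / 2; }
}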
Build Knowledge Through Mental Models and Expressive Terminology
The language and terminology of refactoring help students in understanding OO design, how to judge the quality of OO designs and implementations through the idea of code smells. It also provides students a framework with which to discuss OO concepts with their peers. Without the models and terminology of, say, an automobile transmission, mechanics would have a difficult time communicating with each other and understanding automobiles. The same applies to OO design and software engineering. Refactoring provides abundant terminology and mental models (design patterns, code smells and corresponding favored specific refactorings, etc.) for the components and techniques of software engineering.
Build an Ethic of Craftsmanship
By teaching students that design is not set in stone you bolster students' confidence in their ability to experiment, learn, and discover. By getting their hands dirty they'll feel more empowered in tackling software engineering problems. This confidence and practical skill will allow them to truly own the design of their work (because they will always have the skills and experience to change that design, if they desire). This ownership will hopefully help foster a sense of responsibility, pride, and craftsmanship.
First, pick a language like C# or Java and have plenty of samples to demonstrate. Always show them the big picture or the big idea before getting into the finer details of OO concepts like abstraction or encapsulation. Be prepared to answer a lot of why questions with sufficient real world examples.
I'm kinda surprised there are any pure procedural programmers left ;-)
But, as someone who started coding back in the early 80s in procedural languages such as COBOL, C and FORTRAN, I remember the thing I had most difficulty with was instantiation. The concept of an object itself wasn't that hard, as basically objects are 'structures with attached methods' (looked at from a procedural perspective), but handling how and when I instantiated an object - and, in those days without garbage collection, destroyed it - caused me some trouble.
I think this arises because, in some sense, a procedural programmer can generally point to any variable in his code and say that's where that item of data is directly stored, whereas as soon as you instantiate an object and assign values to it, things are much less directly tangible (using pointers and memory allocation in C is of course similar, which may be a useful starting point also if your students have C experience). In essence I suppose it means that your procedural-to-OOP programmer has to learn to handle another level of abstraction in their code, and getting comfortable with this mental step is more difficult than it appears. By extension I'd therefore make sure that your students are completely comfortable with allocating and handling objects before looking at such potentially confusing concepts as static methods.
I'd recommend taking a look at Head First Design Patterns which has really nice and easy to understand examples of object oriented design which should really help. I wouldn't emphasize the 'patterns' aspect too much at this point though.
I'm a VB.NET intermediate programmer, and I'm learning OOP. One of the things I find is that lecturing about the concepts over and over is unnerving. I think the perfect documentation would be a gradual transition from procedural programming to full-blown OOP, rather than trying to force them to understand the concepts and then have them write exclusively OOP code using all of them at once. That way they can tinker with little projects like "hello world" without the intimidation of design.
For example (this is for VB.NET beginners not advanced procedural programmers).
I think the first chapters should always be about the general concepts, with just a few examples, but you should not force them to code strictly OOP right away, get them used to the language, so that it's natural for them. When I first started, I had to go back and read the manual over and over to remember HOW to write the code, but I had to wade through pages and pages of lecturing about concepts. Painful!
I just need to remember how to create a ReadOnly property, or something. What would be really handy would be a section of the book that is a language reference, so you can easily look in there to find out HOW to write the code.
Then briefly explain how forms and all the built-in objects are already objects that have methods, show how they behave, and give example code.
Then show them how to create a class, and have them create a class that has properties, and methods, and the new construct. Then have them basically switch from them using procedural code in the form or modules, to writing methods for classes.
Then you just introduce more advanced code as you would in any programming language.
Show them how inheritance works, etc. Just keep expanding, and let them use their creativity to discover what can be done.
After they get used to writing and using classes, then show how their classes could improve, introducing the concepts one by one in the code, modifying the existing projects and making them better. One good idea is to take an example project in procedural code and transform it into a better application in OOP, showing them all the limitations of the procedural approach.
Now after that is the advanced part, where you get into some really advanced OOP concepts, so that folks who are familiar with OOP already get some value out of the book.
Define an object first, not using some silly animal, shape, or vehicle example, but with something they already know: the C stdio library and the FILE structure. It's used as an opaque data structure with defined functions. Map that from a procedural use to an OO usage and go from there to encapsulation, polymorphism, etc.
If they are good procedural programmers and know what a structure and a pointer to a function are, the hardest part of the job is already done!
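The same mapping can be sketched in C# terms (names invented for illustration; the C original would use FILE* and free functions):

// Procedural style: an opaque handle plus separate functions, like FILE*.
public class FileHandle { /* internal state hidden from callers */ }

public static class FileOps
{
    public static FileHandle Open(string path) { return new FileHandle(); }
    public static string Read(FileHandle handle) { return ""; }
}

// OO style: the same state and functions bundled into one object.
public class DataFile
{
    public DataFile(string path) { /* was FileOps.Open(path) */ }
    public string Read() { return ""; /* was FileOps.Read(handle) */ }
}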
I think a low level lecture about how Object Oriented Programming can be implemented in procedural languages, or even assembler, could be cool. Then they will appreciate the amount of work that the compiler does for them; and maybe they will find coding patterns that they already knew and have used previously.
Then, you can talk about best practices in good Object Oriented design and introduce a bit of UML.
And a very important thing to keep in mind always is that they're not freshmen, don't spend much time with basic things because they'll get bored.
Show Design Patterns in Examples
There were already plenty of good answers. I also think that you should use good languages and good, skillful examples, but I have an additional suggestion:
I have learned what OOP means by studying Design Patterns. Of course, I had learned an OO language before, but until I was working on design patterns, I did not understand the power of it all.
I also learned much from OO gurus like Robert C. Martin and his really great papers (to be found on his company's site).
Edit: I also advocate the use of UML (class diagrams) for teaching OO/Design-Pattern.
The thing that made it click for me was introducing refactoring and unit testing. Most of my professional programming career has been in OO languages, but I spent most of it writing procedural code. You call a function on an instance of class X, and it calls a different method on an instance of class Y. I didn't see what the big deal about interfaces was, thought that inheritance was simply a concept of convenience, and saw classes as by and large a way of helping us sort and categorize the massive code. If one were masochistic enough, they could easily go through some of my old projects and inline everything until they got to one massive class. I'm still acutely embarrassed at how bad my code was, how naive my architecture was.
It half-clicked when we went through Martin Fowler's Refactoring book, and then fully clicked when we started going through and writing unit and FitNesse tests for our code, forcing us to refactor. Start pushing refactoring, dependency injection, and separation of the code into distinct MVC models. Either it will sink in, or their heads will explode.
If someone truly doesn't get it, maybe they aren't cut out for working on OO, but I don't think anyone from our team got completely lost, so hopefully you'll have the same luck.
I'm an OO developer professionally, but have had procedural developers on my development team (they were developing Matlab code, so it worked). One of the concepts that I like in OO programming is how objects can relate to your domain (http://en.wikipedia.org/wiki/Domain-driven_design - Eric Evans wrote a book on this, but it is not a beginner's book by any stretch).
With that said, I would start with showing OO concepts at a high level. Try to have them design a car for example. Most people would say a car has a body, engine, wheels, etc. Explain how those can relate to real world objects.
Once they seem to grasp that high level concept, then I would start in on the actual code part of it and concepts like inheritance vs aggregation, polymorphism, etc.
I learned about OOP during my post-secondary education. They did a fairly good job of explaining the concepts, but completely failed in explaining why and when. The way they taught OOP was that absolutely everything had to be an object, and procedural programming was evil for some reason. The examples they were giving us seemed like overkill to me, partly because objects didn't seem like the right solution to every problem, and partly because it seemed like a lot of unnecessary overhead. It made me despise OOP.
In the years since then, I've grown to like OOP in situations where it makes sense to me. The best example I can think of this is the most recent web app I wrote. Initially it ran off a single database of its own, but during development I decided to have it hook into another database to import information about new users so that I could have the application set them up automatically (enter employee ID, retrieves name and department). Each database had a collection of functions that retrieved data, and they depended on a database connection. Also, I wanted an obvious distinction which database a function belonged to. To me, it made sense to create an object for each database. The constructors did the preliminary work of setting up the connections.
Within each object, things are pretty much procedural. For example, each class has a function called getEmployeeName() which returns a string. At this point I don't see a need to create an Employee object and retrieve the name as a property. An object might make more sense if I needed to retrieve several pieces of data about an employee, but for the small amount of stuff I needed it didn't seem worth it.
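In outline, the design was something like this (a C# sketch; the original may have been in another language, and the names are made up):

public class HrDatabase
{
    private readonly System.Data.IDbConnection connection;

    // The constructor does the preliminary connection setup.
    public HrDatabase(System.Data.IDbConnection connection)
    {
        this.connection = connection;
        this.connection.Open();
    }

    // Essentially procedural inside: run a query, return a plain string.
    public string GetEmployeeName(string employeeId)
    {
        return "...";  // look the name up via this database's connection
    }
}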
Cost. Explain how, when properly used, the features of the language should allow software to be written and maintained for a lower cost (e.g. Java's Foo.getBar() instead of the foo->bar so often seen in C/C++ code). Otherwise, why are we doing it?
I found the book Concepts, Techniques, and Models of Computer Programming to be very helpful in understanding and giving me a vocabulary to discuss the differences in language paradigms. The book doesn't really cover Java or C# as OO languages, but rather the concepts of different paradigms. If I were teaching OO, I would start by showing the differences in the paradigms, then slowly the differences among the OO languages; the practical stuff they can pick up by themselves doing coursework/projects.
When I moved from procedural to object oriented, the first thing I did was get familiarized with static scope.
Java is a good language to start doing OO in because it attempts to stay true to all the different OO paradigms.
A procedural programmer will look for things like program entry and exit points, and static scope on a throwaway class is the most familiar thing to them; once they can conceptualize that, the knowledge will blossom out from there.
I remember the lightbulb moment quite vividly. Help them understand the key terms abstract, instance, static, methods and you're probably going to give them the tools to learn better moving forward.

The limit of OOP Paradigm in really complex system? [closed]

I asked a question previously about Dataset vs Business Objects
.NET Dataset vs Business Object : Why the debate? Why not combine the two?
and I want to generalize the question here: where is the proof that OOP is really suitable for very complex problems? Let's take an MMO game engine, for example. I'm no specialist at all, but as I read this article, it clearly states that OOP is far from being enough:
http://t-machine.org/index.php/2007/11/11/entity-systems-are-the-future-of-mmog-development-part-2/
It concludes:
Programming well with Entity Systems is very close to programming with a Relational Database. It would not be unreasonable to call ES’s a form of “Relation Oriented Programming”.
So isn't OOP trying to get rid of something that is here to stay?
OOP is non-linear, relational is linear; both are necessary depending on the part of a system, so why try to eliminate the relational just because it isn't "pure" object? Is OOP an end in itself?
My question is not whether OOP is useful. OOP is useful; my question is rather: why do the purists want to do "pure" OOP?
As the author of the linked post, I thought I'd throw in a couple of thoughts.
FYI: I started seriously (i.e. for commercial work) using OOP / ORM / UML in 1997, and it took me about 5 years of day to day usage to get really good at it IMHO. I'd been programming in ASM and non-OOP languages for about 5 years by that point.
The question may be imperfectly phrased, but I think it's a good question to be asking yourself and investigating - once you understand how to phrase it better, you'll have learnt a lot that's useful about how this all hangs together.
"So isn't OOP trying to get rid off something that is here to stay ?"
First, read Bjarne's paper here: http://www.stroustrup.com/oopsla.pdf
IMHO, no-one should be taught any OOP without reading that paper (and re-reading after they've "learnt" OOP). So many many people misunderstand what they're dealing with.
IME, many university courses don't teach OOP well; they teach people how to write methods, and classes, and how to use objects. They teach poorly why you would do these things, where the ideas come from, etc. I think much of the mis-usage comes from that: almost a case of the blind leading the blind (they aren't blind in "how" to use OOP, they're just blind in "why" to use OOP).
To quote from the final paragraphs of the paper:
"how you support good programming techniques and good design techniques matters more than labels and buzz words. The fundamental idea is simply to improve design and programming through abstraction. You want to hide details, you want to exploit any commonality in a system, and you want to make this affordable.
I would like to encourage you not to make object-oriented a meaningless term. The notion of ‘‘object-oriented’’ is too frequently debased:
– by equating it with good,
– by equating it with a single language, or
– by accepting everything as object-oriented.
I have argued that there are–and must be–useful techniques beyond object-oriented programming and design. However, to avoid being totally misunderstood, I would like to emphasize that I wouldn't attempt a serious project using a programming language that didn't at least support the classical notion of object-oriented programming. In addition to facilities that support object-oriented programming, I want – and C++ provides – features that go beyond those in their support for direct expression of concepts and relationships."
Now ... I'd ask you ... of all the OOP programmers and OOP projects you've seen, how many of them can honestly claim to have adhered to what Bjarne requests there?
IME, less than the majority.
Bjarne states that:
"The fundamental idea is simply to improve design and programming through abstraction"
...and yet many people invent for themselves a different meaning, something like:
"The fundamental idea is that OOP is good, and everything-not-OOP is inferior"
Programmers who have programmed sequentially with ASM, then later ASMs, then Pascal, then C, then C++, and have been exposed to the chaos that was programming pre-encapsulation etc. tend to have a better understanding of this stuff. They know why OOP came about and what it was trying to solve.
Funnily enough, OOP was not trying to solve every programming problem. Who'd have thought it, given how it's talked about today?
It was aimed at a small number of problems that were hugely dangerous the bigger your project got, and which it turned out to be somewhere between "good" and "very good" at solving.
But even some of them it isn't any better than merely "good" at solving; there are other paradigms that are better...
All IMHO, of course ;)
Systems of any notable complexity are not linear. Even if you worked really hard to make a system one linear process, you're still relying on things like disks, memory and network connections that can be flaky, so you'll need to work around that.
I don't know that anyone thinks OOP is the final answer. It's just a way of dealing with complexity by trying to keep various problems confined to the smallest possible sphere so the damage they do when they blow up is minimized. My problem with your question is that it assumes perfection is possible. If it were, I could agree OOP isn't necessary. It is for me until someone comes up with a better way for me to minimize the number of mistakes I make.
Just read your article about Entity Systems, which compares ES to OOP, and it is flagrantly wrong about several aspects of OOP. For example, when there are 100 instances of a class, OOP does not mandate that there be 100 copies of the class's methods loaded in memory; only one is necessary. Everything that ES purports to do "better" than OOP because it has "Components" and "Systems", OOP supports as well, using interfaces and static classes (and/or singletons).
And OOP more naturally fits the real world, as any real or imagined problem domain - consisting of multiple physical and/or non-physical items and abstractions, and the relationships between them - can be modeled with an appropriately designed hierarchical OOP class structure.
What we try to do is put an OO style on top of a relational system. In C# land this gets us a strongly typed system, so that everything from end to end can be compiled and tested. The database is hard to test, refactor, etc. OOP allows us to organize our application into layers and hierarchies, which relational doesn't allow.
Well you've got a theoretical question.
Firstly, let me agree with you that OOP is not a solve-all solution. It's good for some things; it's not good for others. But that doesn't mean it doesn't scale up. Some horribly complex and huge systems have been designed using OOP.
I think OOP is so popular because it deserves to be. It solves some problems rather wonderfully, it is easy to think in terms of Objects because we can do that without re-programming ourselves.
So until we can all come up with a better alternatives that actually works in practical life, I think OOP is a pretty good idea and so are relational databases.
There is really no limit to what OOP can deal with - just as there is no real limit to what C can deal with, or assembler for that matter. All are Turing-complete, which is all you really need.
OOP simply gives you a higher-level way of breaking down the program, just as C is a higher-level than assembler.
The article about entity systems does not say that OO cannot do this - in fact, it sounds like they are using OOP to implement their Entities, Components, etc. In any complex domain there will be different ways of breaking it down, and using OOP you can break it down to the object/class level at some point. This does not preclude having higher-level conceptual frameworks which are used to design the OOP system.
The problem isn't the object oriented approach in most situations, the problem is performance and actual development of the underlying hardware.
The OO paradigm approaches software development by providing us with a metaphor of the real world, where we have concepts which define the commonly accepted and expected properties and behaviour of real objects in the world. It is the way humans model things, and we're able to solve most problems with it.
In theory you can define every aspect of a game, system or whatever using OO. In practice, if you do, your program will simply run too slowly, so the paradigm gets compromised by optimizations which trade the simplicity of the model for performance.
In that way, relational databases are not object oriented, so we build an object-oriented layer between our code and the database... and by doing so you lose some of the performance of the database and some of its expressiveness, because from the point of view of the OO paradigm a relational database is one giant class, a very complex object that provides information.
From my point of view, OO is an almost perfect approach in the theoretical sense of the word, as it maps closely to the way we humans think, but it doesn't fit well with the limited resources available to computation... so we take shortcuts. In the end, performance is far more important than theoretical organization or clarity, so those shortcuts become standards or usual practices.
That is, we are adapting the theoretical model to our current limitations. In the COBOL times of the late 70s, object orientation was simply impossible... it would have implied too many aspects and too little performance, so we used a simplified approach - so simplified you didn't have objects or classes, you had variables... but the concept was, at that time, the same. Groups of variables described related concepts, properties that today would fit into an object. Control sequences based on a variable's value were used to replace class hierarchies, and so on.
I think we've been using OOP for a long time and that we'll continue using it for a long time. As hardware capabilities improve, we'll be able to un-simplify the model so that it becomes more adaptable. If I describe (almost) perfectly the concept of a cat (which involves a lot of describing for a lot of the concepts involved), that concept can be reused everywhere... the problem here is not, as I've said, with the paradigm itself but with our limitations in implementing it.
EDIT: To answer the question about why use pure OO: every "science" wants to have a complete model to represent things. We have two physical models to describe nature, one at the microscopic level and one for the macroscopic, and we want to have just one, because a single model simplifies things and provides us with a better way to prove, test and develop. With OO the same process applies. You can't analytically test and prove a system if the system doesn't follow a precise set of rules. If you are changing between paradigms in a program, then your program cannot be properly analyzed; it has to be dissected, each part analyzed separately and then analyzed again to see that the interactions are correct. It becomes a lot more difficult to understand a system, because in fact you have two or three systems that interact in different ways.
Guys, isn't the question more about ORM than OOP? OOP is a style of programming - the thing that actually gets compared is a Relational Database mapped onto objects.
OOP is actually more than just ORM! It's also not just inheritance and polymorphism! It's an extremely wide range of design patterns, and above all it's the way we think about programming itself.
Jorge: it's fine that you've pointed out the optimization part - what you didn't add is that this step should be done last, and that in 99% of cases the slow part is not the OOP.
Now plain and simple: the OOP style, with all the principles added to it (clean code, use of design patterns, not-too-deep inheritance structures, and let's not forget unit testing!), is a way to make more people understand what you wrote. That in turn is what companies need to keep their business secure. It's also a recipe for small teams to reach a better understanding with the community. It's like a common meta-language on top of the programming language itself.
It's always easier to talk about concepts from a purist's point of view. Once you're faced with a real-life problem, things get trickier and the world is no longer just black and white. Just as the author of the article is very thorough in pointing out that they're not doing OOP, the "OOP purist" tells you that OOP is the only way to go. The truth is somewhere in between.
There is no single answer. As long as you understand the different ways of doing things (OOP, entity systems, functional programming and many more) and can give a good reason for choosing one over another in a given situation, you're more likely to succeed.
About Entity Systems: it's an interesting concept, but it brings nothing really new. For example, the article states:
OOP style would be for each Component to have zero or more methods, that some external thing has to invoke at some point. ES style is for each Component to have no methods but instead for the continuously running system to run its own internal methods against different Components one at a time.
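To make the contrast concrete, here is a rough C# sketch of how I read that passage (the class and member names are mine, not the article's):

    using System.Collections.Generic;

    // OOP style: the component carries its own behaviour.
    class OopPosition
    {
        public float X, Y, Dx, Dy;
        public void Update(float dt) { X += Dx * dt; Y += Dy * dt; }
    }

    // ES style: components are plain data...
    class Position { public float X, Y; }
    class Velocity { public float Dx, Dy; }

    // ...and a continuously running system applies the behaviour
    // to every entity's components, one at a time.
    class MovementSystem
    {
        public void Run(List<(Position Pos, Velocity Vel)> entities, float dt)
        {
            foreach (var (pos, vel) in entities)
            {
                pos.X += vel.Dx * dt;
                pos.Y += vel.Dy * dt;
            }
        }
    }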
But isn't that the same as Martin Fowler's anti-pattern called the "Anemic Domain Model" (which is extensively used nowadays, in fact) link?
So basically ES is an "idea on paper". For people to accept it, it MUST be proven with working code examples. There is not a single word in the article on how to implement this idea in practice. Nothing is said about scalability concerns. Nothing is said about fault tolerance...
As for your actual question, I don't see how the Entity Systems described in the article are similar to relational databases. Relational databases have no such thing as the "aspects" described in the article. In fact the relational model - based on a tabular data structure - is very limited when it comes to working with hierarchical data, for example. More limited than, say, object databases...
Could you clarify what exactly you are trying to compare and prove here? OOP is a programming paradigm, one of the many. It's not perfect. It's not a silver bullet.
What does "Relation Oriented Programming" mean? Data-centric? Well, Microsoft was moving towards more data-centric style of programming until they given up on Linq2Sql and fully focused on their O/RM EntityFramework.
Also, relational databases aren't everything. There are many different kinds of database architectures: hierarchical databases, network databases, object databases, etc. And those can be even more efficient than relational ones. Relational databases are popular for nearly the same reasons OOP is popular: they're simple, very easy to understand, and most often efficient enough.
Ironically, when OO programming arrived it made it much easier to build larger systems; this was reflected in the ramp-up of software coming to market.
Regarding scale and complexity, with good design you can build pretty complex systems.
See Eric Evans' Domain-Driven Design for some principles and patterns on handling complexity in OO.
However, not all problem domains are best suited to all languages. If you have the freedom to choose a language, pick one that suits your problem domain, or build a DSL if that's more appropriate.
We are software engineers after all, unless there is someone telling you how to do your job, just use the best tools for the job, or write them :)

How do I break my procedural coding habits? [closed]

I recently read an interesting comment on an OOP related question in which one user objected to creating a "Manager" class:
Please remove the word manager from your vocabulary when talking about class names. The name of the class should be descriptive of its purpose. Manager is just another word for dumping ground. Any functionality will fit there. The word has been the cause of many extremely bad designs.
This comment embodies my struggle to become a good object-oriented developer. I have been doing procedural code for a long time at an organization with only procedural coders. It seems like the main strategy behind the relatively little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the left over/generalized bits in a "Manager" class.
How can I break my procedural habits (like the Manager class)? Most OO articles/books, etc. use examples of problems that are inherently easy to transform into object groups (e.g., Vehicle -> Car) and thus do not provide much guidance for breaking down more complex systems.
First of all, I'd stop acting like procedural code is wrong. It's the right tool for some jobs. OO is also the right tool for some jobs. So is functional. Each paradigm is just a different point of view of computation, and exists because it's convenient for certain problems, not because it's the only right way to program. In principle, all three paradigms are mathematically equivalent, so use whichever one best maps to the problem domain. IMHO, if using a multiparadigm language it's even ok to blend paradigms within a module if different subproblems are best modeled by different worldviews.
Secondly, I'd read up on design patterns. It's hard to understand OO without some examples of the real-world problems it's good for solving. Head First Design Patterns is a good read, as it answers a lot of the "why" of OO.
Becoming good at OO takes years of practice and study of good OO code, ideally with a mentor. Remember that OO is just one means to an end. That being said, here are some general guidelines that work for me:
Favor composition over inheritance (see the sketch after this list). Read and re-read the first chapter of the GoF book.
Obey the Law of Demeter ("tell, don't ask")
Try to use inheritance only to achieve polymorphism. When you extend one class from another, do so with the idea that you'll be invoking the behavior of that class through a reference to the base class. ALL the public methods of the base class should make sense for the subclass.
Don't get hung up on modeling. Build a working prototype to inform your design.
Embrace refactoring. Read the first few chapters of Fowler's book.
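As promised above, a minimal C# sketch of "favor composition over inheritance" (the stack classes here are made up for illustration):

    using System.Collections.Generic;

    // Inheritance misused: a stack "is a" list, so callers also inherit
    // Insert() and RemoveAt() and can break the stack discipline.
    class InheritedStack<T> : List<T> { }

    // Composition: the stack "has a" list and exposes only what makes sense.
    class SimpleStack<T>
    {
        private readonly List<T> _items = new List<T>();

        public void Push(T item) => _items.Add(item);

        public T Pop()
        {
            T item = _items[_items.Count - 1];
            _items.RemoveAt(_items.Count - 1);
            return item;
        }
    }

Note how SimpleStack also satisfies the guideline above about inheritance: it doesn't extend List<T>, because a stack can't honour all of a list's public methods.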
The single responsibility principle helps me break objects into manageable classes that make sense.
Each object should do one thing, and do it well without exposing how it works internally to other objects that need to use it.
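A tiny hypothetical C# example of the principle: rendering a report and saving it to disk are two responsibilities, so they get two classes:

    using System.IO;

    // One responsibility: knowing what the report looks like.
    class Report
    {
        public string Title = "";
        public string Body = "";

        public string Render() => "== " + Title + " ==\n" + Body;
    }

    // Another responsibility: persisting it.
    class ReportWriter
    {
        public void Save(Report report, string path) =>
            File.WriteAllText(path, report.Render());
    }

Now a change to the file format and a change to the layout each touch exactly one class.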
A 'manager' class will often:
Interrogate something's state
Make a decision based on that state
As an antidote or contrast to that, object-oriented design would encourage you to design class APIs where you "tell, don't ask": the class itself does things and encapsulates its own state. For more about "tell don't ask" see e.g. here and here (maybe someone else has a better explanation of "tell don't ask", but these are the first two articles that Google found for me).
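A minimal C# sketch of the difference, with a made-up Account class:

    using System;

    // "Ask" style (the manager's way): interrogate state, then decide outside:
    //     if (account.Balance >= amount) { account.Balance -= amount; }

    // "Tell" style: the object owns both the state and the decision.
    class Account
    {
        private decimal _balance;

        public Account(decimal openingBalance) => _balance = openingBalance;

        public void Withdraw(decimal amount)
        {
            if (amount > _balance)
                throw new InvalidOperationException("Insufficient funds");
            _balance -= amount;
        }
    }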
It seems like the main strategy behind the relatively little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the left over/generalized bits in a "Manager" class.
That may well be true even at the best of times. Coplien talked about this towards the end of his Advanced C++: Programming Styles and Idioms book: he said that in a system, you tend to have:
Self-contained objects
And, "transactions", which act on other objects
Take, for example, an airplane (and I'm sorry for giving you another vehicular example; I'm paraphrasing him):
The 'objects' might include the ailerons, the rudder, and the thrust
The 'manager' or autopilot would implement various commands or transactions
For example, the "turn right" transaction includes:
flaps.right.up()
flaps.left.down()
rudder.right()
thrust.increase()
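In C# terms I'd picture it something like this (my own names; Coplien's original is C++):

    // The 'objects' are self-contained and passive; the autopilot is the
    // 'transaction' that coordinates them.
    class Flap   { public void Up() { } public void Down() { } }
    class Rudder { public void Right() { } }
    class Thrust { public void Increase() { } }

    class Autopilot
    {
        private readonly Flap _left = new Flap();
        private readonly Flap _right = new Flap();
        private readonly Rudder _rudder = new Rudder();
        private readonly Thrust _thrust = new Thrust();

        // One user-level command cuts across several objects.
        public void TurnRight()
        {
            _right.Up();
            _left.Down();
            _rudder.Right();
            _thrust.Increase();
        }
    }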
So I think it's true that you have transactions, which cut across or use the various relatively-passive 'objects'; in an application, for example, the "whatever" user-command will end up being implemented by (and therefore, invoking) various objects from every layer (e.g. the UI, the middle layer, and the DB layer).
So I think it's true that to a certain extent you will have 'bits left over'; it's a matter of degree, though: perhaps you ought to want as much of the code as possible to be self-contained and encapsulating, and the bits left over, which use (or depend on) everything else, should be given an API which hides as much as possible, does as much as possible, and therefore takes as much responsibility (implementation detail) as possible away from the so-called manager.
Unfortunately I've only read of this concept in that one book (Advanced C++) and can't link you to something online for a clearer explanation than this paraphrase of mine.
Reading and then practicing OO principles is what works for me. Head First Object-Oriented Analysis & Design walks you through examples to build a solution that is OO, and then shows ways to make the solution better.
You can learn good object-oriented design principles by studying design patterns. Code Complete 2 is a good book to read on the topic. Naturally, the best way to ingrain good programming principles into your mind is to practice them constantly by applying them to your own coding projects.
How can I break my procedural habits (like the Manager class)?
Make a class for what the manager is managing (for example, if you have a ConnectionManager class, make a class for a Connection). Move everything into that class.
The reason "manager" is a poor name in OOP is that one of the core ideas in OOP is that objects should manage themselves.
Don't be afraid to make small classes. Coming from a procedural background, you may think it isn't worth the effort to make a class unless it's a thousand lines of code and is some core concept in your domain. Think smaller. A ten line class is totally valid. Make little classes where you see they make sense (a Date, a MailingAddress) and then work your way up by composing classes out of those.
As you start to partition little pieces of your codebase into classes, the remaining procedural code soup will shrink. In that shrinking pool, you'll start to see other things that can be classes. Continue until the pool is empty.
How many OOP programmers does it take to change a light bulb?
None, the light bulb changes itself.
;)
You can play around with an OO language that has very bad procedural support like Smalltalk. The message sending paradigm will force you into OO thinking.
I think you should start with a good plan. Planning using class diagrams would be a good start. You should identify the entities needed in the application, then define each entity's attributes and methods. If some of those are repeated, you can re-define your entities so that inheritance can be used to avoid the redundancy. :D
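For instance, a minimal sketch of that last step in C# (the entities are hypothetical):

    // Customer and Employee both repeat Name and Address, so the shared
    // attributes move into a base class to avoid the redundancy.
    abstract class Person
    {
        public string Name = "";
        public string Address = "";
    }

    class Customer : Person
    {
        public decimal CreditLimit;
    }

    class Employee : Person
    {
        public decimal Salary;
    }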
I have a three-step process, one that I have gone through successfully myself. Later I met an ex-teacher turned programmer (now very experienced) who explained to me exactly why this method worked so well; there's some psychology involved, but it's essentially all about maintaining control and confidence as you learn. Here it is:
Learn what test driven development (TDD) is. You can comfortably do this with procedural code so you don't need to start working with objects yet if you don't want to. The second step depends on this.
Pick up a copy of Refactoring: Improving the Design of Existing Code by Martin Fowler. It's essentially a catalogue of little changes that you can make to existing code. You can't refactor properly without tests, though. What this allows you to do is to mess with the code without worrying that everything will break (see the small test sketch at the end of this answer). Tests and refactoring take away the paranoia and the sense that you don't know what will happen, which is incredibly liberating. You're left free to basically play around. As you get more confident with that, start exploring mocks for testing the interactions between objects.
Now comes the bit that most people mistakenly start with; it's good stuff, but it should really come third. At this point you should be reading about design patterns, code smells (that's a good one to Google) and object-oriented design principles. Also learn about user stories or use cases, as these give you good initial candidate classes when writing new applications, which is a good solution to the "where do I start?" problem when writing apps.
And that's it! Proven goodness! Let me know how it goes.
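To make step 1 concrete, here's what a behaviour-pinning test might look like, sketched with xUnit in C# (any test framework works the same way; PriceCalculator is made up):

    using Xunit;

    public class PriceCalculator
    {
        public decimal Total(decimal basePrice, decimal discount) =>
            basePrice * (1 - discount);
    }

    public class PriceCalculatorTests
    {
        [Fact]
        public void Discount_is_applied_to_the_base_price()
        {
            var calc = new PriceCalculator();

            // The test pins the behaviour down, so the internals of
            // Total() can be refactored without fear of breaking it.
            Assert.Equal(90m, calc.Total(basePrice: 100m, discount: 0.10m));
        }
    }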
My eureka moment for understanding object-oriented design was when I read Eric Evans' book "Domain-Driven Design: Tackling Complexity in the Heart of Software". Or the "Domain Driven Design Quickly" mini-book (which is available online as a free PDF) if you are cheap or impatient. :)
Any time you have a "Manager" class or any static singleton instances, you are probably building a procedural design.