OOP design of tool for teaching formal languages & automata [closed]

I am thinking of using some spare time to play around with designing and implementing a teaching tool for a course on formal languages and automata theory. I am trying to decide whether an OOP implementation would be appropriate and, if so, whether anyone can suggest high-level improvements to the design I outline below.
There are lots of potential classes revealed in the linguistic analysis. Some (and please let me know if I've missed anything fundamental) are: grammar; nonterminal; terminal; production; regular grammar; context-free grammar; context-sensitive grammar; unrestricted grammar; automaton; state; symbol; transition; DFA; NFA; NFA-Lambda; DPDA; PDA; LBA; Turing machine.
Question 1: Should each kind of grammar get its own class in the implementation, or should an overarching Grammar class have methods to determine what kind of grammar it is (e.g., isRegular(), isContextFree(), etc.)? More generally, when classes in the domain model differ only a little, and only in terms of behavior, should that difference be represented via inheritance in the implementation, or is it better to simply push the different kinds of behavior into the parent class?
Question 2: Should things like Symbol, State, Nonterminal, etc. get their own classes in the implementation, or should they be absorbed into their containers? More generally, should very simple classes in the domain model be given their own classes in the implementation (e.g., for extensibility), or should that be pushed into the container class?
Question 3: Should Transition be its own class in the implementation and, if so, will I need to subclass it to support each kind of automaton (since, besides differing in their states, automata also differ in what happens during transitions)? More generally, is it good practice to have two abstract parent classes where there is a bijection between the children of one and the children of the other, or is that too much coupling?
I realize that, at the end of the day, a lot of these decisions are simply design decisions, but I would like to know what you guys think about best practices in OOP design. Moreover, and the reason I'm not just asking the "more generally" questions as pure OOP design questions is that I'd like special perspective from people who have experience with this kind of domain (languages & automata).
Any help is greatly appreciated.

more generally, should classes which differ only a little in the domain model, and only in terms of behavior, be represented via inheritance in the implementation, ...
Behavior is not "only". Behavior is the most important part of objects.
or is it better to simply push different kinds of behavior into the parent class?
Certainly not. That would be a violation of the Liskov substitution principle.
Inheritance should not be used for "easier access to common stuff".
The contract of the parent class must hold completely for every child class; if a child cannot comply with it, avoid inheritance and use composition.
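In the questioner's domain, that advice might cash out as composing an automaton from a pluggable transition function instead of building a deep inheritance tree. A minimal C# sketch, with all names hypothetical:

using System.Collections.Generic;

interface ITransitionFunction
{
    // Zero results = stuck; one = deterministic; many = nondeterministic.
    IEnumerable<int> Next(int state, char symbol);
}

// Deterministic behavior as one pluggable implementation.
class DeterministicTransitions : ITransitionFunction
{
    private readonly Dictionary<(int, char), int> table;

    public DeterministicTransitions(Dictionary<(int, char), int> table)
    {
        this.table = table;
    }

    public IEnumerable<int> Next(int state, char symbol)
    {
        if (table.TryGetValue((state, symbol), out int next))
            yield return next;
    }
}

class Automaton
{
    private readonly ITransitionFunction transitions;   // composed, not inherited

    public Automaton(ITransitionFunction transitions)
    {
        this.transitions = transitions;
    }
}

The Automaton never needs to be subclassed just to change how transitions behave.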
more generally, should very simple classes in the domain model be given their own classes in the implementation - e.g. for extensibility - or should that be pushed into the container class?
It kind of depends on how "deep" your logic is going to go.
Usually it's a good idea to start decomposition only when you hit some limits.
Some people call it evolutionary design.
E.g., I have rules like "a class must not be larger than ~200 lines of code", "an object must not talk with more than 5-7 other objects", "a method should not be larger than ~10 lines of code", "the same logic should be written only once", etc...
Also, it's easy to underestimate semantics: if order.isOverdue() is much more readable and understandable than if order.dueDate < Date.now(). Introducing classes that reflect the concepts of the domain greatly "humanizes" code and raises the abstraction level (think "asm vs. Java").
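That example in a couple of lines of C# (the Order class and its members are illustrative):

using System;

class Order
{
    public DateTime DueDate;

    // The intent now lives in the method name, not at every call site.
    public bool IsOverdue()
    {
        return DueDate < DateTime.Now;
    }
}

// Call sites read as domain language:
//   if (order.IsOverdue()) { ... }   instead of   if (order.DueDate < DateTime.Now) { ... }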
But decomposition for the sake of decomposition leads to unnecessary complexity.
It always must be justified.
Should Transition be its own class in the implementation and, if so, will I need to subclass it in order to support each kind of automaton (since, besides differing in terms of the state, they also differ in terms of what happens during transitions)?
There is nothing wrong with that as long as it complies with your domain.
Creating abstractions is an artistic activity that depends heavily on your domain (e.g., advice from experts in formal languages & automata might be severely over-complicated if the requirement that your code is supposed to be a teaching tool for a course is forgotten).
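For what it's worth, a Transition hierarchy along the lines the questioner describes could be as small as this (a hedged sketch; the field names are hypothetical):

abstract class Transition
{
    public int From, To;       // source and target states
    public char Input;         // input symbol consumed
}

class PdaTransition : Transition
{
    public char StackPop;      // symbol required on top of the stack
    public string StackPush;   // symbols pushed in its place
}

class TuringTransition : Transition
{
    public char Write;         // symbol written to the current tape cell
    public int HeadMove;       // -1 = move left, +1 = move right
}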

Good idea. I've done similar stuff, first with procedural programming, then OO later.
Answer 1: Both. Some grammars or tokens will have specific methods or attributes, while others should be shared among all grammars.
Answer 2: They should have their own classes, although they may share common ancestors.
Answer 3: It can be handled both ways, although defining a specific class could be useful. I agree that intersections or associations between other classes/objects exist, but they are difficult to model. The "proxy" design pattern is an example of that.
Teaching language design with lex, Bison, or yacc is difficult. I hear that ANTLR is a good tool for teaching compiler-related stuff.
What do you want to build?
An object-oriented parser/scanner?
There are some already; I have trouble understanding how to use them. Some of them declare new grammars the way you define new classes, but in this case I find functional or logic programming syntax more suitable for declaring rules.
Visual related tools:
http://www.ust-solutions.com/ultragram.aspx
http://www.sand-stone.com/
http://antlr.org/
http://antlr.org/works/index.html
Good Luck ;-)

If you want to go with an existing solution instead of writing one, some friends of mine at NCSU wrote a tool called ProofChecker that might fit the bill.
It's written in Java, so that might satisfy your OO angle as well.

Related

Why is the object oriented model so occupying/monopolizing? [closed]

Don't get me wrong - OOP currently is the best thing to structure large code bases.
But why do people try to stuff anything into an OO view?
For example: each text book about OOP contains an "introducing example" that tries to express a small view of our real world in an OO inheritance and composition and aggregation construct. And - meanwhile - we all know that it never results in the almighty OO construct that the OO model itself promised! The authors just created an illusion.
My personal opinion is, that OO is nice to structure code, but it is not suited to represent real world data and its relations. IMHO the relational model is superior, probably any other model is superior.
In OO design it became practice to recommend composition over inheritance - whenever possible. So that mighty looking model of an all-inheritance-based-world-of-objects thing that first class books suggest is just an illusion. So, OO itself may be an illusion? And current composition-centric OO models are nothing more than plain data structures with some standardized syntactic sugar - that's not much different than in pre-OOP approaches.
Another example: imagine a really f***ing complex model of our real-world. Besides anything else, there are stone blocks and humans. In an OO model, humans are mammals are animals are organic lifeforms and so on (the strictly rigid inheritance hierarchy OO imposes, you know..). The stone blocks are non-organic things, maybe they are rigid bodies or whatever, it doesn't matter.
If you are an artist and you have to find a stone block that makes a good "template" (?) for a statue of a human with a given width, height, and thickness, then you have to write a bunch of special-case OO code to retrieve these attributes from the human model and from the stone block model. Or alternatively, your whole world model was built to support geometric queries; then it would be easy! But that leads to the conclusion that OOP sucks at representing data in a way that allows us to use it in different use cases. OOP just allows us to represent data for exactly those use cases that we have designed for beforehand, not much more. Any use besides those predetermined cases can only be done with much fiddling. The relational model at least tries to represent data in a re-usable way. (Re-usable: OOP once even occupied that word.)
Why all that hate?
I work on a project that uses an ORM - and it just sucks. It started when modeling the database (because of ORM limitations), then came the time to learn the ins and outs of the ORM (and its bugs and further limitations), then came the fear of implicitly happening stuff (new thing(); thing->save() creates a new row, but where is "thing" rooted? why do people try to make objects as "independent" as possible but in the back create much more deeply rooted dependencies on freaking per-table-singletons, that communicate with connection-singletons.. oh my god .. I digress).
So many things that could have been done in a few lines of SQL and a sweet tiny query API were done in hundreds or maybe thousands of lines of "business logic" code (of course in the application layer, not in the database where the data is, and where aggregation functions like count() or sum() would be cheap). I think the people just feel better when they can work in OOP. But that is just stupid.
And the creators of ORM just want to keep the users away from the "dirty stuff". But exactly those people should not write ORM - the perfect example: I strongly believe that the ORM-creator type of people do not even know that a database table can contain compound primary keys! ;-)
So, why is OOP so occupying? It is just a half-baked abstraction, but people swear on it for everything, if you ask some, they may even tell you that OOP will create world peace.
Why is OOP so f***ing occupying/monopolizing?
It seems to me that if your business logic implementation is driving the design of your database, then somebody put the cart before the horse. I thought the idea was to develop a rational (no, I didn't say relational) data model and then implement whatever logic queries and updates the data.
My experience has been that although relational databases are great for storing and querying data from the user's perspective, trying to extend the relational model into a structured programming or OOP paradigm is difficult in the extreme. There's always a translation layer. Today everybody thinks that ORM is the solution. And although technically any translation layer that sits between a relational data store and an object oriented data access layer is ORM, when I see people talking about ORM these days they seem to be talking about some automated way of generating that ORM layer.
I'm not convinced that a generalized ORM solution exists. Every one I've seen is fraught with peril. It's a pain in the neck, but the only reliable ORM layers I've ever seen were hand coded. Granted, I haven't worked much with database stuff in the last five years or so, so things may have changed.
I'll agree with you that OOP is largely just a lot of syntactic sugar around a solid structured design. However, it's good syntactic sugar. It formalizes a lot of things that were considered "best practices" in structured programming, and adds some things (inheritance, interfaces, polymorphism, among others) that were very difficult or impossible to express in purely structured languages. We certainly could have added some or all of those features to structured languages without going all the way to OOP, but why? OOP was the obvious next step in the evolution of procedural programming languages.
OOP is just another tool in the toolbox, but keep in mind that OO has been the focus of mainstream programming languages for over 20 years now - starting with C++ and moving on to Java and C#. This probably has more to do with why the model is currently "so occupying" than anything else.
The point of OOP isn't necessarily to represent every single facet of every single object in the world. The point is to represent the stuff you care about. For instance, assume there's a house. Someone doing real estate stuff would care about the location, selling price, etc. A builder might care about the blueprint ID or something, i dunno. The point is, you model the important stuff and ignore the rest. Add to it later if you find you need more info.
Yes, this makes a "House" class tailored to the app being built, and possibly unsuitable for others. OOP's overall goal isn't to reuse classes, though sometimes that happens. The point is to bundle data with the actions that can affect that data, and to thus reduce a problem conceptually from hundreds of variables and functions to a few objects with known and tested interfaces and related behaviors.
The Human and StoneBlock should both inherit from MaterialObject, which has height, width, and depth attributes and even implements a biggerThan() method when given another MaterialObject.
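A sketch of that suggestion (all class and member names are hypothetical):

class MaterialObject
{
    public double Height, Width, Depth;

    // True when this object encloses the other in every dimension.
    public bool BiggerThan(MaterialObject other)
    {
        return Height >= other.Height && Width >= other.Width && Depth >= other.Depth;
    }
}

class Human : MaterialObject { }        // human-specific members go here
class StoneBlock : MaterialObject { }   // stone-specific members go here

// The artist's query then needs no special-case code:
//   blocks.Where(b => b.BiggerThan(statueModel));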
OO is "occupying" because it has shown to be an excellent choice for many programming problems. Similarly, the relational model has shown to be an excellent choice for many data storage and retrieval problems. When I say "many" I mean "so many that all other pale in comparison". In fact, both coupled together are an excellent combo, but there is complexity in mapping where these two paradigms meet, thus ORM.
I almost thought your question was insincere but then decided it was lack of experience (not intended as an insult, just guessing from the questions/assertions). You will find that there are problems so complex that OO is the only feasible way to model them. Not everything is a database-backed web site or reporting tool. Many systems are mostly "business logic" where the best solution is an OO solution (example from my experience: controlling and monitoring robotic aircraft and their various payloads). That being said, many of the popular database-backed web frameworks are OO+RDBMS (Rails, Grails, Java+Spring+Hibernate, etc.) because the combination is so powerful.
While there are certainly fads and sticky but outmoded paradigms, I suggest that when there are many choices (OO, functional programming, RDBMS-centric, etc.) people almost always choose what is the most productive. For at least 10 years, that has been OO for a large portion of software problems.
Because at the end of the day it's a good approximation of the way we ourselves model things. The counter that is often raised to this is that computers have no concept of objects, it's all 1s and 0s, but that analysis is as empty as saying all human thought is nothing more than neurons and electrical impulses (which it probably is, but it's just not a useful way of looking at things).
So you don't like inheritance? Nor do I. Inheritance of behaviour is the poor man's code reuse. Inheritance of interface, on the other hand, is great as it gives us polymorphism.
You don't like ORMs? No one is forcing you to use them. There is a conflict between OOP and RDBMSs that I don't think is easily fixed, and most ORMs' attempts to resolve it are quite naive. The limitations of ORM are not a flaw in OOP.

How to teach object oriented programming to procedural programmers? [closed]

I have been asked to begin teaching C# and OO concepts to a group of procedural programmers. I've searched for ideas on where to begin, but am looking for general consensus on topics to lead with in addition to topics to initially avoid.
Edit
I intend to present information in 30 minute installments weekly until it no longer makes sense to meet. These presentations are targeted at coworkers at a variety of skill levels from novice to expert.
The best thing you can do is: Have a ton of Q&A.
Wikipedia's procedural programming (PP) article really hits where you should start:
Whereas procedural programming uses procedures to operate on data structures, object-oriented programming bundles the two together, so an "object" operates on its "own" data structure.
Once this is understood, I think a lot will fall into place.
In general
OOP is one of those things that can take time to "get," and each person takes their own path to get there. When writing in C#, it's not like the code screams, "I am using OO principles!" in every line. It's more of a subtle thing, like a foreach loop, or string concatenation.
Design center
Always use something (repeatedly) before making it.
First, use an object, and demonstrate the basic differences from PP. Like:
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        // The object bundles data (the items) with behavior (Add, Sort).
        List<int> myList = new List<int>();
        myList.Add(1);
        myList.Add(7);
        myList.Add(5);
        myList.Sort();   // no free-standing sort() function: the list sorts itself

        for (int i = 0; i < myList.Count; i++)
        {
            Console.WriteLine(myList[i]);
        }
    }
}
Using objects (and other OO things) first -- before being forced to create their own -- leads people down the path of, "Ok, I'm making something like what I just used," rather than "WTF am I typing?"
Inheritance (it's a trap!)
I would NOT spend a lot of time on inheritance. I think it is a common pitfall for lessons to make a big deal about this (usually making a cliché animal hierarchy, as others pointed out). I think it's critical to know about inheritance, to understand how to use the .NET Framework, but its nuances aren't that big of a deal.
When I'm using .NET, I'm more likely to "run into inheritance" when I'm using the .NET Framework (i.e. "Does this control have a Content property?" or "I'll just call its ToString() method.") rather than when I'm creating my own class. Very (very (very)) rarely do I feel the need to make something mimicking the taxonomy structure of the animal kingdom.
Interfaces
Coding to an interface is a key mid-level concept. It's used everywhere, and OOP makes it easier. Examples of this are limitless. Building off the example I have above, one could demonstrate the IComparer<int> interface:
// Inside a class declared as implementing IComparer<int>, so that
// myList.Sort(this) below compiles:
public int Compare(int x, int y)
{
    return y.CompareTo(x);   // arguments reversed: sorts in descending order
}
Then, use it to change the sort order of the list, via myList.Sort(this). (After talking about this, of course.)
Best practices
Since there are some experienced developers in the group, one strategy in the mid-level classes would be to show how various best practices work in C#. Like, information hiding, the observer pattern, etc.
Have a ton of Q&A
Again, everyone learns slightly differently. I think the best thing you can do is have a ton of Q&A and encourage others in the group to have a discussion. People generally learn more when they're involved, and you have a good situation where that should be easier.
The leap from procedural to object oriented (even within a language - for four months I programmed procedural C++, and classes were uncomfortable for a while after) can be eased if you emphasize the more basic concepts that people don't emphasize.
For instance, when I first learned OOP, none of the books emphasized that each object has its own set of data members. I was trying to write classes for input validation and the like, not understanding that classes were to operate on data members, not input.
Get started with data structures right away. They make the OOP paradigm seem useful. People teach you how to make a "House" class, but since most beginning programmers want to do something useful right away, that seems like a useless detour.
Avoid polymorphism right away. Inheritance is alright, but teach when it is appropriate (instead of just adding to your base class).
Operator overloading is not essential when you are first learning, and the special member functions (default ctor, dtor, copy ctor, and assignment operator) all have their tricky aspects, so you might want to avoid them until students are grounded in basic class design.
Have them build a Stack or a Linked List. Don't do anything where traversal is tricky, like a binary tree.
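A linked-list stack of the kind suggested above fits in a few lines; a sketch in C# (generic syntax included, though you could start without it):

class LinkedStack<T>
{
    private class Node
    {
        public T Value;
        public Node Next;
    }

    private Node top;

    public bool IsEmpty { get { return top == null; } }

    public void Push(T value)
    {
        top = new Node { Value = value, Next = top };   // new node points at old top
    }

    public T Pop()
    {
        if (top == null)
            throw new System.InvalidOperationException("empty stack");
        T value = top.Value;
        top = top.Next;   // traversal is one step, never a tricky tree walk
        return value;
    }
}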
Do it in stages.
High-level concepts: Describe what an object is and relate it to real life.
Medium-level concepts: Now that they get what an object is, compare and contrast. Show them why a global variable is bad compared to an encapsulated value in a class, and what advantages encapsulation brings. Start introducing the tenets of OOP (encapsulation, inheritance).
Low-level concepts: Go further into polymorphism and abstraction. Show them how they can achieve even better designs through polymorphism and abstraction.
Advanced concepts: SOLID, interface programming, OO design patterns.
Perhaps you should consider a problem that is work related and start with a procedural implementation of it and then work through (session by session) how to make an OOP implementation of it. I find professionals often grasp concepts better if it is directly related to real examples from their own work place. The junk examples most textbooks use are often horrible for understanding because they leave the student wondering, why on earth would I ever want to do that. Give them a real life reason why they would want to do that and it makes more sense.
I would avoid the "a bicycle is a kind of vehicle" approach and try to apply OO to an environment that is fairly specific and that they are already used to. Try to find a domain of problems that they all recognize.
Exercise the basics in that domain, but try to move towards some "wow!" or "aha!" experience relatively early; I had an experience like that while reading about "Replace Conditional with Polymorphism" in Fowler's Refactoring, and that or similar books could be a good source of ideas. If I recall correctly, Michael Feathers' Working Effectively with Legacy Code contains a chapter about how to transform a procedural program into OO.
Teach Refactoring
Teach the basics, the bare minimum of OO principles, then teach Refactoring hands-on.
Traditional Way: Abstractions > Jargon Cloud > Trivial Implementation > Practical Use
(Can you spot the disconnect here? One of these transitions is harder than the others.)
In my experience most traditional education does not do a good job in getting programmers to actually grok OO principles. Instead they learn a bit of the syntax, some jargon they have a vague understanding of, and a couple canonical design examples that serve as templates for a lot of what they do. This is light years from the sort of thorough understanding of OO design and engineering one would desire competent students to obtain. The result tends to be that code gets broken down into large chunks in what might best be described as object-libraries, and the code is nominally attached to objects and classes but is very, very far from optimal. It's exceedingly common, for example, to see several hundred line methods, which is not very OO at all.
Provide Contrast To Sharpen The Focus on the Value of OO
Teach students by giving them the tools up front to improve the OO design of existing code, through refactoring. Take a big swath of procedural code, use extract method a bunch of times using meaningful method names, determine groups of methods that share a commonality and port them off to their own class. Replace switch/cases with polymorphism. Etc. The advantages of this are many. It gives students experience in reading and working with existing code, a key skill. It gives a more thorough understanding of the details and advantages of OO design. It's difficult to appreciate the merits of a particular OO design pattern in vacuo, but comparing it to a more procedural style or a clumsier OO design puts those merits in sharp contrast.
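The "replace switch/cases with polymorphism" step mentioned above, in miniature (the Shape example is illustrative, not from any particular codebase):

// Before: a procedural switch, living in some utility class, that must
// grow for every new case.
static double Area(string kind, double size)
{
    switch (kind)
    {
        case "square": return size * size;
        case "circle": return System.Math.PI * size * size;
        default: throw new System.ArgumentException(kind);
    }
}

// After: each case becomes a class; adding a shape adds a class and
// leaves existing code untouched.
abstract class Shape
{
    public abstract double Area();
}

class Square : Shape
{
    public double Side;
    public override double Area() { return Side * Side; }
}

class Circle : Shape
{
    public double Radius;
    public override double Area() { return System.Math.PI * Radius * Radius; }
}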
Build Knowledge Through Mental Models and Expressive Terminology
The language and terminology of refactoring help students in understanding OO design, how to judge the quality of OO designs and implementations through the idea of code smells. It also provides students a framework with which to discuss OO concepts with their peers. Without the models and terminology of, say, an automobile transmission, mechanics would have a difficult time communicating with each other and understanding automobiles. The same applies to OO design and software engineering. Refactoring provides abundant terminology and mental models (design patterns, code smells and corresponding favored specific refactorings, etc.) for the components and techniques of software engineering.
Build an Ethic of Craftsmanship
By teaching students that design is not set in stone you bolster students' confidence in their ability to experiment, learn, and discover. By getting their hands dirty they'll feel more empowered in tackling software engineering problems. This confidence and practical skill will allow them to truly own the design of their work (because they will always have the skills and experience to change that design, if they desire). This ownership will hopefully help foster a sense of responsibility, pride, and craftsmanship.
First, pick a language like C# or Java and have plenty of samples to demonstrate. Always show them the big picture or the big idea before getting into the finer details of OO concepts like abstraction or encapsulation. Be prepared to answer a lot of why questions with sufficient real world examples.
I'm kinda surprised there are any pure procedural programmers left ;-)
But, as someone who started coding back in the early '80s in procedural languages such as COBOL, C, and FORTRAN, I remember the thing I had the most difficulty with was instantiation. The concept of an object itself wasn't that hard, as objects are basically 'structures with attached methods' (looked at from a procedural perspective), but handling how and when I instantiated objects, and, in those days without garbage collection, destroyed them, caused me some trouble.
I think this arises because in some sense a procedural programmer can generally point to any variable in his code and say that's where that item of data is directly stored, whereas as soon as you instantiate an object and assign values to it, it's much less directly tangible (using pointers and memory allocation in C is of course similar, which may be a useful starting point if your students have C experience). In essence, I suppose it means that your procedural-to-OOP programmer has to learn to handle another level of abstraction in their code, and getting comfortable with this mental step is more difficult than it appears. By extension, I'd therefore make sure that your students are completely comfortable with allocating and handling objects before looking at such potentially confusing concepts as static methods.
I'd recommend taking a look at Head First Design Patterns which has really nice and easy to understand examples of object oriented design which should really help. I wouldn't emphasize the 'patterns' aspect too much at this point though.
I'm a vb.net intermediate programmer, and I'm learning OOP. One of the things I find is the lecturing about the concepts over and over is unnerving. I think what would be perfect documentation would be a gradual transition from procedural programming to full blown OOP rather than trying to force them to understand the concepts then have them write exclusively OOP code using all the concepts. That way they can tinker with little projects like "hello world" without the intimidation of design.
For example (this is for VB.NET beginners not advanced procedural programmers).
I think the first chapters should always be about the general concepts, with just a few examples, but you should not force them to code strictly OOP right away, get them used to the language, so that it's natural for them. When I first started, I had to go back and read the manual over and over to remember HOW to write the code, but I had to wade through pages and pages of lecturing about concepts. Painful!
I just need to remember how to create a ReadOnly Property, or something. What would be real handy would be a section of the book that is a language reference so you can easily look in there to find out HOW to write the code.
Then briefly explain how forms and all the built-in controls are already objects that have methods, show how they behave, and give example code.
Then show them how to create a class, and have them create one with properties, methods, and a constructor. Then have them switch from writing procedural code in the form or modules to writing methods on classes.
Then you just introduce more advanced code as you would for any programming language.
Show them how inheritance works, etc. Just keep expanding, and let them use their creativity to discover what can be done.
After they get used to writing and using classes, show how their classes could improve, introducing the concepts one by one in the code, modifying the existing projects and making them better. One good idea is to take an example project in procedural code and transform it into a better application in OOP, showing them all the limitations of the procedural version.
Now after that comes the advanced part, where you get into some really advanced OOP concepts, so that folks who are already familiar with OOP get some value out of the book.
Define an object first, not using some silly animal, shape, or vehicle example, but with something they already know: the C stdio library and the FILE structure. It's used as an opaque data structure with defined functions. Map that from procedural use to OO usage, and go from there to encapsulation, polymorphism, etc.
If they are good procedural programmers and know what a structure and a pointer to a function are, the hardest part of the job is already done!
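The mapping is almost mechanical: procedural C passes the opaque structure to free functions, and OO attaches the same functions to the structure as methods. A hypothetical C# rendering of the stdio example:

// Procedural shape, in C:
//   FILE *f = fopen("data.txt", "r");
//   fgets(buffer, n, f);
//   fclose(f);

// OO shape: the same state, with the functions attached as methods.
class TextFile
{
    private System.IO.StreamReader reader;

    public TextFile(string path)   // plays the role of fopen
    {
        reader = new System.IO.StreamReader(path);
    }

    public string ReadLine()       // plays the role of fgets
    {
        return reader.ReadLine();
    }

    public void Close()            // plays the role of fclose
    {
        reader.Dispose();
    }
}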
I think a low level lecture about how Object Oriented Programming can be implemented in procedural languages, or even assembler, could be cool. Then they will appreciate the amount of work that the compiler does for them; and maybe they will find coding patterns that they already knew and have used previously.
Then, you can talk about best practices in good Object Oriented design and introduce a bit of UML.
And a very important thing to keep in mind always is that they're not freshmen, don't spend much time with basic things because they'll get bored.
Show Design Patterns in Examples
There are already plenty of good answers here. I also think you should use good languages and skillful examples, but I have an additional suggestion:
I learned what OOP means by studying design patterns. Of course, I had learned an OO language before, but until I was working on design patterns, I did not understand the power of it all.
I also learned a lot from OO gurus like Robert C. Martin and his really great papers (to be found on his company's site).
Edit: I also advocate the use of UML (class diagrams) for teaching OO and design patterns.
The thing that made it click for me was the introduction of refactoring and unit testing. Most of my professional programming career has been in OO languages, but I spent most of it writing procedural code: you called a function on an instance of class X, and it called a different method on an instance of class Y. I didn't see what the big deal about interfaces was, thought inheritance was simply a concept of convenience, and saw classes largely as a way of helping us sort and categorize the massive code. If one were masochistic enough, one could easily go through some of my old projects and inline everything until arriving at one massive class. I'm still acutely embarrassed at how bad my code was, how naive my architecture was.
It half-clicked when we went through Martin Fowler's Refactoring book, and then fully clicked when we started going through and writing unit and FitNesse tests for our code, forcing us to refactor. Start pushing refactoring, dependency injection, and separation of the code into distinct MVC models. Either it will sink in, or their heads will explode.
If someone truly doesn't get it, maybe they aren't cut out for working on OO, but I don't think anyone from our team got completely lost, so hopefully you'll have the same luck.
I'm an OO developer professionally, but have had procedural developers on my development team (they were developing Matlab code, so it worked). One of the concepts that I like in OO programming is how objects can relate to your domain (http://en.wikipedia.org/wiki/Domain-driven_design - Eric Evans wrote a book on this, but it is not a beginner's book by any stretch).
With that said, I would start with showing OO concepts at a high level. Try to have them design a car for example. Most people would say a car has a body, engine, wheels, etc. Explain how those can relate to real world objects.
Once they seem to grasp that high level concept, then I would start in on the actual code part of it and concepts like inheritance vs aggregation, polymorphism, etc.
I learned about OOP during my post-secondary education. They did a fairly good job of explaining the concepts, but completely failed at explaining why and when. The way they taught OOP was that absolutely everything had to be an object, and procedural programming was evil for some reason. The examples they gave us seemed like overkill to me, partly because objects didn't seem like the right solution to every problem, and partly because it seemed like a lot of unnecessary overhead. It made me despise OOP.
In the years since then, I've grown to like OOP in situations where it makes sense to me. The best example I can think of this is the most recent web app I wrote. Initially it ran off a single database of its own, but during development I decided to have it hook into another database to import information about new users so that I could have the application set them up automatically (enter employee ID, retrieves name and department). Each database had a collection of functions that retrieved data, and they depended on a database connection. Also, I wanted an obvious distinction which database a function belonged to. To me, it made sense to create an object for each database. The constructors did the preliminary work of setting up the connections.
Within each object, things are pretty much procedural. For example, each class has a function called getEmployeeName() which returns a string. At this point I don't see a need to create an Employee object and retrieve the name as a property. An object might make more sense if I needed to retrieve several pieces of data about an employee, but for the small amount of stuff I needed it didn't seem worth it.
Cost. Explain how, when properly used, the features of the language should allow software to be written and maintained at lower cost (e.g., Java's Foo.getBar() instead of the foo->bar so often seen in C/C++ code). Otherwise, why are we doing it?
I found the book Concepts, Techniques, and Models of Computer Programming very helpful in understanding, and giving me a vocabulary to discuss, the differences between language paradigms. The book doesn't really cover Java or C# as OO languages, but rather the concepts of different paradigms. If I were teaching OO, I would start by showing the differences between the paradigms, then slowly the differences between the OO languages; the practical stuff they can pick up by themselves doing coursework/projects.
When I moved from procedural to object oriented, the first thing I did was get familiarized with static scope.
Java is a good language to start doing OO in because it attempts to stay true to all the different OO paradigms.
A procedural programmer will look for things like program entry and exit points and once they can conceptualize that static scope on a throwaway class is the most familiar thing to them, the knowledge will blossom out from there.
I remember the lightbulb moment quite vividly. Help them understand the key terms abstract, instance, static, methods and you're probably going to give them the tools to learn better moving forward.

The limit of OOP Paradigm in really complex system? [closed]

I asked a question previously about Dataset vs Business Objects
.NET Dataset vs Business Object : Why the debate? Why not combine the two?
and I want to generalize the question here: where is the proof that OOP is really suitable for very complex problems? Let's take an MMO game engine, for example. I'm no specialist at all, but as I read this article, it clearly argues that OOP is far from being enough:
http://t-machine.org/index.php/2007/11/11/entity-systems-are-the-future-of-mmog-development-part-2/
It concludes:
Programming well with Entity Systems is very close to programming with a Relational Database. It would not be unreasonable to call ES’s a form of “Relation Oriented Programming”.
So isn't OOP trying to get rid of something that is here to stay?
OOP is non-linear, relational is linear, and both are necessary depending on the part of a system, so why try to eliminate relational just because it isn't "pure" object? Is OOP an end in itself?
My question is not whether OOP is useful. OOP is useful; my question is rather why the purists want to do "pure" OOP.
As the author of the linked post, I thought I'd throw in a couple of thoughts.
FYI: I started seriously (i.e. for commercial work) using OOP / ORM / UML in 1997, and it took me about 5 years of day to day usage to get really good at it IMHO. I'd been programming in ASM and non-OOP languages for about 5 years by that point.
The question may be imperfectly phrased, but I think it's a good question to be asking yourself and investigating; once you understand how to phrase it better, you'll have learnt a lot that's useful about how this all hangs together.
"So isn't OOP trying to get rid of something that is here to stay?"
First, read Bjarne's paper here: http://www.stroustrup.com/oopsla.pdf
IMHO, no one should be taught any OOP without reading that paper (and re-reading it after they've "learnt" OOP). So many people misunderstand what they're dealing with.
IME, many university courses don't teach OOP well; they teach people how to write methods and classes and how to use objects, but they teach poorly why you would do these things, where the ideas come from, etc. I think much of the misuse comes from that: almost a case of the blind leading the blind (they aren't blind in "how" to use OOP, they're just blind in "why" to use OOP).
To quote from the final paragraphs of the paper:
"how you support good programming techniques and good design techniques matters more than labels and buzz words. The fundamental idea is simply to improve design and programming through abstraction. You want to hide details, you want to exploit any commonality in a system, and you want to make this affordable.
I would like to encourage you not to make object-oriented a meaningless term. The notion of ‘‘object-oriented’’ is too frequently debased:
– by equating it with good,
– by equating it with a single language, or
– by accepting everything as object-oriented.
I have argued that there are – and must be – useful techniques beyond object-oriented programming and design. However, to avoid being totally misunderstood, I would like to emphasize that I wouldn't attempt a serious project using a programming language that didn't at least support the classical notion of object-oriented programming. In addition to facilities that support object-oriented programming, I want – and C++ provides – features that go beyond those in their support for direct expression of concepts and relationships."
Now ... I'd ask you ... of all the OOP programmers and OOP projects you've seen, how many of them can honestly claim to have adhered to what Bjarne requests there?
IME, less than the majority.
Bjarne states that:
"The fundamental idea is simply to improve design and programming through abstraction"
...and yet many people invent for themselves a different meaning, something like:
"The fundamental idea is that OOP is good, and everything-not-OOP is inferior"
Programmers who have programmed sequentially with ASM, then later ASMs, then Pascal, then C, then C++, and have been exposed to the chaos that was programming pre-encapsulation tend to have a better understanding of this stuff. They know why OOP came about and what it was trying to solve.
Funnily enough, OOP was not trying to solve every programming problem. Who'd have thought it, given how it's talked about today?
It was aimed at a small number of problems that were hugely dangerous the bigger your project got, and which it turned out to be somewhere between "good" and "very good" at solving.
But even some of those it is no better than merely "good" at solving; there are other paradigms that handle them better...
All IMHO, of course ;)
Systems of any notable complexity are not linear. Even if you worked really hard to make a system one linear process, you're still relying on things like disks, memory and network connections that can be flaky, so you'll need to work around that.
I don't know that anyone thinks OOP is the final answer. It's just a way of dealing with complexity by trying to keep various problems confined to the smallest possible sphere so the damage they do when they blow up is minimized. My problem with your question is that it assumes perfection is possible. If it were, I could agree OOP isn't necessary. It is for me until someone comes up with a better way for me to minimize the number of mistakes I make.
I just read your article about Entity Systems, which compares ES to OOP, and it is flagrantly wrong about several aspects of OOP. For example, when there are 100 instances of a class, OOP does not mandate that there be 100 copies of the class's methods loaded in memory; only one is necessary. Everything that ES purports to do "better" than OOP because it has "Components" and "Systems", OOP supports as well using interfaces and static classes (and/or singletons).
And OOP more naturally fits the real world, as any real or imagined problem domain, consisting of multiple physical and/or non-physical items and abstractions and the relationships between them, can be modeled with an appropriately designed hierarchical OOP class structure.
What we try to do is put an OO style on top of a relational system. In C# land this gets us a strongly typed system so that everything from end to end can be compiled and tested. The database is hard to test, refactor, etc. OOP allows us to organize our application into layers and hierarchies, which relational doesn't allow.
Well you've got a theoretical question.
Firstly, let me agree with you that OOP is not a solve-all solution. It's good for some things and not good for others. But that doesn't mean it doesn't scale up. Some horribly complex and huge systems have been designed using OOP.
I think OOP is so popular because it deserves to be. It solves some problems rather wonderfully, it is easy to think in terms of Objects because we can do that without re-programming ourselves.
So until we can all come up with a better alternative that actually works in practice, I think OOP is a pretty good idea, and so are relational databases.
There is really no limit to what OOP can deal with - just as there is no real limit to what C can deal with, or assembler for that matter. All are Turing-complete, which is all you really need.
OOP simply gives you a higher-level way of breaking down the program, just as C is a higher-level than assembler.
The article about entity systems does not say that OO cannot do this - in fact, it sounds like they are using OOP to implement their Entities, Components, etc. In any complex domain there will be different ways of breaking it down, and using OOP you can break it down to the object/class level at some point. This does not preclude having higher-level conceptual frameworks which are used to design the OOP system.
The problem isn't the object oriented approach in most situations, the problem is performance and actual development of the underlying hardware.
The OO paradigm approaches software development by providing us with a metaphor of the real world, where we have concepts that define the commonly accepted and expected properties and behaviour of real objects in the world. It is the way humans model things, and we're able to solve most problems with it.
In theory you can define every aspect of a game, system, or whatever using OO. In practice, if you do, your program will simply be too slow, so the paradigm gets messed up by optimizations that trade the simplicity of the model for performance.
In that way, relational databases are not object-oriented, so we build an object-oriented layer between our code and the database... and by doing so you lose some of the performance of the database and some of its expressiveness, because from the point of view of the OO paradigm a relational database is a full class, a very complex object that provides information.
From my point of view, OO is an almost perfect approach in the theoretical sense of the word, as it maps closely to the way we humans think, but it doesn't fit well with the limited resources of computing hardware... so we take shortcuts. In the end, performance is far more important than theoretical organization or clarity, so these shortcuts become standards or usual practices.
That is, we adapt the theoretical model to our current limitations. In the COBOL days of the late '70s, object orientation was simply impossible... it would have implied too many aspects and too little performance, so we used a simplified approach: so simplified you didn't have objects or classes, you had variables. But the concept was, at that time, the same: groups of variables described related concepts, properties that today would fit into an object, and control sequences based on a variable's value were used in place of class hierarchies, and so on.
I think we've been using OOP for a long time and will continue using it for a long time. As hardware capabilities improve, we'll be able to un-simplify the model so that it becomes more adaptable. If I describe (almost) perfectly the concept of a cat (which involves a lot of describing for a lot of the concepts involved), that concept can be reused everywhere... the problem here is not, as I've said, with the paradigm itself but with our limitations in implementing it.
EDIT: To answer the question about why "pure" OO: every "science" wants a complete model to represent things. We have two physical models to describe nature, one at the microscopic level and one at the macroscopic level, and we want just one, because having one simplifies things and gives us a better way to prove, test, and develop ideas. With OO the same process applies. You can't analytically test and prove a system if the system doesn't follow a precise set of rules. If you are switching between paradigms within a program, then the program cannot be properly analyzed; it has to be dissected, with each part analyzed on its own and then analyzed again to check that the interactions are correct. It makes a system a lot more difficult to understand, because in fact you have two or three systems that interact in different ways.
Guys, isn't the question more about ORM than OOP? OOP is a style of programming - the thing that actually gets compared is a Relational Database mapped onto objects.
OOP is actually more than just ORM! It's also not just inheritance and polymorphism! It's an extremely wide range of design patterns and, above all, the way we think about programming itself.
Jorge: it's fine that you've pointed out the optimization part; what you didn't add is that this step should be done last, and in 99% of cases the slow part is not the OOP.
Now, plain and simple: the OOP style with all the principles added to it (clean code, use of design patterns, not-too-deep inheritance structures, and let's not forget unit testing!) is a way to make more people understand what you wrote. That, in turn, is what companies need to keep their business secure. It's also a recipe for small teams to have a better understanding with the community. It's like a common meta-language on top of the programming language itself.
It's always easier to talk about concepts from a purist's point of view. Once you're faced with a real-life problem, things get trickier and the world is no longer just black and white. Just as the author of the article is very thorough in pointing out that they're not doing OOP, the "OOP purist" tells you that OOP is the only way to go. The truth is somewhere in between.
There is no single answer, as long as you understand the different ways (OOP, entity systems, functional programming and many more) of doing things and can give good reason for why you're choosing one over the other in any given situation you're more likely to succeed.
About Entity Systems: it's an interesting concept, but it brings nothing really new. For example, it states:
OOP style would be for each Component to have zero or more methods, that some external thing has to invoke at some point. ES style is for each Component to have no methods but instead for the continuously running system to run it’s own internal methods against different Components one at a time.
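In code, the contrast the article draws might look like this (a minimal C# sketch; the component and system names are hypothetical):

using System.Collections.Generic;

// Components are pure data, with no methods of their own...
class Position { public double X, Y; }
class Velocity { public double Dx, Dy; }

// ...and a continuously running system owns the behavior, applying its
// own internal logic to every component pair in turn.
class MovementSystem
{
    public void Update(List<(Position pos, Velocity vel)> entities, double dt)
    {
        foreach (var (pos, vel) in entities)
        {
            pos.X += vel.Dx * dt;
            pos.Y += vel.Dy * dt;
        }
    }
}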
But isn't that the same as Martin Fowler's anti-pattern called the "Anemic Domain Model" (which is extensively used nowadays, in fact)?
So basically, ES is an "idea on paper". For people to accept it, it MUST be proven with working code examples. There is not a single word in the article on how to implement the idea in practice. Nothing is said about scalability concerns. Nothing is said about fault tolerance...
As for your actual question, I don't see how the Entity Systems described in the article are similar to relational databases. Relational databases have no such thing as the "aspects" described in the article. In fact, the relational model, built on a table data structure, is very limited when it comes to working with hierarchical data, for example; more limited than, say, object databases...
Could you clarify what exactly you are trying to compare and prove here? OOP is a programming paradigm, one of the many. It's not perfect. It's not a silver bullet.
What does "Relation Oriented Programming" mean? Data-centric? Well, Microsoft was moving towards more data-centric style of programming until they given up on Linq2Sql and fully focused on their O/RM EntityFramework.
Also relational databases isn't everything. There is many different kinds of database architectures: hierarchical databases, network databases, object databases ect. And those can be even more efficient than relational. Relational are so popular for nearly the same reasons why OOP is so popular: it's simple, very easy to understand and most often efficient enough.
Ironically, when OO programming arrived, it made it much easier to build larger systems; this was reflected in the ramp-up of software to market.
Regarding scale and complexity: with good design you can build pretty complex systems.
See Eric Evans' Domain-Driven Design for some principled patterns for handling complexity in OO.
However, not all problem domains are best suited to all languages. If you have the freedom to choose a language, choose one that suits your problem domain, or build a DSL if that's more appropriate.
We are software engineers, after all; unless there is someone telling you how to do your job, just use the best tools for the job, or write them :)

Is Single Responsibility Principle a rule of OOP? [closed]

An answer to a Stack Overflow question stated that a particular framework violated a plain and simple OOP rule: Single Responsibility Principle (SRP).
Is the Single Responsibility Principle really a rule of OOP?
My understanding of the definition of Object Oriented Programming is "a paradigm where objects and their behaviour are used to create software". This includes the following techniques: encapsulation, polymorphism, and inheritance.
Now don't get me wrong - I believe SRP to be the key to most good OO designs, but I feel there are cases where this principle can and should be broken (just like database normalization rules). I aggressively push the benefits of SRP, and the great majority of my code follows this principle.
But, is it a rule, and thus implies that it shouldn't be broken?
Very few rules, if any, in software development are without exception. Some people think there is no place for goto, but they're wrong.
As far as OOP goes, there isn't a single definition of object-orientedness so depending on who you ask you'll get a different set of hard and soft principles, patterns, and practices.
The classic idea of OOP is that messages are sent to otherwise opaque objects and the objects interpret the message with knowledge of their own innards and then perform a function of some sort.
SRP is a software engineering principle that can apply to the role of a class, or a function, or a module. It contributes to the cohesion of something so that it behaves well put together without unrelated bits hanging off of it or having multiple roles that intertwine and complicate things.
Even with just one responsibility, that can still range from a single function to a group of loosely related functions that are part of a common theme. As long as you avoid jury-rigging an element to take on the responsibility of something it wasn't primarily designed for, or doing some other ad-hoc thing that dilutes the simplicity of an object, then violate whatever principle you want.
But I find that it's easier to get SRP correct than to do something more elaborate that is just as robust.
None of these rules are laws. They are more guidelines and best practices. There are times when it doesn't make sense to follow "the rules" and you need to do what is best for your situation.
Don't be afraid to do what you think is right. You might actually come up with newer and better rules.
To quote Captain Barbossa:
"..And secondly, you must be a pirate for the pirate's code to apply and you're not.
And thirdly, the code is more what you'd call "guidelines" than actual rules...."
To quote Jack Sparrow & Gibbs.
"I thought you were supposed to keep to the code."
Mr. Gibbs: "We figured they were more actual guidelines. "
So clearly Pirates understand this pretty well.
The "rules" could be understood via the patterns movement as "Forces"
So there is a force trying to make the class have a single responsibility. (cohesion)
But there is also a force trying to keep the coupling to other classes down.
As with all design ( not just code) the answer is that it depends.
Ahh, I guess this pertains to an answer I gave. :)
As with most rules and laws, there are underlying motives by which these rules are relevant -- if the underlying motive is not present or applicable to your case, then you are free to bend/break the rules according to your own needs.
That being said, SRP is not a rule of OOP per se, but it is considered a best practice for creating OOP applications that are both easily extensible and unit-testable.
Both are characteristics that I consider as of utmost importance in enterprise application development, where maintenance of existing applications occupies more time than new development does.
As many of the other posters have said, all rules are made to be broken.
That being said, I do think that SRP is one of the more important rules for writing good code. It's not specific to Object Oriented programming, but the "encapsulation" part of OOP is very hard to do right if the class does not have a single responsibility.
After all, how do you correctly and simply encapsulate a class with multiple responsibilities? Usually the answer is multiple interfaces, and in many languages that can help quite a bit, but it's still confusing to the users of your class that it behaves in completely different ways in different situations.
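To make that concrete, here is a minimal sketch of the multiple-interfaces approach (all names are hypothetical, not from the thread): one class still carries two responsibilities, but each client depends only on the narrow interface for the role it actually uses.

    // Two narrow interfaces carve a two-responsibility class into
    // single-purpose views; callers depend on a role, not the class.
    interface ReportGenerator {
        String generate();
    }

    interface ReportPersister {
        void save(String report);
    }

    class SalesReport implements ReportGenerator, ReportPersister {
        @Override
        public String generate() {
            return "sales figures...";
        }

        @Override
        public void save(String report) {
            System.out.println("saving: " + report);
        }
    }

A client that only formats reports takes a ReportGenerator and never learns the class can also persist itself, which dovetails with the next point that SRP is ISP seen from another angle.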
SRP is just another expression of ISP. :-)
And the "P" means "principle", not "rule". :D

When is OOP better suited for? [closed]

Closed 10 years ago.
Since I started studying object-oriented programming, I frequently read articles/blogs saying functions are better, or not all problems should be modeled as objects. From your personal programming adventures, when do you think a problem is better solved by OOP?
There is no hard and fast rule. A problem is better solved with OOP when you are better at solving problems and thinking in an OO mentality. Object Orientation is just another tool which has come along through trying to make computing a better tool for solving problems.
However, it can allow for better code reuse and can lead to neater code. But quite often these highly praised qualities are, in reality, of little real value. Applying OO techniques to an existing functional application can cause a lot of problems. The skill lies in learning many different techniques and applying the most appropriate one to the problem at hand.
OO is often quoted as a Nirvana-like solution to software development, but there are many times when it is not appropriate for the issue at hand. It can quite often lead to over-engineering a problem in pursuit of a perfect solution that is really not necessary.
In essence, OOP is not really Object Oriented Programming, but mapping Object Oriented Thinking to a programming language capable of supporting OO Techniques. OO techniques can be supported by languages which are not inherently OO, and there are techniques you can use within functional languages to take advantage of the benefits.
As an example, I have been developing OO software for about 20 years now, so I tend to think in OO terms when solving problems, irrespective of the language I am writing in. Currently I am implementing polymorphism using Perl 5.6, which does not natively support it. I have chosen to do this as it will make maintenance and extension of the code a simple configuration task, rather than a development issue.
Not sure if this is clear. There are people who are hard in the OO court, and there are people who are hard in the Functional court. And then there are people who have tried both and try to take the best from each. Neither is perfect, but both have some very good traits that you can utilise no matter what the language.
If you are trying to learn OOP, don't just concentrate on OOP, but try to utilise Object Oriented Analysis and general OO principles to the whole spectrum of the problem solution.
I'm an old timer, but have also programmed OOP for a long time. I am personally against using OOP just to use it. I prefer objects to have specific reasons for existing, that they model something concrete, and that they make sense.
The problem that I have with a lot of the newer developers is that they have no concept of the resources that they are consuming with the code that they create. When dealing with a large amount of data and accessing databases the "perfect" object model may be the worst thing you can do for performance and resources.
My bottom line is if it makes sense as an object then program it as an object, as long as you consider the performance/resource impact of the implementation of your object model.
I think it fits best when you are modeling something cohesive with state and associated actions on those states. I guess that's kind of vague, but I'm not sure there is a perfect answer here.
The thing about OOP is that it lets you encapsulate and abstract data and information away, which is a real boon in building a large system. You can do the same with other paradigms, but it seems OOP is especially helpful in this category.
It also kind of depends on the language you are using. If it is a language with rich OOP support, you should probably use that to your advantage. If it doesn't, then you may need to find other mechanisms to help break up the problem into smaller, easily testable pieces.
I am sold to OOP.
Anytime you can define a concept for a problem, it can probably be wrapped in an object.
The problem with OOP is that some people overused it and made their code even more difficult to understand. If you are careful about what you put in objects and what you put in services (static classes) you will benefit from using objects.
Just don't cram something into an object that doesn't belong there because you need the object to do something new you didn't think of initially; refactor and find the best way to add that functionality.
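As a hedged illustration of that advice (names invented for the example): behavior computed from an object's own state stays on the object, while an unrelated new requirement goes into a separate service rather than being bolted on.

    // Invoice owns behavior over its own state; emailing is a separate
    // concern, so it lives in a service instead of being added to Invoice.
    class Invoice {
        private final double amount;

        Invoice(double amount) {
            this.amount = amount;
        }

        // Computed purely from the invoice's own data: belongs here.
        double totalWithTax(double taxRate) {
            return amount * (1 + taxRate);
        }
    }

    final class InvoiceMailer {
        private InvoiceMailer() {}

        // A new requirement that isn't about invoice state: keep it out.
        static void email(Invoice invoice, String recipient) {
            System.out.println("mailing invoice to " + recipient);
        }
    }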
There are five criteria for deciding whether you should favor Object Oriented over object-based, functional, or procedural code. Remember, all of these styles are available in all languages; they're styles. All of these are written in the form "Should I favor OO in this situation?"
The system is very complex and is over roughly 9k LOC (just an arbitrary level). As systems get more complex, the benefits gained by encapsulating complexity go up quite a bit. With OO, as opposed to the other techniques, you tend to encapsulate more and more of the complexity, which is very valuable at this level. Object-based or procedural styles should be favored below that level. (This is not advocating a particular language, mind you. OO C fits these criteria better, to my mind, than OO C++, a language with a notorious reputation for leaky abstractions and an ability to eat a shop with even one mediocre/obstinate programmer for lunch.)
Your code is not operations on data (i.e. Database based or math/analysis based). Database based code is often more easily represented via procedural style. Analysis based code is often easier represented in a functional style.
Your model is a simulation of something (OO excels at simulations).
You're doing something for which the subtype dispatch of OO is valuable: you need to send a message to all objects of a certain type and its various subtypes and get an appropriate, but different, reaction from each of them (see the sketch after this list).
Your app is not multi-threaded, or at least follows a worker-task pattern. OO is quite problematic in programs that are multithreaded and require different threads to do different tasks. If your program is structured with one or two main threads and many worker threads all doing the same thing, the muddled control flow of OO programs is easier to handle, as the worker threads are isolated in what they touch and can be treated as a monolithic section of code. Otherwise, consider any other paradigm. Functional excels at multithreading (the lack of side effects is a huge boon), and object-based programming can give you some of the encapsulation benefits of OO while keeping more traceable procedural code in the critical sections of your codebase. Procedural, of course, excels in this arena as well.
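Here is the sketch promised in point 4, a minimal, hypothetical example of subtype dispatch: the same message goes to every object, and each subtype reacts in its own way, with no type-branching at the call site.

    import java.util.List;

    // One message, many reactions: the caller never branches on type.
    abstract class Shape {
        abstract double area();
    }

    class Circle extends Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        @Override double area() { return Math.PI * radius * radius; }
    }

    class Square extends Shape {
        private final double side;
        Square(double side) { this.side = side; }
        @Override double area() { return side * side; }
    }

    public class DispatchDemo {
        public static void main(String[] args) {
            for (Shape s : List.of(new Circle(1.0), new Square(2.0))) {
                System.out.println(s.area()); // each subtype answers differently
            }
        }
    }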
Some places where OO isn't so good are where you're dealing with "sets" of data, as in SQL. OO tends to make set-based operations more difficult because it isn't really designed to take the intersection or union of two sets optimally.
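A small sketch of why: in plain OO Java, even a simple intersection is an imperative, mutating sequence of calls, where SQL would state it declaratively as a single INTERSECT.

    import java.util.HashSet;
    import java.util.Set;

    public class SetDemo {
        public static void main(String[] args) {
            Set<Integer> a = new HashSet<>(Set.of(1, 2, 3, 4));
            Set<Integer> b = Set.of(3, 4, 5);

            a.retainAll(b);        // intersection, by mutating 'a' in place
            System.out.println(a); // prints the intersection {3, 4}
        }
    }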
Also, there are times when a functional approach would make more sense such as this example taken from MSDN:
Consider, for example, writing a program to convert an XML document into a different form of data. While it would certainly be possible to write a C# program that parsed through the XML document and applied a variety of if statements to determine what actions to take at different points in the document, an arguably superior approach is to write the transformation as an eXtensible Stylesheet Language Transformation (XSLT) program. Not surprisingly, XSLT has a large streak of functionalism inside of it
I find it helps to think of a given problem in terms of 'things'.
If the problem can be thought of as having one or more 'things', where each 'thing' has a number of attributes or pieces of information that refer to its state, and a number of operations that can be performed on it - then OOP is probably the way to go!
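For instance (a deliberately tiny, hypothetical sketch), a bank account is such a 'thing': its balance is the state, and deposit/withdraw are the operations performed on it.

    // State plus the operations that guard it: a natural object.
    class BankAccount {
        private long balanceCents;

        void deposit(long cents) {
            balanceCents += cents;
        }

        boolean withdraw(long cents) {
            if (cents > balanceCents) {
                return false; // the object enforces its own invariant
            }
            balanceCents -= cents;
            return true;
        }

        long balance() {
            return balanceCents;
        }
    }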
The key to learning Object Oriented Programming is learning about design patterns. By learning design patterns you can see better when classes are needed and when they are not. Like anything else used in programming, the use of classes and other features of OOP languages depends on your design and requirements. Like algorithms, design patterns are a higher-level concept.
A design pattern plays a role similar to that of an algorithm in traditional programming: it tells you how to create and combine objects to perform some useful task. Like the best algorithms, the best design patterns are general enough to apply to a variety of common problems.
In my opinion it is more a question about you as a person. Certain people think better in functional terms and others prefer classes and objects. I would say that OOP is better suited when it matches your internal (subjective) mental model of the world.
Object oriented code and procedural code have different extensibility points. Object oriented solutions make it easier to add new classes without modifying existing functions (see the Open-Closed Principle), while procedural code allows you to add functions without modifying existing data structures. Quite often different parts of a system require different approaches depending upon the type of change that is anticipated.
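This trade-off is sometimes called the expression problem. A hedged sketch of the OO side of it (types and values invented for the example):

    // Adding a new variant requires no edits to existing code: open/closed
    // along the "new types" axis. Adding a new *operation* (say, weight())
    // would instead force a change to every implementation, which is
    // exactly where the procedural style is stronger.
    interface Priced {
        double price();
    }

    class Book implements Priced {
        public double price() { return 12.50; }
    }

    // Brand-new class, zero changes elsewhere.
    class Album implements Priced {
        public double price() { return 9.99; }
    }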
OO allows for logic related to an object to be placed within a single place (the class, or object) so that it can be decoupled and easier to debug and maintain.
What I have observed is that every app is a combination of OO and procedural code, where the procedural code is the glue that binds your objects together (at the very least, the code in your main function). The more you can turn your procedural code into OO, the easier it will be to maintain your code.
Why OOP is used for programming:
Its flexibility – OOP is really flexible in terms of implementations and use.
It can greatly reduce your source code – "more than 99.9%" would be an exaggeration, but reuse and inheritance can eliminate a great deal of duplication.
It's easier to implement security – security is one of the vital requirements in web development, and encapsulation can ease security implementations in your web projects.
It makes the coding more organized – clean code starts with clean organization, and using OOP instead of procedural code makes things more systematized (obviously).
It helps your team work together easily – anyone who has experienced team projects knows it's important to share the same methods, implementations, algorithms, and so on.
It depends on the problem: the OOP paradigm is useful for designing distributed systems, or frameworks with many entities that live across the user's actions (example: a web application).
But if you have a math problem you will prefer a functional language (e.g., Lisp); for performance-critical systems you would use Ada or C, and so on.
An OOP language is also useful because it probably provides a garbage collector (automatic memory management) at run time; if you program in C, you must often debug and fix memory problems by hand.
OOP is useful when you have things: a socket, a button, a file. If you end a class name in -er, it is almost always a function that is pretending to be a class. A TestRunner more than likely should be a function that runs tests (and probably named runTests).
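A hedged sketch of that smell in Java (which lacks free functions, so a static method stands in): the hypothetical TestRunner class collapses into a single runTests operation.

    // A class named *-er that only wraps one action is usually a function.
    final class Tests {
        private Tests() {} // no instances needed: there is no state to model

        static void runTests(Runnable... tests) {
            for (Runnable test : tests) {
                test.run();
            }
        }
    }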
Personally, I think OOP is practically a necessity for any large application. I can't imagine having a program over 100k lines of code without using OOP, it would be a maintenance and design nightmare.
I tell you when OOP is bad.
When the architect writes really complicated, undocumented OOP code, leaves halfway through the project, and many of the common code pieces he used across various projects turn out to have missing source. Thank god for .NET Reflector.
And the organization was not running Visual Source Safe or Subversion.
And I'm sorry, but two pages of code just to log in is rather ridiculous, even if it is cutely OOPed...