Why is multiple inheritance not supported in most programming languages?

Why is multiple inheritance not supported in most programming languages?
I could really use this feature to develop different layouts for my application.

Multiple inheritance is useful in many situations as a developer, but it greatly increases the complexity of the language, which makes life harder for both the compiler developers and the programmers.
One problem occurs when two parent classes have data members or methods of the same name. It is difficult to resolve which is being referenced by the sub-class.
Another occurs when two parent classes inherit from the same base class, forming a "diamond" pattern in the inheritance hierarchy.
The order in which the parent classes are initialised/elaborated needs to be specified; this can lead to behaviour changing when the order of inheritance changes, which may catch developers by surprise.
Some languages support a reference to 'super', or equivalent, which refers to an attribute of the base class for this object. That becomes difficult to support in a language with multiple inheritance.
Some languages attempt to provide automatic object-relational mapping, so the objects can be made persistent with a regular RDBMS. This mapping is difficult at the best of times (it has been described as the "Vietnam War" of software development), but it is much more difficult if multiple inheritance is supported.
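To make the name-collision and diamond-pattern ambiguity described above concrete, here is a minimal sketch in Python, one of the languages that does allow multiple inheritance; the class names are made up for illustration.

class Device:                      # common base class
    def describe(self):
        return "generic device"

class Scanner(Device):             # first parent overrides describe()
    def describe(self):
        return "scanner"

class Printer(Device):             # second parent also overrides describe()
    def describe(self):
        return "printer"

class Copier(Scanner, Printer):    # diamond: Copier -> Scanner/Printer -> Device
    pass

# Which describe() should Copier use? Python answers this with its method
# resolution order (MRO); other languages restrict or reject the pattern entirely.
print(Copier().describe())         # "scanner", because Scanner is listed first
print(Copier.__mro__)              # Copier, Scanner, Printer, Device, object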

One reason not to support it is ambiguity of method resolution.
http://en.wikipedia.org/wiki/Diamond_problem
However, I'm not sure what you mean by "most" programming languages. Many that are in use today support it directly (C++, Python, Perl, OCaml) or have a mechanism for similar functionality (Ruby and Scala come to mind).

The real reason why multiple inheritance is not supported across many languages is just the laziness of language developers. To cover up this embarrassing failure, all sorts of excuses are made - "it makes life difficult for the developer" and so on - but for anyone who has actually used a language that implements it well, multiple inheritance becomes natural and easy after about a month. No big deal.
The only problem with it is after you have realized how useful and easy it is, you tend to become allergic to languages that don't support it and this may constrain your career prospects.
So my advice would be to stay well away from it.

Related

Can we have Abstraction without Encapsulation or vice versa?

I faced this question in an interview. I came back and read up about it here on SO and came across this, this and this, and many other duplicates saying almost the same thing.
I understand that the general definitions predate their implementations in programming languages. I also know that programming languages (I speak for Java) try to separate the two definitions as much as possible. For example, Java treats a class as encapsulation, since it provides a wrapper to hold all your related data; an interface is a kind of abstraction.
But what I really want to know is how the two definitions overlap. Are they subsets of one another, overlapping, or completely disjoint sets? I understand that the answer may differ between the generic definitions and language-specific definitions, so answer both if you can, or at least mention the one you're talking about.
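For what it's worth, here is a rough Python sketch of how the two ideas can appear separately in code (the names are invented for illustration): the abstract base class only promises what can be done (abstraction), while the concrete class hides how its state is stored (encapsulation).

from abc import ABC, abstractmethod

class PaymentMethod(ABC):              # abstraction: a contract with no implementation
    @abstractmethod
    def charge(self, amount: float) -> None:
        ...

class CreditCard(PaymentMethod):       # encapsulation: state hidden behind methods
    def __init__(self, limit: float):
        self._balance = 0.0            # "private" by convention; not touched directly
        self._limit = limit

    def charge(self, amount: float) -> None:
        if self._balance + amount > self._limit:
            raise ValueError("over limit")
        self._balance += amount

    @property
    def balance(self) -> float:        # controlled, read-only access to the hidden state
        return self._balance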

Why use classes instead of functions? [closed]

I do know some advantages to classes, such as variable and function scopes, but other than that it just seems easier to me to have groups of functions rather than many instances and abstractions of classes. So why is the "norm" to group similar functions in a class?
Simple, non-OOP programs may be one long list of commands. More complex programs will group lists of commands into functions or subroutines, each of which might perform a particular task. With designs of this sort, it is common for the program's data to be accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects.
In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are either bundled in with the data or inherited from "class objects" and act as the intermediaries for retrieving or modifying those data. The programming construct that combines data with a set of methods for accessing and managing those data is called an object.
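As a minimal sketch of that contrast (the names are hypothetical), compare data that any code may modify with data that is only reachable through methods:

# Non-OO style: shared data that any function anywhere may modify.
inventory = {"widgets": 10}

def ship_widget():
    inventory["widgets"] -= 1          # nothing stops this from going negative

# OO style: the data lives inside an object and is only changed via methods.
class Inventory:
    def __init__(self, widgets: int):
        self._widgets = widgets        # not meant to be touched from outside

    def ship_widget(self) -> None:
        if self._widgets == 0:
            raise RuntimeError("out of stock")
        self._widgets -= 1             # the invariant is enforced in one place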
Advantages of OOP:
MAINTAINABILITY Object-oriented programming methods make code more maintainable. Identifying the source of errors is easier because objects are self-contained.
REUSABILITY Because objects contain both data and the methods that act on that data, objects can be thought of as self-contained black boxes. This makes it easy to reuse code in new systems. Messages provide a predefined interface to an object's data and functionality; with this interface, an object can be used in any context.
SCALABILITY Object-oriented programs are also scalable. An object's interface provides a road map for reusing the object in new software, and provides all the information needed to replace the object without affecting other code. This way, aging code can be replaced with faster algorithms and newer technology.
The point of OOP is not to 'group similar functions in a class'. If this is all you're doing then you're not doing OOP (despite using an OO language). Having classes instead of just a bunch of functions has a side effect of 'variable and function scopes' that you mention, but I see it just as a side effect.
OOP is about such concepts as encapsulation, inheritance, polymorphism, abstraction and many others. It is a specific way of software design, a specific way of mapping a problem to a software solution.
Grouping functions in a class is by no means the norm. Let me share some of the things I have learned through experimentation with different languages and paradigms.
I think the core concept here is that of a namespace. Namespaces are so useful that they are present in almost any programming language.
Namespaces can help you overcome some common problems and model various patterns that appear in many domains, e.g., avoiding name collisions, hiding details, representing hierarchies, defining access control, grouping related symbols (functions or data), defining a context or scope... and I'm sure there are more applications.
Classes are a type of namespace, and the specific properties of classes vary from language to language and sometimes from version to version of the same language, e.g., some provide access modifiers, some do not; some allow inheritance from multiple classes, others do not. People have been trying to find the magic mix of features that will be the most useful and that in part explains the plethora of available options in different programming languages.
So, why use classes? Because they help solve certain kinds of problems in a way that seems natural or intuitive. Every time we write a computer program we're trying to capture the essence of the problem, and if the problem can be modeled by using some of the patterns mentioned above, then it makes perfect sense to use the features of a language that help you do that.
As the problem becomes better understood, you might realize that certain parts of the program could be better implemented by using a different paradigm/feature/pattern, and then it's time for refactoring. Most programs I have had the chance to work on keep evolving until either the money/resources run out or we reach the point of diminishing returns; often you have something that's good enough for now and there's little incentive to keep working on it.
It's not the norm, it's just one way of doing it. Classes group methods (functions) AND data together, based on the concept of encapsulation.
For larger projects it often becomes easier to group things this way. Many people find it easier to conceptualize the problem with objects.
There are many reasons to use classes, not the least of which is encapsulation of logic. Objects more closely match the world we live in, and are thus often more intuitive than other methodologies. Consider a car, a car has properties like body color, interior color, engine horsepower, features, current mileage, etc.. It also has methods, like Start (), TurnRight(.30), ApplyBrakes(.50). It has events like the ding when you open your car door with the keys in the ignition.
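A minimal sketch of that car model (the names and values are hypothetical, not any particular framework's API):

class Car:
    """A toy model of a car: state (properties) plus behaviour (methods)."""

    def __init__(self, body_color: str, horsepower: int):
        self.body_color = body_color      # property: body colour
        self.horsepower = horsepower      # property: engine horsepower
        self.current_mileage = 0          # property: current mileage
        self.running = False

    def start(self):
        """Behaviour: start the engine."""
        self.running = True

    def turn_right(self, amount: float):
        """Behaviour: turn right by some fraction of full lock."""
        print(f"turning right by {amount:.0%}")

    def apply_brakes(self, pressure: float):
        """Behaviour: brake with the given pressure (0.0 to 1.0)."""
        print(f"braking at {pressure:.0%}")

my_car = Car(body_color="red", horsepower=300)
my_car.start()
my_car.turn_right(0.30)
my_car.apply_brakes(0.50)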
Probably the biggest reason is that most applications seem to have a graphical component these days and most of the libraries for graphical user interface are implemented with object models.
Polymorphism is probably a big reason, too. The ability to treat multiple types of objects generically is quite helpful.
If you are a mathematician, a functional style may be more intuitive (ML, F#). If you're interacting with data in a predictable format, a declarative style such as SQL or LINQ may be a better fit.
In simple words, it seems to me (apart from everything everyone else is saying) that classes are best suited for large projects, especially those implemented by more than one programmer, because they help keep things tidy; using bare functions in such situations can become rather cumbersome.
Otherwise, for simple programs/projects that you would implement yourself to do one thing or another, then functions would do nicely.

What is the difference between procedural and OO development?

Of course, I can explain it in whole books.
But I read a few days ago that in a job interview it is often asked, and the interviewer expects an answer of 2-5 sentences that is very clear and shows that you understand the material.
I have tried a few times to condense the answer into two sentences but haven't come up with a good one.
How about this for a succinct description:
Procedural Programming is primarily organized around "actions" and "logic".
OOP is primarily organized around "objects" and "data".
OOP takes the view that what we really care about are the objects we want to manipulate rather than the logic required to manipulate them.
Procedural Programming means dividing the problem up into smaller parts and then representing each smaller part by a distinct subroutine, function or procedure.
OOP decomposes the problem into a set of interacting objects; each object comprises a number of elements, called members and methods (as opposed to variables and functions). The purpose of an object is to abstract the part of the real world that we're interested in (our problem domain).
Three sentences...
Defining data structures and the behavioural logic that acts on them are central to both approaches. Being able to encapsulate associated data and behaviour allows for the concept of self-contained “Object” constructs. Pure Object Oriented Programming is where no other type of construct is required.
There is of course a mixture of both approaches in most modern high-level languages. Constructs like Value Types and Static Classes are there to provide the procedural constructs that are still very useful.
Procedural development lacks inheritance, encapsulation and polymorphism - three paradigms that make OOP a better way of developing complex solutions.
With procedural development you often run into spaghetti code, especially in complex solutions, which makes them much, much harder to maintain.
I'm just thinking of times when I did Turbo Pascal development and the way I do it now... A complete shift.
One more difference I have felt and experienced is code maintenance. With a procedural language, code maintenance tends to go awry, but it is a lot better with OO. Sometimes changing code somewhere deep down in a procedural program has broken the whole functionality.
The main difference is that object-oriented programming (OOP) is a programming paradigm that uses "objects" - data structures consisting of data fields and methods - and their interactions to design applications and computer programs. Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
In my opinion OOP is like the reality we live in. Everything around us is an object, has its own behaviour and structure.

Practical uses of OOP

I recently had a debate with a colleague who is not a fan of OOP. What took my attention was what he said:
"What's the point of doing my coding in objects? If it's reuse then I can just create a library and call whatever functions I need for whatever task is at hand. Do I need these concepts of polymorphism, inheritance, interfaces, patterns or whatever?"
We are in a small company developing small projects for e-commerce sites and real estate.
How can I take advantage of OOP in an "everyday, real-world" setup? Or was OOP really meant to solve complex problems and not intended for "everyday" development?
My personal view: context
When you program in OOP you have a greater awareness of the context. It helps you to organize the code in such a way that it is easier to understand because the real world is also object oriented.
The good things about OOP come from tying a set of data to a set of behaviors.
So, if you need to do many related operations on a related set of data, you can write many functions that operate on a struct, or you can use an object.
Objects give you some code reuse help in the form of inheritance.
IME, it is easier to work with an object with a known set of attributes and methods than it is to keep a set of complex structs and the functions that operate on them.
Some people will go on about inheritance and polymorphism. These are valuable, but the real value in OOP (in my opinion) comes from the nice way it encapsulates and associates data with behaviors.
Should you use OOP on your projects? That depends on how well your language supports OOP. That depends on the types of problems you need to solve.
But, if you are doing small websites, you are still talking about enough complexity that I would use OOP design given proper support in the development language.
More than just getting something to work (your friend's point), a well-designed OO solution is easier to understand, to follow, to expand, to extend and to implement. It is so much easier, for example, to delegate work that is categorically similar, or to hold together data that should stay together (yes, even a C struct is an object).
Well, I'm sure a lot of people will give a lot more academically correct answers, but here's my take on a few of the most valuable advantages:
OOP allows for better encapsulation
OOP allows the programmer to think in more logical terms, making software projects easier to design and understand (if well designed)
OOP is a time saver. For example, look at the things you can do with a C++ string object, vectors, etc. All that functionality (and much more) comes for "free." Now, those are really features of the class libraries and not OOP itself, but almost all OOP implementations come with nice class libraries. Can you implement all that stuff in C (or most of it)? Sure. But why write it yourself?
Look at the use of Design Patterns and you'll see the utility of OOP. It's not just about encapsulation and reuse, but extensibility and maintainability. It's the interfaces that make things powerful.
A few examples:
Implementing a stream (decorator pattern) without objects is difficult
Adding a new operation to an existing system, such as a new encryption type (strategy pattern), can be difficult without objects; see the sketch after this list.
Look at the way PostgreSQL is implemented versus the way your database book says a database should be implemented and you'll see a big difference. The book will suggest node objects for each operator. Postgres uses myriad tables and macros to try to emulate these nodes. It is much less pretty and much harder to extend because of that.
The list goes on.
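Here is a minimal sketch of that strategy-pattern point; the cipher names are invented for illustration and this is not real cryptographic code:

class Cipher:
    """Strategy interface: every encryption type implements encrypt()."""
    def encrypt(self, plaintext: str) -> str:
        raise NotImplementedError

class ReverseCipher(Cipher):
    def encrypt(self, plaintext: str) -> str:
        return plaintext[::-1]

class ShiftCipher(Cipher):
    def __init__(self, shift: int = 3):
        self.shift = shift
    def encrypt(self, plaintext: str) -> str:
        return "".join(chr((ord(c) + self.shift) % 256) for c in plaintext)

class SecureChannel:
    """Existing code: works with any Cipher without being modified."""
    def __init__(self, cipher: Cipher):
        self.cipher = cipher
    def send(self, message: str) -> str:
        return self.cipher.encrypt(message)

# Adding a new encryption type means adding a new class,
# not editing SecureChannel or a chain of if/else branches.
channel = SecureChannel(ShiftCipher(shift=5))
print(channel.send("hello"))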
The power of most programming languages is in the abstractions that they make available. Object Oriented programming provides a very powerful system of abstractions in the way it allows you to manage relationships between related ideas or actions.
Consider the task of calculating areas for an arbitrary and expanding collection of shapes. Any programmer can quickly write functions for the area of a circle, square, triangle, etc. and store them in a library. The difficulty comes when trying to write a program that identifies and calculates the area of an arbitrary shape. Each time you add a new kind of shape, say a pentagon, you would need to update and extend something like an IF or CASE structure to allow your program to identify the new shape and call the correct area routine from your "library of functions". After a while, the maintenance costs associated with this approach begin to pile up.
With object-oriented programming, a lot of this comes free-- just define a Shape class that contains an area method. Then it doesn't really matter what specific shape you're dealing with at run time, just make each geometrical figure an object that inherits from Shape and call the area method. The Object Oriented paradigm handles the details of whether at this moment in time, with this user input, do we need to calculate the area of a circle, triangle, square, pentagon or the ellipse option that was just added half a minute ago.
What if you decided to change the interface behind the way the area function was called? With Object Oriented programming you would just update the Shape class and the changes automagically propagate to all entities that inherit from that class. With a non Object Oriented system you would be facing the task of slogging through your "library of functions" and updating each individual interface.
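A minimal sketch of that shape example (the class names are hypothetical):

import math

class Shape:
    """Base class: every shape knows how to report its own area."""
    def area(self) -> float:
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

# No IF or CASE dispatch: adding Pentagon later means adding a class,
# and this loop keeps working unchanged.
shapes = [Circle(1.0), Square(2.0)]
print(sum(shape.area() for shape in shapes))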
In summary, Object Oriented programming provides a powerful form of abstraction that can save you time and effort by eliminating repetition in your code and streamlining extensions and maintenance.
Around 1994 I was trying to make sense of OOP and C++ at the same time, and found myself frustrated, even though I could understand in principle what the value of OOP was. I was so used to being able to mess with the state of any part of the application from other languages (mostly Basic, Assembly, and Pascal-family languages) that it seemed like I was giving up productivity in favor of some academic abstraction. Unfortunately, my first few encounters with OO frameworks like MFC made it easier to hack, but didn't necessarily provide much in the way of enlightenment.
It was only through a combination of persistence, exposure to alternate (non-C++) ways of dealing with objects, and careful analysis of OO code that both 1) worked and 2) read more coherently and intuitively than the equivalent procedural code that I started to really get it. And 15 years later, I'm regularly surprised at new (to me) discoveries of clever, yet impressively simple OO solutions that I can't imagine doing as neatly in a procedural approach.
I've been going through the same set of struggles trying to make sense of the functional programming paradigm over the last couple of years. To paraphrase Paul Graham, when you're looking down the power continuum, you see everything that's missing. When you're looking up the power continuum, you don't see the power, you just see weirdness.
I think, in order to commit to doing something a different way, you have to 1) see someone obviously being more productive with more powerful constructs and 2) suspend disbelief when you find yourself hitting a wall. It probably helps to have a mentor who is at least a tiny bit further along in their understanding of the new paradigm, too.
Barring the gumption required to suspend disbelief, if you want someone to quickly grok the value of an OO model, I think you could do a lot worse than to ask someone to spend a week with the Pragmatic Programmers book on Rails. It unfortunately does leave out a lot of the details of how the magic works, but it's a pretty good introduction to the power of a system of OO abstractions. If, after working through that book, your colleague still doesn't see the value of OO for some reason, he/she may be a hopeless case. But if they're willing to spend a little time working with an approach that has a strongly opinionated OO design that works, and gets them from 0-60 far faster than doing the same thing in a procedural language, there may just be hope. I think that's true even if your work doesn't involve web development.
I'm not so sure that bringing up the "real world" would be as much a selling point as a working framework for writing good apps, because it turns out that, especially in statically typed languages like C# and Java, modeling the real world often requires tortuous abstractions. You can see a concrete example of the difficulty of modeling the real world by looking at thousands of people struggling to model something as ostensibly simple as the geometric abstraction of "shape" (shape, ellipse, circle).
All programming paradigms have the same goal: hiding unneeded complexity.
Some problems are easily solved with an imperative paradigm, like your friend uses. Other problems are easily solved with an object-oriented paradigm. There are many other paradigms. The main ones (logic programming, functional programming, and imperative programming) are all equivalent to each other; object-oriented programming is usually thought as an extension to imperative programming.
Object-oriented programming is best used when the programmer is modeling items that are similar, but not the same. An imperative paradigm would put the different kinds of models into one function. An object-oriented paradigm separates the different kinds of models into different methods on related objects.
Your colleague seems to be stuck in one paradigm. Good luck.
To me, the power of OOP doesn't show itself until you start talking about inheritance and polymorphism.
If one's argument for OOP rests on the concept of encapsulation and abstraction, well, that isn't a very convincing argument for me. I can write a huge library and only document the interfaces to it that I want the user to be aware of, or I can rely on language-level constructs like packages in Ada to make fields private and only expose what it is that I want to expose.
However, the real advantage comes when I've written code in a generic hierarchy so that it can be reused later such that the same exact code interfaces are used for different functionality to achieve the same result.
Why is this handy? Because I can stand on the shoulders of giants to accomplish my current task. The idea is that I can boil the parts of a problem down to the most basic parts, the objects that compose the objects that compose... the objects that compose the project. By using a class that defines behavior very well in the general case, I can use that same proven code to build a more specific version of the same thing, then a more specific version, and then an even more specific version. The key is that each of these entities has commonality that has already been coded and tested, and there is no need to reimplement it later. If I don't use inheritance for this, I end up reimplementing the common functionality or explicitly linking my new code against the old code, which provides a scenario for me to introduce control-flow bugs.
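A minimal sketch of that progression, with hypothetical names: the common behaviour is coded and tested once in the general class, and each more specific class only adds what differs.

from datetime import datetime

class Logger:
    """General-purpose behaviour, written and tested once."""
    def log(self, message: str) -> None:
        print(f"[{self.__class__.__name__}] {message}")

class FileLogger(Logger):
    """More specific: inherits the proven log(), adds persistence."""
    def __init__(self, path: str):
        self.path = path
    def log(self, message: str) -> None:
        super().log(message)               # reuse the tested base behaviour
        with open(self.path, "a") as f:
            f.write(message + "\n")

class TimestampedFileLogger(FileLogger):
    """Even more specific: still no reimplementation of the common parts."""
    def log(self, message: str) -> None:
        super().log(f"{datetime.now().isoformat()} {message}")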
Polymorphism is very handy in instances where I need to achieve a certain functionality from an object, but the same functionality is also needed from similar, but unique, types. For instance, in Qt there is the idea of inserting items onto a model so that the data can be displayed and you can easily maintain metadata for that object. Without polymorphism, I would need to bother myself with much more detail than I currently do (i.e., I would need to implement the same code interfaces that conduct the same business logic as the item that was originally intended to go on the model). Because the base class of my data-bound object interacts natively with the model, I can instead insert metadata onto this model with no trouble. I get what I need out of the object with no concern over what the model needs, and the model gets what it needs with no concern over what I have added to the class.
Ask your friend to visualize any object in his room, house or city... and see whether he can name a single such object that is a system in itself and capable of doing some meaningful work on its own. A button doesn't do anything alone - it takes lots of objects to make a phone call. Similarly, a car engine is made of the crankshaft, pistons and spark plugs. OOP concepts have evolved from our perception of natural processes and the things in our lives. The "Inside COM" book explains the purpose of COM through an analogy with a childhood game of identifying animals by asking questions.
Design trumps technology and methodology. Good designs tend to incorporate universal principles of complexity management, such as the Law of Demeter, which is at the heart of what OO language features strive to codify.
Good design is not dependent on the use of OO-specific language features, although it is typically in one's best interests to use them.
Not only does it make programming easier and more maintainable in the current situation for other people (and yourself), it also allows for easier database CRUD (Create, Read, Update, Delete) operations.
You can find more info about this by looking up:
- Java: Hibernate
- .NET: Entity Framework
See also how LINQ (Visual Studio) can make your programming life MUCH easier.
Also, you can start using design patterns for solving real life problems (design patterns are all about OO)
Perhaps it is even fun to demonstrate with a little demo:
Let's say you need to store employees, accounts, members, books in a text file in a similar way.
P.S. I tried writing it in a PSEUDO way :)
the OO way
Code you call:
io.file.save(objectsCollection.ourFunctionForSaving())
class objectsCollection
    function ourFunctionForSaving() As String
        Dim _Objects As String = ""
        for each _Object in objectsCollection
            _Objects &= _Object & "-"    ' accumulate each object, separated by "-"
        end for
        return _Objects
    end function
end class
NON-OO Way
I don't think I'll write down the non-OO code. But think about it :)
NOW LET'S SAY
In the OO way, the above class is the parent class for saving the books, employees, members, accounts, ...
What happens if we want to change the way of saving to a text file? For example, to make it compatible with a common standard (e.g., CSV).
And let's say we would like to add a load function - how much code do you need to write?
In the OO way you only need to add a new method that splits the data back into parameters (this happens once).
Let your colleague think about that :)
In domains where state and behavior are poorly aligned, Object-Orientation reduces the overall dependency density (i.e. complexity) within these domains, which makes the resulting systems less brittle.
This is because the essence of Object-Orientation is based on the fact that, organizationally, it doesn't distinguish between state and behavior at all, treating both uniformly as "features". Objects are just sets of features clumped together to minimize overall dependency.
In other domains, Object-Orientation is not the best approach. There are different language paradigms for different problems. Experienced developers know this, and are willing to use whatever language is closest to the domain.

When is OOP better suited for? [closed]

Since I started studying object-oriented programming, I frequently read articles/blogs saying functions are better, or not all problems should be modeled as objects. From your personal programming adventures, when do you think a problem is better solved by OOP?
There is no hard and fast rule. A problem is better solved with OOP when you are better at solving problems and thinking in an OO mentality. Object Orientation is just another tool which has come along through trying to make computing a better tool for solving problems.
However, it can allow for better code reuse and can also lead to neater code. But quite often these highly praised qualities are, in reality, of little real value. Applying OO techniques to an existing functional application could really cause a lot of problems. The skill lies in learning many different techniques and applying the most appropriate to the problem at hand.
OO is often quoted as a Nirvana-like solution to software development, but there are many times when it is not appropriate for the issue at hand. It can quite often lead to over-engineering a problem to reach the perfect solution, when often that is really not necessary.
In essence, OOP is not really Object Oriented Programming, but mapping Object Oriented Thinking to a programming language capable of supporting OO Techniques. OO techniques can be supported by languages which are not inherently OO, and there are techniques you can use within functional languages to take advantage of the benefits.
As an example, I have been developing OO software for about 20 years now, so I tend to think in OO terms when solving problems, irrespective of the language I am writing in. Currently I am implementing polymorphism using Perl 5.6, which does not natively support it. I have chosen to do this as it will make maintenance and extension of the code a simple configuration task, rather than a development issue.
Not sure if this is clear. There are people who are hard in the OO court, and there are people who are hard in the Functional court. And then there are people who have tried both and try to take the best from each. Neither is perfect, but both have some very good traits that you can utilise no matter what the language.
If you are trying to learn OOP, don't just concentrate on OOP, but try to utilise Object Oriented Analysis and general OO principles to the whole spectrum of the problem solution.
I'm an old timer, but have also programmed OOP for a long time. I am personally against using OOP just to use it. I prefer objects to have specific reasons for existing, that they model something concrete, and that they make sense.
The problem that I have with a lot of the newer developers is that they have no concept of the resources that they are consuming with the code that they create. When dealing with a large amount of data and accessing databases the "perfect" object model may be the worst thing you can do for performance and resources.
My bottom line is if it makes sense as an object then program it as an object, as long as you consider the performance/resource impact of the implementation of your object model.
I think it fits best when you are modeling something cohesive with state and associated actions on those states. I guess that's kind of vague, but I'm not sure there is a perfect answer here.
The thing about OOP is that it lets you encapsulate and abstract data and information away, which is a real boon in building a large system. You can do the same with other paradigms, but it seems OOP is especially helpful in this category.
It also kind of depends on the language you are using. If it is a language with rich OOP support, you should probably use that to your advantage. If it doesn't, then you may need to find other mechanisms to help break up the problem into smaller, easily testable pieces.
I am sold to OOP.
Anytime you can define a concept for a problem, it can probably be wrapped in an object.
The problem with OOP is that some people overused it and made their code even more difficult to understand. If you are careful about what you put in objects and what you put in services (static classes) you will benefit from using objects.
Just don't put something that doesn't belong to an object into the object because you need your object to do something new that you didn't think of initially; refactor and find the best way to add that functionality.
There are five criteria for whether you should favor object-oriented over object-based, functional or procedural code. Remember, all of these styles are available in all languages; they're styles. All of these are written in the form of "Should I favor OO in this situation?"
The system is very complex and has over approximately 9k LOC (Just an arbitrary level). -- As systems get more complex, the benefits gained by encapsulating complexity go up quite a bit. With OO, as opposed to the other techniques, you tend to encapsulate more and more of the complexity, which is very valuable at this level. Object Based or procedural should be favored before this. (This is not advocating a particular language mind you. OO C fits these features more than OO C++ in my mind, a language with a notorious reputation for leaky abstractions and an ability to eat shops with even 1 mediocre/obstinate programmer for lunch).
Your code is not operations on data (i.e. Database based or math/analysis based). Database based code is often more easily represented via procedural style. Analysis based code is often easier represented in a functional style.
Your model is a simulation of something (OO excels at simulations).
You're doing something for which the object based subtype dispatch of OO is valuable (aka, you need to send a message to all objects of a certain type and various subtypes and get an appropriate, but different, reaction out of all of them).
Your app is not multi-threaded, especially in a non-worker task method type of codebase. OO is quite problematic in programs which are multithreaded and require different threads to do different tasks. If your program is structured with one or two main threads and many worker threads doing the same thing, the muddled control flow of OO programs is easier to handle, as all of the worker threads will be isolated in what they touch and can be considered as a monolithic section of code. Consider any other paradigm actually. Functional excels at multithreading (lack of side effects is a huge boon), and object based programming can give you boons with some of the encapsulation of OO, however with more traceable procedural code in critical sections of your codebase. Procedural of course excels in this arena as well.
Some places where OO isn't so good are where you're dealing with "Sets" of data like in SQL. OO tends to make set based operations more difficult because it isn't really designed to optimally take the intersection of two sets or the superset of two sets.
Also, there are times when a functional approach would make more sense such as this example taken from MSDN:
Consider, for example, writing a program to convert an XML document into a different form of data. While it would certainly be possible to write a C# program that parsed through the XML document and applied a variety of if statements to determine what actions to take at different points in the document, an arguably superior approach is to write the transformation as an eXtensible Stylesheet Language Transformation (XSLT) program. Not surprisingly, XSLT has a large streak of functionalism inside of it
I find it helps to think of a given problem in terms of 'things'.
If the problem can be thought of as having one or more 'things', where each 'thing' has a number of attributes or pieces of information that refer to its state, and a number of operations that can be performed on it - then OOP is probably the way to go!
The key to learning object-oriented programming is learning about design patterns. By learning about design patterns you can see better when classes are needed and when they are not. Like anything else used in programming, the use of classes and other features of OOP languages depends on your design and requirements. Like algorithms, design patterns are a higher-level concept.
A design pattern plays a similar role to that of algorithms in traditional programming. A design pattern tells you how to create and combine objects to perform some useful task. Like the best algorithms, the best design patterns are general enough to apply to a variety of common problems.
In my opinion it is more a question about you as a person. Certain people think better in functional terms and others prefer classes and objects. I would say that OOP is better suited when it matches your internal (subjective) mental model of the world.
Object oriented code and procedural code have different extensibility points. Object oriented solutions make it easier to add new classes without modifying existing functions (see the Open-Closed Principle), while procedural code allows you to add functions without modifying existing data structures. Quite often different parts of a system require different approaches depending upon the type of change that is anticipated.
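A minimal sketch of that trade-off, with hypothetical names: the OO side lets you add a new class without touching existing code, while the procedural side lets you add a new function without touching the existing data definitions.

import json

# OO extensibility: to support a new format, add a new class;
# callers of save() do not change (Open-Closed Principle).
class Exporter:
    def export(self, data: dict) -> str:
        raise NotImplementedError

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class KeyValueExporter(Exporter):           # new class; no existing code modified
    def export(self, data: dict) -> str:
        return "\n".join(f"{k}={v}" for k, v in data.items())

def save(exporter: Exporter, data: dict) -> str:
    return exporter.export(data)            # unchanged as new formats are added

# Procedural extensibility: the data stays a plain dict, and adding a
# brand-new operation is just another free function over that structure.
def summarize(data: dict) -> str:
    return f"{len(data)} fields"

print(save(KeyValueExporter(), {"a": 1, "b": 2}))
print(summarize({"a": 1, "b": 2}))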
OO allows for logic related to an object to be placed within a single place (the class, or object) so that it can be decoupled and easier to debug and maintain.
What I have observed is that every app is a combination of OO and procedural code, where the procedural code is the glue that binds all your objects together (at the very least, the code in your main function). The more you can turn your procedural code into OO, the easier it will be to maintain your code.
Why OOP is used for programming:
Its flexibility – OOP is really flexible in terms of use and implementation.
It can reduce your source code by more than 99.9% – it may sound like I’m exaggerating, but it is true.
It’s much easier to implement security – We all know that security is one of the vital requirements of web development. Using OOP can ease the security implementations in your web projects.
It makes the coding more organized – We all know that a clean program comes from clean coding. Using OOP instead of procedural code makes things more organized and systematized (obviously).
It helps your team work together more easily – I know some of you have experienced team projects, and you know that it’s important to share the same methods, implementations, algorithms, etc.
It depends on the problem: the OOP paradigm is useful when designing distributed systems or frameworks with a lot of entities living through the actions of the user (for example, a web application).
But if you have a math problem you will prefer a functional language (LISP); for performance-critical systems you would use Ada or C, etc.
OOP languages are also useful because they usually provide a garbage collector (automatic memory management) at run time; if you program in C, you often have to spend a lot of time debugging and manually fixing memory problems.
OOP is useful when you have things: a socket, a button, a file. If you end a class name in "er", it is almost always a function that is pretending to be a class. TestRunner more than likely should be a function that runs tests (and probably named run_tests).
Personally, I think OOP is practically a necessity for any large application. I can't imagine having a program over 100k lines of code without using OOP, it would be a maintenance and design nightmare.
I'll tell you when OOP is bad.
When the architect writes really complicated, undocumented OOP code and leaves halfway through the project, and many of the common code pieces he used across various projects have missing code. Thank god for .NET Reflector.
And the organization was not running Visual SourceSafe or Subversion.
And I'm sorry, two pages of code to log in is rather ridiculous even if it is cutely OOPed....