From what I understand, OOP is the most commonly used paradigm for large-scale projects. I also know that some smaller subsets of big systems use other paradigms (e.g. SQL, which is declarative), and I realize that at lower levels of computing OOP isn't really feasible. But it seems to me that the pieces of higher-level solutions are almost always put together in an OOP fashion.
Are there any scenarios where a truly non-OOP paradigm is actually a better choice for a large-scale solution? Or is that unheard of these days?
I've wondered this ever since I started studying CS; it's easy to get the feeling that OOP is some nirvana of programming that will never be surpassed.
In my opinion, the reason OOP is used so widely isn't so much that it's the right tool for the job. I think it's more that a solution can be described to the customer in a way that they understand.
A CAR is a VEHICLE that has an ENGINE. That's programming and real world all in one!
It's hard to comprehend anything that can fit the programming and real world quite so elegantly.
Linux is a large-scale project that's very much not OOP. And it wouldn't have a lot to gain from it either.
I think OOP has a good ring to it, because it has associated itself with good programming practices like encapsulation, data hiding, code reuse, modularity etc. But these virtues are by no means unique to OOP.
You might have a look at Erlang, written by Joe Armstrong.
Wikipedia:
"Erlang is a general-purpose
concurrent programming language and
runtime system. The sequential subset
of Erlang is a functional language,
with strict evaluation, single
assignment, and dynamic typing."
Joe Armstrong:
“Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.”
The promise of OOP was code reuse and easier maintenance. I am not sure it delivered. I see things such as .NET as being much the same as the C libraries we used to get from various vendors. You can call that code reuse if you want. As for maintenance, bad code is bad code. OOP did not help.
I'm the biggest fan of OOP, and I practice OOP every day.
It's the most natural way to write code, because it resembles real life.
Though I realize that OOP's extra layers of abstraction (virtual dispatch, for instance) can cause performance issues.
Of course that depends on your design, the language and the platform you chose (systems written in garbage-collected languages such as Java or C# might perform worse than systems written in C++, for example).
I guess in real-time systems, procedural programming may be more appropriate.
Note that not all projects that claim to be OOP are in fact OOP. Sometimes the majority of the code is procedural, or the data model is anemic, and so on...
Zyx, you wrote, "Most of the systems use relational databases ..."
I'm afraid there's no such thing. The relational model will be 40 years old next year and has still never been implemented. I think you mean, "SQL databases." You should read anything by Fabian Pascal to understand the difference between a relational dbms and an SQL dbms.
" ... the relational model is usually chosen due to its popularity,"
True, it's popular.
" ... availability of tools,"
Alas without the main tool necessary: an implementation of the relational model.
" support,"
Yup, the relational model has fine support, I'm sure, but it's entirely unsupported by a dbms implementation.
" and the fact that the relational model is in fact a mathematical concept,"
Yes, it's a mathematical concept, but, not being implemented, it's largely restricted to the ivory towers. String theory is also a mathematical concept but I wouldn't implement a system with it.
In fact, despite its being a mathematical concept, it is certainly not a science (as in computer science) because it lacks the first requirement of any science: falsifiability. There's no implementation of a relational dbms against which we can check its claims.
It's pure snake oil.
" ... contrary to OOP."
And contrary to OOP, the relational model has never been implemented.
Buy a book on SQL and get productive.
Leave the relational model to unproductive theorists.
See this and this. Apparently you can use C# with five different programming paradigms, C++ with three, etc.
Software construction is not akin to fundamental physics. Physics strives to describe reality using paradigms which may be challenged by new experimental data and/or theories. Physics is a science which searches for a "truth", in a way that software construction doesn't.
Software construction is a business. You need to be productive, i.e. to achieve some goals for which someone will pay money. Paradigms are used because they are useful to produce software effectively. You don't need everyone to agree. If I do OOP and it's working well for me, I don't care if a "new" paradigm would potentially be 20% more useful to me if I had the time and money to learn it and later rethink the whole software structure I'm working in and redesign it from scratch.
Also, you may be using another paradigm and I'll still be happy, in the same way that I can make money running a Japanese food restaurant and you can make money with a Mexican food restaurant next door. I don't need to discuss with you whether Japanese food is better than Mexican food.
I doubt OOP is going away any time soon, it just fits our problems and mental models far too well.
What we're starting to see though is multi-paradigm approaches, with declarative and functional ideas being incorporated into object-oriented designs. Most of the newer JVM languages are a good example of this (JavaFX, Scala, Clojure, etc.) as well as LINQ and F# on the .NET platform.
It's important to note that I'm not talking about replacing OO here, but about complementing it.
JavaFX has shown that a declarative solution goes beyond SQL and XSLT, and can also be used for binding properties and events between visual components in a GUI
For fault tolerant and highly concurrent systems, functional programming is a very good fit, as demonstrated by the Ericsson AXD301 (programmed using Erlang)
So... as concurrency becomes more important and FP becomes more popular, I imagine that languages not supporting this paradigm will suffer. This includes many that are currently popular such as C++, Java and Ruby, though JavaScript should cope very nicely.
Using OOP makes the code easier to manage (as in modify/update/add new features) and understand. This is especially true with bigger projects. Because modules/objects encapsulate their data and operations on that data it is easier to comprehend the functionality and the big picture.
The benefit of OOP is that it is easier to discuss (with other developers/management/customers) a LogManager or OrderManager, each of which encompasses specific functionality, than to describe 'a group of methods that dump the data in a file' and 'the methods that keep track of order details'.
So I guess OOP is especially helpful with big projects, but there are always new concepts turning up, so keep a lookout for new stuff in the future, evaluate it and keep what is useful.
People like to think of various things as "objects" and classify them, so it's no wonder that OOP is so popular. However, there are some areas where OOP has not gained much popularity. Most of the systems use relational databases rather than object databases. Even if the latter hold some notable records and are better for some types of tasks, the relational model is usually chosen due to its popularity, availability of tools, support and the fact that the relational model is in fact a mathematical concept, contrary to OOP.
Another area where I have never seen OOP is the software building process. All the configuration and make scripts are procedural, partially because of the lack of the support for OOP in shell languages, partially because OOP is too complex for such tasks.
Slightly controversial opinion from me but I don't find OOP, at least of a kind that is popularly applied now, to be that helpful in producing the largest scale software in my particular domain (VFX, which is somewhat similar in scene organization and application state as games). I find it very useful on a medium to smaller scale. I have to be a bit careful here since I've invited some mobs in the past, but I should qualify that this is in my narrow experience in my particular type of domain.
The difficulty I've often found is that if you have all these small concrete objects encapsulating data, they now want to all talk to each other. The interactions between them can get extremely complex, like so (except much, much more complex in a real application spanning thousands of objects):
And this is not a dependency graph directly related to coupling so much as an "interaction graph". There could be abstractions to decouple these concrete objects from each other. Foo might not talk to Bar directly. It might instead talk to it through IBar or something of this sort. This graph would still connect Foo to Bar since, albeit being decoupled, they still talk to each other.
And all this communication between small and medium-sized objects which make up their own little ecosystem, if applied to the entire scale of a large codebase in my domain, can become extremely difficult to maintain. And it becomes so difficult to maintain because it's hard to reason about what happens with all these interactions between objects with respect to things like side effects.
Instead what I've found useful is to organize the overall codebase into completely independent, hefty subsystems that access a central "database". Each subsystem then inputs and outputs data. Some other subsystems might access the same data, but without any one system directly talking to each other.
... or this:
... and each individual system no longer attempts to encapsulate state. It doesn't try to become its own ecosystem. It instead reads and writes data in the central database.
Of course in the implementation of each subsystem, they might use a number of objects to help implement them. And that's where I find OOP very useful is in the implementation of these subsystems. But each of these subsystems constitutes a relatively medium to small-scale project, not too large, and it's at that medium to smaller scale that I find OOP very useful.
"Assembly-Line Programming" With Minimum Knowledge
This allows each subsystem to just focus on doing its thing with almost no knowledge of what's going on in the outside world. A developer focusing on physics can just sit down with the physics subsystem and know little about how the software works except that there's a central database from which he can retrieve things like motion components (just data) and transform them by applying physics to that data. And that makes his job very simple and makes it so he can do what he does best with the minimum knowledge of how everything else works. Input central data and output central data: that's all each subsystem has to do correctly for everything else to work. It's the closest thing I've found in my field to "assembly line programming" where each developer can do his thing with minimum knowledge about how the overall system works.
Testing is still also quite simple because of the narrow focus of each subsystem. We're no longer mocking concrete objects with dependency injection so much as generating a minimum amount of data relevant to a particular system and testing whether the particular system provides the correct output for a given input. With so few systems to test (just dozens can make up a complex software), it also reduces the number of tests required substantially.
Breaking Encapsulation
The system then turns into a rather flat pipeline transforming central application state through independent subsystems that are practically oblivious to each other's existence. One might sometimes push a central event to the database which another system processes, but that other system is still oblivious about where that event came from. I've found this is the key to tackling complexity at least in my domain, and it is effectively through an entity-component system.
Yet at the broad scale it resembles something closer to procedural or functional programming: all these subsystems are decoupled and work with minimal knowledge of the outside world, since we're breaking encapsulation in order to achieve this and avoid requiring the systems to talk to each other. When you zoom in, you might find your share of objects being used to implement any one of these subsystems, but at the broadest scale, the system resembles something other than OOP.
Global Data
I have to admit that I was very hesitant about applying ECS at first to an architectural design in my domain since, first, it hadn't been done before to my knowledge in popular commercial competitors (3DS Max, SoftImage, etc), and second, it looks like a whole bunch of globally-accessible data.
I've found, however, that this is not a big problem. We can still very effectively maintain invariants, perhaps even better than before. The reason is due to the way the ECS organizes everything into systems and components. You can rest assured that an audio system won't try to mutate a motion component, e.g., not even under the hackiest of situations. Even with a poorly-coordinated team, it's very improbable that the ECS will degrade into something where you can no longer reason about which systems access which component, since it's rather obvious on paper and there are virtually no reasons whatsoever for a certain system to access an inappropriate component.
On the contrary, leaving the data wide open often removed many of the former temptations for hacks, since a lot of the hacky things done in our former codebase under loose coordination and crunch time were hasty attempts to x-ray abstractions and get at the internals of the ecosystems of objects. The abstractions started to become leaky as a result of people, in a hurry, trying to just get at and act on the data they wanted. They were basically jumping through hoops just to access data, which led to interface designs degrading quickly.
There is still something vaguely resembling encapsulation, just due to the way the system is organized, since there's often only one system modifying a particular type of component (two in some exceptional cases). But the systems don't own that data, and they don't provide functions to retrieve it. They don't talk to each other. They all operate through the central ECS database (which is the only dependency that has to be injected into all these systems).
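Just to sketch the shape of this (a rough, illustrative C++ sketch with made-up component and system names, not our actual code): components are plain data keyed by entity ID in the central database, and each system is little more than a function that loops over the components it cares about.

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    using Entity = std::uint32_t;

    // Components are plain data with no behavior of their own.
    struct Motion   { float x = 0, y = 0, vx = 0, vy = 0; };
    struct AudioSrc { std::string clip; float volume = 1.0f; };

    // The central "database" is the only dependency handed to every system.
    struct Database {
        std::unordered_map<Entity, Motion>   motions;
        std::unordered_map<Entity, AudioSrc> audio;
    };

    // The physics system reads and writes motion components and nothing else.
    void physics_system(Database& db, float dt) {
        for (auto& [id, m] : db.motions) {
            m.x += m.vx * dt;
            m.y += m.vy * dt;
        }
    }

    // The audio system only ever sees audio components; it can't touch motion data.
    void audio_system(Database& db) {
        for (auto& [id, a] : db.audio) {
            // ... mix and play a.clip at a.volume ...
        }
    }

    int main() {
        Database db;
        Entity player = 1;
        db.motions[player] = {0, 0, 1, 0};       // compose an entity from components
        db.audio[player]   = {"footsteps.wav"};  // no Player class anywhere
        physics_system(db, 0.016f);              // systems run as a flat pipeline
        audio_system(db);
    }

The systems never reference each other; each one could be handed to a different developer who only needs to know the components it reads and writes.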
Flexibility and Extensibility
This is already widely discussed in external resources about entity-component systems, but they are extremely flexible at adapting to radically new design ideas, even concept-breaking ones in hindsight, like a suggestion for a creature which is a mammal, insect, and plant that sprouts leaves under sunlight all at once.
One of the reasons is because there are no central abstractions to break. You introduce some new components if you need more data for this or just create an entity which strings together the components required for a plant, mammal, and insect. The systems designed to process insect, mammal, and plant components then automatically pick it up and you might get the behavior you want without changing anything besides adding a line of code to instantiate an entity with a new combo of components. When you need whole new functionality, you just add a new system or modify an existing one.
What I haven't found discussed so much elsewhere is how much this eases maintenance even in scenarios when there are no concept-breaking design changes that we failed to anticipate. Even ignoring the flexibility of the ECS, it can really simplify things when your codebase reaches a certain scale.
Turning Objects Into Data
In a previous OOP-heavy codebase where I saw the difficulty of maintaining a codebase closer to the first graph above, the amount of code required exploded because the analogical Car in this diagram:
... had to be built as a completely separate subtype (class) implementing multiple interfaces. So we had an explosive number of objects in the system: a separate object type for point lights versus directional lights, a separate object type for one kind of fisheye camera versus another, etc. We had thousands of objects implementing a few dozen abstract interfaces in endless combinations.
When I compared it to the ECS, the ECS required only hundreds, and we were able to do the exact same things as before using a small fraction of the code, because it turned the analogical Car entity into something that no longer requires its own class. It turns into a simple collection of component data, a generalized instance of just one Entity type.
OOP Alternatives
So there are cases like this where OOP applied in excess at the broadest level of the design can start to really degrade maintainability. At the broadest bird's-eye view of your system, it can help to flatten it and not try to model it so "deep" with objects interacting with objects interacting with objects, however abstractly.
Comparing the two systems I worked on in the past and now, the new one has more features but takes hundreds of thousands of LOC. The former required over 20 million LOC. Of course it's not the fairest comparison since the former one had a huge legacy, but if you take a slice of the two systems which are functionally quite equal without the legacy baggage (at least about as close to equal as we might get), the ECS takes a small fraction of the code to do the same thing, and partly because it dramatically reduces the number of classes there are in the system by turning them into collections (entities) of raw data (components) with hefty systems to process them instead of a boatload of small/medium objects.
Are there any scenarios where a truly non-OOP paradigm is actually a better choice for a large-scale solution? Or is that unheard of these days?
It's far from unheard of. The system I'm describing above, for example, is widely used in games. It's quite rare in my field (most of the architectures in my field are COM-like with pure interfaces, and that's the type of architecture I worked on in the past), but I've found that peering over at what game developers are doing when designing an architecture made a world of difference in being able to create something that still remains very comprehensible as it grows and grows.
That said, some people consider ECS to be a type of object-oriented programming on its own. If so, it doesn't resemble OOP of a kind most of us would think of, since data (components and entities to compose them) and functionality (systems) are separated. It requires abandoning encapsulation at the broad system level which is often considered one of the most fundamental aspects of OOP.
High-Level Coding
But it seems to me that usually the pieces of higher level solutions are almost always put together in an OOP fashion.
If you can piece together an application with very high-level code, then it tends to be rather small or medium in scale as far as the code your team has to maintain and can probably be assembled very effectively using OOP.
In my field in VFX, we often have to do things that are relatively low-level like raytracing, image processing, mesh processing, fluid dynamics, etc, and can't just piece these together from third party products since we're actually competing more in terms of what we can do at the low-level (users get more excited about cutting-edge, competitive production rendering improvements than, say, a nicer GUI). So there can be lots and lots of code ranging from very low-level shuffling of bits and bytes to very high-level code that scripters write through embedded scripting languages.
Interweb of Communication
But there comes a point, at a large enough scale with any type of application (high-level, low-level, or a combo) revolving around a very complex central application state, where I've found it no longer useful to try to encapsulate everything into objects. Doing so tends to multiply complexity and the difficulty of reasoning about what goes on, due to the multiplied amount of interaction between everything. It is no longer so easy to reason about thousands of ecosystems talking to each other if there isn't a breaking point, at a large enough scale, where we stop modeling each thing as an encapsulated ecosystem that has to talk to the others. Even if each one is individually simple, everything taken as a whole can start to overwhelm the mind, and we often have to take a whole lot of that in to make changes, add new features, and debug things if the design of an entire large-scale system revolves solely around OOP principles. It can help to break free of encapsulation at some scale, for at least some domains.
At that point it's not necessarily so useful anymore to, say, have a physics system encapsulate its own data (otherwise many things could want to talk to it and retrieve that data as well as initialize it with the appropriate input data), and that's where I found this alternative through ECS so helpful, since it turns the analogical physics system, and all such hefty systems, into a "central database transformer" or a "central database reader which outputs something new" which can now be oblivious about each other. Each system then starts to resemble more like a process in a flat pipeline than an object which forms a node in a very complex graph of communication.
I recently had a debate with a colleague who is not a fan of OOP. What took my attention was what he said:
"What's the point of doing my coding in objects? If it's reuse then I can just create a library and call whatever functions I need for whatever task is at hand. Do I need these concepts of polymorphism, inheritance, interfaces, patterns or whatever?"
We are in a small company developing small projects for e-commerce sites and real estate.
How can I take advantage of OOP in an "everyday, real-world" setup? Or was OOP really meant to solve complex problems and not intended for "everyday" development?
My personal view: context
When you program in OOP you have a greater awareness of the context. It helps you to organize the code in such a way that it is easier to understand because the real world is also object oriented.
The good things about OOP come from tying a set of data to a set of behaviors.
So, if you need to do many related operations on a related set of data, you can write many functions that operate on a struct, or you can use an object.
Objects give you some code reuse help in the form of inheritance.
IME, it is easier to work with an object with a known set of attributes and methods than it is to keep a set of complex structs and the functions that operate on them.
Some people will go on about inheritance and polymorphism. These are valuable, but the real value in OOP (in my opinion) comes from the nice way it encapsulates and associates data with behaviors.
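A tiny illustration of that association, using a made-up Account example in C++ (a sketch, not a recommendation of either style):

    // Procedural: a struct plus free functions that have to be kept in sync by hand.
    struct AccountData { double balance = 0.0; };

    void deposit(AccountData& a, double amount) { a.balance += amount; }

    // OO: the data and the operations on it travel together, and the balance can
    // only change through the methods the class chooses to expose.
    class Account {
    public:
        void deposit(double amount) { balance_ += amount; }
        double balance() const { return balance_; }
    private:
        double balance_ = 0.0;
    };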
Should you use OOP on your projects? That depends on how well your language supports OOP. That depends on the types of problems you need to solve.
But, if you are doing small websites, you are still talking about enough complexity that I would use OOP design given proper support in the development language.
More than just getting something to work (your colleague's point), a well-designed OO design is easier to understand, to follow, to expand, to extend and to implement. It is so much easier, for example, to delegate work that is categorically similar or to keep together data that should stay together (yes, even a C struct is an object).
Well, I'm sure a lot of people will give a lot more academically correctly answers, but here's my take on a few of the most valuable advantages:
OOP allows for better encapsulation
OOP allows the programmer to think in more logical terms, making software projects easier to design and understand (if well designed)
OOP is a time saver. For example, look at the things you can do with a C++ string object, vectors, etc. All that functionality (and much more) comes for "free." Now, those are really features of the class libraries and not OOP itself, but almost all OOP implementations come with nice class libraries. Can you implement all that stuff in C (or most of it)? Sure. But why write it yourself?
Look at the use of Design Patterns and you'll see the utility of OOP. It's not just about encapsulation and reuse, but extensibility and maintainability. It's the interfaces that make things powerful.
A few examples:
Implementing a stream (decorator pattern) without objects is difficult
Adding a new operation to an existing system such as a new encryption type (strategy pattern) can be difficult without objects.
Look at the way PostgreSQL is implemented versus the way your database book says a database should be implemented and you'll see a big difference. The book will suggest node objects for each operator. Postgres uses myriad tables and macros to try to emulate these nodes. It is much less pretty and much harder to extend because of that.
The list goes on.
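To make the encryption example above concrete, here's a minimal strategy-pattern sketch in C++ (hypothetical names, and the XOR "cipher" is obviously a toy, not real crypto): adding a new encryption type is a new class, and the calling code never changes.

    #include <string>

    // The calling code depends only on the interface.
    class Encryptor {
    public:
        virtual ~Encryptor() = default;
        virtual std::string encrypt(const std::string& plaintext) const = 0;
    };

    // A new encryption type is just a new class; nothing else is edited.
    class XorEncryptor : public Encryptor {
    public:
        std::string encrypt(const std::string& plaintext) const override {
            std::string out = plaintext;
            for (char& c : out) c = static_cast<char>(c ^ 0x5A);  // toy transformation
            return out;
        }
    };

    std::string send_secure(const Encryptor& enc, const std::string& msg) {
        return enc.encrypt(msg);  // strategy chosen by the caller
    }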
The power of most programming languages is in the abstractions that they make available. Object Oriented programming provides a very powerful system of abstractions in the way it allows you to manage relationships between related ideas or actions.
Consider the task of calculating areas for an arbitrary and expanding collection of shapes. Any programmer can quickly write functions for the area of a circle, square, triangle, etc. and store them in a library. The difficulty comes when trying to write a program that identifies and calculates the area of an arbitrary shape. Each time you add a new kind of shape, say a pentagon, you would need to update and extend something like an IF or CASE structure to allow your program to identify the new shape and call the correct area routine from your "library of functions". After a while, the maintenance costs associated with this approach begin to pile up.
With object-oriented programming, a lot of this comes free: just define a Shape class that contains an area method. Then it doesn't really matter what specific shape you're dealing with at run time, just make each geometrical figure an object that inherits from Shape and call the area method. The Object Oriented paradigm handles the details of whether at this moment in time, with this user input, we need to calculate the area of a circle, triangle, square, pentagon or the ellipse option that was just added half a minute ago.
What if you decided to change the interface behind the way the area function was called? With Object Oriented programming you would just update the Shape class and the changes automagically propagate to all entities that inherit from that class. With a non Object Oriented system you would be facing the task of slogging through your "library of functions" and updating each individual interface.
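A minimal C++ sketch of the shape example (illustrative names only):

    #include <iostream>
    #include <memory>
    #include <vector>

    class Shape {
    public:
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    class Circle : public Shape {
    public:
        explicit Circle(double r) : r_(r) {}
        double area() const override { return 3.14159265358979 * r_ * r_; }
    private:
        double r_;
    };

    class Square : public Shape {
    public:
        explicit Square(double s) : s_(s) {}
        double area() const override { return s_ * s_; }
    private:
        double s_;
    };

    int main() {
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.push_back(std::make_unique<Circle>(1.0));
        shapes.push_back(std::make_unique<Square>(2.0));
        // No IF/CASE chain: adding a Pentagon subclass needs no change here.
        for (const auto& s : shapes)
            std::cout << s->area() << '\n';
    }

Adding a Pentagon later is just one more subclass; the loop that reports the areas never changes.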
In summary, Object Oriented programming provides a powerful form of abstraction that can save you time and effort by eliminating repetition in your code and streamlining extensions and maintenance.
Around 1994 I was trying to make sense of OOP and C++ at the same time, and found myself frustrated, even though I could understand in principle what the value of OOP was. I was so used to being able to mess with the state of any part of the application from other languages (mostly Basic, Assembly, and Pascal-family languages) that it seemed like I was giving up productivity in favor of some academic abstraction. Unfortunately, my first few encounters with OO frameworks like MFC made it easier to hack, but didn't necessarily provide much in the way of enlightenment.
It was only through a combination of persistence, exposure to alternate (non-C++) ways of dealing with objects, and careful analysis of OO code that both 1) worked and 2) read more coherently and intuitively than the equivalent procedural code that I started to really get it. And 15 years later, I'm regularly surprised at new (to me) discoveries of clever, yet impressively simple OO solutions that I can't imagine doing as neatly in a procedural approach.
I've been going through the same set of struggles trying to make sense of the functional programming paradigm over the last couple of years. To paraphrase Paul Graham, when you're looking down the power continuum, you see everything that's missing. When you're looking up the power continuum, you don't see the power, you just see weirdness.
I think, in order to commit to doing something a different way, you have to 1) see someone obviously being more productive with more powerful constructs and 2) suspend disbelief when you find yourself hitting a wall. It probably helps to have a mentor who is at least a tiny bit further along in their understanding of the new paradigm, too.
Barring the gumption required to suspend disbelief, if you want someone to quickly grok the value of an OO model, I think you could do a lot worse than to ask someone to spend a week with the Pragmatic Programmers book on Rails. It unfortunately does leave out a lot of the details of how the magic works, but it's a pretty good introduction to the power of a system of OO abstractions. If, after working through that book, your colleague still doesn't see the value of OO for some reason, he/she may be a hopeless case. But if they're willing to spend a little time working with an approach that has a strongly opinionated OO design that works, and gets them from 0-60 far faster than doing the same thing in a procedural language, there may just be hope. I think that's true even if your work doesn't involve web development.
I'm not so sure that bringing up the "real world" would be as much a selling point as a working framework for writing good apps, because it turns out that, especially in statically typed languages like C# and Java, modeling the real world often requires tortuous abstractions. You can see a concrete example of the difficulty of modeling the real world by looking at thousands of people struggling to model something as ostensibly simple as the geometric abstraction of "shape" (shape, ellipse, circle).
All programming paradigms have the same goal: hiding unneeded complexity.
Some problems are easily solved with an imperative paradigm, like your friend uses. Other problems are easily solved with an object-oriented paradigm. There are many other paradigms. The main ones (logic programming, functional programming, and imperative programming) are all equivalent to each other; object-oriented programming is usually thought as an extension to imperative programming.
Object-oriented programming is best used when the programmer is modeling items that are similar, but not the same. An imperative paradigm would put the different kinds of models into one function. An object-oriented paradigm separates the different kinds of models into different methods on related objects.
Your colleague seems to be stuck in one paradigm. Good luck.
To me, the power of OOP doesn't show itself until you start talking about inheritance and polymorphism.
If one's argument for OOP rests on the concept of encapsulation and abstraction, well, that isn't a very convincing argument for me. I can write a huge library and only document the interfaces to it that I want the user to be aware of, or I can rely on language-level constructs like packages in Ada to make fields private and only expose what I want to expose.
However, the real advantage comes when I've written code in a generic hierarchy so that it can be reused later such that the same exact code interfaces are used for different functionality to achieve the same result.
Why is this handy? Because I can stand on the shoulders of giants to accomplish my current task. The idea is that I can boil the parts of a problem down to the most basic parts, the objects that compose the objects that compose... the objects that compose the project. By using a class that defines behavior very well in the general case, I can use that same proven code to build a more specific version of the same thing, and then a more specific version of the same thing, and then yet an even more specific version of the same thing. The key is that each of these entities has commonality that has already been coded and tested, and there is no need to reimplement it later. If I don't use inheritance for this, I end up reimplementing the common functionality or explicitly linking my new code against the old code, which provides a scenario for me to introduce control flow bugs.
Polymorphism is very handy in instances where I need to achieve a certain functionality from an object, but the same functionality is also needed from similar, but unique types. For instance, in Qt, there is the idea of inserting items onto a model so that the data can be displayed and you can easily maintain metadata for that object. Without polymorphism, I would need to bother myself with much more detail than I currently do (i.e. I would need to implement the same code interfaces that conduct the same business logic as the item that was originally intended to go on the model). Because the base class of my data-bound object interacts natively with the model, I can instead insert metadata onto this model with no trouble. I get what I need out of the object with no concern over what the model needs, and the model gets what it needs with no concern over what I have added to the class.
Ask your friend to visualize any object in his room, house or city, and ask whether he can name a single such object that is a system in itself and capable of doing some meaningful work alone. Something like a button isn't doing anything by itself - it takes lots of objects to make a phone call. Similarly, a car engine is made of the crankshaft, pistons and spark plugs. OOP concepts have evolved from our perception of natural processes and things in our lives. The "Inside COM" book explains the purpose of COM by analogy with a childhood game of identifying animals by asking questions.
Design trumps technology and methodology. Good designs tend to incorporate universal principles of complexity management, such as the Law of Demeter, which is at the heart of what OO language features strive to codify.
Good design is not dependent on the use of OO-specific language features, although it is typically in one's best interests to use them.
Not only does it make programming easier and more maintainable in the current situation for other people (and yourself), it also allows for easier database CRUD (Create, Read, Update, Delete) operations.
You can find more info about it looking up:
- Java: Hibernate
- .NET: Entity Framework
See even how LINQ (Visual Studio) can make your programming life MUCH easier.
Also, you can start using design patterns for solving real life problems (design patterns are all about OO)
Perhaps it is even fun to demonstrate with a little demo:
Let's say you need to store employees, accounts, members, books in a text file in a similar way.
P.S. I tried writing it in a pseudo way :)
the OO way
Code you call:
io.file.save(objectsCollection.ourFunctionForSaving())
class objectsCollection
    function ourFunctionForSaving() As String
        String _Objects
        for each _Object in objectsCollection
            _Objects &= _Object & "-"
        end for
        return _Objects
    end function
end class
NON-OO Way
I don't think I'll write down the non-OO code. But think about it :)
NOW LET'S SAY
In the OO way, the above class is the parent class of all the classes for saving the books, employees, members, accounts, ...
What happens if we want to change the way we save to a text file? For example, to make it compatible with a common standard (.CSV).
And let's say we would like to add a load function, how much code do you need to write?
In the OO way, you only need to add a new Sub (method) which can split all the data into parameters (this happens once).
Let your colleague think about that :)
In domains where state and behavior are poorly aligned, Object-Orientation reduces the overall dependency density (i.e. complexity) within these domains, which makes the resulting systems less brittle.
This is because the essence of Object-Orientation is based on the fact that, organizationally, it doesn't distinguish between state and behavior at all, treating both uniformly as "features". Objects are just sets of features clumped together to minimize overall dependency.
In other domains, Object-Orientation is not the best approach. There are different language paradigms for different problems. Experienced developers know this, and are willing to use whatever language is closest to the domain.
[Edit:] Earlier I asked this as a perhaps poorly-framed question about when to use OOP versus when to use procedural programming - some responses implied I was asking for help understanding OOP. On the contrary, I have used OOP a lot but want to know when to use a procedural approach. Judging by the responses, I take it that there is a fairly strong consensus that OOP is usually a better all-round approach but that a procedural language should be used if the OOP architecture will not provide any reuse benefits in the long term.
However my experience as a Java programmer has been otherwise. I saw a massive Java program that I architected rewritten by a Perl guru in 1/10 of the code that I had written and seemingly just as robust as my model of OOP perfection. My architecture saw a significant amount of reuse and yet a more concise procedural approach had produced a superior solution.
So, at the risk of repeating myself, I'm wondering in what situations should I choose a procedural over an object-oriented approach. How would you identify in advance a situation in which an OOP architecture is likely to be overkill and a procedural approach more concise and efficient.
Can anyone suggest examples of what those scenarios would look like?
What is a good way to identify in advance a project that would be better served by a procedural programming approach?
I like Glass' rules of 3 when it comes to Reuse (which seems to be what you're interested in).
1) It is 3 times as difficult to build reusable components as single use components.
2) A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
From this I think you can extrapolate these corollaries
a) If you don't have the budget for 3 times the time it would take you to build a single use component, maybe you should hold off on reuse. (Assuming Difficulty = Time)
b) If you don't have 3 places where you'd use the component you're building, maybe you should hold off on building the reusable component.
I still think OOP is useful for building the single use component, because you can always refactor it into something that is really reusable later on. (You can also refactor from PP to OOP but I think OOP comes with enough benefits regarding organization and encapsulation to start there)
Reusability (or lack of it) is not bound to any specific programming paradigm. Use object oriented, procedural, functional or any other programming as needed. Organization and reusability come from what you do, not from the tool.
Those who religiously support OOP don't have any facts to justify their support, as we see here in these comments as well. They are trained (or brainwashed) in universities to use and praise OOP and OOP only, and that is why they support it so blindly. Have they done any real work in PP at all? Other than protecting code from careless programmers in a team environment, OOP doesn't offer much. Having personally worked in both PP and OOP for years, I find that PP is simple, straightforward and more efficient, and I agree with the following wise men and women:
(Reference: http://en.wikipedia.org/wiki/Object-oriented_programming):
A number of well-known researchers and programmers have criticized OOP. Here is an incomplete list:
Luca Cardelli wrote a paper titled “Bad Engineering Properties of Object-Oriented Languages”.
Richard Stallman wrote in 1995, “Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program.”
A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches.
Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP. A theoretical foundation on OOP is proposed which uses OOP as a kind of customizable type system to support RDBMS.
Alexander Stepanov suggested that OOP provides a mathematically-limited viewpoint and called it “almost as much of a hoax as Artificial Intelligence” (possibly referring to the Artificial Intelligence projects and marketing of the 1980s that are sometimes viewed as overzealous in retrospect).
Paul Graham has suggested that the purpose of OOP is to act as a “herding mechanism” which keeps mediocre programmers in mediocre organizations from “doing too much damage”. This is at the expense of slowing down productive programmers who know how to use more powerful and more compact techniques.
Joe Armstrong, the principal inventor of Erlang, is quoted as saying “The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.”
Richard Mansfield, author and former editor of COMPUTE! magazine, states that “like countless other intellectual fads over the years (“relevance”, communism, “modernism”, and so on—history is littered with them), OOP will be with us until eventually reality asserts itself. But considering how OOP currently pervades both universities and workplaces, OOP may well prove to be a durable delusion. Entire generations of indoctrinated programmers continue to march out of the academy, committed to OOP and nothing but OOP for the rest of their lives.” and also is quoted as saying “OOP is to writing a program, what going through airport security is to flying”.
You gave the answer yourself - big projects simply need OOP to prevent getting too messy.
From my point of view, the biggest advantage of OOP is code organization. This includes the principles of DRY and encapsulation.
I would suggest using the most concise, standards-based approach that you can find for any given problem. Your colleague who used Perl demonstrated that a good developer who knows a particular tool well can achieve great results regardless of the methodology. Rather than compare your Java-versus-Perl projects as a good example of the procedural-versus-OOP debate, I would like to see a face-off between Perl and a similarly concise language such as Ruby, which happens to also have the benefits of object orientation. Now that's something I'd like to see. My guess is Ruby would come out on top but I'm not interested in provoking a language flame-war here - my point is only that you choose the appropriate tool for the job - whatever approach can accomplish the task in the most efficient and robust way possible. Java may be robust because of its object orientation but as you and your colleague and many others who are converting to dynamic languages such as Ruby and Python are finding these days, there are much more efficient solutions out there, whether procedural or OOP.
I think DRY principle (Don't Repeat Yourself) combined with a little Agile is a good approach. Build your program incrementally starting with the simplest thing that works then add features one by one and re-factor your code as necessary as you go along.
If you find yourself writing the same few lines of code again and again - maybe with different data - it's time to think about abstractions that can help separate the stuff that changes from the stuff that stays the same.
Create thorough unit tests for each iteration so that you can re-factor with confidence.
It's a mistake to spend too much time trying to anticipate which parts of your code need to be reusable. It will soon become apparent once the system starts to grow in size.
For larger projects with multiple concurrent development teams you need to have some kind of architectural plan to guide the development, but if you are working on your own or in small cooperative team then the architecture will emerge naturally if you stick to the DRY principle.
Another advantage of this approach is that whatever you do is based on real world experience. My favourite analogy - you have to play with the bricks before you can imagine how the building might be constructed.
I think you should use a procedural style when you have a very well-specified problem, the specification won't change, and you want a very fast-running program for it. In this case you may trade maintainability for performance.
Usually this is the case when you write a game engine or a scientific simulation program. If your program calculates something more than a million times per second, it should be optimized to the edge.
You can use very efficient algorithms, but it won't be fast enough until you optimize the cache usage. Having your data in cache can be a big performance boost: it means the CPU doesn't need to fetch bytes from RAM because it already has them. To achieve this you should try to store your data close together, keep your executable and data size minimal, and use as few pointers as you can (use static global fixed-size arrays where you can afford to).
If you use pointers you are continuously jumping around in memory and your CPU needs to reload the cache every time. OOP code is full of pointers: every object is accessed by its memory address. You call new everywhere, which spreads your objects all over the memory, making cache optimization almost impossible (unless you have an allocator or a garbage collector that keeps things close to each other). You call callbacks and virtual functions. The compiler usually can't inline virtual functions, and a virtual function call is relatively slow (jump to the VMT, get the address of the virtual function, call it - which involves pushing the parameters and local variables on the stack, executing the function, then popping everything). This matters a lot when you have a loop running from 0 to 1,000,000 twenty-five times every second. By using a procedural style there are no virtual functions and the optimizer can inline everything in those hot loops.
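A rough sketch of the two layouts being contrasted (hypothetical particle example, not a benchmark):

    #include <memory>
    #include <vector>

    // OO layout: heap-allocated objects scattered around memory, one virtual call
    // per element, hard for the compiler to inline or vectorize.
    struct Particle {
        virtual ~Particle() = default;
        virtual void integrate(float dt) = 0;
    };

    void step_oo(std::vector<std::unique_ptr<Particle>>& particles, float dt) {
        for (auto& p : particles) p->integrate(dt);
    }

    // Data-oriented layout: one contiguous array of plain data and a single
    // non-virtual loop that is cache-friendly and easy for the optimizer.
    struct ParticleData { float x = 0, v = 0; };

    void step_flat(std::vector<ParticleData>& particles, float dt) {
        for (auto& p : particles) p.x += p.v * dt;
    }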
If the project is so small that it would be contained within one class and is not going to be used for very long, I would consider using functions. Alternatively, if the language you are using does not support OO (e.g. C).
"The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” —Joe Armstrong
Do you want the jungle?
I think the suitability of OOP depends more on the subject area you're working in than the size of the project. There are some subject areas (CAD, simulation modeling, etc.) where OOP maps naturally to the concepts involved. However, there are a lot of other domains where the mapping ends up being clumsy and incongruous. Many people using OOP for everything seem to spend a lot of time trying to pound square pegs into round holes.
OOP has its place, but so do procedural programming, functional programming, etc. Look at the problem you're trying to solve, then choose a programming paradigm that allows you to write the simplest possible program to solve it.
Procedural programs can be simpler for a certain type of program. Typically, these are the short script-like programs.
Consider this scenario:
Your code is not OO. You have data structures and many functions throughout your program that operate on the data structures. Each function takes a data structure as a parameter and does different things depending on a "data_type" field in the data structure.
IF all is working and not going to be changed, who cares if it's OO or not? It's working. It's done. If you can get to that point faster writing procedurally, then maybe that's the way to go.
But are you sure it's not going to be changed? Let's say you're likely to add new types of data structures. Each time you add a new data structure type that you want those functions to operate on, you have to make sure you find and modify every one of those functions, adding a new "else if" case to check for the new type and adding the behavior you want for it. The pain of this increases as the program gets larger and more complicated. The more likely this is, the better off you would be going with the OO approach.
And - are you sure that it's working with no bugs? More involved switching logic creates more complexity in testing each unit of code. With polymorphic method calls, the language handles the switching logic for you and each method can be simpler and more straightforward to test.
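A small sketch of that contrast (hypothetical record types, illustrative only):

    #include <iostream>

    // Procedural: every function switches on a tag, and every new type means
    // finding and extending each of those switches.
    enum class DataType { Order, Invoice };
    struct Record { DataType type; /* ... */ };

    void print(const Record& r) {
        if (r.type == DataType::Order)        std::cout << "order\n";
        else if (r.type == DataType::Invoice) std::cout << "invoice\n";
        // adding Shipment means editing this (and every similar) function
    }

    // OO: the language does the switching; a new type is a new class and the
    // existing call sites stay untouched.
    struct Printable {
        virtual ~Printable() = default;
        virtual void print() const = 0;
    };

    struct Order : Printable {
        void print() const override { std::cout << "order\n"; }
    };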
The two concepts are not mutually exclusive, it is very likely that you will use PP in conjunction with OOP, I can't see how to segregate them.
I believe Grady Booch said once that you really start to benefit a lot from OOP at 10000+ lines of code.
However, I'd always go the OO-way. Even for 200 lines. It's a superior approach in a long term, and the overhead is just an overrated excuse. All the big things start small.
One of the goals of OOP was to make reusability easier however it is not the only purpose. The key to learning to use objects effectively is Design Patterns.
We are all used to the idea of algorithms, which tell us how to combine different procedures and data structures to perform common tasks. Correspondingly, look at Design Patterns by the Gang of Four for ideas on how to combine objects to perform common tasks.
Before I learned about Design Patterns I was pretty much in the dark about how to use objects effectively other than as a super type structure.
Remember that implementing interfaces is just as important, if not more important, than inheritance. Back in the day, C++ was the leading example of object-oriented programming, and interfaces were obscure compared to inheritance (virtual functions, etc.). The C++ legacy meant a lot more emphasis was placed on reusing behavior in the various tutorials and broad overviews. Since then, Java, C#, and other languages have moved interfaces up to more of a focus.
What interfaces are great for is precisely defining how two objects interact with each other. It is not about reusing behavior. As it turns out, much of our software is about how the different parts interact. So using interfaces gives a lot more productivity gain than trying to make reusable components.
Remember that, like many other programming ideas, objects are a tool. You will have to use your best judgment as to how well they work for your project. For my CAD/CAM software for metal-cutting machines, there are important math functions that are not placed in objects because there is no reason for them to be in objects. Instead they are exposed from a library and used by the objects that need them. Then there are some math functions that were made object-oriented because their structure naturally led to this setup (taking a list of points and transforming it into one of several different types of cutting paths). Again, use your best judgment.
Part of your answer depends on what language you're using. I know that in Python, it's pretty simple to move procedural code into a class, or a more formal object.
One of my heuristics is based on how stateful the situation is. If the procedure pollutes the namespace, or could possibly affect the global state (in a bad or unpredictable way), then encapsulating that function in an object or class is probably wise.
My two cents...
Advantages of procedural programming
Simple design (fast proof of concept, good for battling dramatically dynamic requirements)
Simple inter-project communications
Natural when temporal order matters
Less overhead at runtime
The better procedural code becomes, the closer it gets to functional code. And the advantages of FP are well known.
I always begin designing in a top-down fashion, and in the top parts it's much easier to think in OOP terms. But when the time comes to code some small, specific parts, you are much more productive with plain procedural programming.
OOP is cool for designing and shaping the project, so that the divide-et-impera paradigm can be applied. But you cannot apply it to every aspect of your code as if it were a religion :)
If you "think OO" when you're programming, then I'm not sure it makes sense to ask "when should I revert to procedural programming?" This is equivalent to asking java programmers what they can't do as well because java requires classes. (Ditto .NET languages).
If you have to make an effort to get past thinking procedurally, then I'd advise asking about how you can overcome that (if you care to); otherwise stay with procedural. If it's that much effort to get into OOP-mode, your OOP code probably won't work very well anyway (until you get further along the learning curve.)
IMHO, the long term benefits of OOP outweigh the time saved in the short term.
Like AZ said, using OOP in a procedural fashion (which I do quite a bit), is a good way to go (for smaller projects). The bigger the project, the more OOP you should employ.
You can write bad software in both concepts. Still, complex software is much easier to write, understand and maintain in OO languages than in procedural ones. I wrote highly complex ERP applications in a procedural language (Oracle PL/SQL) and then switched to OOP (C#). It was and still is a breath of fresh air.
To this point, the arguments for using OO for DRY and encapsulation often just add unnecessary complexity, both in how implicit the behavior becomes and in the sheer number of layers through which a class can inherit properties and methods.
Not to mention that it's really hard to design a good OO hierarchy, because you end up adding unrelated or unnecessary things that get inherited throughout the whole tree of classes. That is really bad: if one parent class gets messy, the whole codebase gets messy and has to be refactored.
There's also the fact that inherited properties often don't specifically fit the use case of the class that inherits them, which then has to override them; and the classes that don't need them at all carry them around for no good reason.
For something that does not need to be shared there are abstract properties, sure, but then you end up having to implement them in every class that inherits them.
This kind of inheritance is just too magical and gets dangerous.
But I'd give OO credit for how good it is at enforcing what should be available. Then again, that's a lot of power that is really easy to misuse.
In my opinion, final classes should be the default, and you should have to deliberately choose to allow inheritance from a class.
Most studies have found that OO code is more concise than procedural code. If you look at projects that rewrote existing C code in C++ (not something I necessarily advise, BTW), you normally see reductions in code size of between 50 and 75 percent.
So the answer is - always use OO!
What are the differences between these programming paradigms, and are they better suited to particular problems or do any use-cases favour one over the others?
Architecture examples appreciated!
All of them are good in their own ways - They're simply different approaches to the same problems.
In a purely procedural style, data tends to be highly decoupled from the functions that operate on it.
In an object oriented style, data tends to carry with it a collection of functions.
In a functional style, data and functions tend toward having more in common with each other (as in Lisp and Scheme) while offering more flexibility in terms of how functions are actually used. Algorithms tend also to be defined in terms of recursion and composition rather than loops and iteration.
Of course, the language itself only influences which style is preferred. Even in a pure-functional language like Haskell, you can write in a procedural style (though that is highly discouraged), and even in a procedural language like C, you can program in an object-oriented style (such as in the GTK+ and EFL APIs).
To be clear, the "advantage" of each paradigm is simply in the modeling of your algorithms and data structures. If, for example, your algorithm involves lists and trees, a functional algorithm may be the most sensible. Or, if, for example, your data is highly structured, it may make more sense to compose it as objects if that is the native paradigm of your language - or, it could just as easily be written as a functional abstraction of monads, which is the native paradigm of languages like Haskell or ML.
The choice of which you use is simply what makes more sense for your project and the abstractions your language supports.
I think the available libraries, tools, examples, and communities completely trumps the paradigm these days. For example, ML (or whatever) might be the ultimate all-purpose programming language but if you can't get any good libraries for what you are doing you're screwed.
For example, if you're making a video game, there are more good code examples and SDKs in C++, so you're probably better off with that. For a small web application, there are some great Python, PHP, and Ruby frameworks that'll get you off and running very quickly. Java is a great choice for larger projects because of the compile-time checking and enterprise libraries and platforms.
It used to be the case that the standard libraries for different languages were pretty small and easily replicated - C, C++, Assembler, ML, LISP, etc. came with the basics, but tended to chicken out when it came to standardizing on things like network communications, encryption, graphics, and data file formats (including XML); even basic data structures like balanced trees and hashtables were left out!
Modern languages like Python, PHP, Ruby, and Java now come with a far more decent standard library and have many good third party libraries you can easily use, thanks in great part to their adoption of namespaces to keep libraries from colliding with one another, and garbage collection to standardize the memory management schemes of the libraries.
These paradigms don't have to be mutually exclusive. If you look at python, it supports functions and classes, but at the same time, everything is an object, including functions. You can mix and match functional/oop/procedural style all in one piece of code.
What I mean is, in functional languages (at least in Haskell, the only one I studied) there are no statements! Functions are only allowed one expression inside them! But functions are first-class citizens: you can pass them around as parameters, along with a bunch of other abilities. They can do powerful things with few lines of code.
While in a procedural language like C, the only way you can pass functions around is by using function pointers, and that alone doesn't enable many powerful tasks.
In Python, a function is a first-class citizen, but it can contain an arbitrary number of statements. So you can have a function that contains procedural code, but you can still pass it around just like in functional languages.
Same goes for OOP. A language like Java doesn't allow you to write procedures/functions outside of a class. The only way to pass a function around is to wrap it in an object that implements that function, and then pass that object around.
In Python, you don't have this restriction.
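For example, here is a small hypothetical Python sketch (the names shout and apply_twice are made up) of a function with a full procedural body being passed around as a value:

    # A function containing ordinary statements...
    def shout(text):
        result = text.upper()   # multiple statements and local state
        result += "!"
        return result

    # ...can still be passed around as a first-class value.
    def apply_twice(func, value):
        return func(func(value))

    print(apply_twice(shout, "hello"))  # HELLO!!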
For GUIs I'd say that the object-oriented paradigm is very well suited. The window is an object, the textboxes are objects, and the Okay button is one too. On the other hand, stuff like string processing can be done with much less overhead, and therefore more straightforwardly, with the simple procedural paradigm.
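As a rough illustration (a minimal hypothetical sketch using Python's standard tkinter module; the widget choices are mine, not the answerer's):

    import tkinter as tk

    root = tk.Tk()              # the window is an object
    textbox = tk.Entry(root)    # the textbox is an object
    okay = tk.Button(root, text="Okay", command=root.destroy)  # so is the button
    textbox.pack()
    okay.pack()
    root.mainloop()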
I don't think it is a question of the language either. You can write functional, procedural or object-oriented code in almost any popular language, although it might take some additional effort in some.
In order to answer your question, we need two elements:
Understanding of the characteristics of different architecture styles/patterns.
Understanding of the characteristics of different programming paradigms.
A list of software architecture styles/patterns is shown in the software architecture article on Wikipedia. And you can research them easily on the web.
In short and general, Procedural is good for a model that follows a procedure, OOP is good for design, and Functional is good for high level programming.
I think you should try reading the history of each paradigm and see why people created it; that way you can understand them easily.
After understanding them both, you can link the items of architecture styles/patterns to programming paradigms.
I think that they are often not "versus", but you can combine them. I also think that oftentimes, the words you mention are just buzzwords. There are few people who actually know what "object-oriented" means, even if they are the fiercest evangelists of it.
One of my friends is writing a graphics app using NVIDIA CUDA. The application fits in very nicely with the OOP paradigm and the problem can be decomposed into modules neatly. However, to use CUDA you need to use C, which doesn't support inheritance. Therefore, you need to be clever.
a) You devise a clever system which will emulate inheritance to a certain extent. It can be done!
i) You can use a hook system, which expects every child C of parent P to have a certain override for a function F. You can make children register their overrides, which will be stored and called when required. (A minimal sketch of this idea appears at the end of this answer.)
ii) You can exploit struct memory layout (embedding the parent struct as the first member of the child) to cast children into parents.
This can be neat, but it's not easy to come up with a future-proof, reliable solution. You will spend lots of time designing the system and there is no guarantee that you won't run into problems halfway through the project. Implementing multiple inheritance is even harder, if not almost impossible.
b) You can use consistent naming policy and use divide and conquer approach to create a program. It won't have any inheritance but because your functions are small, easy-to-understand and consistently formatted you don't need it. The amount of code you need to write goes up, it's very hard to stay focused and not succumb to easy solutions (hacks). However, this ninja way of coding is the C way of coding. Staying in balance between low-level freedom and writing good code. Good way to achieve this is to write prototypes using a functional language. For example, Haskell is extremely good for prototyping algorithms.
I tend towards approach b. I wrote a possible solution using approach a, and I will be honest, it felt very unnatural using that code.
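To make the "hook system" from option a) i) a bit more concrete, here is a minimal registration-and-dispatch sketch. It is written in Python purely for brevity (in the original C context the registry would be a table of function pointers keyed by a type id), and all the names are assumptions for illustration, not the actual project code:

    # Registry of overrides: (type_name, hook_name) -> callable
    _overrides = {}

    def register_override(type_name, hook_name, func):
        _overrides[(type_name, hook_name)] = func

    def dispatch(type_name, hook_name, *args):
        # Use the child's override if it registered one, otherwise fall back
        # to the "base" (parent) implementation.
        func = _overrides.get((type_name, hook_name), _overrides[("base", hook_name)])
        return func(*args)

    # The "parent" provides a default; "children" register their own versions.
    register_override("base", "render", lambda obj: "render %s generically" % obj)
    register_override("sprite", "render", lambda obj: "render %s with textures" % obj)

    print(dispatch("sprite", "render", "player"))  # child override
    print(dispatch("mesh", "render", "terrain"))   # falls back to base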
Since I started studying object-oriented programming, I frequently read articles/blogs saying functions are better, or not all problems should be modeled as objects. From your personal programming adventures, when do you think a problem is better solved by OOP?
There is no hard and fast rule. A problem is better solved with OOP when you are better at solving problems and thinking in an OO mentality. Object Orientation is just another tool which has come along through trying to make computing a better tool for solving problems.
However, it can allow for better code reuse, and can also lead to neater code. But quite often these highly praised qualities are, in reality, of little real value. Applying OO techniques to an existing functional application could really cause a lot of problems. The skill lies in learning many different techniques and applying the most appropriate one to the problem at hand.
OO is often quoted as a Nirvana-like solution to software development, however there are many times when it is not appropriate for the issue at hand. It can quite often lead to over-engineering a problem to reach the "perfect" solution, when that is really not necessary.
In essence, OOP is not really Object Oriented Programming, but mapping Object Oriented Thinking to a programming language capable of supporting OO Techniques. OO techniques can be supported by languages which are not inherently OO, and there are techniques you can use within functional languages to take advantage of the benefits.
As an example, I have been developing OO software for about 20 years now, so I tend to think in OO terms when solving problems, irrespective of the language I am writing in. Currently I am implementing polymorphism using Perl 5.6, which does not natively support it. I have chosen to do this as it will make maintenance and extension of the code a simple configuration task, rather than a development issue.
Not sure if this is clear. There are people who are hard in the OO court, and there are people who are hard in the Functional court. And then there are people who have tried both and try to take the best from each. Neither is perfect, but both have some very good traits that you can utilise no matter what the language.
If you are trying to learn OOP, don't just concentrate on the programming itself, but try to apply Object Oriented Analysis and general OO principles to the whole spectrum of the problem solution.
I'm an old timer, but have also programmed OOP for a long time. I am personally against using OOP just to use it. I prefer objects to have specific reasons for existing, that they model something concrete, and that they make sense.
The problem that I have with a lot of the newer developers is that they have no concept of the resources that they are consuming with the code that they create. When dealing with a large amount of data and accessing databases the "perfect" object model may be the worst thing you can do for performance and resources.
My bottom line is if it makes sense as an object then program it as an object, as long as you consider the performance/resource impact of the implementation of your object model.
I think it fits best when you are modeling something cohesive with state and associated actions on those states. I guess that's kind of vague, but I'm not sure there is a perfect answer here.
The thing about OOP is that it lets you encapsulate and abstract data and information away, which is a real boon in building a large system. You can do the same with other paradigms, but it seems OOP is especially helpful in this category.
It also kind of depends on the language you are using. If it is a language with rich OOP support, you should probably use that to your advantage. If it doesn't, then you may need to find other mechanisms to help break up the problem into smaller, easily testable pieces.
I am sold on OOP.
Anytime you can define a concept for a problem, it can probably be wrapped in an object.
The problem with OOP is that some people overused it and made their code even more difficult to understand. If you are careful about what you put in objects and what you put in services (static classes) you will benefit from using objects.
Just don't force something into an object where it doesn't belong simply because you need your object to do something new that you didn't think of initially; refactor and find the best way to add that functionality.
There are 5 criteria for whether you should favor Object Oriented over Object Based, Functional or Procedural code. Remember, all of these styles are available in all languages; they're styles. All of these are written in the form of "Should I favor OO in this situation?"
1. The system is very complex and has more than roughly 9k LOC (just an arbitrary level). -- As systems get more complex, the benefits gained by encapsulating complexity go up quite a bit. With OO, as opposed to the other techniques, you tend to encapsulate more and more of the complexity, which is very valuable at this level. Object Based or procedural should be favored before this. (This is not advocating a particular language, mind you. OO C fits these criteria better than OO C++ in my mind, the latter being a language with a notorious reputation for leaky abstractions and an ability to eat shops with even one mediocre/obstinate programmer for lunch.)
2. Your code is not operations on data (i.e. database based or math/analysis based). Database-based code is often more easily represented in a procedural style. Analysis-based code is often more easily represented in a functional style.
3. Your model is a simulation of something (OO excels at simulations).
4. You're doing something for which the object-based subtype dispatch of OO is valuable (i.e. you need to send a message to all objects of a certain type and its various subtypes and get an appropriate, but different, reaction out of all of them; see the sketch just after this list).
5. Your app is not multi-threaded, especially in a non-worker-task type of codebase. OO is quite problematic in programs which are multithreaded and require different threads to do different tasks. If your program is structured with one or two main threads and many worker threads doing the same thing, the muddled control flow of OO programs is easier to handle, as all of the worker threads will be isolated in what they touch and can be considered a monolithic section of code. Consider any other paradigm actually. Functional excels at multithreading (lack of side effects is a huge boon), and object-based programming can give you some of the encapsulation benefits of OO, with more traceable procedural code in critical sections of your codebase. Procedural of course excels in this arena as well.
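A minimal hypothetical Python sketch of criterion 4 (the class names are invented): one message is sent to a mixed collection, and each subtype reacts in its own way:

    class Animal:
        def make_noise(self):
            return "..."

    class Dog(Animal):
        def make_noise(self):
            return "woof"

    class Cat(Animal):
        def make_noise(self):
            return "meow"

    # The caller sends the same message to everything and doesn't care which
    # subtype it is talking to.
    for animal in [Dog(), Cat(), Animal()]:
        print(animal.make_noise())   # woof, meow, ...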
Some places where OO isn't so good are where you're dealing with "sets" of data, as in SQL. OO tends to make set-based operations more difficult because it isn't really designed to optimally take the intersection of two sets or the superset of two sets.
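For instance (a tiny hypothetical Python sketch, just to illustrate the contrast in thinking; the data is made up):

    gold_customers = {"ann", "bob", "carol"}
    recent_buyers = {"bob", "dave"}

    # Set-based thinking (what SQL is built for): one declarative intersection.
    both = gold_customers & recent_buyers        # {'bob'}

    # Typical object-by-object thinking: walk the items and accumulate by hand.
    both_loop = set()
    for name in gold_customers:
        if name in recent_buyers:
            both_loop.add(name)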
Also, there are times when a functional approach would make more sense such as this example taken from MSDN:
Consider, for example, writing a program to convert an XML document into a different form of data. While it would certainly be possible to write a C# program that parsed through the XML document and applied a variety of if statements to determine what actions to take at different points in the document, an arguably superior approach is to write the transformation as an eXtensible Stylesheet Language Transformation (XSLT) program. Not surprisingly, XSLT has a large streak of functionalism inside of it
I find it helps to think of a given problem in terms of 'things'.
If the problem can be thought of as having one or more 'things', where each 'thing' has a number of attributes or pieces of information that refer to its state, and a number of operations that can be performed on it - then OOP is probably the way to go!
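For example, here is a made-up Python sketch of such a 'thing', with state and the operations that act on it (the class and method names are assumptions for illustration):

    class BankAccount:
        def __init__(self, owner, balance=0):
            self.owner = owner      # attribute describing the object's state
            self.balance = balance  # attribute describing the object's state

        def deposit(self, amount):   # operation performed on the state
            self.balance += amount

        def withdraw(self, amount):  # operation performed on the state
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    account = BankAccount("alice")
    account.deposit(100)
    account.withdraw(30)
    print(account.balance)  # 70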
The key to learning Object Oriented Programming is learning about design patterns. By learning about design patterns you can see better when classes are needed and when they are not. Like anything else used in programming, the use of classes and other features of OOP languages depends on your design and requirements. Like algorithms, design patterns are a higher-level concept.
A design pattern plays a similar role to that of algorithms in traditional programming languages. A design pattern tells you how to create and combine objects to perform some useful task. Like the best algorithms, the best design patterns are general enough to be applicable to a variety of common problems.
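As one small example, here is a hypothetical Python sketch of the classic Strategy pattern (the class names are invented): objects are combined so that the algorithm can be swapped without changing the caller:

    class ZipCompressor:
        def compress(self, data):
            return b"ZIP:" + data

    class GzipCompressor:
        def compress(self, data):
            return b"GZ:" + data

    class Archiver:
        def __init__(self, compressor):
            self.compressor = compressor   # the interchangeable strategy object

        def archive(self, data):
            return self.compressor.compress(data)

    print(Archiver(ZipCompressor()).archive(b"hello"))   # b'ZIP:hello'
    print(Archiver(GzipCompressor()).archive(b"hello"))  # b'GZ:hello'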
In my opinion it is more a question about you as a person. Certain people think better in functional terms and others prefer classes and objects. I would say that OOP is better suited when it matches your internal (subjective) mental model of the world.
Object oriented code and procedural code have different extensibility points. Object oriented solutions make it easier to add new classes without modifying existing functions (see the Open-Closed Principle), while procedural code allows you to add functions without modifying existing data structures. Quite often different parts of a system require different approaches depending upon the type of change that is anticipated.
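A hypothetical Python sketch of those two extension points (the shape classes are invented for illustration, not taken from the answer):

    import math

    # Object-oriented extension point: adding a new shape class later requires
    # no change to total_area().
    class Circle:
        def __init__(self, radius):
            self.radius = radius

        def area(self):
            return math.pi * self.radius ** 2

    class Square:
        def __init__(self, side):
            self.side = side

        def area(self):
            return self.side ** 2

    def total_area(shapes):
        return sum(s.area() for s in shapes)   # closed to modification

    # Procedural extension point: adding a new *operation* (perimeter) requires
    # no change to the existing data structures, but the new function must
    # handle every existing kind of data itself.
    def perimeter(shape):
        if isinstance(shape, Circle):
            return 2 * math.pi * shape.radius
        if isinstance(shape, Square):
            return 4 * shape.side
        raise TypeError("unknown shape")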
OO allows for logic related to an object to be placed within a single place (the class, or object) so that it can be decoupled and easier to debug and maintain.
What I have observed is that every app is a combination of OO and procedural code, where the procedural code is the glue that binds all your objects together (at the very least, the code in your main function). The more you can turn your procedural code into OO, the easier it will be to maintain your code.
Why OOP is used for programming:
Its flexibility – OOP is really flexible in terms of how implementations can be used and swapped.
It can greatly reduce the amount of source code you have to write and maintain, because common behaviour is reused instead of repeated.
It's much easier to implement security – we all know that security is one of the vital requirements when it comes to web development, and using OOP can ease the security implementations in your web projects.
It makes the code more organized – clean code starts with clean organization, and using OOP instead of purely procedural code makes things more organized and systematized.
It helps your team work with each other easily – those of you who have experienced team projects know that it's important to share the same methods, implementations, algorithms and so on.
It depends on the problem: the OOP paradigm is useful for designing distributed systems or frameworks with many entities living throughout the user's actions (for example, a web application).
But if you have a math problem you may prefer a functional language (LISP); for performance-critical systems you would use Ada or C, and so on.
OOP languages are also convenient because they usually come with a garbage collector (automatic memory management) at run time; if you program in C, you often have to debug and fix memory problems manually.
OOP is useful when you have things: a socket, a button, a file. If you end a class name in -er, it is almost always a function that is pretending to be a class. A TestRunner more than likely should be a function that runs tests (and probably named run_tests).
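A quick hypothetical Python sketch of that point (the names are invented):

    # A class whose only job is to wrap one action...
    class TestRunner:
        def __init__(self, tests):
            self.tests = tests

        def run(self):
            return [test() for test in self.tests]

    # ...is usually clearer as a plain function.
    def run_tests(tests):
        return [test() for test in tests]

    # run_tests([lambda: True, lambda: 1 + 1 == 2])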
Personally, I think OOP is practically a necessity for any large application. I can't imagine having a program over 100k lines of code without using OOP, it would be a maintenance and design nightmare.
I'll tell you when OOP is bad.
When the architect writes really complicated, undocumented OOP code, leaves halfway through the project, and many of the common code pieces he used across various projects are missing. Thank god for .NET Reflector.
And the organization was not running Visual SourceSafe or Subversion.
And I'm sorry, but two pages of code just to log in is rather ridiculous, even if it is cutely OOPed...