Aspect-oriented programming is a subject that has been very difficult for me to find any good information on. My old Software Engineering textbook only mentions it briefly (and vaguely), and Wikipedia and the various other tutorials/articles I've been able to find give ultra-academic, highly abstracted definitions of just what it is, how to use it, and when to use it. Definitions I just don't seem to understand.
My (very poor) understanding of AOP is that there are many aspects of producing a high-quality software system that don't fit neatly into a nice little cohesive package. Some classes, such as Loggers, Validators, DatabaseQueries, etc., will be used all over your codebase and thus will be highly-coupled. My (again, very poor) understanding of AOP is that it is concerned with the best practices of how to handle these types of "universally-coupled" packages.
Question: Is this true, or am I totally off? If I'm completely wrong, can someone please give a concise, layman's explanation of what AOP is, an example of a so-called aspect, and perhaps even a simple code example?
Separation of Concerns is a fundamental principle in software development. There is a classic paper by David Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, that may introduce you to the subject; also read Uncle Bob's SOLID Principles.
But then there are cross-cutting concerns that show up in many use cases, like authentication, authorization, validation, logging, transaction handling, exception handling, caching, etc., and that span all the layers of a piece of software. If you want to tackle the problem without duplication, following the DRY principle, you must handle them in a more sophisticated way.
You must use declarative programming, which in .NET can be as simple as annotating a method or a property with an attribute; what happens later is that the behavior of the code changes at runtime depending on those annotations.
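A minimal sketch of what that looks like (the [Cacheable] attribute and PriceService class below are invented for illustration, not from any particular framework): the attribute only declares intent, and some piece of infrastructure reads it at runtime and changes the behaviour accordingly.

    using System;
    using System.Linq;
    using System.Reflection;

    // Hypothetical marker attribute: by itself it does nothing but declare intent.
    [AttributeUsage(AttributeTargets.Method)]
    public class CacheableAttribute : Attribute { }

    public class PriceService
    {
        [Cacheable]   // declarative: we state what we want, not how it is done
        public decimal GetPrice(string sku) => 42m;
    }

    public static class Demo
    {
        public static void Main()
        {
            // Some piece of infrastructure (an AOP framework, a proxy, a container)
            // inspects the annotation at runtime and decides how to wrap the call.
            MethodInfo method = typeof(PriceService).GetMethod(nameof(PriceService.GetPrice));
            bool cacheable = method.GetCustomAttributes<CacheableAttribute>().Any();
            Console.WriteLine(cacheable
                ? "GetPrice is marked [Cacheable]; a framework would serve it from a cache."
                : "GetPrice is called directly.");
        }
    }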
You can find a nice chapter on this topic in Sommerville's Software Engineering book.
Useful links
C2 wiki CrossCuttingConcern, MSDN, How to Address Crosscutting Concerns in Aspect Oriented Software Development
AOP is a technique where we extract and remove the cross-cutting concerns (logging, exception handling, ...) from our code into their own aspects, leaving the original code focused only on the business logic. Not only does this make our code more readable and maintainable, it also keeps the code DRY.
This can be better explained by an example:
Aspect Oriented Programming (AOP) in the .net world using Castle Windsor
or
Aspect Oriented Programming (AOP) in the .net world using Unity
AOP is about cross-cutting concerns, i.e. things that you need to do throughout the whole application. For instance, logging. Suppose you want to trace when you enter and exit a method. This is very easy with aspects. You basically specify a "handler" for an event, such as entering a method. If necessary you can also specify with "wildcards" which methods you are interested in, and then it is just a matter of writing the handler code, which could for instance log some info.
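A rough sketch of that idea, using Castle DynamicProxy (the proxying library behind Castle Windsor, mentioned in the link above). The IOrderService/OrderService types are invented for the example, and a real Windsor setup would register the interceptor through the container rather than creating the proxy by hand:

    using System;
    using Castle.DynamicProxy;   // assumes the Castle.Core package is referenced

    // The "handler": runs around every intercepted method call.
    public class LoggingInterceptor : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            Console.WriteLine("Entering " + invocation.Method.Name);
            invocation.Proceed();                        // call the real method
            Console.WriteLine("Exiting " + invocation.Method.Name);
        }
    }

    // Hypothetical business code, used only for illustration.
    public interface IOrderService { void PlaceOrder(int id); }
    public class OrderService : IOrderService
    {
        public void PlaceOrder(int id) => Console.WriteLine("Order " + id + " placed.");
    }

    public static class Demo
    {
        public static void Main()
        {
            var generator = new ProxyGenerator();
            IOrderService service = generator.CreateInterfaceProxyWithTarget<IOrderService>(
                new OrderService(), new LoggingInterceptor());

            // Logs entry/exit without PlaceOrder knowing anything about logging.
            service.PlaceOrder(7);
        }
    }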
Aspect Oriented Programming is basically about separating the cross-cutting (non-functional) concerns, such as security, logging, and monitoring, and developing them as aspects kept to one side; whenever you need one in your application, you can use it plug-and-play. The benefits are cleaner, smaller code, and programmers can focus on the business logic (the core concerns), so a better-modularized, higher-quality system can be developed.
Related
I am working on a very large ASP.NET application. The problem is that there is not a lot of logic behind the design. The original developer chose about ten classes and that was it. There is high coupling and low cohesion. For example, clsPerson holds all the functionality for Person and breaks most of the rules of SOLID.
I have started to incorporate design patterns into my toolkit. My question is: what is the best way to incorporate badly designed classes into the better design? For example, if you had a class clsStudent that contained all the Student functionality to date and then created a class called clsUndergraduate, would you simply derive it from clsStudent? I realise that a lot of this depends on context, but I am looking for general guidelines.
There is a lot of information online that talks about SOLID, but not a lot that talks about how to adapt an existing application to be SOLID.
It could be argued that you should only aim to implement SOLID principles as and when the need arises. However, I'm an advocate, and believe that it's not difficult to add elements of SOLIDity to your design without too much overhead or heartache.
Trying to move an existing model to a more SOLID model can be difficult. I would suggest taking small, manageable parts and gradually refactoring. If you have the safety net of automated tests, this should be achievable with confidence. If not, make sure you fully understand the scope of the changes you are making. It's easy to introduce subtle bugs. Ultimately, serious changes are likely to require new tests anyway.
Complying with the SRP is likely to be the easiest place to start. Try to define the main responsibility of each class. If they currently have more than one responsibility, note them and look at how these responsibilities can be moved out and elsewhere. For example, many 'God' classes will be managing persistence, validation, initialisation, etc. along with business logic. See if you can begin to take the persistence code out and put it into mappers/repositories/etc.
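As a sketch of what that first step might look like (the Person and repository names here are illustrative, not taken from the application being discussed), the persistence responsibility moves out of the 'God' class into a repository that callers reach through an interface:

    using System;

    // Business behaviour stays on the entity; Save()/Load() no longer live here.
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime DateOfBirth { get; set; }

        public bool IsAdult() => DateOfBirth <= DateTime.Today.AddYears(-18);
    }

    // The persistence responsibility gets its own abstraction...
    public interface IPersonRepository
    {
        Person GetById(int id);
        void Save(Person person);
    }

    // ...and one concrete implementation owns the SQL/ORM details.
    public class SqlPersonRepository : IPersonRepository
    {
        public Person GetById(int id)
        {
            // run the query, map the row to a Person (omitted)
            return new Person { Id = id };
        }

        public void Save(Person person)
        {
            // INSERT/UPDATE goes here (omitted)
        }
    }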
If you do this logically and sequentially, my experience is that, as you go (making lots of mistakes along the way), the relevance and importance of the other principles will emerge and make themselves more or less obvious.
I found that as I experimented with SOLID principles, reading and re-reading (mainly Bob Martin and ObjectMentor), more clarity emerged once I had practical experience of implementation. Don't forget the opening principles defined by the Gang of Four, either. Concepts like 'favour composition over inheritance' go hand-in-hand with SOLID OO principles.
Bear in mind that SOLID code is generally more complex than non-SOLID code and can, therefore, be harder to maintain and debug by those not familiar with it. Skeptics might lay the accusation of 'over-engineering' at your door, which can be difficult to argue against with those who haven't wrestled with enhancing/fixing tightly coupled, incohesive code.
Good luck. Please feel free to ask more, as I'd love to hear the input of others on this subject. Its scope is wide and varied.
I asked this question about Microsoft .NET Libraries and the complexity of their source code. From what I'm reading, writing general purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client, who could literally be anyone (supposing I release the library for use by the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's one objective answer. The Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good, and the often dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch in his presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be the best advice I got:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
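To make 'easy to use correctly, hard to use incorrectly' concrete, here is a small invented example: a value type that validates once at construction and then cannot be put into a bad state, so no caller can forget the range check.

    using System;

    // Hard to misuse: the rule is enforced once, at the only place a value can be created.
    public sealed class Percentage
    {
        public double Value { get; }

        public Percentage(double value)
        {
            if (value < 0 || value > 100)
                throw new ArgumentOutOfRangeException(nameof(value), "Must be between 0 and 100.");
            Value = value;
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var discount = new Percentage(42.5);
            Console.WriteLine(discount.Value);

            // new Percentage(250);  // fails immediately, at the point of the mistake,
            //                       // instead of corrupting state somewhere downstream
        }
    }

Compare that with passing a raw double around: every caller can pass 250 or -3, and every callee has to re-check.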
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But it turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to a January 2007 one.) Well, I'll leave this answer here — you're probably best off watching both and seeing how his message and way of presenting it has evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007:
"How To Design A Good API and Why it
Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½ minute bit of wisdom by Josh Bloch on why writing libraries is hard, and why it's still worth putting effort in it (economies of scale) — in a response to a question wondering, basically, "how hard can it be". (Part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You're probably not going to find much use for some of them, but as you get deeper into your library design the patterns will become more applicable. I'd also pick up a copy of NDepend - a great code-measuring utility which may help you decouple things better. You can use the .NET libraries as an example, but, personally, I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a Framework. It's a set of types meant to not only provide functionality, but to be extended by your own code. For instance, it does provide you with the Stream abstract class, and with concrete implementations like the NetworkStream class, but it also provides you the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest, which can have its own GetResponse method returning its own class derived from WebResponse, such that calling GetResponseStream will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
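For the curious, the extension point being described is WebRequest.RegisterPrefix plus IWebRequestCreate. The sketch below (with an invented "myschema" scheme) shows only the shape of it; a real implementation would override more members (GetRequestStream, the async variants, headers, and so on).

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    public class MySchemaRequest : WebRequest
    {
        private readonly Uri uri;
        public MySchemaRequest(Uri uri) { this.uri = uri; }

        public override WebResponse GetResponse() => new MySchemaResponse(uri);
    }

    public class MySchemaResponse : WebResponse
    {
        private readonly Uri uri;
        public MySchemaResponse(Uri uri) { this.uri = uri; }

        // Callers just see a Stream, exactly as they would for HTTP or file responses.
        public override Stream GetResponseStream() =>
            new MemoryStream(Encoding.UTF8.GetBytes("hello from " + uri));
    }

    public class MySchemaCreator : IWebRequestCreate
    {
        public WebRequest Create(Uri uri) => new MySchemaRequest(uri);
    }

    public static class Demo
    {
        public static void Main()
        {
            WebRequest.RegisterPrefix("myschema", new MySchemaCreator());

            // Returns a MySchemaRequest, but the caller neither knows nor cares.
            WebRequest request = WebRequest.Create("myschema://host/more");
            using (var reader = new StreamReader(request.GetResponse().GetResponseStream()))
                Console.WriteLine(reader.ReadToEnd());
        }
    }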
A separate point is that for most developers, creating a reusable library is not, and should not be the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)
What are the possible and critical disadvantages of Aspect-Oriented Programming?
For example: cryptic debugging for newbies (readability impact)
I think the biggest problem is that nobody knows how to define the semantics of an aspect, or how to declare join points non-procedurally.
If you can't define what an aspect does independently of the context in which it will be embedded, or define the effects that it has in such a way that it doesn't damage the context in which it is embedded, you (and tools) can't reason about what it does reliably. (You'll note the most common example of aspects is "logging", which is defined as "write some stuff to a log stream the application doesn't know anything about", because this is pretty safe). This violates David Parnas' key notion of information hiding. One of the worst examples of aspects that I see are ones that insert synchronization primitives into code; this affects the sequence of possible interactions that code can have. How can you possibly know this is safe (won't deadlock? won't livelock? won't fail to protect? is recoverable in the face of a thrown exception in a synchronization partner?) unless the application only does trivial things already?
Join points are now typically defined by providing some kind of identifier wildcarding (e.g., "put this aspect into any method named DataBaseAccess*"). For this to work, the folks writing the affected methods have to signal the intention for their code to be aspectized by naming their code in a funny way; that's hardly modular. Worse, why should the victim of an aspect even have to know that it exists? And consider what happens if you simply rename some methods; the aspect is no longer injected where it is needed, and your application breaks. What is needed are join point specifications which are intentional; somehow, the aspect has to know where it is needed without the programmers placing a neon sign at each usage point. (AspectJ has some control-flow related join points which seem a bit better in this regard).
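A toy illustration of that fragility (not modelled on any particular framework's API): a name-based "pointcut" implemented with plain reflection. Rename one of the matching methods and the aspect silently stops applying to it.

    using System;
    using System.Linq;
    using System.Reflection;

    public class AccountService
    {
        public void DataBaseAccessSave() { /* ... */ }
        public void DataBaseAccessLoad() { /* ... */ }

        // If DataBaseAccessSave is later renamed to this, the wildcard below
        // no longer matches it: no compile error, no warning, the advice just vanishes.
        public void PersistAccount() { /* ... */ }
    }

    public static class Demo
    {
        public static void Main()
        {
            const string pointcutPrefix = "DataBaseAccess";   // i.e. "DataBaseAccess*"

            var matched = typeof(AccountService)
                .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                .Where(m => m.Name.StartsWith(pointcutPrefix, StringComparison.Ordinal));

            foreach (MethodInfo m in matched)
                Console.WriteLine("Aspect would be woven into: " + m.Name);
            // Prints only the two DataBaseAccess* methods; PersistAccount is silently missed.
        }
    }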
So aspects are an interesting idea, but I think they are technologically immature. And that immaturity makes their usage fragile. And that's where the problem lies.
(I'm a big fan of automated software engineering tools [see my bio] just not this way).
Poor toolchain support - debuggers, profilers etc may not know about the AOP and so may work on code as if all the aspects had been replaced by procedural code
Code bloat - small source can lead to much larger object code as code is "weaved" throughout the code base
I think the biggest disadvantage is using AOP well. People use it in places where it doesn't make sense, for example, and will not use it where it does.
For example, a factory pattern is not obviously something that AOP can do better, as DI can also do it well; but the observer pattern is simpler when using AOP, as is the strategy pattern.
It will be harder to unit test, especially if you do the weaving at runtime.
If you weave at runtime, then you also take a performance hit.
Having a good way to model what is going on when mixing AOP with classes is a problem, as I don't think UML is a good model at that point.
Unless you are using Eclipse, the tools do have problems; but with Eclipse and AJDT, AOP is much easier.
We still use JUnit and NUnit, for example, and so have to modify our code to allow unit tests to run. Using privileged mode, AOP could do better unit tests by also testing private methods, and we wouldn't have to change our programs just to make them work with unit testing. This is another example of not really understanding how AOP can be helpful: we are still chained in many ways to the past by unit-test frameworks and current design-pattern implementations, and don't see how AOP could help us write better code.
Maintenance and debugging. With AOP, you suddenly have code that is being run at a given point (method entry, exit, whatever), but just looking at the code, you have no clue that it's even getting called, especially if the AOP configuration is in another file, like an XML config. If the advice causes some changes, then while debugging an application, things may look strange with no explanation. This doesn't affect only newbies.
I would not call it a critical disadvantage, but the biggest problem I have seen is one of developer experience and ability to adapt. Not all developers yet understand the difference between declarative and imperative programming.
We make use of the Policy Injection Application Block in EntLib 4.1 pretty extensively, as well as Unity for DI, and it's just not something that sinks in quickly for some people. I find myself explaining over and over again to the same people why the application is not behaving in the way they expect. It generally starts with them explaining something and me saying "see that declaration above the method." :) Some people get it right away, love it and become extremely productive -- others struggle.
A learning curve is not unique to AOP, but it seems to have a higher learning curve than other things that your average developer encounters.
With regard to the maintenance/debugging argument, aspect-oriented programming tends to go hand-in-hand with all the other aspects of agile software-development practices.
These practices tend to remove debugging from the picture, replacing it with unit testing and test-driven development.
In addition, it may be much easier to maintain a small, clear code footprint with advice than a large, incomprehensible code footprint without advice (the advice being the thing that transforms a large, incomprehensible code footprint into a small, clear code footprint).
Because of the power of AOP, if there is a bug in your cross-cutting code, it can cause widespread problems. On the other hand, someone might change the join points in a program – e.g., by renaming or moving methods – in ways that the aspect writer did not expect, with unintended consequences. One advantage of modularizing crosscutting concerns is enabling one programmer to affect the entire system easily.
I'm stumped as to why adoption of AO has been so slow. There are plenty of rich implementations out there for the predominant languages. My guess is that, like OO in its day, it is enough of a paradigm shift that people don't recognize the places where it could be helping them.
So, beyond non-invasive logging, what are some of the ways that you have used, or plan to use AO, that reduces complexity, improves maintenance, enhances system "ilities"?
I am currently using AOP via EntLib / Unity in production for:
logging
caching
security
exception reporting
performance counters
Take a look at http://www.agileatwork.com/unit-of-work-with-unity-and-aspnet-mvc/ for an implementation of the unit of work pattern with AOP
    [UnitOfWork]
    public void Process(Job job)
    {
        ...
    }
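(The attribute is typically backed by a call handler. Roughly, the shape looks like the sketch below, using Unity's policy injection types and a System.Transactions scope; the linked article's actual implementation may differ, and the transaction strategy here is just an assumption.)

    using System;
    using System.Transactions;
    using Microsoft.Practices.Unity;
    using Microsoft.Practices.Unity.InterceptionExtension;

    // Applying [UnitOfWork] to a method causes Unity to wrap the call in this handler.
    public class UnitOfWorkAttribute : HandlerAttribute
    {
        public override ICallHandler CreateHandler(IUnityContainer container)
            => new UnitOfWorkCallHandler();
    }

    public class UnitOfWorkCallHandler : ICallHandler
    {
        public int Order { get; set; }

        public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
        {
            using (var scope = new TransactionScope())
            {
                IMethodReturn result = getNext()(input, getNext);   // run the decorated method

                if (result.Exception == null)
                    scope.Complete();                               // commit only on success

                return result;
            }
        }
    }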
Transaction management. I know it's a canonical use of AOP, but it really does shine when used for that.
And while I haven't had an opportunity to use it in a real-world situation, I see "around-advice" as being INCREDIBLY powerful, in particular for the value it adds to simplify the complexity of code by removing the need for many checks for rare conditions.
I agree for Spring AOP.
AOSD (we no longer speak about AOP, I don't know exactly why) is really useful for middleware/service-oriented architectures where you already have, by design, some modularity.
I've used it in this context for telephony services with a really limited billing service.
I've also used it to build a kind of modular interpreter/compiler in order to perform some analysis around some code.
To my mind, one problem is the pointcut languages, which can sometimes be tricky to use to describe exactly where you want to apply your advice. Another problem is composition; I don't know if it has been solved, but it can be difficult to understand what happens when you have to order your advices...
AOP is common, except people rarely call it AOP. Look at all the places in .NET programming where attributes are used. Attributes are essentially cross-cutting behaviors that can apply across many classes/methods/parameters.
More recently, the ASP.NET MVC platform has adopted heavy use of attributes, for a wide range of cross-cutting components such as security, data binding, and exception handling.
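For instance, a home-grown action filter (the TimedAttribute and OrdersController below are invented for illustration) applies a small cross-cutting behaviour declaratively, while the action itself stays free of timing code:

    using System.Diagnostics;
    using System.Web.Mvc;

    // Times every action it is applied to.
    public class TimedAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            filterContext.HttpContext.Items["timer"] = Stopwatch.StartNew();
        }

        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var stopwatch = (Stopwatch)filterContext.HttpContext.Items["timer"];
            stopwatch.Stop();
            Debug.WriteLine(filterContext.ActionDescriptor.ActionName
                + " took " + stopwatch.ElapsedMilliseconds + " ms");
        }
    }

    // The controller action stays pure business logic.
    public class OrdersController : Controller
    {
        [Timed]
        public ActionResult Index()
        {
            return View();
        }
    }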
In my experience with Spring, AOP seems to be pretty common.
I think the difficulty is that people are just not used to thinking in terms of aspects, and weaving code in, even at compile time, can be somewhat scary, as it is harder to see what is actually affecting each method, especially if you use a mixture of compile-time and run-time weaving.
I have used it in situations where I have one controller, and I add in whether it is a servlet or a web service, for example. I have also used it to abstract out the database, so that the database connections and database-optimized queries could be woven into the application.
The e hardware verification language uses AOP.
Since I started studying object-oriented programming, I frequently read articles/blogs saying functions are better, or not all problems should be modeled as objects. From your personal programming adventures, when do you think a problem is better solved by OOP?
There is no hard and fast rule. A problem is better solved with OOP when you are better at solving problems and thinking in an OO mentality. Object Orientation is just another tool which has come along through trying to make computing a better tool for solving problems.
However, it can allow for better code reuse, and can also lead to neater code. But quite often these highly praised qualities are, in reality, of little real value. Applying OO techniques to an existing functional application could really cause a lot of problems. The skill lies in learning many different techniques and applying the most appropriate to the problem at hand.
OO is often quoted as a Nirvana-like solution to software development; however, there are many times when it is not appropriate to apply to the issue at hand. It can, quite often, lead to over-engineering of a problem to reach the perfect solution, when often that is really not necessary.
In essence, OOP is not really Object Oriented Programming, but mapping Object Oriented Thinking to a programming language capable of supporting OO Techniques. OO techniques can be supported by languages which are not inherently OO, and there are techniques you can use within functional languages to take advantage of the benefits.
As an example, I have been developing OO software for about 20 years now, so I tend to think in OO terms when solving problems, irrespective of the language I am writing in. Currently I am implementing polymorphism using Perl 5.6, which does not natively support it. I have chosen to do this as it will make maintenance and extension of the code a simple configuration task, rather than a development issue.
Not sure if this is clear. There are people who are hard in the OO court, and there are people who are hard in the Functional court. And then there are people who have tried both and try to take the best from each. Neither is perfect, but both have some very good traits that you can utilise no matter what the language.
If you are trying to learn OOP, don't just concentrate on OOP, but try to apply Object Oriented Analysis and general OO principles to the whole spectrum of the problem solution.
I'm an old timer, but have also programmed OOP for a long time. I am personally against using OOP just to use it. I prefer objects to have specific reasons for existing, that they model something concrete, and that they make sense.
The problem that I have with a lot of the newer developers is that they have no concept of the resources that they are consuming with the code that they create. When dealing with a large amount of data and accessing databases the "perfect" object model may be the worst thing you can do for performance and resources.
My bottom line is if it makes sense as an object then program it as an object, as long as you consider the performance/resource impact of the implementation of your object model.
I think it fits best when you are modeling something cohesive with state and associated actions on those states. I guess that's kind of vague, but I'm not sure there is a perfect answer here.
The thing about OOP is that it lets you encapsulate and abstract data and information away, which is a real boon in building a large system. You can do the same with other paradigms, but it seems OOP is especially helpful in this category.
It also kind of depends on the language you are using. If it is a language with rich OOP support, you should probably use that to your advantage. If it doesn't, then you may need to find other mechanisms to help break up the problem into smaller, easily testable pieces.
I am sold on OOP.
Anytime you can define a concept for a problem, it can probably be wrapped in an object.
The problem with OOP is that some people overused it and made their code even more difficult to understand. If you are careful about what you put in objects and what you put in services (static classes) you will benefit from using objects.
Just don't put something that doesn't belong to an object in the object because you need your object to do something new that you didn't think of initially; refactor and find the best way to add that functionality.
There are 5 criteria for deciding whether you should favor Object Oriented over Object Based, Functional, or Procedural code. Remember, all of these styles are available in all languages; they're styles. All of these are written in the style of "Should I favor OO in this situation?"
The system is very complex and has over approximately 9k LOC (just an arbitrary level). -- As systems get more complex, the benefits gained by encapsulating complexity go up quite a bit. With OO, as opposed to the other techniques, you tend to encapsulate more and more of the complexity, which is very valuable at this level. Object Based or procedural should be favored before this. (This is not advocating a particular language, mind you. OO C fits these features more than OO C++ in my mind, the latter being a language with a notorious reputation for leaky abstractions and an ability to eat shops with even one mediocre/obstinate programmer for lunch.)
Your code is not operations on data (i.e. database-based or math/analysis-based). Database-based code is often more easily represented in a procedural style. Analysis-based code is often more easily represented in a functional style.
Your model is a simulation of something (OO excels at simulations).
You're doing something for which the object-based subtype dispatch of OO is valuable (i.e., you need to send a message to all objects of a certain type and various subtypes and get an appropriate, but different, reaction out of all of them; see the sketch after this list).
Your app is not multi-threaded, especially in a non-worker task method type of codebase. OO is quite problematic in programs which are multithreaded and require different threads to do different tasks. If your program is structured with one or two main threads and many worker threads doing the same thing, the muddled control flow of OO programs is easier to handle, as all of the worker threads will be isolated in what they touch and can be considered as a monolithic section of code. Consider any other paradigm actually. Functional excels at multithreading (lack of side effects is a huge boon), and object based programming can give you boons with some of the encapsulation of OO, however with more traceable procedural code in critical sections of your codebase. Procedural of course excels in this arena as well.
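As a minimal sketch of the subtype-dispatch point above (the shape classes are invented for the example): one call site sends the same message and each subtype reacts in its own way.

    using System;
    using System.Collections.Generic;

    public abstract class Shape
    {
        public abstract double Area();
    }

    public class Circle : Shape
    {
        public double Radius { get; set; }
        public override double Area() => Math.PI * Radius * Radius;
    }

    public class Rectangle : Shape
    {
        public double Width { get; set; }
        public double Height { get; set; }
        public override double Area() => Width * Height;
    }

    public static class Demo
    {
        public static void Main()
        {
            var shapes = new List<Shape>
            {
                new Circle { Radius = 1 },
                new Rectangle { Width = 2, Height = 3 }
            };

            // The same call on every element; each subtype answers in its own way.
            foreach (Shape s in shapes)
                Console.WriteLine(s.GetType().Name + ": " + s.Area());
        }
    }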
Some places where OO isn't so good are where you're dealing with "Sets" of data like in SQL. OO tends to make set based operations more difficult because it isn't really designed to optimally take the intersection of two sets or the superset of two sets.
Also, there are times when a functional approach would make more sense such as this example taken from MSDN:
Consider, for example, writing a program to convert an XML document into a different form of data. While it would certainly be possible to write a C# program that parsed through the XML document and applied a variety of if statements to determine what actions to take at different points in the document, an arguably superior approach is to write the transformation as an eXtensible Stylesheet Language Transformation (XSLT) program. Not surprisingly, XSLT has a large streak of functionalism inside of it
I find it helps to think of a given problem in terms of 'things'.
If the problem can be thought of as having one or more 'things', where each 'thing' has a number of attributes or pieces of information that refer to its state, and a number of operations that can be performed on it - then OOP is probably the way to go!
The key to learning Object Oriented Programming is learning about Design Patterns. By learning about design patterns you can see better when classes are needed and when they are not. Like anything else used in programming, the use of classes and other features of OOP languages depends on your design and requirements. Like algorithms, design patterns are a higher-level concept.
A Design Pattern plays a similar role to that of an algorithm in traditional programming. A design pattern tells you how to create and combine objects to perform some useful task. Like the best algorithms, the best design patterns are general enough to be applicable to a variety of common problems.
In my opinion it is more a question about you as a person. Certain people think better in functional terms and others prefer classes and objects. I would say that OOP is better suited when it matches your internal (subjective) mental model of the world.
Object oriented code and procedural code have different extensibility points. Object oriented solutions make it easier to add new classes without modifying existing functions (see the Open-Closed Principle), while procedural code allows you to add functions without modifying existing data structures. Quite often different parts of a system require different approaches depending upon the type of change that is anticipated.
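A tiny sketch of the OO side of that trade-off (the discount-rule names are invented): the aggregation code below never changes when a new rule is added; you only write one new class.

    using System.Collections.Generic;

    public interface IDiscountRule
    {
        decimal Apply(decimal price);
    }

    public class NoDiscount : IDiscountRule
    {
        public decimal Apply(decimal price) => price;
    }

    public class SeasonalDiscount : IDiscountRule
    {
        public decimal Apply(decimal price) => price * 0.9m;
    }

    // Adding, say, a LoyaltyDiscount later means writing one new class; this code stays untouched.
    public static class Checkout
    {
        public static decimal TotalOf(IEnumerable<(decimal Price, IDiscountRule Rule)> items)
        {
            decimal total = 0m;
            foreach (var item in items)
                total += item.Rule.Apply(item.Price);
            return total;
        }
    }

The procedural counterpart flips this: adding a new operation over a fixed set of data structures is the cheap direction instead.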
OO allows for logic related to an object to be placed within a single place (the class, or object) so that it can be decoupled and easier to debug and maintain.
What I have observed is that every app is a combination of OO and procedural code, where the procedural code is the glue that binds all your objects together (at the very least, the code in your main function). The more you can turn your procedural code into OO, the easier it will be to maintain your code.
Why OOP is used for programming:
Its flexibility – OOP is really flexible in terms of use and implementation.
It can reduce your source code by more than 99.9% – it may sound like I'm exaggerating, but it is true.
It's much easier to implement security – We all know that security is one of the vital requirements when it comes to web development. Using OOP can ease the security implementations in your web projects.
It makes the coding more organized – We all know that a clean program comes from clean coding. Using OOP instead of procedural code makes things more organized and systematized (obviously).
It helps your team to work with each other easily – I know some of you have experienced team projects, and some of you know that it's important to share the same methods, implementations, algorithms, etc.
It depends on the problem: the OOP paradigm is useful in designing distributed systems or frameworks with a lot of entities living throughout the actions of the user (for example, a web application).
But if you have a math problem you will prefer a functional language (LISP); for performance-critical systems you will use Ada or C, etc.
An OOP language is also useful because it probably uses a garbage collector (automatic memory management) while the program runs; if you program in C, a lot of the time you must debug and manually correct memory problems.
OOP is useful when you have things: a socket, a button, a file. If you end a class name in "er", it is almost always a function that is pretending to be a class. A TestRunner more than likely should be a function that runs tests (and probably be named something like runTests).
Personally, I think OOP is practically a necessity for any large application. I can't imagine having a program over 100k lines of code without using OOP, it would be a maintenance and design nightmare.
I'll tell you when OOP is bad.
When the architect writes really complicated, undocumented OOP code and leaves halfway through the project. And many of the common code pieces he used across various projects have missing code. Thank god for .NET Reflector.
And the organization was not running Visual Source Safe or Subversion.
And I'm sorry, but 2 pages of code to log in is rather ridiculous even if it is cutely OOPed....