Microsoft doesn't follow the UseGenericEventHandlerInstances FxCop rule - fxcop

I'm currently going through the process of running FxCop on our application, which has been in development for around a year already.
One rule that I am unsure of is UseGenericEventHandlerInstances. I understand how to implement it, but don't really see the benefit, other than saving the step of having to define a delegate for each custom event handler. In our case, we already have these delegates created, and I wonder whether we gain any benefit from changing our event handlers or should just disable this rule.
In my limited experience, almost all FxCop rules appear to be adhered to by Microsoft and the .NET Framework. However, in this case that is not true. The .NET Framework uses the traditional approach that we have used, declaring delegates, at least for all of the mouse and keyboard event handlers, which were the ones that I investigated.
Nobody knows why Microsoft does or does not do something, but this further makes me question the validity and benefit of this rule.

Most of the non-generic delegates that you see in recent versions of the .NET BCL were created before generics were introduced. Changing the pre-existing events to use EventHandler<TEventArgs> would be a breaking change, as would removing the public ___EventHandler delegates that might be used by other code. However, there is quite a bit of EventHandler<TEventArgs> use for new events exposed by the BCL, so it would be incorrect to assume that Microsoft is not abiding by the rule for new code.
In your case, if changing older event declarations would be a breaking change or would have low value, you could opt to suppress existing violations of the rule. You could, of course, also choose to ignore the rule even for new events, although I'm not sure why one would want to do so given that following the rule actually avoids unnecessary work for the original developer while enhancing API usability.
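To make the contrast concrete, here is a minimal sketch (the Order* names are invented for the example): both classes expose an event with the same signature, but only the first requires you to define and maintain a dedicated delegate type.

using System;

public class OrderEventArgs : EventArgs
{
    public int OrderId { get; set; }
}

// Traditional style: a custom delegate type must be declared for the event.
public delegate void OrderPlacedEventHandler(object sender, OrderEventArgs e);

public class OrderServiceOld
{
    public event OrderPlacedEventHandler OrderPlaced;
}

// Style the rule recommends: the generic delegate conveys the same information.
public class OrderServiceNew
{
    public event EventHandler<OrderEventArgs> OrderPlaced;
}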


Method JavaFx TreeItem getRoot() is not visible. What is the OOP/MVC reason it is not?

I needed to get the root item of a TreeView. The obvious way to get it is to use getRoot() on the TreeView, which I use.
I like to experiment, and I was wondering if I could get the same root by climbing up the tree from a leaf item (a TreeItem), recursively calling getParent() until the result is null.
It works as well, and in my custom TreeItem I added a public method getRoot() to play around with it, thus finding out that this method already exists in the parent TreeItem, but is not exposed.
My question: why would it not be exposed? Is it bad practice regarding OOP / MVC architecture?
The reason for the design is summed up by kleopatra's comment:
Why would it not be exposed I would pose it the other way round: why should it? It's convenience api at best, easy to implement by clients, not really needed - adding such to a framework/toolkit tends to exploding api/implementation to maintain.
JavaFX is filled with decisions like this on purpose. A lot of the reasoning is based on experience (good and bad) from AWT/Swing. Just some examples:
For specifying execution on the UI thread, there is a Platform.runLater API, but no invokeAndWait API like Swing has, even though it would be easy for the framework to provide such an API and it has been requested.
Providing an invokeAndWait API means that naive (and experienced :-) developers could use it incorrectly and accidentally deadlock threads.
Lots of classes are final and not extensible.
Sometimes developers want to extend classes but can't because they are final. This means that they can't override a lot of the built-in, tested functionality of the framework and accidentally break it that way. Instead, they can usually use aggregation over inheritance to do what they need. The framework forces them to do so in order to protect itself and them.
Color objects are immutable.
Immutable objects in general make stuff easier to maintain.
Native look and feels aren't part of the framework.
You can still create them if you want, and there are 3rd party libraries that do that, but it doesn't need to be in the core framework.
The application programming interface is single threaded not multi-threaded.
Because the developers of the framework realized that multi-threaded UI frameworks are a failed dream.
The philosophy was to code to make the 80% use case easier and the 20% use case (usually) possible, using additional user or 3rd party code, while making it difficult for the user code to accidentally (or intentionally) break the framework. You just stumbled upon one instance of an application of this philosophy.
There are a whole host of catch-phrases that you could use to describe the reason for this design approach. None of them are OOP- or MVC-specific. The underlying principles have been around far longer than software engineering; they are just approaches towards work and engineering in general. Here are some links if you are interested:
You ain't gonna need it (YAGNI)
Minimum viable product (MVP)
Worse is better
Muntzing
Feature creep prevention
Keep it simple, stupid (KISS)
Occam's razor

How to understand the big picture in a loose coupled application?

We have been developing code using loose coupling and dependency injection.
A lot of "service" style classes have a constructor and one method that implements an interface. Each individual class is very easy to understand in isolation.
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
It's not easy to jump to collaborators using Eclipse because you have to go via the interfaces. If the interface is Runnable, that is no help in finding which class is actually plugged in. Really it's necessary to go back to the DI container definition and try to figure things out from there.
Here's a line of code from a dependency-injected service class:
// myExpiryCutoffDateService was injected,
Date cutoff = myExpiryCutoffDateService.get();
Coupling here is as loose as can be. The expiry date could be implemented in literally any manner.
Here's what it might look like in a more coupled application.
ExpiryDateService service = new ExpiryDateService();
Date cutoff = service.getCutoffDate( databaseConnection, paymentInstrument );
From the tightly coupled version, I can infer that the cutoff date is somehow determined from the payment instrument using a database connection.
I'm finding code of the first style harder to understand than code of the second style.
You might argue that when reading this class, I don't need to know how the cutoff date is figured out. That's true, but if I'm narrowing in on a bug or working out where an enhancement needs to slot in, that is useful information to know.
Is anyone else experiencing this problem? What solutions have you found? Is this just something to adjust to? Are there any tools that allow visualisation of the way classes are wired together? Should I make the classes bigger or more coupled?
(I have deliberately left this question container-agnostic, as I'm interested in answers for any.)
While I don't know how to answer this question in a single paragraph, I attempted to answer it in a blog post instead: http://blog.ploeh.dk/2012/02/02/LooseCouplingAndTheBigPicture.aspx
To summarize, I find that the most important points are:
Understanding a loosely coupled code base requires a different mindset. While it's harder to 'jump to collaborators' it should also be more or less irrelevant.
Loose coupling is all about understanding a part without understanding the whole. You should rarely need to understand it all at the same time.
When zeroing in on a bug, you should rely on stack traces rather than the static structure of the code in order to learn about collaborators.
It's the responsibility of the developers writing the code to make sure that it's easy to understand - it's not the responsibility of the developer reading the code.
Some tools are aware of DI frameworks and know how to resolve dependencies, allowing you to navigate your code in a natural way. But when that isn't available, you just have to use whatever features your IDE provides as best you can.
I use Visual Studio and a custom-made framework, so the problem you describe is my life. In Visual Studio, SHIFT+F12 is my friend. It shows all references to the symbol under the cursor. After a while you get used to the necessarily non-linear navigation through your code, and it becomes second-nature to think in terms of "which class implements this interface" and "where is the injection/configuration site so I can see which class is being used to satisfy this interface dependency".
There are also extensions available for VS which provide UI enhancements to help with this, such as Productivity Power Tools. For instance, you can hover over an interface, an info box will pop up, and you can click "Implemented By" to see all the classes in your solution implementing that interface. You can double-click to jump to the definition of any of those classes. (I still usually just use SHIFT+F12 anyway.)
I just had an internal discussion about this, and ended up writing this piece, which I think is too good not to share. I'm copying it here (almost) unedited, but even though it's part of a bigger internal discussion, I think most of it can stand alone.
The discussion is about the introduction of a custom interface called IPurchaseReceiptService, and whether or not it should be replaced with the use of IObserver<T>.
Well, I can't say that I have strong data points about any of this - it's just some theories that I'm pursuing... However, my theory about cognitive overhead at the moment goes something like this: consider your special IPurchaseReceiptService:
public interface IPurchaseReceiptService
{
    void SendReceipt(string transactionId, string userGuid);
}
If we keep it as the Header Interface it currently is, it only has that single SendReceipt method. That's cool.
What's not so cool is that you had to come up with a name for the interface, and another name for the method. There's a bit of overlap between the two: the word Receipt appears twice. IME, sometimes that overlap can be even more pronounced.
Furthermore, the name of the interface is IPurchaseReceiptService, which isn't particularly helpful either. The Service suffix is essentially the new Manager, and is, IMO, a design smell.
Additionally, not only did you have to name the interface and the method, but you also have to name the variable when you use it:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IPurchaseReceiptService purchaseReceiptService,
    EvoCipher cipher
)
At this point, you've essentially said the same thing thrice. This is, according to my theory, cognitive overhead, and a smell that the design could and should be simpler.
Now, contrast this to use of a well-known interface like IObserver<T>:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IObserver<TransactionInfo> purchaseReceiptService,
    EvoCipher cipher
)
This enables you to get rid of the bureaucracy and reduce the design to the heart of the matter. You still have intention-revealing naming - you only shift the design from a Type Name Role Hint to an Argument Name Role Hint.
When it comes to the discussion about 'disconnectedness', I'm under no illusion that use of IObserver<T> will magically make this problem go away, but I have another theory about this.
My theory is that the reason many programmers find programming to interfaces so difficult is exactly because they are used to Visual Studio's Go to definition feature (incidentally, this is yet another example of how tooling rots the mind). These programmers are perpetually in a state of mind where they need to know what's 'on the other side of an interface'. Why is this? Could it be because the abstraction is poor?
This ties back to the RAP, because if you confirm programmers' belief that there's a single, particular implementation behind every interface, it's no wonder they think that interfaces are only in the way.
However, if you apply the RAP, I hope that slowly, programmers will learn that behind a particular interface, there may be any implementation of that interface, and their client code must be able to handle any implementation of that interface without changing the correctness of the system. If this theory holds, we've just introduced the Liskov Substitution Principle into a code base without scaring anyone with high-brow concepts they don't understand :)
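For illustration, here is a minimal sketch of one possible implementation behind that IObserver<TransactionInfo> dependency (the TransactionInfo members and the PurchaseReceiptSender name are placeholders invented for the example; only IObserver<T> itself is the well-known BCL interface):

using System;

public class TransactionInfo
{
    // Hypothetical members; the real shape depends on the domain.
    public string TransactionId { get; set; }
    public string UserGuid { get; set; }
}

// One of many possible implementations that could sit behind the interface.
public class PurchaseReceiptSender : IObserver<TransactionInfo>
{
    public void OnNext(TransactionInfo value)
    {
        // Send the purchase receipt for the transaction here.
    }

    public void OnError(Exception error) { }

    public void OnCompleted() { }
}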
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
This is not accurate. For each class, you know exactly what kind of objects the class depends on in order to provide its functionality at runtime.
You know them, since you know what objects are expected to be injected.
What you don't know is the actual concrete class that will be injected at runtime, which will implement the interface or base class that you know your class(es) depend on.
So if you want to see which concrete class is actually injected, you just have to look at the configuration file for that class.
You could also use facilities provided by your IDE.
Since you refer to Eclipse: Spring has a plugin for it, which also has a visual tab displaying the beans you configure. Did you check that? Isn't it what you are looking for?
Also check out the same discussion on the Spring Forum.
UPDATE:
Reading your question again, I don't think that this is a real question.
I mean this in the following manner.
Like all things, loose coupling is not a panacea; it has its own disadvantages.
Most people tend to focus on the benefits, but like any solution it has drawbacks as well.
What you do in your question is describe one of its main disadvantages: it is indeed not easy to see the big picture, since everything is configurable and anything can be plugged in.
There are other drawbacks too that one could complain about, e.g. that it is slower than tightly coupled applications, and still be right.
In any case, reiterating: what you describe in your question is not a problem you stumbled upon, for which a standard solution can be found (or any, for that matter).
It is one of the drawbacks of loose coupling, and you have to decide whether this cost is higher than what you actually gain from it, as in any design-decision trade-off.
It is like asking:
Hey, I am using this pattern named Singleton. It works great, but I can't create new objects! How can I get around this problem, guys?
Well you can't; but if you need to, perhaps Singleton is not for you...
One thing that helped me is placing multiple closely related classes in the same file. I know this goes against the general advice (of having one class per file), and I generally agree with it, but in my application architecture it works very well. Below I will try to explain in which cases it does.
The architecture of my business layer is designed around the concept of business commands. Command classes (simple DTOs with only data and no behavior) are defined, and for each command there is a 'command handler' that contains the business logic to execute that command. Each command handler implements the generic ICommandHandler<TCommand> interface, where TCommand is the actual business command.
Consumers take a dependency on ICommandHandler<TCommand>, create new command instances, and use the injected handler to execute those commands. It looks like this:
public class Consumer
{
    private ICommandHandler<CustomerMovedCommand> handler;

    public Consumer(ICommandHandler<CustomerMovedCommand> h)
    {
        this.handler = h;
    }

    public void MoveCustomer(int customerId, Address address)
    {
        var command = new CustomerMovedCommand();
        command.CustomerId = customerId;
        command.NewAddress = address;
        this.handler.Handle(command);
    }
}
Now consumers only depend on a specific ICommandHandler<TCommand> and have no notion of the actual implementation (as it should be). However, although the Consumer should know nothing about the implementation, during development I (as a developer) am very much interested in the actual business logic that is executed, simply because development is done in vertical slices; meaning that I'm often working on both the UI and business logic of a simple feature. This means I'm often switching between business logic and UI logic.
So what I did was put the command (in this example the CustomerMovedCommand and the implementation of ICommandHandler<CustomerMovedCommand>) in the same file, with the command first. Because the command itself is concrete (since it's a DTO, there is no reason to abstract it), jumping to the class is easy (F12 in Visual Studio). By placing the handler next to the command, jumping to the command also means jumping to the business logic.
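A minimal sketch of that file layout (the handler body and the Address type are placeholders; in the real design the generic interface would of course live in its own shared file):

// Defined once, elsewhere in the solution:
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// CustomerMovedCommand.cs - the command first, its handler right below it.
public class CustomerMovedCommand
{
    public int CustomerId { get; set; }
    public Address NewAddress { get; set; } // Address comes from the domain model.
}

public class CustomerMovedCommandHandler : ICommandHandler<CustomerMovedCommand>
{
    public void Handle(CustomerMovedCommand command)
    {
        // The actual business logic for moving a customer goes here.
    }
}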
Of course this only works when it is okay for the command and handler to be living in the same assembly. When your commands need to be deployed separately (for instance when reusing them in a client/server scenario), this will not work.
Of course this is just 45% of my business layer. Another big piece, however (say another 45%), is the queries, and they are designed similarly, using a query class and a query handler. These two classes are also placed in the same file, which -again- allows me to navigate quickly to the business logic.
Because the commands and queries are about 90% of my business layer, I can in most cases move very quickly from presentation layer to business layer and even navigate easily within the business layer.
I must say these are the only two cases where I place multiple classes in the same file, but it makes navigation a lot easier.
If you want to learn more about how I designed this, I've written two articles about this:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
In my opinion, loosely coupled code can help you a lot, but I agree with you about its readability.
The real problem is that the names of methods should also convey valuable information.
That is the Intention-Revealing Interface principle as stated by Domain-Driven Design (http://domaindrivendesign.org/node/113).
You could rename the get method:
// intention revealing name
Date cutoff = myExpiryCutoffDateService.calculateFromPayment();
I suggest you read thoroughly about DDD principles; your code could become much more readable and thus manageable.
I have found The Brain to be useful in development as a node-mapping tool. If you write some scripts to parse your source into XML that The Brain accepts, you can browse your system easily.
The secret sauce is to put GUIDs in your code comments on each element you want to track; the nodes in The Brain can then be clicked to take you to that GUID in your IDE.
Depending on how many developers are working on your projects and whether you want to reuse some parts of them in different projects, loose coupling can help you a lot. If your team is big and the project spans several years, having loose coupling helps, as work can be assigned to different groups of developers more easily. I use Spring/Java with lots of DI, and Eclipse offers some graphs to display dependencies. Using F3 to open the class under the cursor helps a lot. As stated in previous posts, knowing the shortcuts for your tool will help you.
One other thing to consider is creating custom classes or wrappers, as they are more easily tracked than the common classes that you already have (like Date).
If you use several modules or layers in your application, it can be a challenge to understand exactly what the project flow is, so you might need to create or use a custom tool to see how everything is related. I have created such a tool for myself, and it has helped me understand the project structure more easily.
Documentation!
Yes, you named the major drawback of loosely coupled code. And even if, as you probably already realized, it pays off in the end, it's true that it will always take longer to find "where" to make your modifications, and you might have to open a few files before finding "the right spot"...
But that's where something really important comes in: documentation. It's weird that no answer has explicitly mentioned it; it's a MAJOR requirement in all large-scale development.
API documentation
An APIDoc with a good search feature, in which each file and (almost) each method has a clear description.
"Big picture" documentation
I think it's good to have a wiki that explains the big picture. Bob has made a proxy system? How does it work? Does it handle authentication? What kind of component will use it? Not a whole tutorial, just a place where you can read for 5 minutes and figure out what components are involved and how they are linked together.
I do agree with all the points of Mark Seemann's answer, but when you get into a project for the first time(s), even if you understand the principles behind decoupling well, you'll either need a lot of guessing or some sort of help to figure out where to implement a specific feature you want to develop.
... Again: APIDoc and a little developer wiki.
I am astounded that nobody has written about the testability (in terms of unit testing, of course) of loosely coupled code and the non-testability (in the same terms) of a tightly coupled design! It is a no-brainer which design you should choose. Today, with all the mock and coverage frameworks, it is obvious - well, at least for me.
Unless you don't unit test your code, or you think you do but in fact you don't...
Testing in isolation can barely be achieved with tight coupling.
You think you have to navigate through all the dependencies from your IDE? Forget about it! It is the same situation as with compilation and runtime. Hardly any bug can be found during compilation; you cannot be sure whether something works unless you test it, which means executing it. Want to know what is behind the interface? Put a breakpoint and run the goddamn application.
Amen.
...updated after the comment...
Not sure if it is going to serve you, but in Eclipse there is something called the hierarchy view. It shows you all the implementations of an interface within your project (not sure about the whole workspace). You can just navigate to the interface and press F4. It will then show you all the concrete and abstract classes implementing the interface.

What is interface bloat?

Can someone explain to me what interface bloat in OOP is (preferably with an example)?
G'day,
Assuming you mean API and not GUI: for me, interface bloat can happen in several ways.
An API just keeps getting extended and extended with new functions without any form of segregation, so you finish up with a monolithic header file that becomes hard to use.
The functions declared in an existing API keep getting new parameters added to their signatures, so you have to keep upgrading, and your existing applications are not backwards compatible.
Functions in an existing API keep getting overloaded with very similar variants, which can lead to difficulty selecting the relevant function to use.
To help with this you can:
Separate the API out into a series of headers and libraries so you can more easily control which parts you actually need. Any internal dependencies should be resolved automatically by the vendor so the user doesn't have to discover the dependencies by trial and error, e.g. that I need to include the header file wibble.h when I only wanted to use the functions declared in the shozbot.h header file.
Make upgrades to the API backwards compatible by introducing overloading where applicable. But you should group the overloaded functions into categories, e.g. if a new set of overloaded functions is added to an existing API, say our_api.h, to adapt it to a new technology, say SOA, then they are provided separately in their own header file, our_api_soa.h, in addition to the existing header our_api.h.
HTH
Think of an OO language where all methods are defined in Object, even though they are only meaningful for some subclasses. That would be the most extreme example.
Most Microsoft products?
Interface bloat is having too much on the screen at once, particularly elements that are little used or are confusing in their function. Probably an easier way to describe interface bloat is to look at something that does not have it; try Basecamp from 37signals. There are only a few tabs and a few links in the header.
Interface bloat can be remedied by collapsible panes (using JavaScript, for example) or drill-down menus that hide less often used choices until they are needed.
Interface bloat is the gradual addition of elements that turn what may have been a simple, elegant interface into one littered with buttons, menus, options, etc. all over the place that ruin the original cohesive feel of the application. One example that comes to mind for me is iTunes. In its early renditions it was quite simple, but over time it has added quite a lot of features that might qualify as bloat (iTunes DJ, Coverflow, Genius).
Interface bloat is sometimes caused by trying to have every feature one click away, as in this humorous example:
Too many toolbar buttons
(Although funny, this example isn't fair to Firefox, because the user added all those toolbars.)
A UI design technique called "progressive disclosure" is one way to reduce interface bloat. Only expose the most frequently-used features as a top-level click. If you have less-frequently-used features that are still valuable enough to include in your app, group them in a logical way, e.g. behind a dropdown menu or other navigation element.
Learning by example:
http://img46.imageshack.us/img46/5127/ofilematrix.png
An extreme example of interface bloat that most C++ programmers will be familiar with is std::basic_string. Pages upon pages of member functions with only small variations; most of these functions wouldn't have had to be member functions but could have been free functions in a string utility library.

What is Aspect Oriented Programming? [duplicate]

Duplicate:
What is aspect-oriented programming?
Every time I hear a podcast or read a blog entry about it, even here, they make it sound like string theory or something. Is the best way to describe it OOP with dependency injection on steroids?
Every time someone tries to explain it, it's like: Aspects, [adults-from-Peanuts-cartoon sound], Orthogonal, [more noise], cross-cutting concerns, etc. Seriously, can anyone describe it in layman's terms?
Layman's terms? Let me give you an example. Let's say you have a web application and you need to add error logging / auditing. One implementation would be to go into every public method and add your try/catch blocks etc...
Well, aspect-oriented programming says hogwash to that: let me inject my method around your method. So, for example, instead of calling YourClass.UpdateModel() directly, the system might call LoggingHandler.CallMethod(); this method might then redirect the call to UpdateModel but wrap it in a try/catch block to handle logging errors.
Now the trick is that this redirection happens automagically, through configuration or by applying attributes to methods.
This works for, as you said, cross-cutting things which are very common programming elements that exist in every domain, such as logging, auditing, transaction management, and authorization.
The idea behind it is to remove all this common plumbing code from your business/app tier so you can focus on solving the problem, not worrying about logging this method call or that method call.
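A minimal sketch of that "inject my method around your method" idea, using .NET's DispatchProxy to wrap any interface in a try/catch (the IModelService interface and its implementation are invented for the example; DispatchProxy itself is a real BCL type in modern .NET):

using System;
using System.Reflection;

public class LoggingProxy<T> : DispatchProxy
{
    private T _target;

    // Wrap an existing implementation in a logging proxy.
    // T must be an interface type at runtime.
    public static T Wrap(T target)
    {
        var proxy = Create<T, LoggingProxy<T>>();
        ((LoggingProxy<T>)(object)proxy)._target = target;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        try
        {
            return targetMethod.Invoke(_target, args);
        }
        catch (Exception ex)
        {
            // Reflection wraps the real error in a TargetInvocationException.
            Console.WriteLine($"Error in {targetMethod.Name}: {ex.InnerException?.Message}");
            throw;
        }
    }
}

// Usage (IModelService / ModelService are hypothetical):
// IModelService service = LoggingProxy<IModelService>.Wrap(new ModelService());
// service.UpdateModel(); // now wrapped in a try/catch automatically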
Class and method attributes in .NET are a form of aspect-oriented programming. You decorate your classes/methods with attributes. Behind the scenes this adds code to your class/method that performs the particular functions of the attribute. For example, marking a class serializable allows it to be serialized automatically for storage or transmission to another system. Other attributes might mark certain properties as non-serializable, and these would be automatically omitted from the serialized object. Serialization is an aspect, implemented by other code in the system, and applied to your class by the application of a "configuration" attribute (decoration).
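For instance, a minimal sketch of the serialization case just described (the UserProfile type is invented for the example):

using System;

[Serializable]
public class UserProfile
{
    public string Name;

    // Automatically omitted when the object is serialized.
    [NonSerialized]
    public string CachedPassword;
}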
AOP is all about managing the common functionality (which spans across the application, hence "cross-cutting") within the application such that it is not embedded within the business logic.
Examples of such cross-cutting concerns are logging, security management, transaction management, etc.
Frameworks allow this to be managed automatically with the help of some configuration files.
I currently use PostSharp; I would read the info on their website. I use it to provide security around method calls.
"PostSharp is an open platform for the analysis and transformation of .NET assemblies. It comes with PostSharp Laos, a powerful yet simple plug-in that let you develop custom attributes that actually adds behavior of your code. PostSharp Laos is the leading aspect-oriented programming (AOP) solution for the .NET Framework."
The classic examples are security and logging. Instead of writing code within your application to log occurrences of x or to check object z for security access control, there is a language contraption "out of band" of normal code which can systematically inject security or logging into routines that don't natively have them, in such a way that even though your code doesn't supply it, it's taken care of.
A more concrete example is the operating system providing access controls to a file. A software program does not need to check for access restrictions because the underlying system does that work for it.
If you think you need AOP, then in my experience you actually really need to be investing more time and effort into appropriate metadata management within your system, with a focus on well-thought-out structural/systems design.

Do you use AOP (Aspect Oriented Programming) in production software?

AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it here on Stack Overflow yet (at least I couldn't find any). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for long, or won't make it into the mainstream (like OOP did, at least in theory ;))?
If you do use AOP then please let us know which tools you use as well. Thanks!
Python supports AOP by letting you dynamically modify its classes at runtime (which in Python is typically called monkeypatching rather than AOP). Here are some of my AOP use cases:
I have a website in which every page is generated by a Python function. I'd like to take a class and make all of the webpages generated by that class password-protected. AOP comes to the rescue; before each function is called, I do the appropriate session checking and redirect if necessary.
I'd like to do some logging and profiling on a bunch of functions in my program during its actual usage. AOP lets me calculate timing and print data to log files without actually modifying any of these functions.
I have a module or class full of non-thread-safe functions and I find myself using it in some multi-threaded code. Some AOP adds locking around these function calls without having to go into the library and change anything.
This kind of thing doesn't come up very often, but whenever it does, monkeypatching is VERY useful. Python also has decorators which implement the Decorator design pattern (http://en.wikipedia.org/wiki/Decorator_pattern) to accomplish similar things.
Note that dynamically modifying classes can also let you work around bugs or add features to a third-party library without actually having to modify that library. I almost never need to do this, but the few times it's come up it's been incredibly useful.
Yes.
Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal.
One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc.
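As a sketch of what such an attribute might look like (this assumes the BeforeAfterTestAttribute extensibility point from xUnit.net v2's Xunit.Sdk namespace; the rollback bodies are invented):

using System.Reflection;
using Xunit.Sdk;

// Runs code just before and just after any test method it decorates.
public class AutoRollbackAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        // e.g. set up the database / open a transaction scope here
    }

    public override void After(MethodInfo methodUnderTest)
    {
        // e.g. roll the transaction back here
    }
}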
Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method.
Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions.
I don't understand how one can handle cross-cutting concerns like logging, security, transaction management, exception-handling in a clean fashion without using AOP.
Anyone using the Spring framework (probably about 50% of Java enterprise developers) is using AOP whether they know it or not.
At Terracotta we use AOP and bytecode instrumentation pretty extensively to integrate with and instrument third-party software. For example, our Spring integration is accomplished in large part by using AspectWerkz. In a nutshell, we need to intercept calls to Spring beans and bean factories at various points in order to cluster them.
So AOP can be useful for integrating with third-party code that can't otherwise be modified. However, we've found there is a huge pitfall - if possible, only use the third party's public API in your join points; otherwise you risk having your code broken by a change to some private method in the next minor release, and it becomes a maintenance nightmare.
AOP and transaction demarcation are a match made in heaven. We use Spring AOP @Transactional annotations; it makes for easier and more intuitive tx demarcation than I've ever seen anywhere else.
We used AspectJ in one of my big projects for quite some time. The project was made up of several web services, each with several functions, which together were the front end for a complicated document processing/querying system. Somewhere around 75k lines of code. We used aspects for two relatively minor pieces of functionality.
First was tracing application flow. We created an aspect that ran before and after each function call to print out "entered 'function'" and "exited 'function'". With the function selector thing (pointcut, maybe? I don't remember the right name) we were able to use this as a debugging tool, selecting only the functions that we wanted to trace at a given time. This was a really nice use of aspects in our project.
The second thing we did was application specific metrics. We put aspects around our web service methods to capture timing, object information, etc. and dump the results in a database. This was nice because we could capture this information, but still keep all of that capture code separate from the "real" code that did the work.
I've read about some nice solutions that aspects can bring to the table, but I'm still not convinced that they can really do anything that you couldn't do (maybe better) with "normal" technology. For example, I couldn't think of any major feature or functionality that any of our projects needed that couldn't be done just as easily without aspects - where I've found aspects useful are the kind of minor things that I've mentioned.
I use AOP heavily in my C# applications. I'm not a huge fan of having to use attributes, so I used Castle DynamicProxy and Boo to apply aspects at runtime without polluting my code.
We use AOP in our session facade to provide a consistent framework for our customers to customize our application. This allows us to expose a single point of customization without having to add manual hook support for each method.
Additionally, AOP provides a single point of configuration for additional transaction setup and teardown, and the usual logging things. All told, much more maintainable than doing all of this by hand.
The main application I work on includes a script host. AOP allows the host to examine the properties of a script before deciding whether or not to load the script into the Application Domain. Since some of the scripts are quite cumbersome, this makes for much faster loading at run-time.
We also use and plan to use a significant number of attributes for things like compiler control, flow control and in-IDE debugging, which do not need to be part of the final distributed application.
We use PostSharp for our AOP solution. We have caching, error handling, and database retry aspects that we currently use and are in the process of making our security checks an Aspect.
Works great for us. Developers really do like the separation of concerns. The Architects really like having the platform level logic consolidated in one location.
The PostSharp library is a post-compiler that does the injection of the code. It has a library of pre-defined intercepts that are brain-dead easy to implement. It feels like wiring in event handlers.
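A sketch of the kind of intercept being described, using PostSharp's OnMethodBoundaryAspect (the logging bodies are invented; check PostSharp's documentation for current API details):

using System;
using PostSharp.Aspects;

// Decorate any method with [TraceAspect] and the post-compiler weaves
// the OnEntry/OnException calls in around the method body.
[Serializable]
public class TraceAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
    }

    public override void OnException(MethodExecutionArgs args)
    {
        Console.WriteLine("Error in " + args.Method.Name + ": " + args.Exception.Message);
    }
}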
Yes, we do use AOP in application programming. I prefer AspectJ for integrating AOP into my Spring applications. Have a look at this article for a broader perspective on the same:
http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html