Aspect Oriented Programming: What do you use PostSharp for?

I would like to ask users of the AOP framework PostSharp: what specifically are you using the framework for?
Also, I know its use has a big negative impact on build times, but how about runtime performance? Is there much of a hit?
Thanks,
S

I use it to remove the property name smell from INotifyPropertyChanged methods, and it hasn't hugely affected runtime performance.

I use compile-time weaving to add extra functionality to some methods that have been decorated with a certain attribute.
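PostSharp aspects themselves are written in C#, but purely as a hedged illustration of the same idea on the JVM, here is roughly what annotation-driven weaving looks like with AspectJ (all names below are made up for this sketch):

    package com.example;

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Marker annotation: decorate the methods that should receive the extra behaviour.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Traced {}

    // Woven in at build time by the AspectJ compiler, comparable in spirit to
    // PostSharp's compile-time weaving in .NET.
    @Aspect
    class TracingAspect {

        @Around("@annotation(com.example.Traced) && execution(* *(..))")
        public Object trace(ProceedingJoinPoint pjp) throws Throwable {
            System.out.println("Entering " + pjp.getSignature());
            try {
                return pjp.proceed(); // run the decorated method
            } finally {
                System.out.println("Leaving " + pjp.getSignature());
            }
        }
    }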

In short, it makes development faster, code more maintainable and easier to understand. There doesn't have to be a performance hit when you are willing to put in the effort.

We use it to inject our own aspects (persistent property accessors, construction notifiers, session & transaction activators, etc.) in DataObjects.Net.

Related

How can I write a fully declarative configuration in Optaplanner if some with<X> methods are missing (there are set<X> methods)

I use Optaplanner as an optimisation library. I am trying to move away from XML configuration, but I noticed that some of the *MoveSelectorConfig and *EntitySelectorConfig classes have set<X> methods instead of with<X> methods (e.g. setEntityClass(), setId(), setCacheType(), setSelectionOrder()). This makes it impossible to write a fully declarative configuration. Is this intended? What is the rationale? Are there plans to change this?
It is most definitely not intended, rather an oversight. If you want to report which builder methods are missing, we will eventually fix that. That said, since the setters are always there, this is not critical.
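Until the missing builder methods are added, the practical workaround is simply to mix the two styles. A minimal sketch, using the EntitySelectorConfig setters named in the question (MyPlanningEntity is a placeholder, and the package path may differ between OptaPlanner versions):

    import org.optaplanner.core.config.heuristic.selector.entity.EntitySelectorConfig;

    public class SelectorConfigExample {

        // Placeholder for your @PlanningEntity-annotated class.
        static class MyPlanningEntity {}

        static EntitySelectorConfig buildEntitySelectorConfig() {
            EntitySelectorConfig entitySelectorConfig = new EntitySelectorConfig();
            // No withId()/withEntityClass() builder methods yet, so fall back to the setters.
            entitySelectorConfig.setId("myEntitySelector");
            entitySelectorConfig.setEntityClass(MyPlanningEntity.class);
            // setCacheType(...) and setSelectionOrder(...) can be used the same way.
            return entitySelectorConfig;
        }
    }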

Origin of some of AOP's terminology

I would think this question has been asked before, but I was not immediately able to find related SO questions, or articles elsewhere for that matter.
It strikes me that certain terms in AOP are rather strange. It seems I'm not the only one - this article, for instance, notes that "unfortunately, AOP terminology is not particularly intuitive". However, I have not found a resource explaining why they are not more "intuitive", if that's possible.
More specifically: I can somewhat understand "aspect" and "join points" - they seem descriptive enough. But "pointcuts" and "advice" seem somewhat odd. How did these terms come about?
I think knowing the etymology of these terms will help in remembering them better, if not allowing for some insight into the thinking of AOP's designers. At least, I hope it will keep me from ever blurting out nonsensical things like "cut points" or "advice points" in meetings...
Totally agree with your frustration. Each term has its use, but every time I have to deal with AOP I have to refresh my memory on what each term means.
What helps me is that the whole of AOP is based on the single concept of a method interceptor: something that can be applied to a method, decide whether it needs to take action on that method call, and apply custom logic before and after the call.
Take a look at Spring's org.aopalliance.intercept.MethodInterceptor and its inheritance hierarchy. For example, advice is actually an abstract definition of a MethodInterceptor, and a pointcut is the logic for selecting which methods that advice (or MethodInterceptor) is applied to.
As far as I can remember, even a pointcut is just another method interceptor that delegates to a method interceptor.
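For what it's worth, a minimal sketch of that single concept: an AOP Alliance MethodInterceptor that runs custom logic before and after the intercepted call (how it is wired to a pointcut/advisor is left out here):

    import org.aopalliance.intercept.MethodInterceptor;
    import org.aopalliance.intercept.MethodInvocation;

    public class TimingInterceptor implements MethodInterceptor {

        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
            long start = System.nanoTime();          // logic before the target call
            try {
                return invocation.proceed();         // the intercepted method call itself
            } finally {                              // logic after the call (even on exceptions)
                long elapsed = System.nanoTime() - start;
                System.out.println(invocation.getMethod().getName() + " took " + elapsed + " ns");
            }
        }
    }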
Etymology will not help much; you will just have to learn the terminology. But as for historical information about how some terms came to be used, maybe you need to perform a web search; it is not really a question for Stack Overflow. At least I found some background info about the term advice for you.
Update: Actually there are not so many technical terms you need to be familiar with. The following is from one of my AOP slides. I use them in order to introduce AOP to developers when coaching them:
What is an aspect?
An aspect contains all necessary elements to implement a cross-cutting concern in a modular way. So, it is much like what a class is for a core concern.
An aspect can, like a class, contain some kind of "methods" called advice and data. It can be a singleton or instantiated multiple times, depending on its usage.
Because an aspect is defined independently of the core system, we need something else to weave its orthogonal functionality into the core code. This something is called a pointcut and determines where an advice should be applied, e.g. before or after certain method calls, upon an exception, when an object is created and so forth.
Any place or event in the core code where aspect code can potentially be woven in is called a joinpoint.
If you need a crib or memory hook, maybe this helps (please note the words in italics):
The aspect method which advises the core code about how to apply a cross-cutting concern is called advice.
Each point in your core code which you can hook into in order to apply a cross-cutting concern is called a joinpoint.
You cut a slice (or subset) off of your core code's joinpoints by means of a query syntax (much like SQL selects table rows); this is called a pointcut. (A small code sketch follows below.)
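To connect the terms to something concrete, here is a small annotation-style AspectJ sketch (the package in the pointcut expression is a placeholder): the class is the aspect, the @Before method is the advice, the expression inside @Before is the pointcut, and every matching method execution at runtime is a joinpoint.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class LoggingAspect {                       // the aspect

        // The expression below is the pointcut: it selects a subset of joinpoints,
        // namely every execution of a public method in the service package.
        @Before("execution(public * com.example.service.*.*(..))")
        public void logEntry(JoinPoint joinPoint) {    // the advice
            // Each matched method execution at runtime is a joinpoint.
            System.out.println("Entering " + joinPoint.getSignature());
        }
    }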

What's the point of creating classes at runtime in Objective-C?

I've recently reread the interesting tutorial from Mike Ash about How to create classes at Objective-C Runtime
For a long time I have been wondering where to apply this powerful feature of the language. For most of the ideas that come to my mind it looks like overkill, and I eventually proceed with NSDictionary. What are your use cases for creating classes at runtime? The only one I see is an Obj-C interpreter... More ideas?
There are a few possible cases I see where someone might need to create a class at runtime:
To hide information about it (it won't help in most cases, but... you can)
To implement multiple inheritance (if you really need it :)
To support your own language (e.g. something XML-like) that can be interpreted by your program written in Obj-C (something like NSProxy, but even better)
To create a dynamic class that can change its behavior at runtime
In general, there are some possible usages for this, but in real-world, ordinary applications there is usually no need to do it, actually :)
It could be used, for example, alongside Core Data or any database-related API to create new classes of objects unknown at compile time. However, I doubt this is used often; it's mostly a mechanism the system itself uses when it runs a program...
KVO, in the Cocoa frameworks, is implemented by dynamically creating "notifying" versions of your classes. See http://www.mikeash.com/pyblog/friday-qa-2009-01-23.html

Are Traits good or bad?

This is an open-ended question, but I would like to solicit some opinions from the SO community on Traits; do you think Traits in Squeak/Pharo are a good thing, or should you stay away from them and use composition and delegation instead? I ask because while I know how to use them (thanks to the Pharo book), I am not really sure how acceptable it is to use them or where it is OK to use them and where it isn't.
I do not like traits because they introduce strong dependencies into code. These dependencies can be obvious (a class that imports a trait, a trait that expects methods), but also very subtle (a trait that shadows super methods/instance variables). Furthermore there is no adequate tool support for traits.
In my experience delegation gives a much better and more reusable design in a dynamically typed object-oriented language like Smalltalk.
Things have their pros and cons. Lukas rightly mentions many of the cons:
Introduce strong dependencies into code.
No adequate tool support.
While the second may go away some day, the first will not.
The purpose of traits is to prevent the code duplication that occurs when two classes that don't share a superclass other than Object share an instance method. Sometimes delegation can fix that, but oftentimes it cannot. So, the pro of traits is:
Reduced code duplication.
My verdict here is that the disadvantages outweigh the advantages. I think that, today and forever, code duplication is bound to occur. And when delegation won't do, I can even imagine that code duplication isn't all that harmful, as it often precedes the divergent evolution of the copied code snippets.
I think the best thing to do, as of today, is to keep automated track of code duplication and always monitor when one end changes while the other doesn't. I'm currently writing a tool that will keep track of such links, even across repositories. I'll report on it in my blog when it's ready.

When do you need to create abstractions in the form of interfaces?

When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, especially when infrastructure-related concerns are involved.
Another checkpoint is whether a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or whether such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with it.
Am I missing anything?
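For illustration, a minimal sketch of that guideline (all names are hypothetical): only the dependency that crosses a physical boundary gets an interface, while the plain in-process object does not.

    import java.util.Optional;

    // Crosses a physical boundary (database, web service), so it is abstracted.
    interface CustomerRepository {
        Optional<Customer> findById(long id);
    }

    // Plain in-process domain object; no interface needed just for its own sake.
    class Customer {
        private final long id;
        private final String name;

        Customer(long id, String name) {
            this.id = id;
            this.name = name;
        }

        String name() { return name; }
    }

    class GreetingService {
        private final CustomerRepository repository; // depends on the abstraction

        GreetingService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greet(long customerId) {
            return repository.findById(customerId)
                    .map(c -> "Hello, " + c.name())
                    .orElse("Hello, guest");
        }
    }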
Also, use interfaces when
Multiple objects will need to be acted upon in a particular fashion but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference of themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to that utility object (see the sketch after this list).
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another twist on this is implementing a couple of the SOLID principles, the Open/Closed Principle and the Interface Segregation Principle. As with the previous bullet, don't get stressed about strictly implementing these principles everywhere (right away at least), but use these concepts to help move your thinking away from just what objects go where and towards contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
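A minimal sketch of the first two bullets (the names are made up for illustration, and the sketch is in Java, but the idea is the same in .NET): a utility object that acts on anything exposing a small interface, which also makes it trivial to fake in a unit test.

    import java.util.ArrayList;
    import java.util.List;

    // The contract the utility object needs, independent of any class hierarchy.
    interface Auditable {
        String auditSummary();
    }

    // The utility object depends only on the interface, not on concrete business classes.
    class AuditLog {
        private final List<String> entries = new ArrayList<>();

        void record(Auditable subject) {
            entries.add(subject.auditSummary());
        }

        List<String> entries() {
            return entries;
        }
    }

    // An unrelated business class can be passed to AuditLog...
    class Invoice implements Auditable {
        public String auditSummary() { return "Invoice #42"; }
    }

    // ...and so can a trivial fake in a unit test.
    class FakeAuditable implements Auditable {
        public String auditSummary() { return "fake"; }
    }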
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the interface... then why not always prefer to program against the interface?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction even though it's just a string... it's not about the 'type' of the thing in question, it's about the intended use of that thing.
And secondly, if you are doing test automation of any kind, look for the pain and friction exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
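A tiny, hypothetical sketch of that first note: the connection string is an abstraction by intent even though its type is just a string, and no interface is involved.

    // The caller supplies an opaque token whose intended use is "connect to the
    // orders database"; it never inspects or parses the string itself.
    class OrderStore {
        private final String connectionString; // abstraction by intent, not by type

        OrderStore(String connectionString) {
            this.connectionString = connectionString;
        }

        void save(String orderJson) {
            // A real implementation would open a connection using connectionString;
            // here we only show that the caller handed over the opaque string.
            System.out.println("Saving via " + connectionString + ": " + orderJson);
        }
    }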
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break while the dummies still work. But honestly, I can dummy any class that doesn't overuse the final keyword just by inheriting from it.
I would add to your list this: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers; see the sketch below.)
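A minimal sketch of that hypothetical ValidityChecker (the name and shape are just my illustration, not an existing library):

    @FunctionalInterface
    interface ValidityChecker<T> {

        boolean isValid(T value);

        // and-compose: accept only values that both checkers accept
        default ValidityChecker<T> and(ValidityChecker<T> other) {
            return value -> this.isValid(value) && other.isValid(value);
        }

        // or-compose: accept values that either checker accepts
        default ValidityChecker<T> or(ValidityChecker<T> other) {
            return value -> this.isValid(value) || other.isValid(value);
        }
    }

    class ValidityCheckerDemo {
        public static void main(String[] args) {
            ValidityChecker<String> nonEmpty = s -> !s.isEmpty();
            ValidityChecker<String> shortEnough = s -> s.length() <= 80;
            ValidityChecker<String> title = nonEmpty.and(shortEnough); // composite checker

            System.out.println(title.isValid("Hello"));  // true
            System.out.println(title.isValid(""));       // false
        }
    }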
Most of the useful things about passing interfaces have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, FORCES all the implementers to follow an IDENTICAL pattern to fulfil the contract with the object. This can be useful when the implementation code is being written by programmers who are not very experienced with OOP.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes on the actual implementing object.