I would think this question has been asked before, but I was not immediately able to find related SO questions, or articles elsewhere for that matter.
It strikes me that certain terms in AOP are rather strange. It seems I'm not the only one - this article, for instance, notes that "unfortunately, AOP terminology is not particularly intuitive". However, I have not found a resource explaining why they are not more "intuitive", if that's possible.
More specifically: I can somewhat understand "aspect" and "join points" - they seem descriptive enough. But "pointcuts" and "advice" seem somewhat odd. How did these terms come about?
I think knowing the etymology of these terms would help me remember them better, and perhaps offer some insight into the thinking of AOP's designers. At the very least, I hope it will keep me from ever blurting out nonsensical things like "cut points" or "advice points" in meetings...
I totally agree with your frustration. Each term has its use, but every time I have to deal with AOP I find myself refreshing my memory on what each term means.
What helps me is that all of AOP is based on the single concept of a method interceptor: something that can be applied to a method, decide whether it needs to take action on that method call, and apply custom logic before and after the call.
Take a look at Spring's org.aopalliance.intercept.MethodInterceptor and its inheritance hierarchy. For example, an advice is essentially an abstract definition of a MethodInterceptor, and a pointcut is the logic that selects which methods that advice (or MethodInterceptor) is applied to.
As far as I can remember, even a pointcut is ultimately just another method interceptor that delegates to a method interceptor.
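To make that concrete, here is a minimal sketch based on the AOP Alliance interface that Spring uses (the LoggingInterceptor class and its "only act on getters" rule are made up for illustration): one MethodInterceptor can play the roles of pointcut-like selection, before advice, and after advice all at once.

```java
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

// Hypothetical interceptor: decides whether to act on a call and wraps it
// with "before" and "after" logic, all through the single invoke() method.
public class LoggingInterceptor implements MethodInterceptor {

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // Pointcut-like decision: only act on getters, otherwise just pass the call through.
        if (!invocation.getMethod().getName().startsWith("get")) {
            return invocation.proceed();
        }
        System.out.println("before " + invocation.getMethod().getName()); // "before" advice part
        try {
            return invocation.proceed(); // call the next interceptor or the target method
        } finally {
            System.out.println("after " + invocation.getMethod().getName()); // "after" advice part
        }
    }
}
```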
Etymology will not help much; you will just have to learn the terminology. As for historical information about how some of these terms came to be used, you may need to do a web search; it is not really a question for Stack Overflow. At least I found some background info about the term "advice" for you.
Update: Actually there are not so many technical terms you need to be familiar with. The following is from one of my AOP slides. I use them in order to introduce AOP to developers when coaching them:
What is an aspect?
An aspect contains all necessary elements to implement a cross-cutting concern in a modular way. So, it is much like what a class is for a core concern.
An aspect can, like a class, contain some kind of "methods" (called advice) as well as data. It can be a singleton or instantiated multiple times, depending on its usage.
Because an aspect is defined independently of the core system, we need something else to weave its orthogonal functionality into the core code. This something is called a pointcut and determines where an advice should be applied, e.g. before or after certain method calls, upon an exception, when an object is created and so forth.
Any place or event in the core code where aspect code can potentially be woven in is called a joinpoint.
If you need a crib or memory hook, maybe this helps (note the key words in each sentence):
The aspect method which advises the core code about how to apply a cross-cutting concern is called advice.
Each point in your core code which you can hook into in order to apply a cross-cutting concern is called a joinpoint.
You cut a slice (or subset) off of your core code's joinpoints by means of a query syntax (much like SQL selects table rows); this is called a pointcut.
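As a concrete illustration of this vocabulary, here is a small sketch in AspectJ's annotation style (the package name inside the pointcut expression is hypothetical):

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

// The aspect: a module for one cross-cutting concern (here: logging).
@Aspect
public class LoggingAspect {

    // The pointcut: a query selecting a subset of joinpoints, here
    // "execution of any public method in the (hypothetical) service package".
    @Pointcut("execution(public * com.example.service.*.*(..))")
    public void serviceMethods() {}

    // The advice: the aspect "method" that is woven in at the selected joinpoints.
    @Before("serviceMethods()")
    public void logEntry(JoinPoint joinPoint) {
        System.out.println("entering " + joinPoint.getSignature());
    }
}
```

Every method execution in the core code is a potential joinpoint; the pointcut selects the subset of joinpoints where this particular advice is actually woven in.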
In a code review, I was told it is bad to create huge classes with a lot of lines of code. Apparently the 1000 lines of code I had were terrible practice in terms of readability/navigability, and I do see some sense in that.
So I have some complex classes which are basically the code logic behind different screens. I'm programming on Android so this is for example for a Fragment or Activity (though this is a generic question).
Now I could choose to group methods and put them in utility classes. This would at the very least shorten my code in terms of lines per class and give some sense of which method lives where. The methods, though, are really only used by a single class, so should they really go in a utility class? My gut feeling says utility classes should be stateless classes containing static methods usable by any class.
Now I could also go for collapsible code blocks and group my methods in them. This will provide readability and usability for me, but not for other programmers.
Then if I look at for instance the Android Activity class, it contains over 6000 lines of code. Is this considered bad practice as well?
I realize this question might be too much "opinion based" but I hope there is a clear and common answer to it.
Lines of code is a somewhat meaningless measure in my opinion. Certain types of classes are going to be naturally longer - for example MVC controllers.
The most important principle to keep in mind when designing a class is the single responsibility principle, which states:
every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class
The Activity type in Android could well follow this principle and still be 6000 lines long if, for example, an activity is a very complex thing which requires lots of horrible nested control-flow statements.
Without seeing the class it's difficult to say, however, in practice, it's unlikely that a well designed single class would grow to this size.
While implementing aspect-oriented programming, I am getting confused:
Why does Dojo have two different aspect library files?
When should I use dojo.aspect and when dojox.lang.aspect?
I had never heard of dojox.lang.aspect before, but according to the git log, its latest commit dates to 2008.
You can find an explanation why dojox.lang.aspect exists in an article by its author Eugene Lazutkin: AOP aspect of JavaScript with Dojo.
While it doesn't make much sense in some cases, we should realize that the primary role of dojo.connect() is to process DOM events.
[...]
To sum it up: events != AOP. Events cannot simulate all aspects of AOP, and AOP cannot be used in lieu of events.
So AOP in JavaScript sounds simple! Well, in reality there are several sticky places:
Implementing every single advice as a function is expensive.
Usually iterations are more efficient than recursive calls.
The size of call stack is limited.
If we have chained calls, it'll be impossible to rearrange them. So if we want to remove one advice in the middle, we are out of luck.
The “around” advice will “eat” all previously attached “before” and “after” advices changing the order of their execution. We cannot guarantee anymore that “before” advices run before all “around” advices, and so on.
Usually in order to decouple an “around” advice from the original function, the proceed() function is used. Calling it will result in calling the next-in-chain around advice method or the original method.
Our aspect is an object but in some cases we want it to be a static object, in other cases we want it to be a dynamic object created taking into account the state of the object we operate upon.
These and some other problems were resolved in dojox.lang.aspect (currently in the trunk, will be released in Dojo 1.2).
As of the latest Dojo 1.7, there is a strong tendency to differentiate between events and aspects, i.e. between dojo/on and dojo/aspect (both were implemented via dojo.connect before).
From a usage standpoint, dojo/aspect is a very simplified version of dojox/lang/aspect.
With dojo/aspect, you can create aspects corresponding to a named function (e.g. the "get" method in the "xhr" class), allowing you to create a before, after or around advice anytime xhr.get is called.
On the other hand, (TMHO) only dojox/lang/aspect provides enough features to play with aop.
It allows you to define your pointcuts with regular expressions, therefore allowing things like "execute an around advice for any functions whose name starts with get on any object"...
You can even pass in an array of function names or regular expressions to which your aspects will be applied.
The blog post pointed to by phusick gives good examples of that.
This is an open-ended question, but I would like to solicit some opinions from the SO community on Traits; do you think Traits in Squeak/Pharo are a good thing, or should you stay away from them and use composition and delegation instead? I ask because while I know how to use them (thanks to the Pharo book), I am not really sure how acceptable it is to use them or where it is OK to use them and where it isn't.
I do not like traits because they introduce strong dependencies into code. These dependencies can be obvious (a class that imports a trait, a trait that expects methods), but also very subtle (a trait that shadows super methods/instance variables). Furthermore there is no adequate tool support for traits.
In my experience delegation gives a much better and more reusable design in a dynamically typed object-oriented language like Smalltalk.
Things have their pros and cons. Lukas rightly mentions many of the cons:
Introduce strong dependencies into code.
No adequate tool support.
While the second may go away some day, the first will not.
The purpose of traits is to prevent the code duplication that occurs when two classes that share no superclass other than Object need to share an instance method. Sometimes delegation can fix that, but oftentimes it cannot. So, the pro of traits is:
Reduced code duplication.
My verdict here is that the disadvantages outweigh the advantages. I think that, today and forever, code duplication is bound to occur. And when delegation won't do, I can even imagine that code duplication isn't all that harmful, as it often precedes the divergent evolution of the copied code snippets.
I think the best thing to do, as of today, is to keep automated track of code duplication and to monitor when one copy changes while the other doesn't. I'm currently writing a tool that will keep track of such links, even across repositories. I'll report on it in my blog when it's ready.
When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, especially when infrastructure-related concerns are involved.
Another checkpoint would be whether a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or whether such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with it.
Am I missing anything?
Also, use interfaces when
Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference of themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to that utility object.
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying you should implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another twist on this is implementing a couple of the SOLID principles, the Open/Closed Principle and the Interface Segregation Principle. As with the previous bullet, don't get stressed about strictly applying these principles everywhere (right away at least), but use these concepts to help move your thinking away from just what objects go where and toward thinking more about contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the Interface...then why not always prefer the Interface implementation?
It will make your code easier to modify in the future and easier to test (mocking).
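A minimal sketch of that idea (all the names here, OrderRepository, OrderService and the in-memory fake, are hypothetical):

```java
// The production code depends on an interface, so a test can substitute
// a hand-written fake instead of touching a real database.
interface OrderRepository {
    Order findById(long id);
}

class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) { // the dependency is passed in as an interface
        this.repository = repository;
    }

    boolean exists(long id) {
        return repository.findById(id) != null;
    }
}

// In a unit test, a fake implementation stands in for the real repository:
class InMemoryOrderRepository implements OrderRepository {
    @Override
    public Order findById(long id) {
        return id == 42L ? new Order() : null; // canned answer, no database involved
    }
}

class Order {}
```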
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction even though it's just a string... it's not about the 'type' of the thing in question, it's about the intended use of that thing.
Secondly, if you are doing test automation of any kind, look for the pain and friction that writing the tests exposes. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break while the dummies still work. But honestly, I can dummy any class that doesn't overuse the final keyword simply by inheriting from it.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces very often come in handy with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
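To illustrate that last parenthetical, here is a purely hypothetical sketch in Java (the interface and combinator names are invented):

```java
// A small composable interface: checkers can be combined with and()/or()
// without the combined result knowing anything about the concrete checkers.
interface ValidityChecker<T> {
    boolean isValid(T value);

    // and-composition: valid only if both checkers accept the value
    default ValidityChecker<T> and(ValidityChecker<T> other) {
        return value -> this.isValid(value) && other.isValid(value);
    }

    // or-composition: valid if either checker accepts the value
    default ValidityChecker<T> or(ValidityChecker<T> other) {
        return value -> this.isValid(value) || other.isValid(value);
    }
}

class ValidityCheckerDemo {
    public static void main(String[] args) {
        ValidityChecker<String> notEmpty = s -> !s.isEmpty();
        ValidityChecker<String> shortEnough = s -> s.length() <= 10;
        ValidityChecker<String> both = notEmpty.and(shortEnough);

        System.out.println(both.isValid("hello")); // true
        System.out.println(both.isValid(""));      // false
    }
}
```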
Most of the useful things that come with passing interfaces around have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, FORCES all the implementers to follow an IDENTICAL pattern for fulfilling the contract. This can be useful when the implementation code is written by programmers who are not very experienced with OOP.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes on the concrete implementation.
I know about "a class having a single reason to change". Now, what is that exactly? Are there some smells/signs that could tell you that a class does not have a single responsibility? Or does the real answer hide in YAGNI: only refactor to a single responsibility the first time your class changes?
The Single Responsibility Principle
There are many obvious cases, e.g. CoffeeAndSoupFactory. Coffee and soup in the same appliance can lead to quite distasteful results. In this example, the appliance might be broken into a HotWaterGenerator and some kind of Stirrer. Then a new CoffeeFactory and SoupFactory can be built from those components and any accidental mixing can be avoided.
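A minimal sketch of that decomposition (the class bodies are invented; only the class names come from the example above):

```java
// Each part now has one reason to change.
class HotWaterGenerator {
    String boil() { return "hot water"; }
}

class Stirrer {
    String stir(String base, String ingredient) { return base + " + " + ingredient; }
}

// The factories are composed from the shared parts; coffee and soup can no longer mix.
class CoffeeFactory {
    private final HotWaterGenerator water = new HotWaterGenerator();
    private final Stirrer stirrer = new Stirrer();

    String brewCoffee() { return stirrer.stir(water.boil(), "ground coffee"); }
}

class SoupFactory {
    private final HotWaterGenerator water = new HotWaterGenerator();
    private final Stirrer stirrer = new Stirrer();

    String makeSoup() { return stirrer.stir(water.boil(), "vegetables"); }
}
```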
Among the more subtle cases, the tension between data access objects (DAOs) and data transfer objects (DTOs) is very common. DAOs talk to the database, DTOs are serializable for transfer between processes and machines. Usually DAOs need a reference to your database framework, therefore they are unusable on your rich clients which neither have the database drivers installed nor have the necessary privileges to access the DB.
Code Smells
The methods in a class start to be grouped by areas of functionality ("these are the Coffee methods and these are the Soup methods").
Implementing many interfaces.
Write a brief, but accurate description of what the class does.
If the description contains the word "and" then it needs to be split.
Well, this principle should be taken with a grain of salt... to avoid class explosion.
A single responsibility does not translate to single method classes. It means a single reason for existence... a service that the object provides for its clients.
A nice way to stay on the road... Use the object-as-person metaphor... If the object were a person, who would I ask to do this? Assign that responsibility to the corresponding class. However, you wouldn't ask the same person to manage your files, compute salaries, issue paychecks, and verify financial records... Why would you want a single object to do all these? (It's okay if a class takes on multiple responsibilities as long as they are all related and coherent.)
If you employ a CRC card, it's a nice subtle guideline. If you're having trouble getting all the responsibilities of that object on a CRC card, it's probably doing too much... a max of 7 would do as a good marker.
Another code smell from the refactoring book would be HUGE classes. Shotgun surgery would be another... making a change to one area in a class causes bugs in unrelated areas of the same class...
Finding that you are making changes to the same class for unrelated bug-fixes again and again is another indication that the class is doing too much.
A simple and practical way to check single responsibility (not only of classes but also of their methods) is the choice of name. When you design a class, if you can easily find a name that specifies exactly what it does, you're on the right track.
Difficulty in choosing a name is nearly always a symptom of bad design.
The methods in your class should be cohesive... they should work together and make use of the same data structures internally. If you find you have too many methods that don't seem well related, or that seem to operate on different things, then quite likely you don't have a good single responsibility.
Often it's hard to initially find responsibilities, and sometimes you need to use the class in several different contexts and then refactor the class into two classes as you start to see the distinctions. Sometimes you find that it's because you are mixing an abstract and concrete concept together. They tend to be harder to see, and, again, use in different contexts will help clarify.
The obvious sign is when your class ends up looking like a Big Ball of Mud, which is really the opposite of SRP (single responsibility principle).
Basically, all the object's services should be focused on carrying out a single responsibility, meaning every time your class changes and adds a service which does not respect that, you know you're "deviating" from the "right" path ;)
The cause is usually some quick fixes hastily added to the class to repair defects. So the reason why you are changing the class is usually the best criterion for detecting whether you are about to break the SRP.
Martin's Agile Principles, Patterns, and Practices in C# helped me a lot to grasp SRP. He defines SRP as:
A class should have only one reason to change.
So what is driving change?
Martin's answer is:
[...] each responsibility is an axis of change. (p. 116)
and further:
In the context of the SRP, we define a responsibility to be a reason for change. If you can think of more than one motive for changing a class, that class has more than one responsibility (p. 117)
In fact, SRP is about encapsulating change. If a change happens, it should have only a local impact.
Where is YAGNI?
YAGNI combines nicely with SRP: when you apply YAGNI, you wait until some change actually happens. When it does, you should be able to clearly see the responsibilities that follow from the reason(s) for change.
This also means that responsibilities can evolve with each new requirement and change. Taken further, SRP and YAGNI provide you with the means to think in terms of flexible designs and architectures.
Perhaps a little more technical than other smells:
If you find you need several "friend" classes or functions, that's usually a good smell of bad SRP - because the required functionality is not actually exposed publicly by your class.
If you end up with an excessively "deep" hierarchy (a long chain of derived classes until you get to the leaf classes) or a "broad" hierarchy (many, many classes derived shallowly from a single parent class), it's usually a sign that the parent class does either too much or too little. Doing nothing is the limit of that, and yes, I have seen it in practice, with an "empty" parent class defined just to group together a bunch of unrelated classes in a single hierarchy.
I also find that refactoring to single responsibility is hard. By the time you finally get around to it, the different responsibilities of the class will have become entwined in the client code making it hard to factor one thing out without breaking the other thing. I'd rather err on the side of "too little" than "too much" myself.
Here are some things that help me figure out if my class is violating SRP:
Fill out the XML doc comments on a class. If you use words like if, and, but, except, when, etc., your class is probably doing too much.
If your class is a domain service, it should have a verb in the name. Many times you have classes like "OrderService", which should probably be broken up into "GetOrderService", "SaveOrderService", "SubmitOrderService", etc.
If you end up with a MethodA that uses MemberA and a MethodB that uses MemberB, and it is not part of some concurrency or versioning scheme, you might be violating SRP (see the sketch after this list).
If you notice that you have a class that just delegates calls to a lot of other classes, you might be stuck in proxy class hell. This is especially true if you end up instantiating the proxy class everywhere when you could just use the specific classes directly. I have seen a lot of this. Think of ProgramNameBL and ProgramNameDAL classes as a substitute for using a Repository pattern.
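Regarding the third bullet, here is what that smell tends to look like (hypothetical class and member names, sketched in Java):

```java
// Two disjoint field/method clusters in one class usually mean two responsibilities.
class ReportManager {
    private final String templatePath; // used only by format()
    private final String smtpHost;     // used only by send()

    ReportManager(String templatePath, String smtpHost) {
        this.templatePath = templatePath;
        this.smtpHost = smtpHost;
    }

    String format(String data) {   // cluster A: touches only templatePath
        return "formatted with " + templatePath + ": " + data;
    }

    void send(String report) {     // cluster B: touches only smtpHost
        System.out.println("sending via " + smtpHost + ": " + report);
    }

    // A natural split would be a ReportFormatter (templatePath + format)
    // and a ReportSender (smtpHost + send).
}
```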
I've also been trying to get my head around the SOLID principles of OOD, specifically the single responsibility principle, aka SRP (as a side note the podcast with Jeff Atwood, Joel Spolsky and "Uncle Bob" is worth a listen). The big question for me is: What problems is SOLID trying to address?
OOP is all about modeling. The main purpose of modeling is to present a problem in a way that allows us to understand it and solve it. Modeling forces us to focus on the important details. At the same time we can use encapsulation to hide the "unimportant" details so that we only have to deal with them when absolutely necessary.
I guess you should ask yourself: What problem is your class trying to solve? Has the important information you need to solve this problem risen to the surface? Are the unimportant details tucked away so that you only have to think about them when absolutely necessary?
Thinking about these things results in programs that are easier to understand, maintain and extend. I think this is at the heart of OOD and the SOLID principles, including SRP.
Another rule of thumb I'd like to throw in is the following:
If you feel the need to write some sort of Cartesian product of cases in your test cases, or if you want to mock certain private methods of the class, the Single Responsibility Principle is probably violated.
I recently had this in the following way:
I had a certain abstract syntax tree of a coroutine which would later be generated into C. For now, think of the nodes as Sequence, Iteration and Action. Sequence chains two coroutines, Iteration repeats a coroutine until a user-defined condition is true, and Action performs a certain user-defined action. Furthermore, it is possible to annotate Actions and Iterations with code blocks, which define the actions and conditions to evaluate as the coroutine walks ahead.
It was necessary to apply a certain transformation to all of these code blocks (for those interested: I needed to replace the conceptual user variables with actual implementation variables in order to prevent variable clashes; those who know Lisp macros can think of gensym in action :) ). Thus, the simplest thing that would work was a visitor which knows the operation internally, calls it on the annotated code blocks of Action and Iteration nodes on visit, and traverses all the syntax tree nodes. However, in this case I would have had to duplicate the assertion "transformation is applied" in my test code for both the visitAction method and the visitIteration method. In other words, I had to check the product of the test cases for the responsibilities Traversal (== {traverse iteration, traverse action, traverse sequence}) x Transformation (well, code block transformed, which blew up into iteration transformed and action transformed). Thus, I was tempted to use PowerMock to remove the transformation method and replace it with a 'return "I was transformed!";' stub.
However, following the rule of thumb, I split the class: a TreeModifier class now contains a NodeModifier instance, which provides methods such as modifyIteration, modifySequence, modifyCodeblock and so on. Thus, I could easily test the responsibility of traversing, calling the NodeModifier, and reconstructing the tree, and test the actual modification of the code blocks separately, which removed the need for the product tests, because the responsibilities were now separated (into traversing/reconstructing and the concrete modification).
It is also interesting to note that, later on, I could heavily reuse the TreeModifier in various other transformations. :)
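For readers who want to picture the split, here is a heavily simplified sketch (the node types and method signatures are reduced to a bare minimum and are not the actual project code):

```java
// The traversal/reconstruction responsibility (TreeModifier) is separated from
// the concrete code-block transformation (NodeModifier), so each can be tested alone.
interface Node {}

class Action implements Node {
    final String codeBlock;
    Action(String codeBlock) { this.codeBlock = codeBlock; }
}

class Sequence implements Node {
    final Node first, second;
    Sequence(Node first, Node second) { this.first = first; this.second = second; }
}

interface NodeModifier {
    String modifyCodeBlock(String codeBlock); // the actual transformation, testable in isolation
}

class TreeModifier {
    private final NodeModifier modifier;
    TreeModifier(NodeModifier modifier) { this.modifier = modifier; }

    // Walks the tree and rebuilds it, delegating every code-block change to the NodeModifier.
    Node modify(Node node) {
        if (node instanceof Action) {
            return new Action(modifier.modifyCodeBlock(((Action) node).codeBlock));
        }
        if (node instanceof Sequence) {
            Sequence s = (Sequence) node;
            return new Sequence(modify(s.first), modify(s.second));
        }
        return node; // other node types (e.g. Iteration) omitted in this sketch
    }
}
```

In a test, the NodeModifier can then be a trivial stub (for example one that just returns "I was transformed!"), so the traversal assertions no longer have to be multiplied with the transformation assertions.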
If you have trouble extending the functionality of a class without being afraid that you might break something else, or if you cannot use the class without modifying tons of options which change its behavior, it smells like your class is doing too much.
Once I was working with a legacy class which had a method "ZipAndClean", which was obviously zipping and cleaning a specified folder...