In aspect-oriented programming, where exactly does a join point start?

I was going through the Spring documentation, and this is what it says about join points:
" Join point: a point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution."
and this is what the document says regarding "before advice":
"Before advice: Advice that executes before a join point, but which does not have the ability to prevent execution flow proceeding to the join point (unless it throws an exception)."
When they say the before advice executes before a join point, where exactly is that point located for a given method? Let's say we have the following method: is it correct to assume that the join point is located at the place where we have the greater-than (>) symbol inside the method?
public void Calculate()
{
    >
    // some logic
}

For Spring AOP (a proxy-based framework) it is more like this:
class MyClass implements MyInterface {
    public void doSomething() {}
}

// Dynamic proxy created at runtime; callers hold a reference to the
// proxy, so calling doSomething() runs the advice first
class ProxyXY extends MyClass implements MyInterface {
    @Override
    public void doSomething() {
        // Do whatever the before advice says and then...
        super.doSomething();
    }
}
This is really just schematic and simplified, but I guess you get the idea. In AspectJ it is quite different, because no proxies are involved: the advice bytecode is woven directly into the target class.
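
For illustration, here is a minimal sketch of how such before advice is declared in Spring AOP with AspectJ annotations (the aspect class and the com.example package are made-up names):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    // Runs before the join point, i.e. before the body of doSomething()
    // starts executing on the proxied bean
    @Before("execution(* com.example.MyClass.doSomething(..))")
    public void logBefore() {
        System.out.println("About to enter doSomething()");
    }
}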

A join point is basically a point in the program where the application's business logic can be joined with centralized AOP concerns. Spring only supports method execution as a join point, i.e. before or after the execution of a method in the business logic we can join in a cross-cutting concern as required.
Go through this article to get the gist of AOP in Spring:
http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html

Related

Mocking EntityManager

I am getting an NPE while mocking EntityManager; below is my code:
@Stateless
public class NodeChangeDeltaQueryBean implements NodeChangeDeltaQueryLocal {

    @PersistenceContext
    private EntityManager em;

    @Override
    public String findIdByNaturalKey(final String replicationDomain, final int sourceNodeIndex,
            final int nodeChangeNumber) {
        List<String> result =
            NodeChangeDelta.findIdByNaturalKey(this.em, replicationDomain, sourceNodeIndex,
                nodeChangeNumber).getResultList();
        return result.isEmpty() ? null : result.get(0);
    }
}
My Entity Class
@Entity
public class NodeChangeDelta implements Serializable, Cloneable, GeneratedEntity, KeyedEntity<String> {

    public static TypedQuery<String> findIdByNaturalKey(final EntityManager em, final String replicationDomain, final int sourceNodeIndex, final int nodeChangeNumber) {
        return em.createNamedQuery("NodeChangeDelta.findIdByNaturalKey", String.class)
            .setParameter("replicationDomain", replicationDomain)
            .setParameter("sourceNodeIndex", sourceNodeIndex)
            .setParameter("nodeChangeNumber", nodeChangeNumber);
    }
}
My Test Class
@RunWith(MockitoJUnitRunner.class)
public class NodeChangeDeltaQueryBeanTest {

    @InjectMocks
    NodeChangeDeltaQueryBean nodeChangeDeltaQueryBean;

    @Mock
    EntityManager em;

    @Test
    public void testFindIdByNaturalKey() {
        this.addNodeChangeDelta();
        this.nodeChangeDeltaQueryBean.findIdByNaturalKey(this.REPLICATION_DOMAIN,
            this.SOURCE_NODE_INDEX, this.NODE_CHANGE_NUMDER);
    }
}
While debugging, em is not null in the entity class (the other arguments REPLICATION_DOMAIN, SOURCE_NODE_INDEX and NODE_CHANGE_NUMDER are not null either), whereas em.createNamedQuery("NodeChangeDelta.findIdByNaturalKey", String.class) returns null.
From the Mockito wiki: Don't mock types you don't own!
This is not a hard line, but crossing this line may have repercussions! (it most likely will.)
Imagine code that mocks a third-party lib. After a particular upgrade of that library, the logic might change a bit, but the test suite will execute just fine, because it's mocked. So later on, thinking everything is good to go, the build wall being green after all, the software is deployed and... boom.
It may be a sign that the current design is not decoupled enough from this third party library.
Also, another issue is that the third-party lib might be complex and require a lot of mocks to even work properly. That leads to overly specified tests and complex fixtures, which in itself compromises the goal of compact and readable tests; or to tests which do not cover the code enough, because of the complexity of mocking the external system.
Instead, the most common way is to create wrappers around the external lib/system, though one should be aware of the risk of abstraction leakage, where too much of the low-level API, its concepts or its exceptions leaks beyond the boundary of the wrapper. In order to verify integration with the third-party library, write integration tests, and make them as compact and readable as possible as well.
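
For illustration, such a wrapper could be as small as the following sketch (the repository interface and its name are hypothetical, reusing the types from the question):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical wrapper: application code depends on this small interface
// instead of on EntityManager directly, so tests can fake it trivially
public interface NodeChangeDeltaRepository {
    String findIdByNaturalKey(String replicationDomain, int sourceNodeIndex, int nodeChangeNumber);
}

// The single JPA-aware implementation; it is covered by integration
// tests rather than by mocks
class JpaNodeChangeDeltaRepository implements NodeChangeDeltaRepository {

    @PersistenceContext
    private EntityManager em;

    @Override
    public String findIdByNaturalKey(String replicationDomain, int sourceNodeIndex, int nodeChangeNumber) {
        List<String> result = NodeChangeDelta
            .findIdByNaturalKey(em, replicationDomain, sourceNodeIndex, nodeChangeNumber)
            .getResultList();
        return result.isEmpty() ? null : result.get(0);
    }
}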
Mocking a type you don't control can be considered a (mocking) anti-pattern. While EntityManager is pretty much standard, one should not assume there won't be any behavior change in upcoming JDK/JSR releases (it has already happened numerous times in other parts of the API; just look at the JDK release notes). Plus, the real implementations may have subtleties in their behavior that can hardly be mocked: tests may be green but the production Tomcats are on fire (true story).
My point is that if the code needs to mock a type I don't own, the design should change ASAP so that I, my colleagues, or future maintainers of this code won't fall into these traps.
The wiki also links to other blog entries describing issues people had when they tried to mock types they didn't control.
Instead, I really advise everyone not to use mocks when testing integration with another system. For database stuff, I believe Arquillian is the way to go; the project appears to be quite active.
Adapted from my answer: https://stackoverflow.com/a/28698223/48136
In Mockito, any method invocation on a mock that has not been explicitly stubbed returns null (for object return types). Therefore, in findIdByNaturalKey, em.createNamedQuery is returning null, hence the NPE on setParameter. You need to either stub the call chain explicitly or configure the mock with an answer such as RETURNS_MOCKS (or, better for chained calls like this, RETURNS_DEEP_STUBS).
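A minimal sketch of the explicit-stubbing approach, inside the test class from the question (the "some-id" return value and the argument values are placeholders):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.when;

import java.util.Collections;
import javax.persistence.TypedQuery;

// Additional mock next to the existing EntityManager mock
@Mock
TypedQuery<String> query;

@Test
public void testFindIdByNaturalKey() {
    // Stub each link of the call chain so none of them returns null
    when(em.createNamedQuery("NodeChangeDelta.findIdByNaturalKey", String.class))
        .thenReturn(query);
    when(query.setParameter(anyString(), any())).thenReturn(query);
    when(query.getResultList()).thenReturn(Collections.singletonList("some-id"));

    assertEquals("some-id",
        this.nodeChangeDeltaQueryBean.findIdByNaturalKey("domain", 1, 2));
}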
Also, I am not sure whether @InjectMocks supports @PersistenceContext. If it does not, then em is probably null. If it does, please let me know, and then the above is your issue.

Autofac: Resolving dependencies with parameters

I'm currently learning the API for Autofac, and I'm trying to get my head around what seems to me like a very common use case.
I have a class (for this simple example, 'MasterOfPuppets') that receives a dependency via constructor injection ('NamedPuppet'); this dependency needs a value (string name) to be built with:
public class MasterOfPuppets : IMasterOfPuppets
{
    IPuppet _puppet;

    public MasterOfPuppets(IPuppet puppet)
    {
        _puppet = puppet;
    }
}

public class NamedPuppet : IPuppet
{
    string _name;

    public NamedPuppet(string name)
    {
        _name = name;
    }
}
I register both classes with their interfaces, and then I want to resolve IMasterOfPuppets with a string that will be injected into the instance of 'NamedPuppet'.
I attempted to do it in the following way:
IMasterOfPuppets master = bs.container.Resolve<IMasterOfPuppets>(new NamedParameter("name", "boby"));
This ends with a runtime error, so I guess Autofac only attempts to inject it into 'MasterOfPuppets'.
So my question is: how can I resolve 'IMasterOfPuppets' and pass parameter arguments to its dependency in the most elegant fashion?
Do other IoC containers have better solutions for this?
Autofac doesn't support passing parameters to a parent/consumer object and having those parameters trickle down into child objects.
Generally I'd say requiring the consumer to know about what's behind the interfaces of its dependencies is bad design. Let me explain:
From your design, you have two interfaces: IMasterOfPuppets and IPuppet. In the example, you only have one type of IPuppet - NamedPuppet. Keeping in mind that the point of even having the interface is to separate the interface from the implementation, you might also have this in your system:
public class ConfigurablePuppet : IPuppet
{
    private string _name;

    public ConfigurablePuppet(string name)
    {
        this._name = ConfigurationManager.AppSettings[name];
    }
}
Two things to note there.
First, you have a different implementation of IPuppet that should work in place of any other IPuppet when used with the IMasterOfPuppets consumer. The IMasterOfPuppets implementation should never know that the implementation of IPuppet changed... and the thing consuming IMasterOfPuppets should be even further removed.
Second, both the example NamedPuppet and the new ConfigurablePuppet take a string parameter with the same name, but it means something different to the backing implementation. So if your consuming code is doing what you show in the example - passing in a parameter that's intended to be the name of the thing - then you probably have an interface design problem. See: Liskov substitution principle.
Point being, given that the IMasterOfPuppets implementation needs an IPuppet passed in, it shouldn't care how the IPuppet was constructed to begin with or what is actually backing the IPuppet. Once it knows, you're breaking the separation of interface and implementation, which means you may as well do away with the interface and just pass in NamedPuppet objects all the time.
As far as passing parameters, Autofac does have parameter support.
The recommended and most common type of parameter passing is during registration because at that time you can set things up at the container level and you're not using service location (which is generally considered an anti-pattern).
If you need to pass parameters during resolution, Autofac supports that as well. However, passing during resolution is more service-locator-ish and not so great because, again, it implies the consumer knows about what it's consuming.
You can do some fancy things with lambda expression registrations if you want to wire up the parameter to come from a known source, like configuration.
builder.Register(c => {
    var name = ConfigurationManager.AppSettings["name"];
    return new NamedPuppet(name);
}).As<IPuppet>();
You can also do some fancy things using the Func<T> implicit relationship in the consumer:
public class MasterOfPuppets : IMasterOfPuppets
{
    IPuppet _puppet;

    public MasterOfPuppets(Func<string, IPuppet> puppetFactory)
    {
        _puppet = puppetFactory("name");
    }
}
Doing that is the equivalent of using a TypedParameter of type string during the resolution. But, as you can see, that comes from the direct consumer of IPuppet and not something that trickles down through the stack of all resolutions.
Finally, you can also use Autofac modules to do some interesting cross-cutting things the way you see in the log4net integration module example. Using a technique like this allows you to insert a specific parameter globally through all resolutions, but it doesn't necessarily provide for the ability to pass the parameter at runtime - you'd have to put the source of the parameter inside the module.
Point being, Autofac supports parameters, but not what you're trying to do. I would strongly recommend redesigning things so that you don't actually need to do this, or so that you can address it in one of the ways noted above.
Hopefully that should get you going in the right direction.

What do you mean by "programming to interface" and "programming to implementation"?

In the Head First Design Patterns book, the author often says that one should program to an interface rather than an implementation.
What does that mean?
Let's illustrate it with the following code:
namespace ExperimentConsoleApp
{
    class Program
    {
        static void Main()
        {
            ILogger loggerA = new DatabaseLogger();
            ILogger loggerB = new FileLogger();

            loggerA.Log("My message");
            loggerB.Log("My message");
        }
    }

    public interface ILogger
    {
        void Log(string message);
    }

    public class DatabaseLogger : ILogger
    {
        public void Log(string message)
        {
            // Log to database
        }
    }

    public class FileLogger : ILogger
    {
        public void Log(string message)
        {
            // Log to file
        }
    }
}
Suppose you are the logger developer and the application developer needs a logger from you. You give the application developer your ILogger interface and tell him he can use it without having to worry about the implementation details.
After that you start developing a FileLogger and a DatabaseLogger, and you make sure they follow the interface that you gave to the application developer.
The application developer is now developing against an interface, not an implementation. He doesn't know or care how the class is implemented; he only knows the interface. This promotes less coupling in code, and it gives you the ability to easily switch to another implementation (through configuration files, for example).
Worry more about what a class does rather than how it does it. The latter should be an implementation detail, encapsulated away from clients of your class.
If you start with an interface, you're free to inject in a new implementation later without affecting clients. They only use references of the interface type.
It means that when working with a class, you should only program against the public interface and not make assumptions about how it was implemented, as it may change.
Normally this translates to using interfaces/abstract classes as variable types instead of concrete ones, allowing one to swap implementations if needed.
In the .NET world one example is the use of the IEnumerable/IEnumerator interfaces - these allow you to iterate over a collection without worrying how the collection was implemented.
It is all about coupling. Low coupling is a very important property of software architecture. The less you need to know about your dependency, the better.
Coupling can be measured by the number of assumptions you have to make in order to interact with or use your dependency (paraphrasing Martin Fowler here).
So when using more generic types we are more loosely coupled. We are, for example, decoupled from a particular implementation strategy of a collection: linked list, doubly linked list, array, tree, etc. Or, from the classic OO school: "what exact shape is it: rectangle, circle, triangle?", when we just want to depend on a shape (in old-school OO we apply polymorphism here).

adapter pattern and dependency

I have a small doubt about the adapter class. I know what the goal of an adapter class is, and when it should be used. My doubt is about class construction. I've checked some tutorials, and all of them say that I should pass the "Adaptee" class as a dependency to my "Adapter".
e.g.
class SampleAdapter implements MyInterface
{
    private AdapteeClass mInstance;

    public SampleAdapter(AdapteeClass instance)
    {
        mInstance = instance;
    }
}
This example is copied from Wikipedia. As you can see, AdapteeClass is passed to my object as a dependency. The question is: why? If I'm changing the interface of an object, it's obvious I'm going to use the "new" interface and I won't need the "old" one. Why do I need to create an instance of the "old" class outside my adapter? Someone may say that I should use dependency injection so I can pass whatever I want, but this is an adapter: I need to change the interface of a concrete class. Personally, I think the code below is better.
class SampleAdapter implements MyInterface
{
    private AdapteeClass mInstance;

    public SampleAdapter()
    {
        mInstance = new AdapteeClass();
    }
}
What is your opinion?
I would say that you should always avoid the new operator in a class when it comes to complex objects (except when the class is a builder or factory), to reduce coupling and make your code better testable. Of course, objects like a List or Dictionary, or value objects, can be constructed inside a class method (which is probably the purpose of the class method!).
Let's say, for example, that your AdapteeClass is a remote proxy. If you want to use unit testing, your unit tests will have to use the real proxy class, because there is no way to replace it in your unit tests.
If you use the first approach, you can easily inject a mock or fake into the constructor when running your unit test so you can test all code paths.
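A minimal sketch of that kind of test, assuming Mockito and two hypothetical method names (doWork on MyInterface, legacyDoWork on AdapteeClass):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class SampleAdapterTest {

    @Test
    public void delegatesToAdaptee() {
        // The adaptee is replaced by a mock, so the test never touches
        // the real implementation (e.g. a remote proxy)
        AdapteeClass mockAdaptee = mock(AdapteeClass.class);
        MyInterface adapter = new SampleAdapter(mockAdaptee);

        adapter.doWork();                   // hypothetical method on MyInterface
        verify(mockAdaptee).legacyDoWork(); // hypothetical method the adapter delegates to
    }
}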
Google has a guide on writing testable code which describes this in more detail, but some important points are:
Warning signs of hard-to-test code:
new keyword in a constructor or at field declaration
Static method calls in a constructor or at field declaration
Anything more than field assignment in constructors
Object not fully initialized after the constructor finishes (watch out for initialize methods)
Control flow (conditional or looping logic) in a constructor
Code does complex object graph construction inside a constructor rather than using a factory or builder
Adding or using an initialization block
AdapteeClass can have one or more non-trivial constructors. In that case you'd need to duplicate all of them in your SampleAdapter constructors to have the same flexibility; passing an already constructed object is simpler.
I think creating the Adaptee inside the Adapter is limiting. What if some day you want to adapt a pre-existing instance?
To be honest though, I'd do both if at all possible.
class SampleAdapter implements MyInterface
{
    private AdapteeClass mInstance;

    public SampleAdapter()
    {
        this(new AdapteeClass());
    }

    public SampleAdapter(AdapteeClass instance)
    {
        mInstance = instance;
    }
}
Let's assume you have an external hard drive with a regular USB port and you are trying to hook it up to a Mac which only has USB-C ports. Yes, you can buy a new drive with a USB-C port, but what about the data on the old one?
It's the same with the adapter pattern. There are times when you have already initialized AdapteeClass in one of its many flavors; when you do the conversion, you want to keep all that context.

Interface reference variables

I am going over some OO basics and trying to understand why there is a use for interface reference variables.
When I create an interface:
public interface IWorker
{
    int HoneySum { get; }
    void getHoney();
}
and have a class implement it:
public class Worker : Bee, IWorker
{
    int honeySum = 15;

    public int HoneySum { get { return honeySum; } }

    public void getHoney()
    {
        Console.WriteLine("Worker Bee: I have this much honey: {0}", HoneySum);
    }
}
why do people use:
IWorker worker = new Worker();
worker.getHoney();
instead of just using:
Worker worker3 = new Worker();
worker3.getHoney();
What's the point of an interface reference variable when you can just instantiate the class and use its methods and fields that way?
If your code knows what class will be used, you are right: there is no point in having an interface-typed variable. Just like in your example, that code knows the class being instantiated is Worker, because the code won't magically change and instantiate anything other than Worker. In that sense, your code is coupled to the definition and use of Worker.
But you might want to write some code that works without knowing the class type. Take for example the following method:
public void stopWorker(IWorker worker) {
    worker.stop(); // Assuming IWorker has a stop() method
}
That method doesn't care about the specific class. It would handle anything that implements IWorker.
That is code you don't have to change if you later want to use a different IWorker implementation.
It's all about low coupling between your pieces of code. It's all about maintainability.
Basically, it's considered good programming practice to use the interface as the type. This allows different implementations of the interface to be used without affecting the code, i.e. if the object is passed in, then you can pass in anything that implements the interface without affecting the class. However, if you use the concrete class, you can only pass in objects of that type.
There is a programming principle I cannot remember the name of at this time that applies to this.
You want to keep it as generic as possible without tying to specific implementation.
Interfaces are used to achieve loose coupling between system components. You're not restricting your system to the specific concrete IWorker instance. Instead, you're allowing the consumer to specify which concrete implementation of IWorker they'd like to use. What you get out of it is loosely dependent components and better flexibility.
One major reason is to provide compatibility with existing code. If you have existing code that knows how to manipulate objects via some particular interface, you can instantly make your new code compatible with that existing code by implementing that interface.
This kind of capability becomes particularly important for long-term maintenance. You already have an existing framework, and you typically want to minimize changes to other code to fit your new code into the framework. At least in the ideal case, you do this by writing your new code to implement some number of existing interfaces. As soon as you do, the existing code that knows how to manipulate objects via those interfaces can automatically work with your new class just as well as it could with the ones for which it was originally designed.
Think about interfaces as protocols rather than classes, i.e. "does this object implement this protocol?" as distinct from "is it of this class?". For example, can my number object be serialised? Its class is a number, but it might implement an interface that describes generally how it can be serialised.
A given class of object may actually implement many interfaces.