I need to handle cross-cutting concerns throughout my microservice built with Vert.x. Some time back we used Spring AOP, but the tech stack has changed to Vert.x and I'm trying to figure out whether something similar is even possible. I'd like suggestions on all the possible options in this case.
AOP is essentially just another way to generate proxy classes. In Vert.x, you would usually achieve the same goal using Handlers and the RoutingContext.
So let's say you'd like to have logging around your endpoints. You can do it as follows:
Router filterRouter = Router.router(vertx);
filterRouter.get().handler(ctx -> {
    System.out.println("Before");   // runs before the downstream handler
    ctx.next();                     // hand the request to the next matching handler
    System.out.println("After");    // runs once next() returns
});
filterRouter.mountSubRouter("/", router);
If you don't want to proceed with the filter chain, you just won't invoke ctx.next() in your handler.
I am trying to inject my own IRouteMessagesToEndpoints into NServiceBus with StructureMap, as I need to redirect various messages to different endpoints depending on some business logic (not via namespace/assembly/type). This would allow it to fire using bus.Send() and be configured to our requirements. I thought this was possible, but I can't seem to get it to work. I have tried using Configure.Component() and ObjectFactory.Configure() for the injection, and both run without any exception, but when I debug my implementation of the interface the breakpoint is never hit.
My question is, can it be done this way (there's nothing on the internet that covers this)? I notice that the EndPointRouter in the GatewayReceiver has a setter, but I cannot work out how to access the property.
Unfortunately, even though IRouteMessagesToEndpoints is a public interface, it is not currently possible to replace the default implementation, sorry!
Please raise an issue about it in https://github.com/Particular/NServiceBus.Gateway/issues/new so we can discuss it better.
I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage, but I want to set up my front-end and back-end in as standard and clean a way as possible before going on to develop further API endpoints and Ember models.
I could not find a comprehensive guide about serialization and deserialization done by the store, but I read the documentation for DS.ActiveModelSerializer and DS.ActiveModelAdapter (which says the same things!) along with their parent classes.
What are the exact roles of adapter and serializer and how are they related?
Considering the tools I am using, do I need to implement both of them?
Both Grape/ActiveModelSerializer and Ember Data offer customization. As my back-end and front-end exist only for each other and not for anything else, which side is it better to customize?
Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, you want an API that is able to "talk to anything", in case a device client is required or the API ends up being consumed by other parties in the future. That suggests configuring your Ember app to match your back-end rather than the other way around. But again, this is a subjective question/answer, because no one but you and your team can tell what's good for the scenario you are (or might be) in while the app gets created.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, you may need to create an adapter for your application to define a global namespace, if you have one (if your controllers sit behind a prefix such as localhost:3000/api/products, set namespace: 'api'; otherwise this is not necessary), and similarly the host if you're using CORS. If you're using ember-cli, you may also want to set the security policy in the environment config to allow connections to other domains for CORS. This can be done per model as well. But again, all of this is subjective, as it depends on what you want and need to achieve.
Need some expert opinion on this case study.
Problem Statement/Scenario:
My WCF client/proxy continually requires lookup data from the relevant WCF service. More precisely, I have a WCF service that provides location data (city/country etc.) from a database (although the data is cached on the service). I want to avoid the serialization/deserialization cost (the object contains a lot of associated properties as well as inner objects) and the service operation execution, for better throughput.
A few days back I studied WCF behaviors and WCF extensibility points, and I found an interesting article on MSDN (http://msdn.microsoft.com/en-us/magazine/cc163302.aspx). After reading this article I thought it could help me improve the performance of my service. So before implementing it, I want to confirm whether I'm thinking in the right direction or whether another solution can solve my problem.
I'm thinking of implementing Dispatcher Extensions to solve this problem instead of Proxy (Client) Extensions. I have the following questions:
I) At which level (proxy or service) do I need to implement the extensions?
II) If I implement Dispatcher Extensions, my call will not be sent to the actual service, and I'll save the serialization/deserialization cost. Right or wrong?
III) Implementing Dispatcher Extensions is also better in my case, because I don't need to bother about which proxy interface method was called, as the caching logic is on the service side. Right or wrong?
Please suggest a better solution, as I want to save the serialization/deserialization cost as well as implement data caching.
Thanks in advance
/Rizwan
There are two ways I've incorporated WCF caching in the past:
Using Castle DynamicProxy to generate proxies for my ServiceContract interfaces. These dynamic proxies use interceptors to perform caching. If the data is not in the cache, the interceptor creates a real WCF client (a ChannelFactory&lt;TInterface&gt;) and invokes the WCF operation, then caches the result. I like this approach because the caching implementation isn't really WCF specific (a rough sketch follows after these two options).
Implement an IRealProxy for WCF which wraps the actual remote operations and performs caching/retrieval as necessary. In principle, this is similar to approach 1, but the implementation is specific to WCF (with remnants of .NET Remoting). I used this approach before migrating to #1, because approach 1 let me accomplish caching on both the client and the server in an implementation-agnostic manner. At the time, I rolled my own RealProxy, but it looks like someone else has since done the same and posted the code: http://blog.ngommans.ca/index.php?/archives/31-Custom-Proxy-Generation-using-RealProxy.html
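For illustration only, here is a minimal sketch of what approach 1 can look like; the ILocationService contract, the endpoint name "locationEndpoint", and the naive string-based cache key are placeholders, and a real implementation would also want cache expiration:

using System.Collections.Concurrent;
using System.ServiceModel;
using Castle.DynamicProxy;

[ServiceContract]
public interface ILocationService            // hypothetical contract, mirroring the lookup data in the question
{
    [OperationContract]
    string GetCityName(int cityId);
}

public class CachingInterceptor : IInterceptor
{
    private readonly ConcurrentDictionary<string, object> cache =
        new ConcurrentDictionary<string, object>();
    private readonly ChannelFactory<ILocationService> factory;

    public CachingInterceptor(ChannelFactory<ILocationService> factory)
    {
        this.factory = factory;
    }

    public void Intercept(IInvocation invocation)
    {
        // Build a cache key from the operation name and its arguments.
        string key = invocation.Method.Name + "|" + string.Join("|", invocation.Arguments);

        invocation.ReturnValue = cache.GetOrAdd(key, _ =>
        {
            // Cache miss: open a real channel, invoke the operation, cache the result.
            ILocationService channel = factory.CreateChannel();
            try
            {
                return invocation.Method.Invoke(channel, invocation.Arguments);
            }
            finally
            {
                var client = (IClientChannel)channel;
                if (client.State == CommunicationState.Faulted) client.Abort();
                else client.Close();
            }
        });
    }
}

// Usage: callers only ever see ILocationService; the caching is transparent.
// var factory = new ChannelFactory<ILocationService>("locationEndpoint");
// ILocationService client = new ProxyGenerator()
//     .CreateInterfaceProxyWithoutTarget<ILocationService>(new CachingInterceptor(factory));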
I was planning to use Service Routing (on WCF/REST) to do some common tasks before a request hits the actual service. Now that I've read more about it, it looks like REST is not yet supported by RoutingService, and the suggested approach is to use System.Web.Routing or ARR.
What needs to happen in the router is a key validation, a header value extraction and versioning.
ARR doesn't look right for this as it just routes and there is no "handler" we have access to. System.Web.Routing looks like a lot of custom implementation which might undermine the efficiency of WCF.
An old-school alternative I'm thinking of is to put the common functionality into one chain-of-responsibility implementation and just compose it into every service. This has the disadvantage of being referenced in N places for N services, but it increasingly looks like the only alternative if I don't want to mess with WCF's handling of endpoints.
I'm looking for advice on the right way to do this, and any samples.
I haven't tried it, but writing a custom service behavior may solve your problem. Take a look here: Extending WCF with Custom Behaviors.
The idea is to extend the WCF engine with a custom behavior and then attach that behavior to your service. This is transparent to the services.
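A minimal sketch of that idea, assuming the key arrives in a custom SOAP header called "api-key" under a made-up namespace; the validation itself is just a placeholder:

using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class KeyValidationInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
                                      InstanceContext instanceContext)
    {
        // Runs before the request reaches the service operation:
        // extract the header here and validate it.
        int index = request.Headers.FindHeader("api-key", "http://example.com/headers");
        if (index < 0)
            throw new FaultException("Missing api-key header.");
        return null; // correlation state (not needed here)
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}

public class KeyValidationBehavior : IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
                                      ServiceHostBase serviceHostBase)
    {
        // Attach the inspector to every endpoint of the service.
        foreach (ChannelDispatcher dispatcher in serviceHostBase.ChannelDispatchers)
            foreach (EndpointDispatcher endpoint in dispatcher.Endpoints)
                endpoint.DispatchRuntime.MessageInspectors.Add(new KeyValidationInspector());
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
                                     ServiceHostBase serviceHostBase,
                                     Collection<ServiceEndpoint> endpoints,
                                     BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription,
                         ServiceHostBase serviceHostBase) { }
}

// The behavior can then be added in code via
// host.Description.Behaviors.Add(new KeyValidationBehavior());
// or exposed through a BehaviorExtensionElement for config-based attachment.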
Take a look at HttpMessageHandlers in the new WCF Web API project (http://wcf.codeplex.com). This mechanism allows you to do something similar to Rack or WSGI. I have a couple of examples of what you can do with them on my blog: http://www.bizcoder.com/index.php/2011/05/22/how-to-get-ahead-with-messagehandlers/
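For a rough idea of the shape, here is a hedged sketch of a message handler; note that it is written against the System.Net.Http.DelegatingHandler type that the later Web API stack uses, and the original WCF Web API bits on codeplex differ in the details:

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class KeyValidationHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Runs before the request reaches the service: validate a key header,
        // pull out versioning information, and so on.
        if (!request.Headers.Contains("api-key"))
        {
            var reply = new HttpResponseMessage(HttpStatusCode.Unauthorized);
            return Task.FromResult(reply);
        }
        return base.SendAsync(request, cancellationToken);
    }
}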
When working with WCF services, is it better to create a new instance of the service every time you use it? Or is it better to create one and re-use it? Why is either approach better? Is it the same for asynchronous proxies?
Or is it better to create one and re-use it?
Do not start implementing your own pooling. That has already been done in the framework: a WCF proxy uses cached channel factories underneath, so creating new proxies is not overly expensive (but see Guy Starbuck's reply regarding sessions and security!).
Also be aware that a proxy times out after a certain idle time (10 minutes by default).
If you want more explicit control, you might consider using ChannelFactories and channels directly instead of the easy, out-of-the-box ClientBase proxies.
http://msdn.microsoft.com/en-us/library/ms734681.aspx
And a "must read" regarding this topic is:
http://blogs.msdn.com/wenlong/archive/2007/10/27/performance-improvement-of-wcf-client-proxy-creation-and-best-practices.aspx
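A rough sketch of the ChannelFactory approach, assuming a placeholder ILookupService contract and endpoint name; the point is that the factory is the expensive part and is kept around, while the channels created from it per call are cheap:

using System;
using System.ServiceModel;

[ServiceContract]
public interface ILookupService            // placeholder contract for illustration
{
    [OperationContract]
    string GetValue(int id);
}

public static class LookupClient
{
    // Building the factory is the costly part; channels created from it are cheap.
    private static readonly ChannelFactory<ILookupService> factory =
        new ChannelFactory<ILookupService>("lookupEndpoint");

    public static TResult Call<TResult>(Func<ILookupService, TResult> operation)
    {
        ILookupService channel = factory.CreateChannel();
        try
        {
            TResult result = operation(channel);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            // A faulted channel cannot be closed cleanly; abort it instead.
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}

// Usage:
// string value = LookupClient.Call(svc => svc.GetValue(42));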
In addition to the things Guy Starbuck mentioned, a key factor is the security model you're using (in conjunction with the session requirements): if you don't re-use your proxy, you can't re-use security sessions.
This means that the client would have to authenticate itself with each call, which is wasteful.
If, however, you decide this is what you wish to do, make sure to configure the client not to establish a security context (as you will never use it); this will save you a couple of round trips to the server :-)
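For example, a minimal sketch (assuming message security over wsHttpBinding) of turning off the secure-conversation session so that no security context is established:

using System.ServiceModel;

public static class ClientBindingFactory
{
    public static WSHttpBinding CreateBindingWithoutSecureConversation()
    {
        var binding = new WSHttpBinding(SecurityMode.Message);
        // No WS-SecureConversation session will be established, so the
        // bootstrap handshake round trips are skipped for each new proxy.
        binding.Security.Message.EstablishSecurityContext = false;
        return binding;
    }
}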
One more point to consider is channel faults. By design, WCF does not allow you to use a client proxy after an unhandled exception has faulted the channel.
IMyContract proxy = new MyContractClient();
try
{
    proxy.MyMethod();   // an unhandled service-side exception faults the channel
}
catch
{ }

// Throws CommunicationObjectFaultedException: the proxy is unusable once faulted.
proxy.MyMethod();
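One way to recover, continuing the snippet above: once the channel has faulted, abort the old proxy and create a fresh one before calling again.

MyContractClient proxy = new MyContractClient();
try
{
    proxy.MyMethod();
}
catch
{
    // The channel is faulted now; Close() would throw, so Abort() and start over.
    proxy.Abort();
    proxy = new MyContractClient();
}

// This call goes out on a healthy channel.
proxy.MyMethod();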
There is a corollary here to Server Activated Objects in .NET Remoting (one of the technologies replaced by WCF), which have two modes: "Single Call" (stateless) and "Singleton" (stateful).
The approach you take in WCF should be based on your performance and scaling requirements in conjunction with the needs of your consumers, as well as server-side design constraints.
If you have to maintain state between calls to the service, then you will obviously want a stateful instance; but if you don't, you should probably implement it as stateless (single-call), which should scale better (you can more easily load-balance, etc.).
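To make that concrete in WCF terms, here is a hedged sketch (the service classes and the ILookupService contract are placeholders): the two Remoting modes map roughly onto InstanceContextMode.PerCall and InstanceContextMode.Single.

using System.ServiceModel;

[ServiceContract]
public interface ILookupService            // placeholder contract, same shape as in the earlier sketch
{
    [OperationContract]
    string GetValue(int id);
}

// Stateless, "Single Call" style: a new instance per request,
// the easiest shape to load-balance and scale out.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class StatelessLookupService : ILookupService
{
    public string GetValue(int id)
    {
        return "value-" + id;
    }
}

// Stateful, "Singleton" style: one shared instance for all callers,
// appropriate only when state genuinely has to live in the service.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class StatefulLookupService : ILookupService
{
    private int callCount;

    public string GetValue(int id)
    {
        callCount++;
        return "value-" + id + " (call #" + callCount + ")";
    }
}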