I am reading the following tutorial on Lagom.
I understand DI, but the section also talks about the Application and Loader. I am unable to understand the purpose of creating an Application and a Loader class. So far, I have been able to run basic services (e.g., the hello world service from GettingStarted) without creating an Application or Loader class.
Let us consider a sample ApplicationLoader (this is not the only way to do it, but it serves as an example for the sake of the question):
abstract class FriendModule(context: LagomApplicationContext)
  extends LagomApplication(context)
  with AhcWSComponents
  with CassandraPersistenceComponents {

  persistentEntityRegistry.register(wire[FriendEntity])

  override def jsonSerializerRegistry = FriendSerializerRegistry

  override lazy val lagomServer: LagomServer = serverFor[FriendService](wire[FriendServiceImpl])
}

class FriendApplicationLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    new FriendModule(context) with ConductRApplicationComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new FriendModule(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[FriendService])
}
First, the reason we create a class FriendModule that extends `LagomApplication` is to mix in all our dependencies. For example:
If the application relies on Cassandra and the persistence API, we mix that in. If the application needs to make HTTP calls, we provide it the WSClient, etc.
We of course wire in the compile-time dependencies.
By doing the following, we bind the implementation to the declared service:
override lazy val lagomServer: LagomServer = serverFor[FriendService](wire[FriendServiceImpl])
But notice that we still haven't coupled our microservice to a Service Locator.
The role of a service locator is to provide the ability to discover application services and communicate with them. For example, if an application has five different microservices running, then each one needs to know the address of every other one for communication to be possible.
The Service Locator takes on the responsibility of keeping track of the addresses of the microservices concerned. In its absence, we would need to configure the URL of each microservice and make it available to every other microservice (maybe via a properties file?).
So in the class FriendApplicationLoader we bind our application to LagomDevModeComponents in the dev case. LagomDevModeComponents registers our service with the service registry. This is how Lagom microservices can magically communicate with each other in such a simple manner.
I am in the process of migrating to NServiceBus v6 and have hit a roadblock while removing references to IBus.
We build upon a common library for many of our applications (website, microservices, etc.), and this library has the concept of an IEventPublisher, which is essentially a Send and Publish interface. This library has no knowledge of NSB.
We can then supply the implementation of this IEventPublisher via DI from the application, which allows the library's message passing to be replaced with another technology very easily.
So what we end up with is an implementation similar to this:
public class NsbEventPublisher : IEventPublisher
{
    private readonly IEndpointInstance _instance;

    public NsbEventPublisher(IEndpointInstance endpoint)
    {
        _instance = endpoint;
    }

    public void Send(object message)
    {
        // NSB v6 Send returns a Task; this simplified wrapper fires and forgets.
        _instance.Send(message, new SendOptions());
    }

    public void Publish(object message)
    {
        _instance.Publish(message, new PublishOptions());
    }
}
This is a simplification of what actually happens but illustrates my problem.
Now, when the DI container is asked for an IEventPublisher, it knows to return an NsbEventPublisher, and it knows how to resolve the IEndpointInstance because we bind it to the container as a singleton in the website's bootstrapper.
All is fine and my site runs perfectly.
I am now migrating the microservices (running in NSB.Host), and the DI container is refusing to resolve IEndpointInstance when resolving the dependencies within a message handler. Reading the docs, this is intentional, and I should be using IMessageHandlerContext when in a message handler.
https://docs.particular.net/nservicebus/upgrades/5to6/moving-away-from-ibus
The docs even allude to the issue I have, in the bottom example around the class MyContextAccessingDependency. The suggestion is to pass the message context through the method, which puts a hard dependency on the code running in the context of a message handler.
What I would like to do is have access to a sender/publisher, and have the DI container give me the correct implementation. The code should not need any concept of the caller, or of whether it was called from a message handler or from a self-hosted application that just wants to publish.
I see that there are two interfaces for communicating with the "Bus": IPipelineContext and IMessageSession, which the IMessageHandlerContext and IEndpointInstance interfaces extend respectively.
What I am wondering is whether there is some unification of the two interfaces that gets bound by NSB into the container, so I can accept an interface that sends/publishes messages. In a handler it would be an IMessageHandlerContext, and in my self-hosted application the IEndpointInstance.
For now I am looking at changing my implementation of IEventPublisher depending on the application hosting. I was just hoping there might be some discussion about how this approach is modelled without a reliable interface to send/publish, irrespective of what initiated the execution of the code path.
A few things to note before I get to the code:
The abstraction-over-abstraction promise never works. I have never seen the argument "I'm going to abstract ESB/Messaging/Database/ORM so that I can swap it in the future" work. Ever.
When you abstract message-sending functionality like that, you lose some of the features the library provides. In this case, you can't take part in 'Conversations' or use 'Sagas', which hinders your overall experience; e.g., when using monitoring tools and viewing diagrams in ServiceInsight, you won't see the whole picture but only nuggets of messages passing through the system.
Now, in order to make that work, you need to register IEndpointInstance in your container when your endpoint starts up. That interface can then be used in your dependency injection, e.g. in NsbEventPublisher, to send the messages.
Something like this (depending on which IoC container you're using; here I assume Autofac):
static async Task AsyncMain()
{
    IEndpointInstance endpoint = null;

    var builder = new ContainerBuilder();

    // Register a lambda rather than the (still null) variable itself, so the
    // container resolves whatever 'endpoint' points to at resolution time,
    // i.e. the instance assigned below once the endpoint has started.
    builder.Register(x => endpoint)
           .As<IEndpointInstance>()
           .SingleInstance();

    // Endpoint configuration goes here...

    endpoint = await Endpoint.Start(busConfiguration)
        .ConfigureAwait(false);
}
The issues with using IEndpointInstance / IMessageSession are mentioned here.
I'm building a Web API application using OWIN and hosting it in IIS. I now want to preload some data from a database so it can be used in the controller methods without being loaded from the database on every request. I have also followed this guide to set up Windsor as the IoC container. Does anyone know how to properly set this up?
It's easy to do. In the Startup class, populate one or more classes with the database data, just as you would normally load data into a data store.
Register each of these classes with your IoC container from the Startup class. It is best to separate the controller from the data layer, so create a business logic layer or a repository layer that takes your data store class in its constructor, like this:
public class Service
{
    private readonly IDataStore _dataStore;

    public Service(IDataStore dataStore)
    {
        _dataStore = dataStore;
    }
}
Register the service with your IoC and you should be good to go.
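For illustration, a minimal sketch of what that registration in the Startup class might look like, assuming Castle Windsor (the CachedDataStore class and its LoadFromDatabase method are hypothetical names, not part of the original answer):

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var container = new WindsorContainer();

        // Load the data once at startup and keep the populated store as a singleton.
        var dataStore = new CachedDataStore();   // hypothetical IDataStore implementation
        dataStore.LoadFromDatabase();

        container.Register(
            Component.For<IDataStore>().Instance(dataStore),
            Component.For<Service>().LifestyleTransient());

        // ... wire the container into Web API as described in the Windsor guide.
    }
}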
Hope that helps.
One guy explained it this way, but it is not very clear to me how to implement it:
From experience:
Using different bindings: for example, BasicHttpBinding for Java clients and WSHttpBinding for .NET clients. Also HTTPS for some and HTTP for others...
Dividing and exposing different contracts/interfaces. For example, you have one interface that exposes many operations and a cut-down interface that does only basic things; you publish the second one externally, so internal clients use the endpoint for the extended interface while external clients use the other one.
For example:
interface IFoo
{
    void DoBasic();
}

interface IFooInternal : IFoo
{
    void DoMore();
}

Now you have one class implementing both:

public class Foo : IFooInternal
{
    ....
}
And now you expose only one of them to the outside world, while the implementation is in the same class.
The thing I do not understand is how to design my service contract in such a way that I expose a few operations to external clients and the extended features to internal clients. So, if possible, please help me understand with a small program and code showing how this can be done through multiple endpoints in a WCF service. Thanks.
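For illustration only, here is a minimal sketch of how the split contracts from the explanation above might be exposed on two separate endpoints when self-hosting. The addresses, bindings, and the Program class are assumptions, not part of the original answer; under IIS the equivalent endpoints would be declared in config, and the contracts would additionally need the usual [ServiceContract]/[OperationContract] attributes:

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(Foo), new Uri("http://localhost:8080/foo"));

        // External clients only ever see the cut-down contract.
        host.AddServiceEndpoint(typeof(IFoo), new BasicHttpBinding(), "public");

        // Internal clients use a separate endpoint exposing the extended contract.
        host.AddServiceEndpoint(typeof(IFooInternal), new WSHttpBinding(), "internal");

        host.Open();
        Console.WriteLine("Host running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}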
TL;DR:
What is a good and testable way to implement the dependency between the ViewModels and the WCF services in a MVVM client?
Please read the rest of the question for more details about the problems I encountered while trying to do this:
I am working on a Silverlight client that connects to a WCF service, and I want to write unit tests for the client.
So I'm looking for a good solution for using the WCF clients in my ViewModels and testing that interaction. I have found two solutions so far:
Solution 1: This is actually how I have implemented it until now:
public class ViewModelExample
{
    public ViewModelExample(IServiceClient client)
    {
        client.DoWorkCompleted += ...
        client.DoWorkAsync();
    }
}

// This is what the interface looks like
public interface IServiceClient
{
    event EventHandler<AsyncCompletedEventArgs> DoWorkCompleted;
    void DoWorkAsync();
}

// I was able to put the interface on the generated clients because they are partial classes, like this:
public partial class GeneratedServiceClient : IServiceClient
{
}
The good part: it's relatively easy to mock
The bad part: my service client lives as long as my ViewModel, and when I have concurrent requests I don't know which answer belongs to which request.
Solution 2: Inspired by this answer: WCF Service Client Lifetime.
public class ViewModelExample
{
    public ViewModelExample(IServiceFactory factory)
    {
        var client = factory.CreateClient();
        client.DoWorkCompleted += ...
        client.DoWorkAsync();
    }
}
The good part: each request is on a different client, so no more problems with matching requests with answers.
The bad part: it's more difficult to test. I would have to write mocks for both the factory and the WCF client every time. This is not something I would like to do, since I already have 200 tests... :(
So my question is, how do you do it? How do your ViewModels talk to the WCF services, where do you inject the dependency, and how do you test that interaction?
I feel that I'm missing something..
Try having a Func<IServiceClient> injected into your VM instead of a client instance; you'll have a 'language-level factory' injected instead of building a class for this. In the factory method you can instantiate your client however you want (each call could create a new instance, for example).
The downside is that you'll still have to touch your tests for the most part, but I assume it will be less work:
public ViewModelExample(Func<IServiceClient> factoryMethod)
{
    var client = factoryMethod();
    client.DoWorkCompleted += ...
    client.DoWorkAsync();
}
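As a rough sketch of how the test side might look (StubServiceClient is a hypothetical hand-rolled stub; any mocking framework would do just as well):

// In a test, the factory method can simply hand back a stub or mock:
IServiceClient stubClient = new StubServiceClient();
var viewModel = new ViewModelExample(() => stubClient);

// In the production composition root, the delegate creates a fresh client per call
// (assumes the generated client has a usable parameterless constructor):
Func<IServiceClient> factoryMethod = () => new GeneratedServiceClient();
var productionViewModel = new ViewModelExample(factoryMethod);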
The WCF service should have its own tests that confirm its functionality.
You should then mock this WCF service and write unit tests for your consumers.
Unfortunately, it's a pain and something we all have to do. Be pragmatic and get it done, it will save you getting bitten in the future.
Are you using an IoC container by any chance? If you were, this problem would be largely mitigated by the container (you would simply register the IServiceClient dependency to be created brand new upon each request).
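For example, with a Windsor-style container (purely for illustration; the registration syntax of your container of choice will differ), that registration might look like this:

// Every resolve of IServiceClient yields a brand new client instance,
// so each request goes out on its own client.
container.Register(
    Component.For<IServiceClient>()
             .ImplementedBy<GeneratedServiceClient>()
             .LifestyleTransient());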
If that's not the case, then
I would have to write mocks for both the factory and the wcf client every time
is how you deal with this kind of "problem". The cost is relatively small, probably 2-3 extra lines of code per test (all you have to do is set up the factory mock to return the service mock, which you need anyway).
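For illustration, those extra lines might look roughly like this (assuming Moq, which is only an example choice):

// The factory mock simply returns the service client mock.
var clientMock = new Mock<IServiceClient>();
var factoryMock = new Mock<IServiceFactory>();
factoryMock.Setup(f => f.CreateClient()).Returns(clientMock.Object);

var viewModel = new ViewModelExample(factoryMock.Object);
// ... raise clientMock's DoWorkCompleted event or verify DoWorkAsync as needed.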
Let's say we have a back-end that needs to talk to N external systems using some kind of Web Services.
What I do is: create a separate project and generate the proxy classes there (using the service's WSDL in the WCF Service Reference dialog).
About the project name suffix:
I first thought of XxAdapter. But then I started creating classes with additional logic, like circuit breakers, so I ended up with XxAgent (from ServiceAgent).
What should be the "correct" suffix for the name of such projects?
The most appropriate suffix is "Proxies", for several reasons:
Your component contains all the web service proxy classes.
If you want to make calls to several service proxies transparent, you can create a new class, e.g. MyServiceProxy, and perform the actions there:
public class MyServiceProxy
{
    public void DoSomething()
    {
        var serviceProxy1 = new ServiceProxy1();
        serviceProxy1.DoOneThing();

        var serviceProxy2 = new ServiceProxy2();
        serviceProxy2.DoAnotherThing();
    }
}
The additional class keeps callers from depending on the concrete service proxies directly, so you can interchange them as you wish.
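If you also want the wrapper itself to be swappable (in tests, for instance), a small interface in front of it might look like the following sketch; the IMyServiceProxy and Consumer names are hypothetical:

// Callers depend on this interface rather than on MyServiceProxy
// or the generated proxy classes directly.
public interface IMyServiceProxy
{
    void DoSomething();
}

public class MyServiceProxy : IMyServiceProxy
{
    public void DoSomething()
    {
        // calls ServiceProxy1 / ServiceProxy2 as shown above
    }
}

public class Consumer
{
    private readonly IMyServiceProxy _proxy;

    public Consumer(IMyServiceProxy proxy)
    {
        _proxy = proxy;
    }

    public void DoWork() => _proxy.DoSomething();
}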
Cheers.