Calling another repository directly from a service? - oop

I am working on a project that uses the service/repository pattern: AService, ARepository, BService, BRepository. Now, however, it happens that A has a relation to B, so I have to run another query against the B database to merge both objects later. For example: give me all A objects and their related B objects. Can I call BRepository directly from AService, or should I rather go via BService? Is there a rule for this according to clean code?

Sure, you can. Imagine a situation where a user buys something in an online shop. You would need many repositories:
public class OrderService
{
    private readonly IUserRepository _userRepository;
    private readonly IWareHouseRepository _wareHouseRepository;

    public OrderService(IUserRepository userRepository,
        IWareHouseRepository wareHouseRepository)
    {
        _userRepository = userRepository;
        _wareHouseRepository = wareHouseRepository;
    }
}
I would prefer to call the repository instead of the service, because the repository is less specific. I mean, calling another service, ServiceB, just because it contains the desired repository can be dangerous, as business logic may later be added to ServiceB that is not appropriate for your ServiceA.
In addition, circular dependencies can occur if we call a service from another service. So try to use dependency injection. Moreover, try to program to interfaces, not implementations; I mean, you can create interfaces for your service classes and use those interfaces from your client classes (passing the concrete implementation to the constructor).
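A rough sketch of what that can look like (the entity, interface and repository names below are made up for illustration; the point is that AService depends only on repository abstractions):
public interface IBRepository
{
    B FindByAId(int aId);                          // hypothetical query for the related B object
}

public class AService : IAService
{
    private readonly IARepository _aRepository;
    private readonly IBRepository _bRepository;    // injected directly, no detour through BService

    public AService(IARepository aRepository, IBRepository bRepository)
    {
        _aRepository = aRepository;
        _bRepository = bRepository;
    }

    public AWithB GetWithRelation(int aId)
    {
        var a = _aRepository.FindById(aId);
        var b = _bRepository.FindByAId(aId);
        return new AWithB(a, b);                   // merge both objects for the caller
    }
}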

Related

What is the purpose of creating an Application and a Loader in Lagom?

I am reading the following tutorial on Lagom.
I understand DI, but the section also talks about an Application and a Loader. I am unable to understand the purpose of creating the Application and Loader classes. So far, I have been able to run basic services (e.g., the Hello World service from Getting Started) without creating an Application or a Loader class.
Let us consider a sample ApplicationLoader (this is not the only way to do it, but an example for the sake of the question):
abstract class FriendModule(context: LagomApplicationContext)
  extends LagomApplication(context)
  with AhcWSComponents
  with CassandraPersistenceComponents
{
  persistentEntityRegistry.register(wire[FriendEntity])

  override def jsonSerializerRegistry = FriendSerializerRegistry

  override lazy val lagomServer: LagomServer = serverFor[FriendService](wire[FriendServiceImpl])
}

class FriendApplicationLoader extends LagomApplicationLoader {
  override def load(context: LagomApplicationContext): LagomApplication =
    new FriendModule(context) with ConductRApplicationComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new FriendModule(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[FriendService])
}
Firstly, the reason we create a class FriendModule that extends LagomApplication is to mix in all our dependencies. They could be:
If the application relies on Cassandra and the persistence API, then we mix that in. If the application needs to make HTTP calls, then we provide it with the WSClient, etc.
We of course wire in the compile-time dependencies.
By doing the following, we bind the implementation to the declared service:
override lazy val lagomServer: LagomServer = serverFor[FriendService](wire[FriendServiceImpl])
But notice that we still haven't coupled our microservice to a Service Locator.
The role of a service locator is to provide the ability to discover application services and communicate with them. For example: if an application has five different microservices running, each one would need to know the address of every other one for communication to be possible.
The Service Locator takes on the responsibility of keeping the address information of the microservices concerned. In the absence of a service locator, we would need to configure the URL of each microservice and make it available to every other microservice (maybe via a properties file?).
So in the class FriendApplicationLoader we bind our implementation to LagomDevModeComponents in the dev case. LagomDevModeComponents registers our service with the registry. This is how Lagom microservices can magically communicate with each other in such a simple manner.

WebApi / ASP.NET MVC 4 architecture

I have a project using Entity Framework 5 Code First, WebApi, ASP.NET MVC 4, the Repository and Unit of Work patterns, etc.
My architecture is as follows:
One project for the POCOS
One project with the context, Repository, Unit Of Work, etc
One project with the contracts (IRepository, IUnitOfWork, etc)
One WebApi project which holds ApiControllers for each entity of the model (GET, POST, PUT, DELETE).
Now, if I don't want to use a SPA (as I don't have time right now to learn it) and I want to do something quick, what should I do? A new ASP.NET MVC 4 project with controllers inheriting from Controller rather than ApiController, and those controllers consuming the WebApi controllers?
Like this?
public ActionResult Index()
{
    return View(WebApiProj.Uow.Houses.GetAll());
}
That doesn't seem quite right, as it should be making a GET request to the WebApi controller in the other project.
I'm thinking about this architecture because mobile clients, web clients and any other clients would be calling the same services, which sounds good.
Any advice on this architecture? Pros or cons?
I am not sure that what you show is even possible. WebApiProj.Uow.Houses.GetAll() treats Houses as if it were a class with a static GetAll function on it. Houses is an instance class that needs to be instantiated per request and may/should have constructor-injection concerns to handle too; GetAll would normally be an instance method.
Given that you are going to have multiple code clients, i.e. the WebApi controllers and the MVC controllers, you should consider adding a Service Layer to your project: http://martinfowler.com/eaaCatalog/serviceLayer.html.
Your Service Layer will probably take the form of a single class (if this is a smallish project, but split it up if needed); it will have the repositories and the infrastructure code injected. You should end up with a series of CRUD- and use-case-sounding method names that contain the orchestration logic between repositories, factories and unit of work classes.
public interface IMyServiceLayerClass
{
    IEnumerable<House> GetAllHouses();
    House SaveHouse(House house);
    IEnumerable<Windows> GetAllHouseWindows(int houseId);
    // etc.
}
public class MyServiceLayerClass : IMyServiceLayerClass
{
    private readonly IRepository<House> _houseRepository;
    private readonly IUnitOfWork _unitOfWork;
    private readonly IRepositoryTypeB _repositoryTypeB;

    public MyServiceLayerClass(IUnitOfWork unitOfWork, IRepository<House> houseRepository, IRepositoryTypeB repositoryTypeB)
    {
        // Populate the private readonly fields
        _unitOfWork = unitOfWork;
        _houseRepository = houseRepository;
        _repositoryTypeB = repositoryTypeB;
    }

    public IEnumerable<House> GetAllHouses()
    {
        return _houseRepository.GetAll();
    }

    // SaveHouse, GetAllHouseWindows, etc. omitted for brevity
}
Your two types of controller can then accept the service class and contain very thin logic that just forwards on to the service layer.
public class HomeController : Controller
{
    private readonly IMyServiceLayerClass _myServiceLayerClass;

    public HomeController(IMyServiceLayerClass myServiceLayerClass)
    {
        _myServiceLayerClass = myServiceLayerClass;
    }

    public ViewResult Index()
    {
        return View(_myServiceLayerClass.GetAllHouses());
    }
}
Same for the API:
public class HouseController : ApiController
{
    private readonly IMyServiceLayerClass _myServiceLayerClass;

    public HouseController(IMyServiceLayerClass myServiceLayerClass)
    {
        _myServiceLayerClass = myServiceLayerClass;
    }

    public IEnumerable<House> Get()
    {
        return _myServiceLayerClass.GetAllHouses();
    }
}
This will allow you to reuse the same business logic and orchestration across the controllers and abstract the logic away from your WebApi and MVC applications.
This code could easily live in the project that defines the contracts, as it is only dependent upon interfaces. Or you could add its interface to the contracts project too and then create another project, called Domain or Service, which holds the implementation of the service class.
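As a rough sketch of the wiring (the concrete type names EfUnitOfWork, EfHouseRepository and RepositoryTypeB below are hypothetical stand-ins; use whatever implementations and IoC container you already have), both controller types are simply composed against the same service layer:
// Hypothetical composition root; normally your container does this per request.
IUnitOfWork unitOfWork = new EfUnitOfWork();
IRepository<House> houseRepository = new EfHouseRepository(unitOfWork);
IRepositoryTypeB repositoryTypeB = new RepositoryTypeB();

IMyServiceLayerClass service = new MyServiceLayerClass(unitOfWork, houseRepository, repositoryTypeB);

var mvcController = new HomeController(service);   // MVC controller for views
var apiController = new HouseController(service);  // WebApi controller for other clients
In practice you would register MyServiceLayerClass against IMyServiceLayerClass once in your container and let it inject the same abstraction into both controller types.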
I would strongly suggest you leave your controllers to do what they do best: let them handle the delegation of the UI-specific elements, and refactor the non-UI-specific logic into a reusable service layer. This allows unit tests for the controllers to focus on testing for the correct action result, status codes, etc., and allows your domain logic to be tested independently.
Take a look at my answer for another architecture question on MVC. The key for your question is to have an application or domain layer that both the MVC Controller and Web API Controllers can use to access the business model (The M in MVC). You do not want to call the Web API directly from the MVC Controller as it has overhead for serialization and de-serialization that is not required here. Instead call the application/domain layer directly.

IQueryable Repository with StructureMap (IoC) - how do I implement IDisposable?

If I have the following repository:
public IQueryable<User> Users()
{
    var db = new SqlDataContext();
    return db.Users;
}
I understand that the connection is opened only when the query is fired:
public class ServiceLayer
{
    public IRepository repo;

    public ServiceLayer(IRepository injectedRepo)
    {
        this.repo = injectedRepo;
    }

    public List<User> GetUsers()
    {
        return repo.Users().ToList(); // connection opened, query fired, connection closed. (or is it??)
    }
}
If this is the case, do I still need to make my repository implement IDisposable?
The Visual Studio Code Metrics certainly think I should.
I'm using IQueryable because I give control of the queries to my service layer (filters, paging, etc.), so please no architectural discussions over the fact that I'm using it.
BTW - SqlDataContext is my custom class which extends Entity Framework's ObjectContext class (so I can have POCO parties).
So the question - do I really HAVE to implement IDisposable?
If so, I have no idea how this is possible, as each method shares the same repository instance.
EDIT
I'm using dependency injection (StructureMap) to inject the concrete repository into the service layer. This pattern is followed down the app stack - I'm using ASP.NET MVC and the concrete service is injected into the controllers.
In other words:
User requests URL
Controller instance is created, which receives a new ServiceLayer instance, which is created with a new Repository instance.
Controller calls methods on service (all calls use same Repository instance)
Once request is served, controller is gone.
I am using Hybrid mode to inject dependencies into my controllers, which, according to the StructureMap documentation, causes the instances to be stored in HttpContext.Current.Items.
So, I can't do this:
using (var repo = new Repository())
{
    return repo.Users().ToList();
}
As this defeats the whole point of DI.
A common approach used with NHibernate is to create your session (ObjectContext) in begin_request (or some other similar lifecycle event) and then dispose of it in end_request. You can put that code in an HttpModule.
You would need to change your repository so that it has the ObjectContext injected. Your repository should get out of the business of managing the ObjectContext's lifecycle.
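A minimal sketch of that idea, assuming your custom SqlDataContext and illustrative module/key names (not from the original answer):
// The HttpModule owns the per-request context lifetime.
public class ObjectContextModule : IHttpModule
{
    private const string ContextKey = "__requestObjectContext";

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
            HttpContext.Current.Items[ContextKey] = new SqlDataContext();

        application.EndRequest += (sender, e) =>
        {
            var context = HttpContext.Current.Items[ContextKey] as IDisposable;
            if (context != null)
                context.Dispose();
        };
    }

    public void Dispose() { }
}

// The repository just receives the context; it never creates or disposes it.
public class UserRepository : IRepository
{
    private readonly SqlDataContext _context;

    public UserRepository(SqlDataContext context)   // injected per request by the container
    {
        _context = context;
    }

    public IQueryable<User> Users()
    {
        return _context.Users;
    }
}
With that in place the repository no longer needs to implement IDisposable at all; the module (or your container's per-request lifecycle) disposes the context at the end of the request.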
I would say you definitely should. Unless Entity Framework handles connections very differently than LINQ to SQL (which is what I've been using), you should implement IDisposable whenever you are working with connections. It might be true that the connection automatically closes after your transaction successfully completes, but what happens if it doesn't complete successfully? Implementing IDisposable is a good safeguard for making sure you don't have any connections left open after you're done with them. A simpler reason is that it's a best practice to implement IDisposable.
Implementation could be as simple as putting this in your repository class:
public void Dispose()
{
    // Assumes the repository keeps its SqlDataContext instance in a field or property.
    SqlDataContext.Dispose();
}
Then, whenever you do anything with your repository (e.g., via your service layer), you just need to wrap everything in a using block. You could do several "CRUD" operations within a single using block, too, so you only dispose when you're all done.
Update
In my service layer (which I designed to work with LINQ to SQL, but hopefully this applies to your situation), I do new up a new repository each time. To allow for testability, I have the dependency injector pass in a repository provider (instead of a repository instance). Each time I need a new repository, I wrap the call in a using statement, like this:
using (var repository = GetNewRepository())
{
    ...
}

public Repository<TDataContext, TEntity> GetNewRepository()
{
    return _repositoryProvider.GetNew<TDataContext, TEntity>();
}
If you do it this way, you can mock everything (so you can test your service layer in isolation), yet still make sure you are disposing of your connections properly.
If you really need to do multiple operations with a single repository, you can put something like this in your base service class:
public void ExecuteAndSave(Action<Repository<TDataContext, TEntity>> action)
{
    using (var repository = GetNewRepository())
    {
        action(repository);
        repository.Save();
    }
}
action can be a series of CRUD actions or a complex query, but you know that if you call ExecuteAndSave(), when it's all done, your repository will be disposed of properly.
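For example, from within a derived service class a call might look like this (newUser, obsoleteUser and the Add/Delete members are just illustrative names, not part of the original answer):
// Everything inside the lambda runs against a fresh repository, which is
// saved and then disposed when ExecuteAndSave returns.
ExecuteAndSave(repository =>
{
    repository.Add(newUser);
    repository.Delete(obsoleteUser);
});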
EDIT - Advice Received From Ayende Rahien
Got an email reply from Ayende Rahien (of Rhino Mocks, Raven, Hibernating Rhinos fame).
This is what he said:
Your problem is that you initialize your context like this:
_genericSqlServerContext = new GenericSqlServerContext(new EntityConnection("name=EFProfDemoEntities"));
That means that the context doesn't own the entity connection, which means that it doesn't dispose it. In general, it is vastly preferable to have the context create the connection. You can do that by using:
_genericSqlServerContext = new GenericSqlServerContext("name=EFProfDemoEntities");
Which definitely makes sense - however, I would have thought that disposing of a SqlServerContext would also dispose of the underlying connection; guess I was wrong.
Anyway, that is the solution - now everything is getting disposed of properly.
So I no longer need a using block around the repository:
public ICollection<T> FindAll<T>(Expression<Func<T, bool>> predicate, int maxRows) where T : Foo
{
    // don't need this anymore
    // using (var cr = ObjectFactory.GetInstance<IContentRepository>())
    return _fooRepository.Find().OfType<T>().Where(predicate).Take(maxRows).ToList();
}
And in my base repository, I implement IDisposable and simply do this:
Context.Dispose(); // Context is an instance of my custom sql context.
Hope that helps others out.

How do I pass a service to another plugin?

I have a plugin that I will instantiate at runtime, and I want to pass it a WCF service from the application host. The application host is responsible for creating the connection to the service. The reason for this is that a single service can be used by multiple plugins, but the plugins should only know about its interface, since there may be several implementations of IMyPluginServices. For instance, the Run method of the plugin instance would be:
public void Run(IMyPluginServices services)
{
    services.DoSomething();
}
The problem I am running into is that I don't know how to create a service of type IMyPluginServices and pass it to the Run function. The service reference generated by VS 2010 doesn't seem to create an object of type IMyPluginServices that I can pass to it. Any help would be greatly appreciated. Thanks.
When you add a service reference in VS 2010 for a service it generates an interface named IMyService which contains methods for each OperationContract in your service. It also generates a concrete class named MyServiceClient, which can be constructed and then used to invoke your service.
Now, the problem that you're running into, I believe, is that MyServiceClient is a subclass of ClientBase<IMyService>, and does not implement the generated IMyService interface (which is a real pain).
To get around this problem I ended up making a new interface:
public interface IMyServiceClient : IMyService, IDisposable, ICommunicationObject
{
}
(Note: IDisposable and ICommunicationObject are only required if you want your module to be able to detect/react to faulted channels and other such things).
I then extend MyServiceClient with a partial class (in the assembly that contains my WCF Service reference):
public partial class MyServiceClient : IMyServiceClient
{
}
Now in my modules I can accept an IMyServiceClient instead of an IMyService, and still execute all of the methods that I need to. The application in control of the modules can still create instances of MyServiceClient as it always did.
The beauty of this is that your new interface and partial class don't need any actual code - the definitions suffice to get the job done.
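For illustration (hypothetical host-side code, not from the original answer), the host keeps constructing the generated client while the plugin only ever sees the interfaces:
// Plugin side: accept the combined interface instead of IMyService.
public void Run(IMyServiceClient services)
{
    services.DoSomething();   // any operation declared on IMyService
}

// Host side: construct the generated client as before and hand it to the plugin.
var client = new MyServiceClient();
try
{
    plugin.Run(client);
}
finally
{
    client.Close();           // or Abort() if the channel has faulted
}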

WCF data persistence between sessions

We are developing a WCF based system. In the process we are trying to prevent some data from being modified by more than one user at a time. So we decided to have a data structure that contains the necessary information for the locking logic to execute (for example, by storing the IDs of the locked objects).
The problem we are having is persisting that data between sessions. Is there any way we can avoid executing expensive database calls?
I am not sure how we can do that in WCF, since it can only persist data (in memory) during an open session.
Static members of the class implementing the service are shared between sessions and calls.
One option would be to use static members, as Jimmy McNulty said. I have a WCF service that opens network connections based on a user-specified IP address. My service is configured for the PerCall service instance mode. In each session, I check a static data structure to see if a network connection is already open for the specified IP address. Here's an example.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void Start(IPAddress address);
}

[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall)]
public class MyService : IMyService
{
    private static readonly List<IPAddress> _addresses = new List<IPAddress>();

    public void Start(IPAddress address)
    {
        lock (((ICollection)_addresses).SyncRoot)
        {
            if (!_addresses.Contains(address))
            {
                // Open the connection here and then store the address.
                _addresses.Add(address);
            }
        }
    }
}
As configured, each call to Start() happens within its own service instance, and each instance has access to the static collection. Since each service instance operates within a separate thread, access to the collection must be synchronized.
As with all synchronization done in multithreaded programming, be sure to minimize the amount of time spent in the lock. In the example shown, once the first caller grabs the lock, all other callers must wait until the lock is released. This works in my situation, but may not work in yours.
Another option would be to use the Single service instance mode as opposed to the PerCall service instance mode.
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
public class MyService : IMyService
{ ... }
From everything I've read, though, the PerCall seems more flexible.
You can follow this link for differences between the two.
And don't forget that the class that implements your service is just that - a class. It works like all C# classes do. You can add a static constructor, properties, event handlers, implement additional interfaces, etc.
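For instance (a small illustrative addition, not from the original answer), a static constructor can prime the shared state once, before the first call on any instance:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall)]
public class MyService : IMyService
{
    private static readonly List<IPAddress> _addresses;

    static MyService()
    {
        // Runs once per AppDomain, e.g. to preload previously persisted locks.
        _addresses = new List<IPAddress>();
    }

    // ... Start(IPAddress address) as shown above ...
}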
Perhaps a caching framework like Velocity could help you out.
Create a second class, set its InstanceContextMode to Single, move all the expensive methods there, and then use those methods from your original class.
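One way to read that suggestion is a singleton-hosted service that keeps the shared locking state; the sketch below is a hedged illustration (all names are made up, and you would still need to host it and call it from your per-call service):
[ServiceContract]
public interface ILockRegistry
{
    [OperationContract]
    bool TryLock(int objectId);

    [OperationContract]
    void Release(int objectId);
}

// A single instance serves every caller, so the locked-object IDs survive between sessions.
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
public class LockRegistry : ILockRegistry
{
    private readonly HashSet<int> _lockedIds = new HashSet<int>();
    private readonly object _sync = new object();

    public bool TryLock(int objectId)
    {
        lock (_sync)
        {
            return _lockedIds.Add(objectId); // false if the object is already locked
        }
    }

    public void Release(int objectId)
    {
        lock (_sync)
        {
            _lockedIds.Remove(objectId);
        }
    }
}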