Linking the EF 4.0 context to the WCF call context

I would like to create an Entity Framework 4.0 context when a call is received and invoke SaveChanges when the call finishes (something like JPA).
I think it is a good idea because I can use the same context for the whole call: its lifetime is short and encapsulated enough to be thread-safe, yet long enough for caching calls and the context itself.
Any ideas on the best way to implement this?

Yes, definitely, that's the best way to go.
By default, and by best-practice recommendation, WCF service calls are "per-call", i.e. each request gets a brand new, dedicated instance of the service class all to itself - no messy multithreading/concurrency issues to deal with - just a nice, clean execution environment.
With EF 4, the "disconnected" scenario of sending entities back through WCF was one of the (many) areas the EF team focused on. See some of these resources for more information (a minimal sketch follows the links):
Building N-Tier apps with EF4
More on disconnected Entity Framework
Attaching modified entities in EF4
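As a rough sketch of the pattern, assuming a generated EF 4 ObjectContext named MyEntities and a hypothetical Order entity/IOrderService contract (none of these names come from the question), a per-call service can create the context when the operation starts and save changes on the way out:

    using System.Data;
    using System.ServiceModel;

    // Per-call instancing: WCF builds a fresh service instance for every
    // request, so the context's lifetime naturally matches the call's.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class OrderService : IOrderService
    {
        public void UpdateOrder(Order order)
        {
            using (var context = new MyEntities())
            {
                // Attach the disconnected entity that came over the wire,
                // mark it modified, and commit as the call finishes.
                context.Orders.Attach(order);
                context.ObjectStateManager.ChangeObjectState(order, EntityState.Modified);
                context.SaveChanges();
            }
        }
    }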

Can a BackgroundService run indefinitely in ASP.NET Core 3.1?

I am constructing a web service that receives data and updates it periodically. When a user pings the service, it will send specific data back to the user. In order to receive this data, I have a persistent connection that is created on startup and regularly receives updates, though not at periodic intervals. I have already implemented it, but I would like to add DI and make it into a service. Can this type of problem be solved with a BackgroundService, or is this not recommended? Is there anything better I should use? I originally wanted to just register my connection object as a singleton, but since singletons are not initialized on startup, that does not work so well for me.
I thought I would add an answer to expand on my comment. From what you have described, creating a BackgroundService is likely the best solution for what you want to do.
ASP.NET Core provides an IHostedService interface that can be used to implement a background task or service in your web app. It also provides a BackgroundService class that implements IHostedService and serves as a base class for long-running background services. These services are registered with the host's dependency injection container, typically via AddHostedService in ConfigureServices.
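A minimal sketch of such a service, assuming a hypothetical wrapper around your persistent connection (all names here are illustrative):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Hosting;

    // Started once with the host; runs until shutdown is requested.
    public class ConnectionHolderService : BackgroundService
    {
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            // Open the persistent connection once at startup here.

            while (!stoppingToken.IsCancellationRequested)
            {
                // Receive and process updates as they arrive; the loop
                // ends cleanly when the host shuts down.
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
    }

    // Registration, typically in Startup.ConfigureServices:
    // services.AddHostedService<ConnectionHolderService>();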
You can consume services from the dependency injection container, but you will have to manage their scopes properly when using them. You can decide how to manage your BackgroundService classes in order to fit your needs. It does take an understanding of how to work with Task objects - executing, queueing, monitoring them, etc. - so I'd recommend giving the docs a thorough read, so you don't end up hurting performance or resource usage.
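For example, because a hosted service is effectively a singleton, scoped dependencies (such as an EF DbContext) have to be resolved through an explicitly created scope. A sketch, with MyDbContext as a stand-in name and _services an injected IServiceProvider:

    using Microsoft.Extensions.DependencyInjection;

    // Inside the BackgroundService:
    using (var scope = _services.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
        // Use db only within this scope; it is disposed with the scope.
    }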
I also tend to use Autofac as my DI container rather than the built in Microsoft container, since Autofac provides more features for resolving services and managing scopes. So it's worth considering if you find yourself hitting a wall because of the built in container.
Here's the link to the docs section covering this in much more depth. I believe you can also create standalone service workers now, so that might be worth a look depending on use case.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&tabs=visual-studio
Edit: Here's another link to a guide and example implementation for a microservice background service. It goes a little more in depth on some of the specifics.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice#implementing-ihostedservice-with-a-custom-hosted-service-class-deriving-from-the-backgroundservice-base-class

Zend Framework 3 singletons

I'm creating a new application in Zend Framework 3 and I have a question about a design pattern.
Without going into much detail: this application will have several services, as in, it will be connecting to external APIs and even to multiple databases. The workflow is also very complex; a single action can have multiple flows depending on several external factors (which user is logged in, configs, etc.).
I know about dependency injection and the Zend Framework 3 Service Manager; however, I am worried about instantiating several services when the flow will actually use only a few of them in certain cases. We will also have services depending on other services. For this, I was thinking about using singletons.
Is a singleton really a solution here? I was looking for a way to use singletons in Zend Framework 3 and haven't figured out an easy way, since I can't find a way to use the Service Manager inside a service, as I can't retrieve the instance of the Service Manager outside of the factory system.
What is an easy way to implement singletons in Zend Framework 3?
Why use singletons?
You don't need to worry about registering too many services in your service manager, since they are instantiated only when you get them from the service manager.
Also, don't use the service manager inside any class other than a factory. In ZF3 it was removed from the controllers for a reason; one of those reasons is testability. If all services are injected via a factory, you can easily write tests. And if you read your code next year, you can easily see what dependencies a class needs.
If you find there are too many services being injected inside a class which are not always needed you can:
Use the ProxyManager. This lazy-loads a service: a proxy is injected, but the actual service isn't instantiated until a method is called.
Split the service: move some parts of a service into a new one. E.g. you don't need to place everything in a UserService; you can also have a UserRegisterService, UserEmailService, UserAuthService and UserNotificationsService.
Instead of ZF3, you can also think about zend-expressive. Without getting into too much detail, it is a lightweight middleware framework. You can use middleware to detect what is needed for a request and route to the required action to process it. Something like this can probably also be done in ZF3, but maybe someone else can explain how to do it there.

Difference between Entity framework self tracking entities vs Unit of work

What is the difference between using Entity Framework self-tracking entities and implementing a Unit of Work architecture? As I understand it, both keep track of the objects and commit changes with a single DB call, so I can't figure out the difference between them. Can someone point out which should be used in which case?
I'm using entity framework 5 with WCF service application.
The purpose of self-tracking entities is that you don't need to keep the DbContext/ObjectContext alive to track changes to the entity object(s). The main feature is that you can send an entity to another process (or another host entirely, such as a WCF service on another machine) that makes changes to the entity object, then returns it to the calling process with the change tracking still intact.
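A sketch of that round trip, assuming an STE-generated Order entity, a MyEntities context and the ApplyChanges extension method that the STE template generates (all names are illustrative):

    // Client side: the entity records its own changes while disconnected.
    void ShipOrder(IOrderServiceClient client)
    {
        Order order = client.GetOrder(1);
        order.Status = "Shipped";    // tracked by the STE itself, no context needed
        client.UpdateOrder(order);   // the change log travels with the entity
    }

    // Service side: replay the recorded changes onto a fresh context.
    public void UpdateOrder(Order order)
    {
        using (var context = new MyEntities())
        {
            context.Orders.ApplyChanges(order); // extension method from the STE template
            context.SaveChanges();
        }
    }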
UoW coordinates changes made between multiple entity objects (greatly simplified explanation).
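By contrast, with a plain Unit of Work the context itself does the tracking for as long as it lives - in EF, DbContext already implements the pattern. A minimal sketch (ShopContext and the entities are stand-in names):

    using (var context = new ShopContext())
    {
        // Several changes are collected by the same tracking context...
        var customer = context.Customers.Find(1);
        customer.Name = "Renamed";
        context.Orders.Add(new Order { CustomerId = customer.Id });

        // ...and committed together, in one call and one transaction.
        context.SaveChanges();
    }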
According to MSDN, self-tracking entities are no longer recommended:

    STEs No Longer Recommended
    We no longer recommend using the STE template; it continues to be available to support existing applications. Visit the N-Tier page for other options we recommend for N-Tier scenarios.

http://msdn.microsoft.com/en-us/data/jj613924.aspx

OData WCF Data Service with NHibernate and corporate business logic

Let me first apologise for the length of the entire topic. It will be fairly long, but I wish to be sure that the message comes across clearly and without errors.
Here at the company, we have an existing ASP.NET web application, written in C# on the .NET Framework 3.5 SP1. Some time ago an initial API was developed for this web application using WCF and SOAP, to allow external parties to communicate with the application without relying on the browsers.
This API survived for some time, but eventually the request came to create a new API that was RESTful and relied on newer technologies. I was given this assignment, and I created the initial API using the Microsoft MVC 2 framework, running inside our ASP.NET web application. It initially took quite some time to get it running properly, but at the moment we're able to make REST calls on the application to receive XML detailing our resources.
I attended a Microsoft WebCamp and was immediately sold on the OData concept. It was very similar to what we were doing, but it is a protocol supported by many players instead of our own implementation. Currently I'm working on a PoC (proof of concept) to recreate the API I developed using the OData protocol and the WCF Data Services technology.
After searching the Internet for how to get NHibernate 2 to work with Data Services, I succeeded in creating a read-only version of the API that allows us to read out the entities from the internal business layer by mapping the incoming query requests to our business layer.
However, we wish to have a functional API that also allows the creation of entities using the OData protocol, so now I'm a bit stuck on how to proceed. I've been reading the following article: http://weblogs.asp.net/cibrax/default.aspx?PageIndex=3
The above article nicely explains how to map a custom DataService to the NHibernate layer. I've used this as a base to continue from, but I have the "problem" that I don't want to map my requests directly to the database using NHibernate; I wish to map them to our business layer (a separate DLL) that performs a large batch of checks, constraints and updates based upon access rights, privileges and triggers.
So what I want to ask is: if I, for example, create my own NHibernateContext class as in the above article, but rely on our business layer instead of NHibernate sessions, could it work? I'd probably have to rely on reflection a lot to figure out the type of object I'm working with at runtime and call the correct business classes to perform the updates and deletes.
To demonstrate with a small ASCII picture:

    *--------------------------*
    *         Database         *
    *--------------------------*
    *--------------------------*
    *  DAL (Data Access Layer) *
    *--------------------------*
    *--------------------------*
    *   BUL (Business Layer)   *
    *--------------------------*
    *----------------* *--------------*
    * My OData stuff * * Internal API *
    *----------------* *--------------*
    *--------------------------*
    *      Web Application     *
    *--------------------------*
So, would this work, or would the performance make it useless?
Or am I just missing the ball here?
The idea is that I wish to reuse whatever logic is stored in the BUL & DAL layers from the OData WCF Data Service.
I was thinking about creating new classes that inherit from the EntityModel classes in the Data.Services namespace and creating a new DataService object that maps all calls to the BUL & DAL & API layers. I'm however not sure where/how to intercept the requests for creating and deleting resources.
I hope it's somewhat clear what I'm trying to explain, and I hope someone can help me with this.
The devil is in the details, but it sounds like the design you're proposing should work.
The DataService class is where you get to define the access rights applicable to everyone, configuration settings, and custom operations. In this scenario, I think you will be focusing more on the data context instead (the 'T' in DataService).
For the context, there are really two interesting paths: reads and writes. Reads happen through the IQueryable entry points. Writing a LINQ provider is a good chunk of work, but NHibernate already supports this, although it would return what I imagine we're calling DAL entities. You can use query interceptors to do access checks here if you can express those in terms that the database would understand.
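For illustration, a query interceptor on the read path might look like this (the Orders entity set, Order type and CurrentUser helper are hypothetical):

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;

    // Runs for every query against the "Orders" entity set; the returned
    // predicate is merged into the query, so it can still be translated
    // down to the database.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> OnQueryOrders()
    {
        return o => o.OwnerId == CurrentUser.Id; // access check in queryable terms
    }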
The update path is from what I understand where you are trying to run more business logic (you mentioned validation, extra updates, etc). To do this, you'll want to focus on the IUpdatable implementation (IDataServiceUpdateProvider if you're using the latest version). Here you can use whichever objects you want - they could be DAL objects or business objects. You can do everything in the DAL and then run validation on SaveChanges(), or do everything on business objects if they validate as they go.
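A sketch of how the write path could delegate to the business layer instead of NHibernate, with BusinessFacade as a hypothetical entry point into the BUL (only the most relevant members are shown; a real class has to implement all of IUpdatable):

    using System.Collections.Generic;

    // Would implement System.Data.Services.IUpdatable in full.
    public class BusinessLayerUpdateContext
    {
        private readonly List<object> _pending = new List<object>();

        public object CreateResource(string containerName, string fullTypeName)
        {
            // Let the business layer create the object so that its defaults,
            // access checks and triggers apply.
            object resource = BusinessFacade.Create(fullTypeName);
            _pending.Add(resource);
            return resource;
        }

        public void SetValue(object targetResource, string propertyName, object propertyValue)
        {
            // Reflection bridges the untyped Data Services calls onto the
            // strongly typed business objects.
            targetResource.GetType().GetProperty(propertyName)
                          .SetValue(targetResource, propertyValue, null);
        }

        public void SaveChanges()
        {
            foreach (object resource in _pending)
                BusinessFacade.Save(resource); // validation and extra updates run here
            _pending.Clear();
        }
    }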
There are two places where you might 'jump' from one kind of object to another. One is in the GetResource() API, where you get an IQueryable, presumably in terms of DAL entities. The other is in ResolveResource(), where the runtime is asking for an object to serialize, just like it would get from an IQueryable, so it's presumably also a DAL entity.
Hope this helps - doing uniform access over non-uniform APIs can be hard, but often well worth it!

ADO.NET Data Service advantages/disadvantages over WCF service

In my case, I have a WCF service which acts as a DAL and does all the CRUD operations.
I just came across the new ADO.NET Data Services; I've read a bit about them, but I'm not actually sure when and where to use them.
Just to add more: my new project is in ASP.NET MVC, so is it wise to use an ADO.NET Data Service rather than a WCF service with it, which would probably act somewhat like the 'M' (Model) of MVC?
First, my advice would be to write your MVC code so that it is oblivious to what the back-end data model is. Abstract away any dependencies right from the beginning.
As for deciding whether or not to use WCF, I'd suggest that you decide whether or not you'll want to reuse the data component that you write. If you have plans on using your data code in a Silverlight, WPF, or any other format, then I'd suggest sticking with WCF.
Also, remember that you can always simply wrap the ADO.NET data services with a WCF component and still enable the reuse scenario. Get the best of both worlds!
One big advantage is that with ADO.NET Data Services you don't have to specifically write all the services for basic CRUD operations, as you would with WCF. Since ADO.NET Data Services expose those operations out of the box, you can focus your code writing and debugging on business logic.
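As a sketch of how little code the basic CRUD surface takes (MyEntities stands in for a generated EF context, and the Products entity set is illustrative):

    using System.Data.Services;
    using System.Data.Services.Common;

    public class ProductsDataService : DataService<MyEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Opt the entity set in; create/read/update/delete then come
            // for free through the OData URI conventions.
            config.SetEntitySetAccessRule("Products", EntitySetRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

A consumer could then issue queries such as /ProductsDataService.svc/Products?$filter=Price gt 100&$top=10 without any additional service code.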
The big advantage of WCF Data Services, and IMO it fits your need, is when your service layer is used for CRUD only. You do not have (and do not need) any business logic in it.
As Tad pointed out, the reuse is an advantage, but on the other hand, WCF Data Services will give your web app, or any consumer, a very flexible way to query data. With WCF, you'll have to write code to give the consumers the same query flexibility OData gives.
I had an experience with this recently. I created a service layer with WCF, and in many cases the service operations were used only to call a repository. There weren't any business rules, only query logic; the consumer was able to pass criteria in and get a result back.
The requirements changed, and we realized we could make it simpler (less code to maintain) by using WCF Data Services.