In Building a Desktop To-Do Application with NHibernate, Oren Eini (a.k.a. Ayende Rahien) shares that the general recommended NHibernate practice is to use one session per form in a desktop application. In the example given, this is easily implemented because the form presenters have knowledge of the persistence layer and so can create and dispose of sessions to match their life cycles.
In a more complex application, a service/business logic layer (BLL) usually sits between the UI code and the persistence layer. The UI layer knows nothing about persistence (or sessions). In such a case, how does one (or does one?) maintain one session per form?
Thank you,
Ben
The service layer will still get a session injected, either directly or by using a current session context.
You might think about using the MVC architecture, popular in web applications, where the controller deals directly with the domain model and the NHibernate sessions rather than delegating business logic to a business logic layer. The controller can maintain its own sessions however it likes.
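For illustration, here is a minimal sketch of that arrangement, with the presenter owning the session and handing it to a hypothetical service class. Only the NHibernate calls (OpenSession, BeginTransaction, SaveOrUpdate) are real API; every type name here is made up for the example:

    using System;
    using NHibernate;

    // Hypothetical entity for the example.
    public class ToDoItem { public virtual int Id { get; set; } }

    // The service layer never creates sessions itself; it receives one
    // (here via the constructor, alternatively via a current session context).
    public class ToDoService
    {
        private readonly ISession _session;
        public ToDoService(ISession session) { _session = session; }

        public void Save(ToDoItem item)
        {
            using (var tx = _session.BeginTransaction())
            {
                _session.SaveOrUpdate(item);
                tx.Commit();
            }
        }
    }

    // The presenter ties the session's lifetime to the form's lifetime,
    // so the UI layer itself never sees NHibernate.
    public class ToDoPresenter : IDisposable
    {
        private readonly ISession _session;
        public ToDoService Service { get; private set; }

        public ToDoPresenter(ISessionFactory factory)
        {
            _session = factory.OpenSession();      // one session per form
            Service = new ToDoService(_session);
        }

        public void Dispose() { _session.Dispose(); } // disposed with the form
    }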
It depends. In complex applications that use service-oriented architectures, the NHibernate session is generally tied to the service's session model.
You should decide whether the service layer will expose sessions to the desktop clients or not, and implement NHibernate sessions according to the service's needs.
I am working on two different services:
The first one handles all of the write operations through a REST API. It contains all of the business logic required to keep the data in a consistent state, and it persists entities to a database. It also publishes events to a message broker whenever an entity changes (creation, update, deletion, etc.). It's structured in a DDD fashion.
The second one only handles reads, also through a REST API. It subscribes to the same message broker in order to process the events published by the first service, then it saves the received data to an in-memory database for fast reads.
Nothing fancy, just CQRS with eventual consistency.
For the first service, I had a clear mind on how to structure the application:
I have the domain package with subpackages for each different aggregate. Each aggregate has its own domain objects, and its own repository interface.
I have the application package with different application services, and they basically just orchestrate the domain objects and call repositories to persist/update data, and the event publisher to publish domain events. The event publisher interface is also in this package.
I have the infrastructure package, which includes a persistence package, where the repository implementations reside, and a messaging package, where the event publisher implementation resides.
Finally, the interfaces package is where I keep the controllers/handlers for the REST API.
For the second service, I'm very unsure how to structure it. My doubts are the following:
Should I use the repository pattern? To be fair it seems redundant and not very useful in this scenario. There are no domain objects or rules here, because the data being saved/updated has already been validated by the first service.
If I avoid using the repository pattern, I suppose I'd have to inject the database client into my application service and access the data directly. Is this a good practice? If yes, where would the returned objects fit? Would they also be part of the application layer?
Would it make sense to skip the application service entirely and inject the database client straight into the controller/handler? What if the queries are a bit complicated? This would pollute the controllers with a lot of DB logic, making it harder to switch implementations (there would be no interface in this case).
What do you think?
The Query side will only contain the methods for getting data, so it can/should be really simple.
You are right, an abstraction on top of your persistence like a repository pattern can feel redundant.
You can actually call the database in your controller. Even when it comes to testing, on the query side you basically only need integration tests that exercise the actual database; unit tests wouldn't test much.
On the other hand, it can make sense to wrap the database calling logic in a query service similar to a repository. You would inject only that query service interface in your controller, which should use your ubiquitous language! You would have all the db logic in this query service and keep the db complexity there, while keeping the controller really simple.
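A minimal sketch of that shape, with all names hypothetical (IDatabaseClient stands in for whatever in-memory store client you actually use):

    // Hypothetical read model; already validated by the write-side service.
    public class OrderReadModel
    {
        public string Id { get; set; }
        public decimal Total { get; set; }
    }

    // Stand-in for the in-memory database client.
    public interface IDatabaseClient
    {
        T Get<T>(string collection, string id);
    }

    // The query service speaks the ubiquitous language and hides the
    // db details; the controller depends only on this interface.
    public interface IOrderQueries
    {
        OrderReadModel GetOrder(string id);
    }

    public class OrderQueries : IOrderQueries
    {
        private readonly IDatabaseClient _db;
        public OrderQueries(IDatabaseClient db) { _db = db; }

        public OrderReadModel GetOrder(string id)
        {
            // All query complexity stays here, keeping the controller thin.
            return _db.Get<OrderReadModel>("orders", id);
        }
    }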
You can avoid complex queries by having multiple read models based on your events depending on your needs.
I have a business layer and a UI layer in separate projects within the same solution. What I need is to connect this UI with the business layer, which is coded in C#. The UI was created using MVC3 Razor.
What should I use as the model in the MVC application? Do I need to create a service reference to the business layer to generate some proxy?
Can I then use those proxies as models? Please help me.
It would also help if you could provide some tutorials.
I tried this, but I have no further ideas with MVC:
http://www.dotnetfunda.com/articles/article816-understanding-the-basics-of-wcf-service-.aspx
Unless your project (or architect) demands that all methods of your app access a services layer, I would try to avoid using WCF unnecessarily. Think about it: it means all of your data between the web server and the back end goes over the wire, which has implications for performance and data serialization, and also potentially limits the lifespan of database connections and transactions, which can rule out features such as lazy loading.
If you concur, the suggestion would be to ensure all accessible operations in your business layer are exposed on an interface, and then consume or inject the BLL interface directly into your controller.
You need to be careful about the word "Model" in MVC - ASP.NET MVC encourages ViewModels, which are specific to the presentation tier and are passed between Views and Controllers, as opposed to "Entities", which represent the more logical domain model used by business logic and which can be tied to data persistence using an ORM such as EF or NHibernate. The MVC project template lumps everything that isn't a View or a Controller into "Model", which isn't necessarily very helpful.
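A minimal sketch of both points, with hypothetical names throughout (the only real API here is System.Web.Mvc): the controller consumes the injected BLL interface and maps the entity to a presentation-specific ViewModel before handing it to the View:

    using System.Web.Mvc;

    // Hypothetical BLL interface and domain entity.
    public interface ICustomerService { Customer GetCustomer(int id); }
    public class Customer { public int Id { get; set; } public string Name { get; set; } }

    // Presentation-specific ViewModel, distinct from the entity.
    public class CustomerViewModel { public string DisplayName { get; set; } }

    public class CustomerController : Controller
    {
        private readonly ICustomerService _service;
        public CustomerController(ICustomerService service) { _service = service; }

        public ActionResult Details(int id)
        {
            var customer = _service.GetCustomer(id);
            // Map the entity to a ViewModel; the View never sees the entity.
            var model = new CustomerViewModel { DisplayName = customer.Name };
            return View(model);
        }
    }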
However, if you do choose to access your BLL via a WCF Services layer you still have some design decisions to make:
Choose whether you share the back-end entities on the client side, or instead use proxied entities.
Choose whether you consume/inject the WCF service proxies directly into your controller, or create another facade layer (e.g. CAB calls these ServiceAgents). The latter makes sense if separate teams or vendors are building the SOA side and the client side, in order to accommodate changes to interfaces.
What are some questions I can ask myself about our design to identify if we should use DTOs or Self-Tracking Entities in our application?
Here are some things I know of to take into consideration:
We have a standard n-tier application with a WPF/MVVM client, WCF server, and MS SQL Database.
Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for themselves
Models are used on both the client-side and server-side for validation. We would not be binding directly to the DTO or STE
Some Models contain properties that get lazy-loaded from the WCF service if needed
The Database layer spans multiple servers/databases
There are permission checks on the server-side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role
Our resources are limited (time, manpower, etc)
So, how can I determine what is right for us? I have never used EF before so I really don't know if STEs are right for us or not.
I've seen people suggest starting with STEs and only implementing DTOs if it becomes a problem; however, we currently have DTOs in place and are trying to decide whether using STEs would make life easier. We're early enough in the process that switching would not take too long, but I don't want to switch to STEs only to find out they don't work for us and have to switch everything back.
If I understand your architecture correctly, I think it is not a good fit for STEs, because:
Models are used on both the client-side and server-side for validation. We would not be binding directly to the DTO or STE
The main advantage (and the only advantage) of STEs is their tracking ability, but the tracking ability works only if STEs are used on both sides:
The client queries the server for data.
The server queries EF, receives a set of STEs, and returns them to the client.
The client works with the STEs, modifies them, and sends them back to the server.
The server receives the STEs and applies the transferred changes through EF to the database.
In short: there are no additional models on the client or server side. To fully use STEs they must be:
the server-side model (= no separate model),
the data transferred over WCF (= no DTOs),
the client-side model (= no separate model; bind directly to the STEs). Otherwise you will be duplicating tracking logic when handling change events on bound objects and modifying the STEs. (The client and the server share the assembly containing the STEs.)
Any other scenario simply means that you are not taking advantage of the self-tracking ability, and you don't need them.
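To make that round trip concrete, a minimal server-side sketch, assuming entities generated by the STE T4 templates (which also generate the ApplyChanges extension method on the context's entity sets); ShopEntities and Order are hypothetical:

    using System.Linq;

    public class OrderService
    {
        // Hypothetical ObjectContext and STE generated by the T4 templates.
        public Order GetOrder(int id)
        {
            using (var context = new ShopEntities())
                return context.Orders.Single(o => o.Id == id); // STE goes over WCF
        }

        public void UpdateOrder(Order order) // modified STE comes back over WCF
        {
            using (var context = new ShopEntities())
            {
                // Replays the changes recorded by the entity's ChangeTracker.
                context.Orders.ApplyChanges(order);
                context.SaveChanges();
            }
        }
    }

    // On the client, the same STE assembly is referenced, so e.g.
    // order.Quantity = 5; is recorded automatically before the entity
    // is sent back - no manual tracking code on either side.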
What about your other requirements?
Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for them.
This should be possible, but make sure that each "lazy loaded" part is a separate structure - do not build a complex model on the client side. I've already seen questions where people had to send the whole entity graph back for updates, which is not always what you want. Because of that, I think you should not connect the loaded parts into a single entity graph.
There are permission checks on the server-side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role
I'm not sure how you actually want to achieve this. STEs don't use projections, so you must null out fields directly in the entities. Be aware that you must do this while the entity is not in a tracking state, or your masking will be saved to the database.
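For example, a hypothetical masking step on the server might look like this (StopTracking is generated by the STE T4 template; the field name is made up):

    public Order MaskForRole(Order order, bool isPrivilegedUser)
    {
        // Ensure the nulling below is not recorded as a change - otherwise
        // ApplyChanges on a later update could persist the masked values.
        order.StopTracking();
        if (!isPrivilegedUser)
            order.CustomerSsn = null;   // hypothetical sensitive field
        // Tracking restarts automatically when the STE is deserialized
        // on the client, so the mask itself is never seen as an edit.
        return order;
    }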
The Database layer spans multiple servers/databases
That is not a problem for STEs. The server must simply use the correct EF context to load and save the data.
STEs are an implementation of the change set pattern. If you want to use them, you should follow the pattern's rules to take full advantage of it. They can save some time if used correctly, but this speed-up comes at the cost of some architectural decisions. Like any other technology they are not perfect, and sometimes you will find them hard to use (just follow the self-tracking-entities tag to see such questions). They also have some serious disadvantages, but you will not meet those in a .NET WPF client.
You can opt for STEs in the given scenario:
All STEs are POCOs; .NET dynamically adds a layer to them for change tracking.
Use T4 templates to generate the STEs; it will save you time.
Using tools like AutoMapper will save you time when manually converting WCF-returned data contracts to entities or DTOs.
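For example, a hedged sketch of that AutoMapper step (type names hypothetical; the static Mapper API shown matches the AutoMapper releases current at the time):

    using AutoMapper;

    // Hypothetical WCF data contract and local entity.
    public class OrderDataContract { public int Id { get; set; } public decimal Total { get; set; } }
    public class Order { public int Id { get; set; } public decimal Total { get; set; } }

    public static class Mapping
    {
        // Call once at startup; AutoMapper matches properties by name.
        public static void Configure()
        {
            Mapper.CreateMap<OrderDataContract, Order>();
        }

        public static Order ToEntity(OrderDataContract dto)
        {
            return Mapper.Map<OrderDataContract, Order>(dto);
        }
    }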
Pros for STEs -
You don't have to manually track the changes.
In the case of WCF you just have to call ApplyChanges and it will automatically apply the transferred changes to the entity.
Cons for STEs -
STEs are heavier than plain POCOs because of the tracking logic.
Pros for POCOs -
Lightweight.
Can be easily bridged with EF or NHibernate.
Cons for POCOs -
Need to manually track the changes with EF (painful).
POCOs are dynamically proxied and don't play nicely on the wire - see this MSDN article for the workaround, though. So they can be made to work, but IMO you're better off going with STEs, as I believe they align nicely with WPF/MVVM development.
Let me first apologise for the length of the entire topic. It will be fairly long, but I want to be sure that the message comes across clearly and without errors.
Here at the company, we have an existing ASP.NET web application written in C# on the .NET Framework 3.5 SP1. Some time ago an initial API was developed for this web application using WCF and SOAP, to allow external parties to communicate with the application without relying on a browser.
This API survived for some time, but eventually the request came to create a new API that was RESTful and relied on newer technologies. I was given this assignment, and I created the initial API using the Microsoft MVC 2 framework, running inside our ASP.NET web application. It initially took quite some time to get it running properly, but at the moment we're able to make REST calls on the application and receive XML describing our resources.
I attended a Microsoft WebCamp and was immediately sold on the OData concept. It was very similar to what we were doing, but it is a protocol supported by many players rather than our own implementation. Currently I'm working on a PoC (proof of concept) to recreate the API that I developed, using the OData protocol and the WCF Data Services technology.
After searching the Internet for how to get NHibernate 2 working with Data Services, I succeeded in creating a read-only version of the API that allows us to read the entities from the internal business layer by mapping the incoming query requests to that layer.
However, we wish to have a functional API that also allows the creation of entities using the OData protocol, so now I'm a bit stuck on how to proceed. I've been reading the following article: http://weblogs.asp.net/cibrax/default.aspx?PageIndex=3
The above article nicely explains how to map a custom DataService to the NHibernate layer. I've used this as a base to continue from, but I have the "problem" that I don't want to map my requests directly to the database using NHibernate; I wish to map them to our business layer (a separate DLL) that performs a large batch of checks, constraints, and updates based on access rights, privileges, and triggers.
So what I want to ask is: if I, for example, create my own NHibernateContext class as in the above article, but rely on our business layer instead of NHibernate sessions, could it work? I'd probably have to rely on reflection a lot to figure out the type of object I'm working with at runtime and call the correct business classes to perform the updates and deletes.
To demonstrate with a small ASCII picture:
*-----------------*
* Database *
*-----------------*
*------------------------*
* DAL(Data Access Layer) *
*------------------------*
*------------------------*
* BUL (Business Layer)  *
*------------------------*
*---------------* *-------------------*
* My OData stuff* * Internal API *
*---------------* *-------------------*
*------------------*
* Web Application *
*------------------*
So, would this work, or would the performance make it useless?
Or am I just missing the ball here?
The idea is that I wish to reuse whatever logic is stored in the BUL & DAL layer from the OData WCF DataService.
I was thinking about creating new classes that inherit from the EntityModel classes in the Data.Services namespace, and creating a new DataService object that maps all calls to the BUL & DAL & API layers. However, I'm not sure where/how to intercept the requests for creating and deleting resources.
I hope it's a bit clear what I'm trying to explain, and I hope someone can help me on this.
The devil is in the details, but it sounds like the design you're proposing should work.
The DataService class is where you get to define the access rights applicable to everyone, configuration settings, and custom operations. In this scenario, I think you will be focusing more on the data context instead (the 'T' in DataService).
For the context, there are really two interesting paths: reads and writes. Reads happen through the IQueryable entry points. Writing a LINQ provider is a good chunk of work, but NHibernate already supports this, although it would return what I imagine we're calling DAL entities here. You can use query interceptors to do access checks here if you can express those in terms that the database would understand.
The update path is, from what I understand, where you are trying to run more business logic (you mentioned validation, extra updates, etc.). To do this, you'll want to focus on the IUpdatable implementation (IDataServiceUpdateProvider if you're using the latest version). Here you can use whichever objects you want - they could be DAL objects or business objects. You can do everything in the DAL and then run validation on SaveChanges(), or do everything on business objects if they validate as they go.
There are two places where you might 'jump' from one kind of object to another. One is the GetResource() API, where you get an IQueryable, presumably in terms of DAL entities. The other is ResolveResource(), where the runtime asks for an object to serialize, just like it would get from an IQueryable, so it's presumably also a DAL entity.
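A skeleton of the read path described above, hedged heavily: Order, MyContext, and the business-layer call are all hypothetical, while the DataService plumbing, SetEntitySetAccessRule, and [QueryInterceptor] are the real WCF Data Services API. The write path would be an IUpdatable implementation on the same context class, delegating creates/updates/deletes to the BUL:

    using System;
    using System.Collections.Generic;
    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;
    using System.Linq.Expressions;

    // Hypothetical DAL entity.
    public class Order { public int Id { get; set; } public int OwnerId { get; set; } }

    // Hypothetical context exposing the business layer as IQueryable sets.
    public class MyContext
    {
        public IQueryable<Order> Orders
        {
            get { return LoadOrdersFromBusinessLayer().AsQueryable(); }
        }

        private static List<Order> LoadOrdersFromBusinessLayer()
        {
            return new List<Order>(); // stub standing in for the BUL call
        }
    }

    public class MyODataService : DataService<MyContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Orders", EntitySetRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        // Access check on the read path, pushed into the query as a predicate.
        [QueryInterceptor("Orders")]
        public Expression<Func<Order, bool>> OnQueryOrders()
        {
            int currentUserId = 1; // stub: resolve from the request context
            return o => o.OwnerId == currentUserId;
        }
    }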
Hope this helps - doing uniform access over non-uniform APIs can be hard, but often well worth it!
I am new to WCF but have been working on ASMX services for a while.
We have an effort underway to introduce a service layer between our UI/aspx pages and the database layer. Most of the business logic currently exists in code-behind, so the current setup is UI/aspx -> DAL -> Database. We want to do UI/aspx.vb -> WCF -> Business Layer -> DAL -> Database, i.e. by moving everything out of code-behind into WCF. Is this a good approach?
Our future goal is to gain the flexibility to replace the business layer, so that there is no dependency between the UI and the business layer or the database.
We need some guidance on how to use WCF the right way in a layered architecture approach.
Your help is much appreciated.
Thanks.
In principle, service layers are a good idea, but if you're simply introducing one for the sake of it, then it's a lot of effort for no return.
The main benefit you will get from a service layer is a lot more flexibility in the type of clients that can connect to it. At the moment it's likely that the only application that can use your DAL is your ASP.NET app; by adding a service layer and exposing it using WCF endpoints, you'll have the option of connecting other SOAP or REST clients, which is a good thing for future enhancement.
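If you do go that route, a minimal sketch of such an endpoint might look like this (all names hypothetical; the service does nothing but delegate to the business layer):

    using System.ServiceModel;

    public class Product { public int Id { get; set; } public string Name { get; set; } }

    // Hypothetical business-layer entry point.
    public static class ProductLogic
    {
        public static Product GetProduct(int id) { return new Product { Id = id }; }
    }

    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);
    }

    // Thin WCF facade: no logic of its own, so any SOAP or REST clients
    // added later all go through the same business layer.
    public class ProductService : IProductService
    {
        public Product GetProduct(int id)
        {
            return ProductLogic.GetProduct(id);
        }
    }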
However, if you're only ever planning on using your DAL with your existing ASP.NET app, then there's no guarantee that you will gain anything, and in fact adding a service layer could make your life harder. If you do all your data retrieval server-side, i.e. you don't use AJAX, then a service layer would be pretty pointless.