How to use SOA with NHibernate? - nhibernate

First of all, let me clarify some words: when I use the word "user" I mean "application user", and a "patient" is an item from the model layer.
Let's now explain the context:
A client application has a "Get patient" button, an "Update" button, a "patient name" text box, and a grid that displays the patient returned after clicking "Get patient".
On the server side I've got a WCF method GetPatient(string name) that looks up the requested patient and applies some business logic to a PatientEntity used with NHibernate. That method returns a PatientDto (a mapping from PatientEntity). I've also got an Update(PatientDto patient) method to update the modified patient.
The user can modify the returned PatientDto and click on the "Update" button.
So far I have two ideas to manage a "session" through this scenario:
First idea: I expose an "ID" property in my DTO, so when the user clicks "Update" I look up, on the server side, the patient with the specified ID using NHibernate's "GetByID()", update the result with the data from the PatientDto, and call NHibernate's "Update()" method.
Second idea: I manually create on the server side a CustomSession class (I use this name for clarity) that encapsulates an ISession and exposes a unique session id that travels between the client and the server. So, when the client sends the PatientDto and the unique session id to the server, I can retrieve the CustomSession and update the patient with the ISession's Update() method.
I don't like either idea: the first adds a lot of overhead and doesn't use NHibernate's features, and the second requires the developer to manage the CustomSession id between calls himself, which is error prone.
Furthermore, I'm sure NHibernate provides such a mechanism, although I googled and found nothing about it.
Then my questions are:
What mechanism (pattern) should I use? Of course, the mechanism should support an entity's object graph, not just a single entity!
Does NHibernate provide such a mechanism?
Thank you in advance for your help,

I don't think this is a Hibernate issue; in my opinion this is a common misunderstanding. Hibernate is an OR mapper and therefore handles your database objects and provides basic transactional support. That's almost it.
The solution for session management in client-server environments is, for example, to use Spring.NET, which provides solutions for your problem (search for OpenSessionInView) and integrates quite well with NHibernate.
The stateless approach you mentioned offers many advantages compared to a session-based solution. For example, think about concurrency: if your commit is stateless, you can simply react to a failed Save() operation on the client side, for example by reloading the view.
Besides your two points, a good argument for the use of Hibernate is, if done right, security against SQL injection.

One reason I usually don't bother with ORM tools/frameworks in client-server programming is that you usually land at your first solution with them anyway. It helps make the server side more stateless (and thus more scalable) at the expense of some reasonably cheap database calls (a fetch-by-PK is usually very cheap, and if you immediately write it anyway, guess what the database is likely to do first on a write? Grab the old record - so SELECT/UPDATE may be only marginally slower than just UPDATE because the SELECT seeds the cache).
Yes, you're doing stuff manually that you want to push out to the ORM - such is life. And don't fret over performance until you've measured it - for this particular case, I wonder if you really can measure it.

Here's a summary of what has been said:
An NHibernate session lasts only for the duration of the service call, that is, the call to "GetPatient(string name)" and no longer.
The server works with entities and returns DTOs to the client.
The client displays and updates DTOs, and calls the service "Update(PatientDto patient)".
When the client triggers the service "Update(PatientDto patient)", the mapper retrieves the patient entity by the ID contained in the DTO with "GetById(int id)" and updates the properties that need to be updated.
Finally, the server calls NHibernate's "Update()" to persist all the changes.
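To make that concrete, here is a minimal sketch of what the Update service call could look like, assuming an injected ISessionFactory, a PatientEntity mapped by NHibernate, and hand-written DTO mapping (the Id and Name properties are illustrative, not taken from the original code):
using NHibernate;

public class PatientService
{
    private readonly ISessionFactory sessionFactory;   // assumed to be injected

    public void Update(PatientDto dto)
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            // The session lives only for the duration of this service call.
            var patient = session.Get<PatientEntity>(dto.Id);
            patient.Name = dto.Name;   // copy only the fields the client is allowed to change
            tx.Commit();               // dirty checking flushes the UPDATE on commit;
                                       // no explicit Update() is needed because Get() returned an attached entity
        }
    }
}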

Related

Change the connection string dynamically (per request) on Entity Framework 7 / MVC 6

I have an MVC 6 application in which I need to connect to a different database (i.e. a different physical file but the same schema) depending on who is accessing it.
That is: each customer of the web application will have its data isolated in a SQL database (on Azure, with different performance and price levels, etc.), but all those databases will share the same relational schema and, of course, the Entity Framework context class.
var cadConexion = @"Server=(localdb)\mssqllocaldb;Database=DBforCustomer1;Trusted_Connection=True;";
services.AddEntityFramework().AddSqlServer().AddDbContext<DAL.ContextoBD>(options => options.UseSqlServer(cadConexion));
The problem is that if I register the service this way I've tied it to a concrete database for a concrete customer, and I don't know if I can change it later when the middleware execution starts (this would be a good point, as by then I can know who is ringing at the door).
I know I can construct the database context passing the connection string as a parameter, but this would imply creating the database context at runtime (early in the pipeline) for every request, and I don't know if this could be inefficient or a bad practice. Furthermore, I think this way I can't register the database context as a service for injecting it into my controllers...
What is the correct approach for this? Anybody has a similar configuration working on production?
Thanks in advance
I would have preferred not to answer my own question, but after long and deep research on the internet I feel I must offer guidance to those with a similar problem, so I can save them a lot of time testing multi-connection scenarios, which is quite laborious...
I've finally used a (very recent) Azure feature and set of APIs called "Elastic Database Tools" which, to be concise, is a set of tools from Microsoft aimed at addressing this exact problem, especially for SaaS (software as a service) scenarios (as mine is).
Here is a good link to start with:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-get-started/
Good luck with your projects!
First of all, I do not recommend swapping connection strings per request.
But that's not the question. You can do this. You will need to pass your DbContext a new connection string.
.AddDbContext caches the connection string in the dependency injection container, so you cannot use DI to make this scenario work. Instead, you will need to instantiate your DbContext yourself and pass it a new connection string.
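As a rough sketch of that manual approach (assuming your DAL.ContextoBD derives from DbContext; the tenant-resolution helper below is hypothetical), you could pick the connection string per request and build the context yourself:
using Microsoft.EntityFrameworkCore;   // Microsoft.Data.Entity in the EF7 pre-releases

public class ContextoBD : DbContext
{
    private readonly string _connectionString;

    public ContextoBD(string connectionString)
    {
        _connectionString = connectionString;
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Point EF at whichever database belongs to the current customer.
        optionsBuilder.UseSqlServer(_connectionString);
    }
}

// Early in the pipeline, once you know who is "ringing at the door":
// var db = new ContextoBD(ResolveConnectionStringForCustomer(httpContext));   // hypothetical helper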

Entity Framework Code First DTO or Model to the UI?

I am creating a brand new application, including the database, and I'm going to use Entity Framework Code First. It will also use WCF for services, which opens it up to multiple UIs for different devices, as well as making the service API usable from other, unknown apps.
I have seen this batted around in several posts here on SO but I don't see direct questions or answers pertaining to Code First, although there are a few mentioning POCOs. I am going to ask the question again so here it goes - do I really need DTOs with Entity Framework Code First or can I use the model as a set of common entities for all boundaries? I am really trying to follow the YAGNI train of thought so while I have a clean sheet of paper I figured that I would get this out of the way first.
Thanks,
Paul Speranza
There is no definite answer to this problem and it is also the reason why you didn't find any.
Are you going to build services providing CRUD operations? That generally means your services will be able to return, insert, update and delete entities as they are, i.e. you will always expose the whole entity, or a single exactly defined serializable part of the entity, to all clients. But once you do this it is probably worth checking out WCF Data Services.
Are you going to expose a business facade working with entities? The facade will provide real business methods instead of just CRUD operations. These business methods will receive some data object and decompose it into multiple entities inside the wrapped business logic. Here it makes sense to use a specific DTO for every operation. The DTO will transfer only the data needed for the operation and return only the data the client is allowed to see.
A very simple example: suppose your entities keep information like LastModifiedBy. This is probably information you want to pass back to the client. In the first scenario you have a single serializable set, so you will pass it to the client and the client will pass it, possibly modified, back to the service. Now you must verify that the client didn't change the field, because he probably didn't have permission to do that, and you must do this for every single field the client didn't have permission to change. In the second scenario your DTO with updated data will simply not include this property (a specialized DTO for your operation), so the client will not be able to send you a new value at all.
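A small sketch of the second scenario (the Order class and its properties are made up for the example): the update DTO simply leaves out the fields the client may not change.
public class Order                 // entity
{
    public int Id { get; set; }
    public string ShippingAddress { get; set; }
    public string LastModifiedBy { get; set; }   // server-owned field
}

public class UpdateOrderDto        // specialized DTO for the update operation
{
    public int Id { get; set; }
    public string ShippingAddress { get; set; }
    // no LastModifiedBy: the client cannot send a new value at all
}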
It also relates to the way you want to work with data and where your real logic will be applied. Will it be in the service or on the client? How will you ensure that the client does not post invalid data? Do you want to restrict invalid data by logic or by the specific transferred objects?
I strongly recommend a dedicated view model.
Doing this means:
You can design the UI (and iterate on it) without having to wait to design the data model first.
There is less friction when you want to change the UI.
You can avoid security problems with auto-mapping/model binding "accidentally" updating fields which shouldn't be editable by the user -- just don't put them in the view model.
However, with a WCF Data Service, it's hard to ignore the advantage of being able to write the service in essentially one line when you expose entities directly. So that might make the most sense for the WCF/server side.
But when it comes to UI, you're "gonna need it."
do I really need DTOs with Entity Framework Code First or can I use the model as a set of common entities for all boundaries?
Yes, the same set of POCOs / entities can be used for all boundaries.
But a set of mappers / converters / configurators will be needed to adapt entities to some generic structures of each layer.
For example, when entities are configured with DataContract and DataMember attributes, WCF is able to transfer domain objects' state without creating any special classes.
Similarly, when entities are mapped using Entity Framework fluent mapping api, EF is able to persist domain objects' state in database without creating any special classes.
In the same way, entities can be configured for use in any layer by means of that layer's infrastructure, without creating any special classes.
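For example, a single entity class can carry the WCF attributes and an EF fluent mapping at the same time; here is a rough sketch with an illustrative Patient entity (not code from the question):
using System.Runtime.Serialization;
using System.Data.Entity.ModelConfiguration;

[DataContract]
public class Patient
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// EF Code First fluent mapping for the same class - no extra DTO class needed.
public class PatientConfiguration : EntityTypeConfiguration<Patient>
{
    public PatientConfiguration()
    {
        HasKey(p => p.Id);
        Property(p => p.Name).HasMaxLength(100);
    }
}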

Understanding LINQ-to-SQL entities? Please help

I'm having a little difficulty understanding some architectural principles when developing a service. Say a call to a WCF service returns to a client a collection of items (Orders), which are custom classes built from LINQ-to-SQL entity data, and each Order has a collection of OrderItems (one-to-many) that are also built from the same LINQ-to-SQL context. If I make another call to the service, request a particular OrderItem and modify its details on the client side, how does the first collection realise that one of its Orders' OrderItems has changed on the client side? The approach I'm taking is: when changing an OrderItem I send the OrderItem object to the WCF service for storage via LINQ-to-SQL commands, but to update the collection that the client first fetched I use the IList interface to search for and replace each instance of the OrderItem. Subscribing each item to the PropertyChanged event also gives some control. This works, with certain obvious limitations, but how would one 'more correctly' approach this, perhaps by managing all of the data changes from the service itself? ORM? Static classes? If this is too difficult a question to answer, perhaps some link or even a chat group where I can discuss this, as I understand this site is geared towards quick Q/A topics rather than guided tutorial discussions.
Thanks all the same.
Chris Leach
If you have multiple clients changing the same data at the same time, at the end of the day your system must implement some sort of concurrency control. Broadly that's going to fall into one of two categories: pessimistic or optimistic.
In your case it sounds like you are venturing down the optimistic route, whereby anyone can access the resource via the service - it does not get locked or accessed exclusively. What that means is ultimately you need to detect and resolve conflicts that will arise when one client changes the data before another.
The second architectural requirement you seem to be describing is some way to synchronize changes between clients. This is a very difficult problem. One way is to build some sort of publish/subscribe system whereby, after a client retrieves some resources from the service, it also subscribes to be notified of changes to those resources. You can do this in either a push- or pull-based fashion (pull is probably simpler, i.e. just poll for changes).
Fundamentally you are trying to solve a reasonably complex problem, but it's also one which pops up quite frequently in software.
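To make the optimistic route concrete, here is a rough LINQ-to-SQL sketch (the data context, table and member names are illustrative): conflicts are detected on SubmitChanges, and the client can then be told to reload.
using System.Data.Linq;   // ConflictMode, ChangeConflictException, RefreshMode
using System.Linq;

// Server-side update, e.g. inside the WCF service implementation:
public void SaveOrderItem(OrderItem updatedItem)
{
    using (var db = new OrdersDataContext())       // hypothetical LINQ-to-SQL data context
    {
        var item = db.OrderItems.Single(i => i.Id == updatedItem.Id);
        item.Quantity = updatedItem.Quantity;      // apply the client's change

        try
        {
            // Conflict detection requires a version/timestamp column or UpdateCheck on the mapped columns.
            db.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            // Another client changed the same row first: resolve here,
            // or report back so the client can reload its view.
            db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
            db.SubmitChanges();
        }
    }
}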

How would I know if I should use Self-Tracking Entities or DTOs/POCOs?

What are some questions I can ask myself about our design to identify if we should use DTOs or Self-Tracking Entities in our application?
Here's some things I know of to take into consideration:
We have a standard n-tier application with a WPF/MVVM client, WCF server, and MS SQL Database.
Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for themselves
Models are used on both the client-side and server-side for validation. We would not be binding directly to the DTO or STE
Some Models contain properties that get lazy-loaded from the WCF service if needed
The database layer spans multiple servers/databases
There are permission checks on the server-side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role
Our resources are limited (time, manpower, etc)
So, how can I determine what is right for us? I have never used EF before so I really don't know if STEs are right for us or not.
I've seen people suggest starting with STEs and only implementing DTOs if it becomes a problem; however, we currently have DTOs in place and are trying to decide whether using STEs would make life easier. We're early enough in the process that switching would not take too long, but I don't want to switch to STEs only to find out they don't work for us and have to switch everything back.
If I understand your architecture, I think it is not good for STEs because:
Models are used on both the client-side and server-side for validation. We would not be binding directly to the DTO or STE
The main advantage (and the only advantage) of STEs is their tracking ability, but the tracking ability works only if STEs are used on both sides:
The client queries the server for data
The server queries EF, receives a set of STEs, and returns them to the client
The client works with the STEs, modifies them, and sends them back to the server
The server receives the STEs and applies the transferred changes to EF => the database
In short: there are no additional models on the client or server side. To fully use STEs they must be:
The server-side model (= no separate model)
The data transferred over WCF (= no DTOs)
The client-side model (= no separate model; bind directly to the STEs). Otherwise you will be duplicating tracking logic when handling change events on bound objects and modifying STEs. (The client and the server share the assembly containing the STEs.)
Any other scenario simply means that you don't take advantage of the self-tracking ability, and you don't need them.
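For illustration, that full round trip might look roughly like this (ClinicEntities and Patient are made-up names; ApplyChanges is the extension method generated by the STE context T4 template):
using System.Linq;

public class PatientService
{
    // Server: load an STE and return it over WCF.
    public Patient GetPatient(int id)
    {
        using (var ctx = new ClinicEntities())
            return ctx.Patients.Single(p => p.Id == id);
    }

    // Client side (it shares the STE assembly): bind directly to the returned Patient;
    // its ChangeTracker records modifications such as patient.Name = "...".

    // Server: apply the transferred changes and save.
    public void UpdatePatient(Patient patient)
    {
        using (var ctx = new ClinicEntities())
        {
            ctx.Patients.ApplyChanges(patient);   // replays the recorded changes into the context
            ctx.SaveChanges();
        }
    }
}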
What about your other requirements?
Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for them.
This should probably be possible, but make sure that each "lazy loaded" part is a separate structure - do not build a complex model on the client side. I've already seen questions where people had to send the whole entity graph back for updates, which is not what you always want. Because of that I think you should not connect loaded parts into a single entity graph.
There are permission checks on the server-side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role
I'm not sure how you actually want to achieve this. STEs don't use projections, so you must null the fields directly in the entities. Be aware that you must do this while the entity is not in a tracking state, or your masking will be saved to the database.
The database layer spans multiple servers/databases
This is not a problem for STEs. The server must use the correct EF context to load and save the data.
STEs are an implementation of the change set pattern. If you want to use them you should follow their rules to take full advantage of the pattern. They can save some time if used correctly, but this speed-up comes at the cost of some architectural decisions. Like any other technology they are not perfect, and sometimes you will find them hard to use (just follow the self-tracking-entities tag to see the questions). They also have some serious disadvantages, but in a .NET WPF client you will not hit them.
You can opt for STEs in the given scenario:
All STEs are POCOs; .NET dynamically adds a layer to them for change tracking.
Use the T4 templates to generate the STEs; it will save you time.
Using a tool like AutoMapper will save you the time of manually converting the data contract returned by WCF to an entity or DTO.
Pros for STE -
You don't have to manually track the changes.
In the case of WCF you just have to call ApplyChanges and it will automatically refresh the entity
Cons for STE -
STEs are heavier than plain POCOs because of the dynamic tracking
Pros for POCO -
Light weight
Can be easily bridged with EF or nH
Cons for POCO -
You need to manually track the changes with EF (painful)
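For comparison, the "manual" POCO route with EF typically means attaching the detached object yourself and marking it modified, roughly like this (ClinicContext and Patient are illustrative names):
using (var ctx = new ClinicContext())                   // a plain DbContext
{
    ctx.Patients.Attach(patient);                       // attach the detached POCO
    ctx.Entry(patient).State = EntityState.Modified;    // mark the whole entity as changed
    ctx.SaveChanges();
}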
POCOs are dynamically proxied and don't play nicely on the wire; see this MSDN article for the workaround, though. So they can be made to work, but IMO you're better off going with STEs, as I believe they align nicely with WPF/MVVM development.

WCF service design question

Is it OK, from your real-world experience, to define a service contract with one method which accepts some object as a request and returns some other object as the result of that request? What I mean is that instead of having methods for creating, deleting, editing and searching customers, I would have these activities encapsulated within DataContracts, and what the service would do after receiving such a DataContract is take the appropriate action. The service interface would be as simple as this:
interface ISomeService
{
    IMessageResult Process(IMessageRequest msg);
}
So IMessageRequest would have a field named OperationType = OperationTypes.CreateCustomer, and the rest of the fields would provide enough information for the service to create a Customer object or a record in the database or whatever. And IMessageResult could have a field with some code to indicate whether the customer was created or not.
What I'm trying to achieve with such a design is the ability to easily delegate an IMessageRequest to other internal services that the client side wouldn't even know about. Another benefit I see is that if we have to add some operation on customers, we only provide an additional DataContract for this operation and don't have to change anything on the service interface side (changing the service interface is what I want to avoid at all costs - not adding new operations, but changing the interface :)
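A rough sketch of how the single Process method might dispatch on the operation type (the handler, enum and result names here are illustrative, not part of the actual design):
public class SomeService : ISomeService
{
    private readonly ICustomerHandler customerHandler;   // internal service the client never knows about

    public SomeService(ICustomerHandler customerHandler)
    {
        this.customerHandler = customerHandler;
    }

    public IMessageResult Process(IMessageRequest msg)
    {
        switch (msg.OperationType)
        {
            case OperationTypes.CreateCustomer:
                return customerHandler.Create(msg);
            case OperationTypes.DeleteCustomer:
                return customerHandler.Delete(msg);
            default:
                return new ErrorResult("Unknown operation");   // illustrative error result
        }
    }
}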
So, what do you think? Is it a good way of handling complicated business processes? What are the pitfalls? What could be better?
If I've duplicated some other thread and there are already answers to my question, please provide links, because I didn't find them.
Short answer: yes, this could be a very good idea (and one I have implemented in one form or another a couple of times).
A good starting point for this type of approach are the posts by Davy Brion on what he calls the request/response layer. He consolidated his initial ideas & thoughts into a very usable OSS project called Agatha, which I am proposing at a customer site as I write this.
This is exactly what we're doing here where I work. It works great and is easy for all the developers to understand, and really easy to wire up new methods/class/etc.