Traversing an object graph from an n-tier client - WCF

I'm a student currently dabbling in a .NET n-tier app that uses NHibernate + WCF + WPF.
One of the things that is done quite terribly is object graph serialisation. In fact, it isn't done at all: currently associations are ignored and we are using DTOs everywhere.
As far as I can tell, one way to proceed is to predefine which objects and collections should be loaded and serialised to go across the wire, thus presenting at least some associations to the client. However, this seems limited, inflexible and inconsistent (can you tell that I don't like this idea?).
One option that occurred to me was to simply replace the NHibernate proxies that lazy-load collections on the client tier with a "disconnected proxy" that would retrieve the associated objects over the wire. This would mean we'd have to expand our web service signature a little and do some hackery on our generated proxies, but it seemed like a good T4/other code-gen experiment.
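A rough sketch of what I mean by a disconnected proxy, where the fetch delegate would wrap a call to one of the new web service operations (all names here are hypothetical):

    using System;
    using System.Collections;
    using System.Collections.Generic;

    // Stands in for a lazy NHibernate collection on the client tier; the
    // first access fetches the items from the service instead of the database.
    // Hypothetical sketch only; error handling and change tracking omitted.
    public class DisconnectedCollection<T> : IList<T>
    {
        private readonly Func<IList<T>> fetchOverWire; // e.g. wraps a WCF channel call
        private IList<T> items;

        public DisconnectedCollection(Func<IList<T>> fetchOverWire)
        {
            this.fetchOverWire = fetchOverWire;
        }

        private IList<T> Items
        {
            get { return items ?? (items = fetchOverWire()); }
        }

        public T this[int index] { get { return Items[index]; } set { Items[index] = value; } }
        public int Count { get { return Items.Count; } }
        public bool IsReadOnly { get { return Items.IsReadOnly; } }
        public void Add(T item) { Items.Add(item); }
        public void Clear() { Items.Clear(); }
        public bool Contains(T item) { return Items.Contains(item); }
        public void CopyTo(T[] array, int index) { Items.CopyTo(array, index); }
        public int IndexOf(T item) { return Items.IndexOf(item); }
        public void Insert(int index, T item) { Items.Insert(index, item); }
        public bool Remove(T item) { return Items.Remove(item); }
        public void RemoveAt(int index) { Items.RemoveAt(index); }
        public IEnumerator<T> GetEnumerator() { return Items.GetEnumerator(); }
        IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    }

The generated proxies would then wire each association up to a service call, something like customer.Orders = new DisconnectedCollection<Order>(() => client.GetOrdersForCustomer(customer.Id)), which is where the T4 experiment would come in.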
As far as I can tell this is a common stumbling block, but after doing a lot of reading I haven't been able to find any good, generally accepted solutions. I'm looking for a bit of direction as much as any particular solution, but if there is an easy way to make the client "feel" connected, please let me know.

You ask a very good question that unfortunately does not have a very clean answer. Even if you were able to get lazy loading to work over WCF (which we were able to do), you would still have issues using the proxy interceptor. Trust me on this one: you want POCO objects on the client tier!
What you really need to consider is the approach that, from the research I have seen, has emerged as the industry standard for this problem: persistence vs. usage, also known as persistence ignorance. In other words, your object model and mappings represent your persistence domain, but they do not match your ideal usage scenarios. You don't want to bring the whole database down to the client just to display a couple of properties, right?
It seems like such a simple problem, but the solution is either very simple or very complex. On one hand you can design your entities around your usage scenarios, but then you end up with a proliferation of entities that makes your object domain difficult to maintain. On the other, you still want the rich object model relationships in order to write granular business logic.
To simplify this problem, let's examine the two main gaps we need to fill: the gap between the database and the service layer, and the gap between the service and the client. NHibernate fills the first one just fine by providing an ORM to load data into your objects. It does a decent job, but to achieve great performance it needs to be tweaked using custom loading strategies. I digress…
The second gap, between the server and the client, is where things get dicey. To simplify, what if you did not send any mapped entities over the wire to the client at all? Try creating a mechanism that translates business entities into DTOs and, likewise, DTOs back into business entities. That way your client deals only with DTOs (POCOs, of course), and your business logic can maintain its rich structure. This lets you leverage not only NHibernate's lazy loading mechanism, but other benefits of the session such as the L1 cache.
For brevity and intellectual property reasons I will not go into the design of said mechanism, but hopefully this is enough information to point you in the right direction. If you don't care about performance or latency at all, just turn lazy loading off altogether and work through the serialization issues.
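To make the general idea concrete, here is a minimal hand-rolled translator. This is only an illustration with hypothetical Customer/Order types, not the mechanism alluded to above:

    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Serialization;

    // Hypothetical NHibernate-mapped entities (virtual members, lazy collections).
    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual IList<Order> Orders { get; set; }
    }

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual decimal Total { get; set; }
    }

    // Plain DTOs that cross the wire: no proxies, no lazy anything.
    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
        [DataMember] public List<OrderDto> Orders { get; set; }
    }

    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    public static class CustomerTranslator
    {
        // Runs on the server inside an open NHibernate session, so touching
        // Orders here triggers lazy loading server-side, never on the client.
        public static CustomerDto ToDto(Customer entity)
        {
            return new CustomerDto
            {
                Id = entity.Id,
                Name = entity.Name,
                Orders = entity.Orders
                               .Select(o => new OrderDto { Id = o.Id, Total = o.Total })
                               .ToList()
            };
        }

        public static Customer ToEntity(CustomerDto dto)
        {
            return new Customer
            {
                Id = dto.Id,
                Name = dto.Name,
                Orders = dto.Orders
                            .Select(o => new Order { Id = o.Id, Total = o.Total })
                            .ToList()
            };
        }
    }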

It has been a while for me, but the injection/disconnected-proxy approach may not be as bad as it sounds. Since you are a student, I am going to assume you have some time and want to muck around a bit.
If you want to inject your own custom serialization/deserialization logic, you can use IDataContractSurrogate, which can be applied using DataContractSerializerOperationBehavior. I have only done a few basic things with this, but it may be worth looking into. By adding some fun logic (read: potentially hackish) at this layer you might be able to make the client feel more connected.
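As a rough sketch of the hackery involved (assuming NHibernate's INHibernateProxy API, and that class proxies subclass the entity type), a surrogate that unwraps proxies before serialization might look something like this:

    using System;
    using System.CodeDom;
    using System.Collections.ObjectModel;
    using System.Reflection;
    using System.Runtime.Serialization;
    using System.ServiceModel.Description;
    using NHibernate.Proxy;

    // Sketch: unwraps NHibernate proxies just before serialization so the
    // DataContractSerializer sees real entities instead of proxy types.
    public class NHibernateProxySurrogate : IDataContractSurrogate
    {
        public Type GetDataContractType(Type type)
        {
            // Class proxies subclass the entity, so BaseType recovers it.
            return typeof(INHibernateProxy).IsAssignableFrom(type) ? type.BaseType : type;
        }

        public object GetObjectToSerialize(object obj, Type targetType)
        {
            // Forces initialization; only safe where a session is still open.
            var proxy = obj as INHibernateProxy;
            return proxy != null ? proxy.HibernateLazyInitializer.GetImplementation() : obj;
        }

        public object GetDeserializedObject(object obj, Type targetType) { return obj; }

        // The remaining members only matter for schema export/import; no-ops here.
        public object GetCustomDataToExport(MemberInfo memberInfo, Type dataContractType) { return null; }
        public object GetCustomDataToExport(Type clrType, Type dataContractType) { return null; }
        public void GetKnownCustomDataTypes(Collection<Type> customDataTypes) { }
        public Type GetReferencedTypeOnImport(string typeName, string typeNamespace, object customData) { return null; }
        public CodeTypeDeclaration ProcessImportedType(CodeTypeDeclaration typeDeclaration, CodeCompileUnit compileUnit) { return typeDeclaration; }
    }

    public static class SurrogateInstaller
    {
        // Hooks the surrogate into every operation on an endpoint.
        public static void Install(ServiceEndpoint endpoint)
        {
            foreach (OperationDescription op in endpoint.Contract.Operations)
            {
                var behavior = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
                if (behavior == null)
                {
                    behavior = new DataContractSerializerOperationBehavior(op);
                    op.Behaviors.Add(behavior);
                }
                behavior.DataContractSurrogate = new NHibernateProxySurrogate();
            }
        }
    }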
Here is an MSDN post from someone who came to the same realization: the DynamicProxy used by NHibernate makes it impossible to directly serialize NHibernate objects that do lazy loading.

If you are really determined to transport the object graph across the network and preserve lazy loading functionality, take a look at some code I produced over at http://slagd.com/?page_id=6 . Basically, it creates a fake session on the other side of the wire and allows the NHibernate proxies to retain their functionality. I'm not saying it's the right way to do things, but it might give you some ideas.

Related

Structuring a Data Access Layer

For my application, I am looking at using an ORM and am currently trying to decide whether the domain layer should interface with it through a Data Access Object, repositories, or something else. I am hesitant to pair an ORM with repositories because they can become redundant if the ORM entities are identical to the domain objects, but having one big DAO seems kludgy. I want to keep my SQL centralized, but I can't figure out which of these options, if any, would make the most sense. Any suggestions on an appropriate design pattern?
This is very opinion-based, but I tend toward creating entities separate from my domain models. The domain model needs to closely match your domain, whereas your entities need to closely model your storage. They may match very closely at first and seem really redundant, but they very often drift dramatically apart, and quickly.
That being said, wrappers that do nothing but map domain entities to persistence entities often feel horrible and like a giant waste of time. Additionally, the separation doesn't pay off until much later in the game, when you are refactoring and realize that your domain isn't quite right, but you don't want to modify your persistence layer.
The good news is that most languages/frameworks have some form of mapping library that will automatically map one object to another that is similarly structured. This is a great way to speed things up initially, while still leaving the flexibility to fall back to a manual mapping when the requirements change out from under you.
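For instance, in .NET a mapping library such as AutoMapper can handle the convention-based cases (a hypothetical User/UserDto pair; API per modern AutoMapper versions):

    using AutoMapper;

    // Hypothetical domain object and DTO with overlapping members.
    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string PasswordHash { get; set; } // should never leave the server
    }

    public class UserDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class MappingExample
    {
        public static UserDto ToDto(User user)
        {
            // Members with matching names are mapped by convention;
            // PasswordHash has no counterpart on the DTO and is simply dropped.
            var config = new MapperConfiguration(cfg => cfg.CreateMap<User, UserDto>());
            return config.CreateMapper().Map<UserDto>(user);
        }
    }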

Why map Java beans

I ran across Orika recently.
I couldn't find a good explanation of why I should use it. If I have, say, a User domain object, why not use that? Why do I need to create a UserDTO with more or less the same members?
Of course there are times when I need to hide some fields, but that doesn't explain the need for dozens of mapping libraries.
Can someone explain why I should not re-use domain objects from one architectural boundary to another? By boundary I mean layers, microservice interfaces, or anything similar.
It all depends! Good design patterns for big systems are often overkill for smaller ones. Is the data you are getting really the same as your logical, intuitive domain objects, or is there extra data there?
If you find yourself in the situation described in the answer to this post, then DTO it up. DTOs exist to limit the number of expensive network operations by transmitting more data in each request. Say you have User and AddressDetail domain objects, and you could get the data for both in a single call (and the data is useful in the same area of the application); then you'd use a DTO and send all the data at once.
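A minimal sketch of such a combined DTO (hypothetical names, shown in C# although the idea is language-agnostic):

    using System.Runtime.Serialization;

    // One message that carries both User and AddressDetail data,
    // so the client gets everything in a single round trip.
    [DataContract]
    public class UserWithAddressDto
    {
        [DataMember] public int UserId { get; set; }
        [DataMember] public string UserName { get; set; }
        [DataMember] public string Street { get; set; }
        [DataMember] public string City { get; set; }
        [DataMember] public string PostalCode { get; set; }
    }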
It can be hard to predict how your system will grow (especially if you are working against a living API that someone else controls), and data transfer objects provide a clear separation of responsibility at some level, which is often a good thing.
I'd say reuse domain objects with caution in large systems.

ORM with or without DAL wrapper

In all the examples I have seen, ORMs tend to be used directly or behind some kind of DAL repository (presumably so that they can be swapped out in the future).
I am no fan of direct ORM use, as it would be hard to swap out, but I am equally no fan of losing the full domain change tracking it provides!
In the past I would have written a data mapper class (Fowler) for each object in my domain, but I have learnt through experience that this CRUD coding drains around 1/3 of my time.
I realize that changing my data access strategy is rather unlikely (I have never had to do so before), but I am really concerned that by using an ORM directly I will be locking myself into it until the end of time.
I have been thinking about wrapping the ORM (no decision on the ORM itself yet) in a generic ORM container and passing this around to finder classes for each of the domain objects. However, I am totally unsure what a generic ORM wrapper class would look like!
Has anyone got any real-life advice here? Please don't feel it necessary to sugar-coat your answers!
The repository has a number of functions (a minimal sketch follows the list):
It allows for unit testing with a mock implementation
It allows you to hide the full implementation of the ORM from the consumer, and implement security functions
It provides a layer of abstraction for business logic (although some people use a separate service layer for this), and
It allows you to change the ORM implementation, if necessary.
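A minimal sketch of such a repository over NHibernate (the Customer entity and method set are hypothetical; the interface is all that consumers and tests ever see):

    using System.Collections.Generic;
    using NHibernate;
    using NHibernate.Criterion;

    // Hypothetical entity for the sake of the example.
    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // The abstraction the rest of the application (and unit tests,
    // via a mock) codes against.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
        IList<Customer> FindByName(string name);
        void Save(Customer customer);
    }

    // The only class that knows NHibernate exists; swapping the ORM
    // means rewriting this class, not its consumers.
    public class NHibernateCustomerRepository : ICustomerRepository
    {
        private readonly ISession session;

        public NHibernateCustomerRepository(ISession session)
        {
            this.session = session;
        }

        public Customer GetById(int id)
        {
            return session.Get<Customer>(id);
        }

        public IList<Customer> FindByName(string name)
        {
            return session.CreateCriteria<Customer>()
                          .Add(Restrictions.Eq("Name", name))
                          .List<Customer>();
        }

        public void Save(Customer customer)
        {
            session.SaveOrUpdate(customer);
        }
    }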
Another container to genericize your ORM feels like overengineering to me. As you pointed out, it is unlikely that you will ever change your underlying implementation, but if you do, your repositories seem like the sensible place to do it.
To point you in the direction of someone much wiser than me on these matters: one of the issues with a generic ORM wrapper, as highlighted by Ayende in his blog post The false myth of encapsulating data access in the DAL, is that different ORMs are simply too different to encapsulate effectively, with different approaches to transaction handling and so on.
And on top of that, there's really not much point in switching ORMs anyway: one of the main reasons for encapsulating the DAL was to cope with switching databases, but most modern ORMs can already work with many different databases.

NHibernate classes as Data Contracts

I'm exposing some CRUD methods through a WCF service, for data objects persisted in a database through NHibernate. Is it a good approach to use the NHibernate classes as data contracts, or is it better to wrap them or replace them with other data contracts? What is your approach?
Our team just went through a good few months debating this design point, so I've got a lot of links to share ;-)
Short answer: You "should" translate from your NHibernate classes into a domain model.
Long answer: I think the answer to this is a matter of principle. If you ever want to be interoperable, you should not use Datasets as your DTOs (I love Hanselman's post on this). I'm not saying that it's never a good idea; clearly people have had success doing so. Just know that you are cutting corners and it's a risky proposition.
If you have complete control over the classes you are pushing the data into, you could build a nice domain model and just map the NHibernate data into those classes. You will more than likely have serious issues doing that, as IList<> (which a <bag> maps to) is not serializable. You'd have to write your own serializer, or use something like NetDataContractSerializer, but you lose interoperability.
You will need to weigh the amount of work involved in building wrapper classes and the translation between them, but then you have complete flexibility in what your domain model will look like. You're then able to do things (as we have done) like code generation for your NHibernate maps and objects. Your data contracts then serve as an abstraction from your data, as they should.
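As a sketch of what those wrapper classes might look like (hypothetical names throughout), with the <bag>-mapped IList<> replaced by a concrete collection on the wire:

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // NHibernate-mapped class: the <bag> maps to an IList<>, which is
    // where the serialization trouble described above comes from.
    public class OrderEntity
    {
        public virtual int Id { get; set; }
        public virtual IList<LineItemEntity> LineItems { get; set; }
    }

    public class LineItemEntity
    {
        public virtual string Sku { get; set; }
        public virtual int Quantity { get; set; }
    }

    // The wrapper/data contract that actually crosses the wire: plain
    // properties and a concrete List<T> keep it serializable and interoperable.
    [DataContract]
    public class OrderContract
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public List<LineItemContract> LineItems { get; set; }
    }

    [DataContract]
    public class LineItemContract
    {
        [DataMember] public string Sku { get; set; }
        [DataMember] public int Quantity { get; set; }
    }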
P.S. You might want to take a look at ADO.NET Data Services, a RESTful way to expose your data which, at this point, seems to be the most interoperable choice.
You would not want to expose your domain model directly, but instead map the domain to some kind of message as it hits the process boundary. You could leverage NHibernate to do the mapping work for you; in this case you would have two mappings, one for your domain model and another for your lightweight messages.
I don't have direct experience doing this, but I have sent Datasets across via WCF before and that works just fine. I think your biggest issue in using NHibernate classes as data objects over WCF will be a lack of interoperability (as is also the case with Datasets). Not only does the client have to use .NET, the client must also use NHibernate. This goes against SOA principles, but if you know for sure that you won't be reusing this component, then there's no great reason not to.
It's at least worth a try.

LinqToSql and WCF

Within an n-tier app that makes use of a WCF service to interact with the database, what is the best-practice way of making use of LINQ to SQL classes throughout the app?
I've seen it done a couple of different ways, but they seemed to burn a lot of hours creating extra interfaces, message classes, and the like, which erodes the benefit of not having to write your data access code.
Is there a good way to do it currently? Are we stuck waiting for the Entity Framework?
LINQ to SQL isn't really suitable for use with a distributed app. The change tracking and lazy loading are part of the DataContext, which is tied to the database and so cannot travel across the wire. You can move L2S entities across the wire, modify them, move them back, and update the database by reattaching them to the DataContext, but that is pretty limited, and you lose all concurrency checks, as the old values are never kept around.
BTW I believe the same is true for L2E.
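To illustrate the reattach-and-update flow described above (a minimal sketch; the Order table, ShopContext, and connection handling are all hypothetical, and note that Attach(entity, true) requires a version member, or UpdateCheck.Never on every column):

    using System.Data.Linq;
    using System.Data.Linq.Mapping;

    [Table(Name = "Orders")]
    public class Order
    {
        [Column(IsPrimaryKey = true)] public int Id;
        // A rowversion column: without a version member (or UpdateCheck.Never
        // on every column) Attach(entity, true) throws.
        [Column(IsVersion = true)] public Binary Version;
        [Column] public string Status;
    }

    public class ShopContext : DataContext
    {
        public ShopContext(string connectionString) : base(connectionString) { }
        public Table<Order> Orders { get { return GetTable<Order>(); } }
    }

    public static class OrderUpdater
    {
        // The client edited 'detached' on its side and sent it back.
        public static void Update(Order detached, string connectionString)
        {
            using (var db = new ShopContext(connectionString))
            {
                // Reattach as modified: every column is written on SubmitChanges.
                // The original values are gone, so value-based concurrency
                // checking is lost; only the rowversion travels with the entity.
                db.Orders.Attach(detached, true);
                db.SubmitChanges();
            }
        }
    }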
It is certainly not a good idea to pass LINQ to SQL objects around to other parts of a distributed system. If you do, you couple your clients to the structure of the database, which is never a good idea. This was/is one of the major problems with DataSets, by the way.
It is better to create your own classes for the transfer of data; those classes, of course, would be implemented as DataContracts. In your service layer, you'd convert between the LINQ to SQL objects and instances of the data carrier objects. It is tedious, but it decouples the clients of the service from the database schema. It also has the advantage of giving you better control over the data that is passed around your system.
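A minimal sketch of that shape, with hypothetical CustomerData, ICustomerService, and ShopDataContext names; the conversion lives entirely in the service implementation:

    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The wire-level contract: a plain data carrier, no L2S types leak out.
    [DataContract]
    public class CustomerData
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        List<CustomerData> GetCustomers();
    }

    // L2S-mapped class, private to the service tier (hypothetical schema).
    [Table(Name = "Customers")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)] public int CustomerId;
        [Column] public string Name;
    }

    public class ShopDataContext : DataContext
    {
        public ShopDataContext(string connectionString) : base(connectionString) { }
        public Table<Customer> Customers { get { return GetTable<Customer>(); } }
    }

    public class CustomerService : ICustomerService
    {
        public List<CustomerData> GetCustomers()
        {
            using (var db = new ShopDataContext("<connection string>"))
            {
                // The projection into CustomerData happens here, inside the
                // service layer; clients only ever see the DataContract.
                return db.Customers
                         .Select(c => new CustomerData { Id = c.CustomerId, Name = c.Name })
                         .ToList();
            }
        }
    }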