Lazy Loading with a WCF Service Domain Model? - wcf

I'm looking to push my domain model into a WCF Service API and wanted to get some thoughts on lazy loading techniques with this type of setup.
Any suggestions when taking this approach?
When I implemented this technique and stepped through my app, just before the server returned my list it hit the getter of each property that was supposed to be lazy loaded ... thus eager loading. Could you explain this issue or suggest a resolution?
Edit: It appears you can use the XmlIgnore attribute so the property doesn't get read during serialization ... still reading up on this, though.
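For illustration, a minimal sketch of the idea in the edit (the Customer/Orders types are made up for the example): marking the lazily loaded navigation property so the serializers skip it means the getter is never hit while the response is being written.

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.Xml.Serialization;

[DataContract]
public class Customer
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }

    // Hypothetical lazily loaded navigation property. IgnoreDataMember
    // (DataContractSerializer) and XmlIgnore (XmlSerializer) keep the
    // serializers from reading the getter, which would otherwise trigger
    // a load of the whole collection on the server.
    [IgnoreDataMember]
    [XmlIgnore]
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
}
```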

Don't do lazy loading over a service interface. Define explicit DTO's and consume those as your data contracts in WCF.
You can use NHibernate (or other ORMs) to properly fetch the objects you need to construct the DTOs.
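To illustrate the DTO approach, here is a minimal sketch (all type and member names are made up for the example; the repository stands in for whatever NHibernate query you use to fetch exactly the data the DTO needs). The service contract only ever exposes the flat DTO:

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Domain classes (stubs for the example; in practice these would be NHibernate-mapped entities).
public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
    public IList<OrderLine> Lines { get; set; }
}
public class Customer { public string Name { get; set; } }
public class OrderLine { }

// Hypothetical repository; with NHibernate this query would eagerly fetch
// exactly what the DTO needs and nothing more.
public interface IOrderRepository
{
    Order GetWithCustomerAndLines(int orderId);
}

// Flat, explicit data contract shaped for the client, not for the ORM.
[DataContract]
public class OrderSummaryDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string CustomerName { get; set; }
    [DataMember] public int LineCount { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderSummaryDto GetOrderSummary(int orderId);
}

public class OrderService : IOrderService
{
    private readonly IOrderRepository _orders;
    public OrderService(IOrderRepository orders) { _orders = orders; }

    public OrderSummaryDto GetOrderSummary(int orderId)
    {
        var order = _orders.GetWithCustomerAndLines(orderId);
        return new OrderSummaryDto
        {
            OrderId = order.Id,
            CustomerName = order.Customer.Name,
            LineCount = order.Lines.Count
        };
    }
}
```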

As with any remoting architecture, you'll want to avoid pulling a full object graph "down the wire" in an uncontrolled way (unless you have a trivially small number of objects).
The Wikipedia article has the standard techniques pretty much summarised (and in C#, too!). I've used both ghosts and value holders and they work pretty well.
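As a rough illustration of the value-holder technique (the names are invented for the example), the holder defers the fetch until the value is first read:

```csharp
using System;

// Generic value holder: the wrapped value is only fetched on first access.
public class ValueHolder<T>
{
    private readonly Func<T> _loader;
    private T _value;
    private bool _loaded;

    public ValueHolder(Func<T> loader)
    {
        _loader = loader;
    }

    public T Value
    {
        get
        {
            if (!_loaded)
            {
                _value = _loader();   // e.g. a service or database call
                _loaded = true;
            }
            return _value;
        }
    }
}

public class InvoiceLine { }

// Usage: nothing is fetched until someone reads header.Lines.Value.
public class InvoiceHeader
{
    public int Id { get; set; }
    public ValueHolder<InvoiceLine[]> Lines { get; set; }
}
```

On .NET 4 and later, System.Lazy&lt;T&gt; gives you essentially the same behaviour without rolling your own holder.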
To implement this kind of technique, make sure that you separate concerns strictly. On the server, your service contract implementation classes should be the only bits of the code that work with data contracts. On the client, the service access layer should be the only code that works with the proxies.
Layering like this lets you adjust the way that the service is implemented relatively independently of the UI layers calling the service and the business tier that's being called. It also gives you half a chance of unit testing!

You could try to use something REST based (e.g. ADO.NET Data Services) and wrap it transparently into your client code.

What is the counterpart (pattern) to services on the client side?

Let's say I have a service which is just a REST API that provides some data.
As far as I understand, and it makes sense, I can encapsulate the data sent to and from this service in DTOs. This totally makes sense, since you'll have some business objects but you'll often need to serialize them in some way. So as far as I understand, this would be a generally accepted and well-known way to abstract that part.
These DTOs are then sent through the REST API. The server side seems pretty straightforward: there are some controllers which provide or receive the data, and I'm not seeing any issues there (at least for now).
So, regarding my question: on the client side there are objects which access this API. In my implementation such an object contains an HTTP client (I'm not sure, maybe I'll decouple them from these objects) and also the methods to access the API. So in one way or another, it abstracts away the use of the HTTP client and the access to the API.
HOW DO YOU NAME THESE OBJECTS THAT ACCESS THE API?
I'm currently naming them XXXManager/XXXHandler/..., but these names feel far too generic, and I feel like there has to be some convention or pattern for this. Naming them XXXService doesn't feel completely right either, because to me a service is the server-side part; these objects are accessing the service.
So how would you name these kinds of objects, and are there some deeper patterns for handling this kind of service/API accessor?
The model/pattern that works here is a classical layered architecture:
The HttpClient should be wrapped in a class (let's name it ApiClient) that exposes methods for accessing the REST API. In each of those methods, the HttpClient is used to execute the HTTP call (see the sketch below).
There is a layer of Service/Manager classes that use the ApiClient and also apply their own business logic.
There is a layer of UI components which also inject the Services/Managers to grab the data and render it on the UI.
In this way you decouple the layers, which improves both the scalability and the testability of your code.
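A minimal C# sketch of that layering (TransactionApiClient/TransactionService are made-up names, and it assumes the System.Net.Http.Json extensions are available for deserialization):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class Transaction
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

// Layer 1: thin wrapper around HttpClient, one method per API endpoint.
public class TransactionApiClient
{
    private readonly HttpClient _http;

    public TransactionApiClient(HttpClient http)
    {
        _http = http;
    }

    public Task<Transaction[]> GetTransactionsAsync()
    {
        // The HttpClient is only ever used inside this class.
        return _http.GetFromJsonAsync<Transaction[]>("api/transactions");
    }
}

// Layer 2: service/manager that uses the ApiClient and adds client-side logic.
public class TransactionService
{
    private readonly TransactionApiClient _apiClient;

    public TransactionService(TransactionApiClient apiClient)
    {
        _apiClient = apiClient;
    }

    public async Task<Transaction[]> GetLargeTransactionsAsync(decimal threshold)
    {
        Transaction[] all = await _apiClient.GetTransactionsAsync();
        return Array.FindAll(all, t => t.Amount >= threshold);
    }
}

// Layer 3 (UI) would take a TransactionService via constructor injection and
// only deal with rendering the returned data.
```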
The naming somehow depends on the type of the client-side implementation/framework that you have.
If you have a web-frontend client, then the name TransactionService would tell me that this class talks to some external transaction service (Service is not a name tied to server-side components).
This naming model applies to Angular, for example.
Patterns of Enterprise Application Architecture suggests Gateway, but I'd just go with Client.

Converting a Library to WCF web service

As the subject line describes, I am in the process of exposing a C# library as a WCF service. Eventually we want to expose all the functionality, but at present the scope is limited to a subset of the library API. One of the goals of this exercise is also to make sure that the WCF service uses a request/response message exchange pattern, so the interface/API will change, as the existing library does not use this pattern.
I have started off by implementing the Service Contracts and the Request/Response objects, but when it comes to designing the DataContracts, I am not sure which way to go.
I am split between going back and annotating the existing library classes with DataContract/DataMember attributes versus defining new classes which act as surrogates for the existing classes.
Does anyone have any experience with a similar task, or any recommendations on which way works best? I would like to point out that our team owns the existing library, so we do have the source code for it. Any pointers or best practices would be helpful.
My recommendation is to use the Adapter pattern, which in this case basically means create brand new DataContracts and ServiceContracts. This will allow everything to vary independently, and will allow you to optimize the WCF stuff for WCF and the API stuff for the API (if that makes sense). The last thing you want is to go down the modification route and find that something just won't map right once you are almost done.
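A rough sketch of what that separation can look like (the library and contract types are invented for the example): the existing library class stays untouched, the data contracts are brand new and shaped around the request/response pattern, and the service implementation is the adapter that maps between them.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Existing library types, left untouched (stubbed here for the example).
public class LegacyCustomer
{
    public int CustomerNumber { get; set; }
    public string FullName { get; set; }
}

public class LegacyCustomerCatalog
{
    public LegacyCustomer FindCustomer(int number)
    {
        // ... existing, already-tested library code ...
        return new LegacyCustomer { CustomerNumber = number, FullName = "Sample" };
    }
}

// Brand-new contracts, free to evolve independently of the library.
[DataContract]
public class GetCustomerRequest
{
    [DataMember] public int Id { get; set; }
}

[DataContract]
public class CustomerData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[DataContract]
public class GetCustomerResponse
{
    [DataMember] public CustomerData Customer { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    GetCustomerResponse GetCustomer(GetCustomerRequest request);
}

// The service implementation acts as the adapter: it calls the library
// and maps the result onto the new data contracts.
public class CustomerService : ICustomerService
{
    private readonly LegacyCustomerCatalog _catalog = new LegacyCustomerCatalog();

    public GetCustomerResponse GetCustomer(GetCustomerRequest request)
    {
        LegacyCustomer source = _catalog.FindCustomer(request.Id);
        return new GetCustomerResponse
        {
            Customer = new CustomerData { Id = source.CustomerNumber, Name = source.FullName }
        };
    }
}
```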
Starting from .NET 3.5 SP1 you no longer need to decorate objects that you want to expose with [DataContract]/[DataMember] attributes; all public properties will be automatically exposed. That being said, personally I prefer to use dedicated DTO objects that I expose and decorate with those attributes. I then use AutoMapper to map between the actual domain models and the objects I want to expose.
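For illustration, a minimal sketch of that DTO-plus-AutoMapper approach (Product/ProductDto are made-up names, and the snippet uses the older static Mapper API from AutoMapper versions of that era):

```csharp
using AutoMapper;
using System.Runtime.Serialization;

public class Product            // domain model used internally
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal UnitPrice { get; set; }
    public Supplier Supplier { get; set; }   // not exposed over the wire
}

public class Supplier
{
    public string Name { get; set; }
}

[DataContract]
public class ProductDto         // what the service actually exposes
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public decimal UnitPrice { get; set; }
    [DataMember] public string SupplierName { get; set; }  // flattened from Product.Supplier.Name
}

public static class MappingConfig
{
    public static void Configure()
    {
        // Flattening Supplier.Name -> SupplierName happens by convention.
        Mapper.CreateMap<Product, ProductDto>();
    }
}

// Inside a service operation:
//   ProductDto dto = Mapper.Map<Product, ProductDto>(product);
```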
If you are going to continue to use the existing library but want to have control over what you expose as the web service API, I would recommend defining new classes as wrapper(s) around the library.
What I mean to say is don't "convert" the existing library even if you think you're not going to continue to use it in other contexts. If it has been tested and proven, then take advantage of that fact and wrap around it.

Dynamic data contracts in WCF

There are some pain points around transmitting entities between a client and a WCF service.
Defeating lazy loading by serializing all properties
Serialized data can be unnecessarily bloated
Some coupling between UI and business layer
One way to address these issues is to transmit DTOs instead of entities but I am aware that this technique has its own set of caveats (the biggest one I am aware of is the typing required to maintain these function-specific DTOs).
I think it would be great if the service implementation could generate these DTOs dynamically and this appears to be possible. Unfortunately, it looks like the contract would be loosely defined on the client side (i.e. "object") and that smells like a possible risk.
Is it advisable to use dynamic DTOs in this fashion or is there another way to use DTOs without creating/maintaining classes for each one?
I think the holy grail would be where the implementation dynamically generates DTOs but the client sees well-defined contracts. I'm guessing this isn't possible with WCF.
I guess the issue is: what are you going to generate them from? You have to have a description somewhere of what the data you want to transmit looks like. If all you have is the domain objects, then you end up in a similar position to transmitting the data via the domain objects themselves.
One of the key things the DTO enables is decoupling, so you can evolve your domain objects without accidentally breaking the consumers of your service. If you dynamically generate the DTOs then you will cascade the changes - unless you view the dynamic creation as a one-off exercise to get you started with a DTO.
A DTO is a data contract like any other and must be defined. When you choose to go with DTOs you are adding a layer of complexity which you have to maintain. There are tools which can help you with mapping between domain objects and DTOs (like AutoMapper), but it is your responsibility to define what a DTO should transfer - that is something which can hardly be done automatically. Even with an automated tool you will still have to maintain some definition of the DTOs which will be used to generate code.

Is it possible to use lazy-loading function in EntityFramework4 in conjunction with WCF?

My application uses WPF for the UI, WCF for the web service, and EF4 for data access.
I read some material on the internet and MSDN saying that EF4 has a self-tracking function, via a custom T4 template, even when used together with WCF for n-tier. Does this mean that the lazy loading function is still possible with WCF?
Thanks
The self-tracking entities are kind of hacky, IMHO. They are designed so that, once deserialized (i.e., on the far end of your WCF channel), they start tracking changes to themselves. That's great for when you send them back home, because you can reconnect them to a context and everything (hypothetically) works.
Self-tracking and lazy loading are two different things. EF self-tracking entities are disconnected from the data context, and on your client end there IS no data context, so they cannot lazily load anything.
There is no plug-and-play framework mixing WCF and EF that, from the client's perspective, is seamless. Could be done, of course. A few new T4 templates and you'll have an autogenerated WCF service contract your entities could use to perform lazy loading.
Of course, you'd have to write that.
Edit: On second thought, you might have more luck going with WCF Data Services.

How complex an object can be passed to silverlight from server, using WCF?

Please note that my experience with Silverlight/.NET and WCF amounts to about two weeks of googling and deciphering tutorials. I need to try to give a client feedback on whether Silverlight would be a possible solution for giving their application a RIA front end.
The client has a rather large .Net based application with a UI layer built which greatly relies on the creation and manipulation of specific (personal) classes and objects from the backend (which would be the server side).
A summary of what I understand to be the general procedure: one can pass simple objects containing simple data types, or more complex .NET-typed objects - basically anything which can be understood by both the client and server side after serializing.
But what is the limit to the complexity of an object I can pass? Or, phrased otherwise, would Silverlight and WCF be able to support passing a custom object which may contain references to other classes/objects, variables, etc.?
Additional Info (in case it can help):
I am not allowed direct access to their backend code but with the information I have been given I can safely say their classes heavily use inheritance and overloading of functions/methods in the classes.
As far as I know there is nothing specific to Silverlight. There are some things to keep in mind though.
WCF serialization doesn't like circular references.
All types need to be specified in the contract, so watch out with inheritance etc. (see the sketch after this answer).
In general using DTO's (Data Transfer Objects) and not exposing your business objects is the way to go.
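To illustrate the point about inheritance, a minimal sketch (Shape/Circle/Rectangle are made-up names): any derived type that can travel over the wire has to be declared on the contract, typically with [KnownType].

```csharp
using System.Runtime.Serialization;

// Base type exposed by the service contract.
[DataContract]
[KnownType(typeof(Circle))]
[KnownType(typeof(Rectangle))]
public class Shape
{
    [DataMember] public string Name { get; set; }
}

[DataContract]
public class Circle : Shape
{
    [DataMember] public double Radius { get; set; }
}

[DataContract]
public class Rectangle : Shape
{
    [DataMember] public double Width { get; set; }
    [DataMember] public double Height { get; set; }
}

// Without the KnownType attributes, returning a Circle from an operation
// typed as Shape fails during serialization.
```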
The metaphor is one of message passing as opposed to passing objects. DTOs, as Maurice said.
You can get pretty complex, but each object needs to have its contract defined.