I will have an ASP.NET MVC application that will connect to a WCF web service. That service defines the database connection.
I noticed I will have three different model/data classes.
First off is the ViewModel guy from MVC. I guess it could be somewhat different from how the data is represented in the DB.
Second are the DataModels, POCOs that define how the objects look in the database.
Then there is the DataContract guy that defines how the objects look when they are transferred over the WCF service. I guess it will be pretty much a representation of either a ViewModel or a DataModel.
Is this overkill or a necessary evil? Should I define the DataContracts as the ViewModel guys perhaps, or even as the DataModels?
How would you do it and how would you split it into assemblies?
I would keep them all separate, as you mentioned. They each belong to a different layer and should be separate objects, so that each can deal with future changes on its own.
The WCF items will be created by referencing the service, so they belong with your WCF service. Your data model should live in a model project or a data access project, and your ViewModels in your MVC application. You could break the ViewModels out from there if you want, but since they are fairly tightly coupled to the MVC app, that's debatable.
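To make the split concrete, here is a minimal sketch of the same concept at each layer (the Customer names and properties are just an illustration, not something from the question):

    using System;
    using System.ComponentModel.DataAnnotations;
    using System.Runtime.Serialization;

    // DataModel: POCO describing how the object looks in the database
    // (lives in the model / data access project).
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime CreatedUtc { get; set; }
    }

    // DataContract: how the object looks on the wire (owned by the WCF service).
    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // ViewModel: screen-oriented shape, kept in the MVC application.
    public class CustomerEditViewModel
    {
        public int Id { get; set; }

        [Required, StringLength(100)]
        public string Name { get; set; }
    }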
Should I define the DataContracts as the ViewModel guys perhaps,
Certainly not. The ViewModels are screen (use-case) oriented.
But you might do this in a DataService that is consumed by a Silverlight or jQuery client.
or even the DataModels.
That could make sense. And one of the reasons for POCOs is that they could even be the same classes.
Related
I have been using EF since it first came out. I used to hand-build POCOs in 3.5 and was glad to see Self-Tracking Entities (STEs) in EF 4.0.
I have used STEs in a couple of very large projects (500+ entities, some with multiple models). In these projects I use a generic Repository and a generic Unit of Work to persist the entities, i.e. two small generic classes and no mapping. By electing a core entity as the "aggregate root", other entities are added and updated on the client side, and the core entity graph containing these changes is sent to the WCF service and used in the logic layer, which creates the Repository<[core entity]> and uses UnitOfWork<[core entity]>.Save(Repository<[core entity]>) to persist the STEs and their children to the database.
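For readers who haven't seen this style, a rough sketch of the two generic classes follows. IObjectWithChangeTracker and the ApplyChanges extension method come from the EF4 STE T4 template output (their namespace must be imported); the rest of the shape is my guess at what is being described, not the poster's actual code.

    using System.Data.Objects;

    // Wraps the core entity ("aggregate root") graph received from the client.
    public class Repository<TEntity> where TEntity : class, IObjectWithChangeTracker
    {
        public TEntity Root { get; private set; }
        public Repository(TEntity root) { Root = root; }
    }

    // Persists the whole graph in one call; no per-entity mapping code.
    public class UnitOfWork<TEntity> where TEntity : class, IObjectWithChangeTracker
    {
        private readonly ObjectContext _context;
        public UnitOfWork(ObjectContext context) { _context = context; }

        public void Save(Repository<TEntity> repository)
        {
            // ApplyChanges (generated by the STE template) walks the graph and replays
            // the self-tracked changes into the context before saving.
            _context.CreateObjectSet<TEntity>().ApplyChanges(repository.Root);
            _context.SaveChanges();
        }
    }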
Now Microsoft is recommending that we not use STEs. See this article.
So my question is: what are the patterns now recommended by Microsoft for applications that persist client changes to WCF services that use EF?
I created an EF5 model and examined the generated code. There are no attributes for a WCF service, i.e. DataContract, DataMember, etc.
EF4 had an "ADO.NET DbContext Generator with WCF Support" template, but there isn't an EF5 equivalent.
One site suggested I should use a partial class file and decorate the same properties in that file with these attributes. But unless .NET 4.5 has introduced partial properties, I cannot see how that can be done.
Another blog suggested using DTOs and AutoMapper, which means more mapping, which is error-prone, especially when entity fields change type.
So now that the DbContext-generated classes are not service-enabled, does this mean we need to write another set of classes (POCOs) that (a sketch of what that looks like follows this list):
need to be mapped FROM the DbContext-generated classes after querying the database
hold the data state for the WCF service client(s)
are updatable by those clients
are mapped by those clients
can hold changed state so it can be sent back to the WCF service
need to be mapped TO the DbContext-generated classes for persistence
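Concretely, per entity that means something like the sketch below (names are hypothetical, and the mapping calls use AutoMapper's older static API; this is only to illustrate the extra layer):

    using System.Runtime.Serialization;
    using AutoMapper;

    // The EF5 DbContext template generates a plain POCO like this (no WCF attributes).
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // Hand-written, service-enabled counterpart of the generated entity.
    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    public static class OrderMappings
    {
        public static void Register()
        {
            Mapper.CreateMap<Order, OrderDto>(); // FROM the generated classes after querying
            Mapper.CreateMap<OrderDto, Order>(); // TO the generated classes for persistence
        }
    }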
It seems we just took a great leap backwards to EF3.
If you code both the client and the service, and they run on your hardware, you don't need to be concerned about data structures at the client, as they belong to you.
If you also need to expose some of your service methods to non-.NET clients, you should do the points above for those services anyway and use DTOs and AutoMapper in those cases. These should be in a different WCF service but implemented against the same logic layer, after mapping.
But how many of these non-.NET client services are created in the day-to-day building of web applications in most software teams?
This latest recommendation is confusing, as it has not been explained WHY STEs are ALWAYS ill-conceived, or what the recommended patterns now are for persisting client changes to WCF services that use EF.
Can anybody inform me where I can find a good resource that solves this architectural design issue?
P.S.
Please don't recommend WCF Data Services or WCF RIA Services, as we need a lot of control over how our data is retrieved and saved by clients.
Please don't recommend Code First; we use Database First because we want and need to control the structure of the database, and not have it generated for us.
OK, so I thought the same thing when I first read this article. It seems a bit weird to deprecate a whole branch of EF like this, and the intention wasn't terribly well communicated (IMO). I think a couple of things are important here:
STEs as referred to in this article are the ObjectContext-based self-tracking entities (which act a little like autonomous contexts).
ObjectContext is generally being moved away from in favor of the cleaner DbContext structure (this applies to both DB First and Code First).
STEs != DB First generation; you can still use an EDMX model in EF, and this isn't likely to change.
When I originally saw this article I mistook STEs for POCO proxy entities, which are still available and which, AFAIK, there are no plans to deprecate. (These achieve a similar technical solution to the problem of change detection, but with a nicer interface.) Check out this article for the differences: EF4: Difference between POCO, Self Tracking Entities, POCO Proxies.
So what does this all mean?
Basically, STEs in terms of the old implementation of a change tracker are being deprecated in favor of the newer forms of change tracking (snapshot or POCO proxies). This means that if snapshot tracking doesn't suit you, you should look into POCO proxies, which are similar to the old STEs.
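For reference, opting into those newer tracking styles with DbContext looks roughly like this (a sketch; the Blog class is illustrative, and both flags shown are actually the defaults):

    using System.Data.Entity;

    // Change-tracking (POCO) proxies need a public, non-sealed class with virtual properties;
    // EF subclasses it at runtime to detect changes as they happen.
    public class Blog
    {
        public virtual int Id { get; set; }
        public virtual string Title { get; set; }
    }

    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }

        public BloggingContext()
        {
            Configuration.ProxyCreationEnabled = true;      // allow POCO proxies
            Configuration.AutoDetectChangesEnabled = true;  // snapshot change tracking
        }
    }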
You can still use all the previous techniques for context generation (DB First, Model First, Code First, and DB -> Code).
I have a web client that calls my WCF business service layer, which in turn calls external WCF services to get the actual data. Initially, I thought I would use DTOs and have separate business entities in the different layers... but I'm finding the trivial examples advocating DTOs to be, well, trivial. I see too much duplicate code and not much benefit.
Consider my Domain:
Example Domain
I have a single UI screen (an ASP.NET MVC view) that shows a patient's list of medications, the adverse reactions (between medications), and any clinical conditions (like depression or hypertension) the patient may have. My domain model starts at the top level with:
MedicationRecord
List<MedicationProfile> MedicationProfiles
List<AdverseReactions> Reactions
List<ClinicalConditions> ClinicalConditions
MedicationProfile is itself a complex object
string Name
decimal Dosage
Practitioner prescriber
Practitioner is itself a complex object
string FirstName
string LastName
PractitionerType PractitionerType
PractitionerId Id
Address Address
etc.
Further, when making the WCF requests, we have a request/response object, e.g.
MedicationRecordResponse
MedicationRecord MedicationRecord
List<ClientMessage> Messages
QueryStatus Status
and again, these other objects are complex objects
(and, further complicating matters, they exist in a different, common shared namespace)
At this point, my inclination is that the MedicationRecordResponse is my DTO. But with pure DataContracts and DTOs and a strict separation in the design, am I supposed to do this?
MedicationRecordResponseDto
MedicationRecordDto
List<ClientMessageDto>
QueryStatusDto
and that would mean I then need to do
MedicationProfileDto
PractitionerDto
PractitionerTypeDto
AddressDto
etc.
Because I show almost all of this information on the screen, I am effectively creating one DTO for each domain object I have.
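For just one branch of the graph above, that duplication looks something like this (a sketch; property lists abbreviated, and the mapping uses AutoMapper's static API against the domain classes listed earlier):

    using AutoMapper;

    public class PractitionerDto
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        // PractitionerType, Id, Address... repeated here as well
    }

    public class MedicationProfileDto
    {
        public string Name { get; set; }
        public decimal Dosage { get; set; }
        public PractitionerDto Prescriber { get; set; }
    }

    public static class MedicationMaps
    {
        public static void Register()
        {
            Mapper.CreateMap<Practitioner, PractitionerDto>();
            Mapper.CreateMap<MedicationProfile, MedicationProfileDto>();
            // ...and so on for MedicationRecord, AdverseReactions, ClientMessage, QueryStatus, etc.
        }
    }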
My question is -- what would you do? Would you go ahead and create all these DTOs? Or would you just share your domain model in a separate assembly?
Here's some reading from other questions that seemed relevant:
WCF contract know the domain
Alternatives for Translation Layer in SOA: WCF
SOA Question: Exposing Entities
Take a look at these excellent articles:
Why You Shouldn’t Expose Your Entities Through Your Services
DTO’s Should Transfer Data, Not Entities
The above links don't work; it looks like a domain issue (I hope it'll be fixed). Here is the source:
DTO’s Should Transfer Data, Not Entities
Why You Shouldn’t Expose Your Entities Through Your Services
I've always had an aversion to the duplicate class hierarchy resulting from DTOs. It seems to be a flagrant violation of the DRY principle. However, upon closer examination, the DTO and the corresponding entity or entities serve different roles. If you are indeed applying domain-driven design, then your domain entities consist of not only data but behavior. By contrast, DTOs only carry data and serve as an adapter between your domain and WCF. All of this makes even more sense in the context of a hexagonal architecture (also called ports and adapters), as well as the onion architecture. Your domain is at the core, and WCF is a port which exposes your domain externally.
A DTO is part of how WCF functions, and if you agree that it is a necessary evil, your problem shifts from attempting to eliminate DTOs to embracing them and instead focusing on how to facilitate the mapping between DTOs and domain objects. A popular solution is AutoMapper, which reduces the amount of boilerplate mapping code you need to write.
Aside from the drawbacks, DTOs also bring a lot of benefits. One is that they furnish a buffer between your domain entities and the outside world. This can be of great help in refactoring because you can keep your core domain very well encapsulated. Another benefit is that you can design your DTOs so that they fulfill the requirements of the service consumer, requirements which may not always be in full alignment with the shape of your domain objects.
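To illustrate the role split (my own minimal sketch, not tied to any particular domain): the entity owns behavior and invariants, while the DTO is a dumb bag of data shaped for the wire.

    using System;
    using System.Runtime.Serialization;

    // Domain entity: data plus behavior, invariants enforced internally.
    public class Account
    {
        public Guid Id { get; private set; }
        public decimal Balance { get; private set; }

        public Account(Guid id, decimal openingBalance)
        {
            Id = id;
            Balance = openingBalance;
        }

        public void Withdraw(decimal amount)
        {
            if (amount > Balance)
                throw new InvalidOperationException("Insufficient funds.");
            Balance -= amount;
        }
    }

    // DTO: data only, shaped for what the service consumer needs.
    [DataContract]
    public class AccountDto
    {
        [DataMember] public Guid Id { get; set; }
        [DataMember] public decimal Balance { get; set; }
    }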
Personally, I don't like using MessageContracts as entities. Unfortunately, I have an existing WCF service that uses MessageContracts as entities - i.e. data is filled into the MessageContract directly in the data access layer. There is no translation layer involved.
I have an existing C# console application client using this service. Now I have a new requirement: I need to add a new field to the entity. It is not needed by the client; the new field is only for internal calculations in the service. I had to add a new property named "LDAPUserID" to the MessageContract, which also acts as an entity.
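Roughly what that ends up looking like (a reconstruction for illustration, not the actual contract; only LDAPUserID is taken from the situation above):

    using System.ServiceModel;

    // The entity doubles as the wire contract, so an internal-only field still
    // changes the message the client sees.
    [MessageContract]
    public class Employee
    {
        [MessageBodyMember] public int EmployeeId { get; set; }
        [MessageBodyMember] public string Name { get; set; }

        // Added only for internal calculations in the service.
        [MessageBodyMember] public string LDAPUserID { get; set; }
    }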
This may or may not break the client, depending on whether the client supports lax versioning. Refer to Service Versioning:
It is easy to mistakenly believe that adding a new member will not break existing clients. If you are unsure that all clients can handle lax versioning, the recommendation is to use the strict versioning guidelines and treat data contracts as immutable.
With this experience, I believe it is not good to use MessageContracts as entities.
Also, refer to MSDN - Service Layer Guidelines:
Design transformation objects that translate between business entities and data contracts.
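Applied to the example above, such a transformation object might look like this (a sketch; EmployeeData is a hypothetical data contract, and Employee is the entity from the earlier sketch):

    using System.Runtime.Serialization;

    [DataContract]
    public class EmployeeData
    {
        [DataMember] public int EmployeeId { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // The translation layer that was missing: internal-only members such as
    // LDAPUserID simply never appear in the data contract.
    public static class EmployeeTranslator
    {
        public static EmployeeData ToDataContract(Employee entity)
        {
            return new EmployeeData { EmployeeId = entity.EmployeeId, Name = entity.Name };
        }

        public static void UpdateEntity(Employee entity, EmployeeData data)
        {
            entity.Name = data.Name;
        }
    }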
References:
How do I serialize all properties of an NHibernate-mapped object?
Expose object from class library using WCF
Serialize subset of properties only
There are some pain points around transmitting entities between a client and a WCF service.
Defeating lazy loading by serializing all properties
Serialized data can be unnecessarily bloated
Some coupling between UI and business layer
One way to address these issues is to transmit DTOs instead of entities, but I am aware that this technique has its own set of caveats (the biggest one I am aware of is the typing required to create and maintain these function-specific DTOs).
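For context, a function-specific DTO of the kind I mean would look something like this (names made up; the projection is shown as a comment because it depends on the entity model):

    using System.Runtime.Serialization;

    // Only the fields this one operation needs: nothing lazy-loaded is touched
    // and the serialized payload stays small.
    [DataContract]
    public class OrderSummaryDto
    {
        [DataMember] public int OrderId { get; set; }
        [DataMember] public string CustomerName { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    // In the service implementation, projected straight off the query, e.g.:
    // var summaries = db.Orders
    //                   .Select(o => new OrderSummaryDto
    //                   {
    //                       OrderId = o.Id,
    //                       CustomerName = o.Customer.Name,
    //                       Total = o.Total
    //                   })
    //                   .ToList();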
I think it would be great if the service implementation could generate these DTOs dynamically, and this appears to be possible. Unfortunately, it looks like the contract would then be loosely defined on the client side (i.e. "object"), and that smells like a possible risk.
Is it advisable to use dynamic DTOs in this fashion or is there another way to use DTOs without creating/maintaining classes for each one?
I think the holy grail would be where the implementation dynamically generates DTOs but the client sees well-defined contracts. I'm guessing this isn't possible with WCF.
I guess the issue is: what are you going to generate them from? You have to have a description somewhere of what the data you want to transmit looks like. If all you have is the domain objects, then you end up in much the same position as transmitting the data via the domain objects themselves.
One of the key things the DTO enables is decoupling, so you can evolve your domain objects without accidentally breaking the consumers of your service. If you dynamically generate the DTOs, then you will cascade the changes - unless you view the dynamic creation as a one-off exercise to get you started with a DTO.
A DTO is a data contract like any other and must be defined. When you choose to go with DTOs, you are adding a layer of complexity which you have to maintain. There are tools which can help you with the mapping between domain objects and DTOs (like AutoMapper), but it is your responsibility to define what a DTO should transfer - that is something which can hardly be done automatically. Even with an automated tool you will still have to maintain some definition of the DTOs from which the code is generated.
My application uses WPF for the UI, WCF for the web service, and EF4 for data access.
I read some material on the internet and MSDN saying that EF4 has a self-tracking capability, via a custom T4 template, even when used together with WCF in an n-tier setup. Does this mean that lazy loading is still possible over WCF?
Thanks
The self-tracking entities are kind of hacky, IMHO. They are designed so that, once deserialized (i.e. on the far end of your WCF channel), they start tracking changes to themselves. That's great for when you send them back home, because you can reconnect them to a context and everything (hypothetically) works.
Self-tracking and lazy loading are two different things. EF self-tracking entities are disconnected from the data context, and on your client end there IS no data context. So they cannot lazily load anything.
There is no plug-and-play framework mixing WCF and EF that is seamless from the client's perspective. It could be done, of course. A few new T4 templates and you'd have an autogenerated WCF service contract your entities could use to perform lazy loading.
Of course, you'd have to write that.
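As a rough idea of what you would be writing by hand, the client-side "lazy load" becomes an explicit service call (contract and DTO names below are hypothetical):

    using System.Collections.Generic;
    using System.ServiceModel;

    public class OrderDto     { public int OrderId { get; set; } }
    public class OrderLineDto { public int LineId { get; set; } public string Product { get; set; } }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        OrderDto GetOrder(int orderId);                 // parent only, no children loaded

        [OperationContract]
        List<OrderLineDto> GetOrderLines(int orderId);  // the "lazy load", made explicit
    }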
Edit: On second thought, you might have more luck going with WCF Data Services.
WCF promotes good design by using interfaces, contracts, etc. What baffles me is that, for example, in my case I have two sets of business functionality, such as ICustomerMgmtBIZ
and IProductMgmtBIZ. If these two are service contracts, and I have an interface like
IBusinessService : IProductMgmtBIZ, ICustomerMgmtBIZ
and an implementation class BusinessService, I can see that the BusinessService class will end up with far too much implementation in one place. The workaround I have been using so far is implementing it as partial classes.
So, bluntly put: is a WCF service limited to one implementation class and one service contract?
No - it is possible to implement more than one service contract on a WCF service type (the class attributed with the ServiceBehavior attribute), since it is just a matter of having the class implement multiple interfaces. If you are using any of the Visual Studio templates or other kinds of code generators, this may not be immediately clear.
However, although you can implement more than one service contract interface on a service type, it does not do you much good if you need the service (presumably a singleton in this case?) to behave as one service. IBusinessService implies that you need all of the service's functionality to be callable from one client proxy, so that all operations can participate in the same logical session (similar to an ASPX web session). If that is not the case, then you are free to define individual proxies for each contract interface, but that will also require that you support one endpoint per contract.
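To make that concrete, a minimal self-hosting sketch with one service type implementing both contracts and one endpoint per contract (bindings and addresses are just examples):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerMgmtBIZ
    {
        [OperationContract]
        string GetCustomer(int id);
    }

    [ServiceContract]
    public interface IProductMgmtBIZ
    {
        [OperationContract]
        string GetProduct(int id);
    }

    // One service type, two contracts.
    public class BusinessService : ICustomerMgmtBIZ, IProductMgmtBIZ
    {
        public string GetCustomer(int id) { return "customer " + id; }
        public string GetProduct(int id)  { return "product " + id; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(BusinessService),
                                       new Uri("http://localhost:8080/business"));

            // Two endpoints on the same host/port, one per contract.
            host.AddServiceEndpoint(typeof(ICustomerMgmtBIZ), new BasicHttpBinding(), "customers");
            host.AddServiceEndpoint(typeof(IProductMgmtBIZ),  new BasicHttpBinding(), "products");

            host.Open();
            Console.WriteLine("Listening... press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }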
Is it an absolute requirement that you have only one WCF ServiceHost instance for your implementation? What factors are influencing your decision?
By the way, partial classes do not trouble me anymore. The idea of splitting out code into multiple files now seems rather natural. For example, storing partial classes in files like ServiceType_IProductMgmtBiz.cs and ServiceType_ICustomerMgmtBIZ.cs seems natural enough, in addition to storing the core logic in ServiceType.cs.
Finally, the following question might be of use...
WCF and Interface Inheritance - Is this a terrible thing to do?
Bluntly put: no - sort of - yes, but. Any workaround is non-optimal and involves using an "IBlank" as a master WCF interface (where your interfaces derive from IBlank) and two endpoints, one exposing IProductMgmtBIZ and the other exposing ICustomerMgmtBIZ. I don't have my dev machine in front of me; this might involve some other overrides. So at the WCF level you're screwed unless you want to have two WCF ServiceHosts (which is perfectly reasonable).
In short, the workaround is inelegant. It's easier to have two WCF endpoints on the same port with a different extension for each.