I am trying to set up a clean and flexible application framework for data-centric applications with a Silverlight-only UI. I want a strict separation of concerns and want to be as flexible as possible (e.g. to exchange the ORM later) while still reducing the amount of code.
It took me weeks to figure out an appropriate architecture, and although my latest approach seems to fit my requirements, I am still not completely convinced that this way is the best and is technically feasible.
Here is what my Solution Explorer looks like:
MyCompany.MyApplication.Entities
Class library project, which contains only the domain (business) objects, such as Customers, Addresses, etc. These are POCOs with the [Serializable] attribute, but do not contain any other code. All properties are marked as virtual so that derived classes can override them.
MyCompany.MyApplication.DataAccess
Class library project, which contains the NHibernate-specific code (sessions) to load, save and delete the domain objects. This project references the Entities project and the NHibernate libraries.
MyCompany.MyApplication.Core
Class library project, which contains the business logic and often just maps the methods from the DataAccess project, such as GetAllCustomers, SaveCustomer, etc.
It has references to the Entities-project and the DataAccess-project.
MyCompany.MyApplication.Web
Web application project, which hosts the Silverlight client app and also the WCF services used to communicate with the client side. To expose the domain objects to the client side, these classes are derived and all the properties are overridden and marked with the [DataMember] attribute (a sketch of this follows the project list). It references the Entities project and the Core project.
MyCompany.MyApplication.Silverlight
Silverlight 3.0 project, which represents the user interface. It has only service references to the WCF services exposed by the Web project. The actual domain objects are not accessible; the auto-generated proxy classes replace them.
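To make the Web project approach concrete, the derived classes currently look roughly like this (a sketch only; the Customer class and its properties are invented examples):

    using System;
    using System.Runtime.Serialization;

    // Entities project: plain POCO, everything virtual so it can be overridden.
    [Serializable]
    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // Web project: derived copy that re-exposes every property for WCF.
    [DataContract]
    public class CustomerContract : Customer
    {
        [DataMember]
        public override int Id
        {
            get { return base.Id; }
            set { base.Id = value; }
        }

        [DataMember]
        public override string Name
        {
            get { return base.Name; }
            set { base.Name = value; }
        }
    }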
Please tell me what you think about this architecture and whether there are any conflicts! A further question: is there any way to avoid the domain objects' properties being virtual, and the need to override them, in order to make them accessible through WCF?
Best regards,
Daniel Lang
Daniel, you are not going to get around the NHibernate requirement of virtual properties. Have you thought about using DTOs?
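A DTO in this sense is a plain class that lives at the service boundary and keeps the NHibernate entity free of WCF concerns; a minimal sketch (names and the mapping helper are hypothetical):

    using System.Runtime.Serialization;

    // No virtual members needed here: this type is only used by the WCF layer.
    [DataContract]
    public class CustomerDto
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }

        // Hypothetical hand-written mapping; AutoMapper could do this as well.
        public static CustomerDto FromEntity(Customer entity)
        {
            return new CustomerDto { Id = entity.Id, Name = entity.Name };
        }
    }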
Related
I've been working with NHibernate for quite a while and have come to realize that my architecture might be a bit...dated. This is for an NHibernate library that is living behind several apps that are related to each other.
First off, I have a DomainEntity base class with an ID and an EntityID (a GUID that I use when I expose an item to the web or through an API, instead of the internal integer ID - I think I had a reason for it at one point, but I'm not sure it's really valid now). I have a Repository<T> base class, where T inherits from DomainEntity, that provides a set of generalized search methods. The inheritors of DomainEntity may also implement several interfaces that track things like created date, created by, etc., which are largely a log of the most recent changes to the object. I'm not fond of using a repository pattern for this, but it wraps the logic of setting those values when an object is saved (provided that the object is saved through the repository and not saved as part of saving something else).
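To make that description concrete, the setup is roughly the following (a simplified sketch of how I read it, not the actual code; the audit interface and the search method are placeholders):

    using System;
    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;

    public abstract class DomainEntity
    {
        // Internal integer key used by NHibernate.
        public virtual int Id { get; protected set; }

        // Public identifier exposed to the web / API instead of the integer id.
        public virtual Guid EntityId { get; set; }
    }

    // One of the auditing interfaces some entities implement.
    public interface IAuditable
    {
        DateTime CreatedDate { get; set; }
        string CreatedBy { get; set; }
    }

    public abstract class Repository<T> where T : DomainEntity
    {
        protected readonly ISession Session;

        protected Repository(ISession session)
        {
            Session = session;
        }

        // One of the "generalized search methods".
        public virtual T GetByEntityId(Guid entityId)
        {
            return Session.Query<T>().SingleOrDefault(e => e.EntityId == entityId);
        }

        // The main reason the repository exists: it stamps audit values on save.
        public virtual void Save(T entity, string currentUser)
        {
            var auditable = entity as IAuditable;
            if (auditable != null && auditable.CreatedDate == default(DateTime))
            {
                auditable.CreatedDate = DateTime.UtcNow;
                auditable.CreatedBy = currentUser;
            }
            Session.SaveOrUpdate(entity);
        }
    }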
I would like to rid myself of the repositories. They don't make me happy and really seem like clutter these days, especially now that I'm not querying with HQL and can use extension methods on the Session object. But how do I hook up this sort of functionality cleanly? Let's assume, for the purposes of discussion, that I have something like StructureMap set up and capable of returning an object that exposes context information (current user and the like); what would be a good, flexible way of providing this functionality outside of the repository structure? Bonus points if it can be hooked up along with a convention-based mapping setup (I'm looking into replacing the XML mapping files as well).
If you dislike the fact that repositories can become bloated over time, then you may want to use something like Query Objects.
The basic idea is that you break down a single query into an individual object that you can then run against the database.
There are some example implementation links here.
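A query object can be as small as the following (a sketch against NHibernate's ISession, since that is what the question uses; the Customer properties are invented):

    using System.Collections.Generic;
    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;

    // One query, one object: easy to find, test and extend without a growing repository.
    public class ActiveCustomersQuery
    {
        public int MinimumOrders { get; set; }

        public IList<Customer> Execute(ISession session)
        {
            return session.Query<Customer>()
                          .Where(c => c.IsActive && c.Orders.Count >= MinimumOrders)
                          .ToList();
        }
    }

    // Usage: var customers = new ActiveCustomersQuery { MinimumOrders = 3 }.Execute(session);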
I have been using EF since it first came out. I used to hand-build POCOs in 3.5 and was glad to see Self-Tracking Entities (STEs) in EF 4.0.
I have used STEs in a couple of very large projects (500+ entities, some with multiple models). In these projects I use a generic Repository and a generic Unit of Work to persist the entities, i.e. two small generic classes and no mapping. By electing a core entity as the "aggregate root", other entities are added and updated on the client side, and the core entity graph containing these changes is sent to the WCF service and used in the logic layer, which creates the Repository<[core entity]> and uses UnitOfWork<[core entity]>.Save(Repository<[core entity]>) to persist the STEs and their children to the database.
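The shape of that generic pair is roughly the following (a sketch only, using plain EF4 ObjectContext calls; the STE-specific change-tracking plumbing generated by the template is left out):

    using System.Data.Objects; // EF4 ObjectContext / ObjectSet

    public class Repository<T> where T : class
    {
        private readonly ObjectSet<T> _set;

        public Repository(ObjectContext context)
        {
            Context = context;
            _set = context.CreateObjectSet<T>();
        }

        public ObjectContext Context { get; private set; }

        public void Add(T aggregateRoot)
        {
            // The whole entity graph hangs off the aggregate root,
            // so adding/attaching the root brings its children along.
            _set.AddObject(aggregateRoot);
        }
    }

    public class UnitOfWork<T> where T : class
    {
        public void Save(Repository<T> repository)
        {
            // With self-tracking entities the change state travels with the graph;
            // here we simply flush whatever the context has recorded.
            repository.Context.SaveChanges();
        }
    }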
Now Microsoft is recommending that we not use STEs. See this article
So my question is: what are the patterns now recommended by Microsoft for applications that persist client changes to WCF services that use EF?
I created an EF5 model and examined the generated code. There are no attributes for a WCF service, i.e. DataContract, DataMember, etc.
EF4 had an "ADO.NET DbContext Generator with WCF Support" template, but there isn't an EF5 equivalent.
One site suggested I should use a partial class file and decorate the same properties in that file with these attributes. But unless .NET 4.5 has introduced partial properties, I cannot see how that can be done.
Another blog suggested using DTOs and AutoMapper, which means more mapping, which is error-prone, especially when entity fields change type.
So now that the DbContext-generated classes are not service-enabled, does this mean that we need to write another set of classes (POCOs, sketched below the list) that:
need to be mapped FROM the DbContext-generated classes after querying the database,
hold the data state for the WCF service client(s),
are updatable by those clients,
are mapped by those clients,
can hold changed state so this can be sent back to the WCF service,
need to be mapped TO the DbContext-generated classes for persistence?
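Put differently, each entity would need a hand-maintained companion along these lines (a sketch; the state enum and the mapping class are things you would write yourself, or generate with AutoMapper):

    using System.Runtime.Serialization;

    public enum ObjectState { Unchanged, Added, Modified, Deleted }

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }

        // Carries the changed state so the client can send it back to the service.
        [DataMember] public ObjectState State { get; set; }
    }

    // Mapping in both directions: FROM the generated entities after querying,
    // and TO the generated entities for persistence.
    public static class CustomerMapper
    {
        public static CustomerDto ToDto(Customer entity)
        {
            return new CustomerDto { Id = entity.Id, Name = entity.Name, State = ObjectState.Unchanged };
        }

        public static void ApplyToEntity(CustomerDto dto, Customer entity)
        {
            entity.Name = dto.Name;
        }
    }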
It seems we just took a great leap backwards to EF3.
If you code both the client and the service, and they run on your hardware, you don't need to be concerned about data structures at the client, as they belong to you.
If you also need to expose some of your service methods to non-.NET clients, you should do the points above for those services anyway and use DTOs and AutoMapper on those occasions. These should be in a different WCF service, but implemented against the same logic layer, after mapping.
But how many of these kinds of non-.NET client services are created in the day-to-day building of web applications in most software teams?
This latest recommendation is confusing, as it has not been explained WHY STEs are ALWAYS ill-conceived, nor what the recommended patterns now are for persisting client changes to WCF services that use EF.
Can anybody inform me where I can find a good resource that solves this architectural design issue?
P.S.
Please don't recommend WCF Data Services or WCF RIA Services, as we need a lot of control over how our data is retrieved and saved by clients.
Please don't recommend Code First; we use Database First because we want and need to control the structure of the database, rather than have it generated for us.
OK, so I thought the same thing when I first read this article; it seems a bit weird to deprecate a whole branch of EF like this, and the intention wasn't terribly well communicated (IMO). I think a couple of things are important here:
STEs, as referred to in this article, are the ObjectContext-based self-tracking entities (which act a little like autonomous contexts).
ObjectContext is generally being moved away from in favor of the cleaner DbContext structure (this applies to both DB First and Code First).
STEs != DB First generation; you can still use an EDMX model in EF, and this isn't likely to change.
When I originally saw this article I mistook STEs for POCO proxy entities, which are still available and which AFAIK there are no plans to deprecate. (These achieve a similar technical solution to the problem of change detection, but with a nicer interface.) Check out this article for the differences: EF4: Difference between POCO, Self Tracking Entities, POCO Proxies.
So what does this all mean?
Basically, STEs in the sense of the old change-tracker implementation are being deprecated in favor of the newer forms of change tracking (snapshot tracking or POCO proxies). This means that if snapshot tracking doesn't suit you, you should look into POCO proxies, which are similar to the old STEs.
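For reference, a POCO proxy entity just has to be shaped so that EF can subclass it at runtime, which mainly means making everything virtual (a sketch; Order/OrderLine are invented):

    using System.Collections.Generic;

    // Public, non-sealed, all mapped properties virtual, navigation collections
    // typed as ICollection<T>: EF can then create a change-tracking proxy at runtime.
    public class Order
    {
        public virtual int Id { get; set; }
        public virtual string CustomerName { get; set; }
        public virtual ICollection<OrderLine> Lines { get; set; }
    }

    public class OrderLine
    {
        public virtual int Id { get; set; }
        public virtual decimal Amount { get; set; }
    }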
You can still use all previous techniques for context generation (DB First, Model First, Code First, and DB -> Code).
Before I break my application apart, my current namespaces look something like this:
CompanyAbc.Core
CompanyAbc.AppXyz.Web
CompanyAbc.AppXyz.Business
The CompanyAbc.Core namespace contains common code used by all applications in our company. An example would be a class called "ClientMessage", which we use as a container to carry messages from one tier to another (e.g. to abstract out and support showing success or error messages when saving data, from the data tier all the way up to the UI tier).
We are now making the CompanyAbc.AppXyz.Business into a WCF Service. My question is this: What is the best practice for 'sharing' (or not sharing) these base/common entities?
For example, would you:
a) add the [DataContract] attributes directly to classes in the CompanyAbc.Core namespace, even though that namespace has nothing to do with WCF (this option is sketched below the list)?
CompanyAbc.Core.Entities
ClientMessage.cs
OR
b) create a Data Transfer Object that is an exact copy of the class in the CompanyAbc.Core namespace?
CompanyAbc.Core.Entities
ClientMessage.cs
CompanyAbc.AppXyz.Business.DataContracts
ClientMessageDto.cs
OR
c) Other options?
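For concreteness, option (a) from the list above amounts to no more than this (sketched on the ClientMessage class; its properties here are invented):

    using System.Runtime.Serialization;

    namespace CompanyAbc.Core.Entities
    {
        // Option (a): the shared class carries the WCF attributes itself.
        // Non-WCF consumers can ignore them, but Core now takes a reference
        // to System.Runtime.Serialization.
        [DataContract]
        public class ClientMessage
        {
            [DataMember] public string Text { get; set; }
            [DataMember] public bool IsError { get; set; }
        }
    }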
Another complexity is that we intend to share these assemblies. But to enforce decoupling and avoid sharing assemblies/business entities, would you go all-out crazy and do something like this?
CompanyAbc.Core
CompanyAbc.Core.Shared.Entities
ClientMessage.cs
CompanyAbc.AppXyz.Web
CompanyAbc.AppXyz.Web.Entities
ClientMessage.cs --> derives from the Core.Shared, or just duplicate code?
CompanyAbc.AppXyz.Business.Entities
ClientMessage.cs --> derives from the Core.Shared, or just duplicate code?
CompanyAbc.AppXyz.Business.DataContracts
ClientMessageDto.cs
I would suggest you create "hierarchy groups" in your assemblies, in order to make them more concise and their use more intuitive. For instance:
(Taking the #learner example as a base)
ABC.Common (it communicates the intention better)
ABC.Core
ABC.Core.Web
ABC.Core.Windows
ABC.Services.DataContracts
ABC.Services.ServicesContracts
And so on ...
Making a clear hierarchy encourages developers to use it, as new code fits alongside the assemblies already deployed.
Take a look at the .NET assemblies for a reference.
Here is what my namespaces look like, as I have used them so far:
ABC.Core
ABC.Data
ABC.Business
ABC.Web
ABC.Services (common)
ABC.Services.DTO (common)
ABC.Services.Svc1
ABC.Services.Svc1.DTO
ABC.Services.Svc2
ABC.Services.Svc2.DTO
In practice you won't want to have so many shared classes between the services, because they will probably need to be separated from one another. The more independent they are from each other, the better you can version them, but it's true that this requires a bit of duplicated code at the DTO level of each service. Many people use AutoMapper to project the Core classes into DTOs.
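The AutoMapper projection mentioned above is typically just a one-time map plus a per-call projection, something like this (a sketch using the classic static AutoMapper API; Customer and CustomerDto are placeholders for your Core class and the service's DTO):

    using AutoMapper;

    public static class Svc1Mapping
    {
        // One-time configuration, e.g. at service start-up.
        public static void Configure()
        {
            Mapper.CreateMap<Customer, CustomerDto>();
        }

        // Per call: project the shared Core class into this service's own DTO.
        public static CustomerDto ToDto(Customer customer)
        {
            return Mapper.Map<CustomerDto>(customer);
        }
    }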
Let me know what you think.
Well, I am creating a WCF service that has a large number of classes to communicate with the client, and these classes also have many properties.
Mainly, these classes are the POCO classes created by the code generator from the EDMX, and I have the .tt file.
To be able to use these classes and properties, I have to use DataContract and DataMember: on each class I have to set [DataContract], and on each property of each class [DataMember]. This is a lot of work, and if I need to make a change to the database, I must regenerate from the .tt file and then repeat the work.
Is there any way to do this automatically? I am using .NET 4.0 and EF 4.1.
The whole point of the .tt file being added to your project is so that you can modify the template to suit your needs. All you need to do is change the template so that it adds [DataContract] to the entity class definition and [DataMember] to the entity property definitions.
From there, any time the DB is changed you simply use the "Update Model from Database" feature and your entities will automatically have their code regenerated using the existing template.
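The exact template edit depends on which .tt you have, but the goal is simply that the generated output ends up looking something like this (a sketch of the desired generated code, not of the template itself; Customer/Order are placeholders):

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // What the modified .tt should emit for every entity:
    [DataContract(IsReference = true)]
    public partial class Customer
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public virtual ICollection<Order> Orders { get; set; }
    }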
All that said, I'm going to recommend you do not expose your DB entities, POCO or not, directly from your service layer. You should really be designing with domain separation in mind, using messaging and CQRS-type patterns at the service level. Then you just have some simple mapping methods that translate the data between those messages/commands and your entities.
There is an Entity Framework provider for WCF Data Services; maybe it can help you.
As the subject line describes, I am in the process of exposing a C# library as a WCF service. Eventually we want to expose all the functionality, but at present the scope is limited to a subset of the library API. One of the goals of this exercise is also to make sure that the WCF service uses a request/response message exchange pattern, so the interface/API will change, as the existing library does not use this pattern.
I have started off by implementing the Service Contracts and the Request/Response objects, but when it comes to designing the DataContracts, I am not sure which way to go.
I am split between going back and annotating the existing library classes with DataContract/DataMember attributes vs. defining new classes that act as surrogates for the existing classes.
Does anyone have any experience with a similar task, or any recommendations on which way works best? I would like to point out that our team owns the existing library, so we do have the source code for it. Any pointers or best practices would be helpful.
My recommendation is to use the Adapter pattern, which in this case basically means creating brand-new DataContracts and ServiceContracts. This will allow everything to vary independently and will allow you to optimize the WCF side for WCF and the API side for the API (if that makes sense). The last thing you want is to go down the modification route and find that something just won't map right once you are almost done.
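In practice that means the service gets its own request/response and data contracts, with a thin adapter translating to the existing library types (a sketch; ExistingLibrary.AccountApi and the member names are invented stand-ins for your library):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class GetAccountRequest
    {
        [DataMember] public string AccountNumber { get; set; }
    }

    [DataContract]
    public class GetAccountResponse
    {
        [DataMember] public string DisplayName { get; set; }
        [DataMember] public decimal Balance { get; set; }
    }

    [ServiceContract]
    public interface IAccountService
    {
        [OperationContract]
        GetAccountResponse GetAccount(GetAccountRequest request);
    }

    // Adapter: the WCF-facing contracts can vary independently of the library's API.
    public class AccountService : IAccountService
    {
        private readonly ExistingLibrary.AccountApi _api = new ExistingLibrary.AccountApi();

        public GetAccountResponse GetAccount(GetAccountRequest request)
        {
            var account = _api.Load(request.AccountNumber); // hypothetical library call
            return new GetAccountResponse
            {
                DisplayName = account.Name,
                Balance = account.Balance
            };
        }
    }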
Starting from .NET 3.5 SP1, you no longer need to decorate objects that you want to expose with [DataContract]/[DataMember] attributes; all public properties will be automatically exposed. That being said, I personally prefer to use dedicated DTO objects that I expose and decorate with those attributes. I then use AutoMapper to map between the actual domain models and the objects I want to expose.
If you are going to continue to use the existing library but want to have control over what you expose as the web service API, I would recommend defining new classes as wrapper(s) around the library.
What I mean to say is don't "convert" the existing library even if you think you're not going to continue to use it in other contexts. If it has been tested and proven, then take advantage of that fact and wrap around it.