I have a sizable investment in an MVC4 site that has a large array of custom model binders, value providers etc.
I now want to add some WebAPI controllers to the site and re-use some of the MVC4 components. It appears that there are many parallel concepts in MVC and WebAPI (e.g. model binding), but the base classes and interfaces live in separate namespaces, meaning the types are not interchangeable.
Is there an established pattern for adapting MVC classes to their WebAPI equivalents? Specifically I'm interested in being able to reuse a System.Web.Mvc.IModelBinder as a System.Web.Http.ModelBinding.IModelBinder.
The problem is not the interfaces but the fact that the model-binding rules used by the two model binders are completely different and based on different concepts. That is why the interfaces are defined in two different namespaces; otherwise the same interface would have been used. There is a parallelism between the two interfaces, but they represent two different concepts. In other words, the syntax is similar but the semantics are different, so you cannot have a one-to-one mapping between them.
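For reference, here are the two interfaces side by side, trimmed to their single method (note that the two ModelBindingContext parameters are unrelated types that merely share a name):

```csharp
namespace System.Web.Mvc
{
    // MVC 4: the binder returns the bound value directly.
    public interface IModelBinder
    {
        object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext);
    }
}

namespace System.Web.Http.ModelBinding
{
    // Web API: the binder reports success or failure and is expected
    // to write the result to bindingContext.Model itself.
    public interface IModelBinder
    {
        bool BindModel(System.Web.Http.Controllers.HttpActionContext actionContext, ModelBindingContext bindingContext);
    }
}
```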
Related
I'm working on a blogging platform in .NET Core, and one of the key requirements is to have different translations based on the user's selected language. It's clear to me that the majority of this part belongs to the UI layer, but I want to let bloggers submit different translations of their posts themselves.
So far I have modified my Domain so that it now contains a Language property, and I also created a Localized attribute to mark properties that are multilingual. In my approach I want to keep all localization-related logic in the Infrastructure layer, so that it automatically saves/loads the proper translations to/from another table containing translations for Localized properties, without the Application layer (or services) knowing about it.
I'm also implementing the UnitOfWork pattern, and normally I would use repositories through it, e.g. UnitOfWork.BlogPosts.Add(post), and after all operations are done: UnitOfWork.CommitChanges(). But I assume that the UnitOfWork would now contain both repositories (for BlogPost and for Localization), and the whole logic of saving/loading localized data would need to be implemented in the UnitOfWork itself, so instead I would have to call a method that manages both repositories, like UnitOfWork.AddBlogPost(post) (the IUnitOfWork interface would also require these methods).
So my question is: is this a good design approach, and is UnitOfWork the proper place to implement such logic? I want to keep it as automated as possible, and if it doesn't cause any issues that I'm currently not aware of, to keep it directly in the persistence layer.
Edit: My second idea would be to simply keep the two repositories in the UnitOfWork and implement the saving/loading of BlogPost + Localization in the Application layer (in command handlers and queries, using the CQRS pattern). But unfortunately that way I would have to implement the same saving/loading logic for every command and query...
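For concreteness, here is a minimal sketch of the two IUnitOfWork shapes being compared; all type and member names besides BlogPost are hypothetical:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical domain and repository types, just enough to compile.
public class BlogPost { public Guid Id { get; set; } }
public interface IBlogPostRepository { void Add(BlogPost post); }
public interface ILocalizationRepository { }

// Option 1: expose repositories; command/query handlers compose them,
// so each handler repeats the BlogPost + Localization choreography.
public interface IUnitOfWork
{
    IBlogPostRepository BlogPosts { get; }
    ILocalizationRepository Localizations { get; }
    Task CommitChangesAsync();
}

// Option 2: the unit of work hides the translation table entirely,
// as described in the question (UnitOfWork.AddBlogPost(post)).
public interface ILocalizedUnitOfWork
{
    void AddBlogPost(BlogPost post);                 // also stages translation rows
    BlogPost GetBlogPost(Guid id, string language);  // loads the right translation
    Task CommitChangesAsync();
}
```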
Usually I create DTOs to get data from a microservice (Web API) to the client (MVC).
But sometimes it's cumbersome to duplicate the structure of a data entity in a DTO, especially if the entity has many fields and many embedded relationships.
So I have to duplicate the fields and relations.
Can I use the data entity instead of a DTO?
I use a dedicated assembly for the DTOs exchanged between the client (MVC) and a given microservice. Should my data entities live in this assembly?
This is a common complaint that derives from not understanding the concept of bounded contexts. Because you're deep in the code, you just see two things that look like the same thing, and you have had, like all developers, the idea beaten into your brain that you should not repeat yourself (DRY).
However, the key word above is that the two things look the same. They are in fact not the same, and that is the critical salient point. They are representations of domain objects from different contexts (data store and application layer, for example). If you use the same object, you are tightly coupling those contexts to the point where they're now inseparable. As such, the very concept of having multiple layers becomes moot.
A related concept here is anti-corruption layers. This is a layer you add to your application to facilitate communication between two different application contexts or domains. An API is a form of anti-corruption layer. Again, because you're building all the apps, it seems like they're all the same thing. However, imagine your MVC app as a third-party application being built by someone else to consume your API. Should they use your entities directly? No. They would probably have their own entity classes and their own data store. The DTO your API uses provides a way for these two different applications to communicate via a common language. If you use your entity classes directly, then any change to your data necessitates a change to your API, which in turn necessitates a change to any consumers of your API. Imagine if Google changed a database column, and because of that, every single developer using their API(s) had to immediately make changes to their own applications or they would break.
In short, just because two classes look the same, doesn't mean they are the same. Your entity and your DTO are each representations of a concept in different contexts, and therefore you need and should have both.
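As a minimal illustration of that separation (all types here are hypothetical): the entity keeps its store-driven shape, the DTO carries only what the client consumes, and the explicit mapping is the anti-corruption layer.

```csharp
using System;
using System.Collections.Generic;

// Persistence-side entity: shaped by the data store, including
// navigation properties and internal fields the client never needs.
public class OrderEntity
{
    public Guid Id { get; set; }
    public DateTime CreatedUtc { get; set; }
    public List<OrderLineEntity> Lines { get; set; } = new List<OrderLineEntity>();
    public CustomerEntity Customer { get; set; }
}
public class OrderLineEntity { public string Sku { get; set; } public int Quantity { get; set; } }
public class CustomerEntity { public Guid Id { get; set; } public string InternalCreditNotes { get; set; } }

// Wire-side DTO: only what the MVC client actually consumes.
public class OrderDto
{
    public Guid Id { get; set; }
    public DateTime CreatedUtc { get; set; }
    public int LineCount { get; set; }
}

public static class OrderMapper
{
    // The explicit mapping lets the API keep this shape stable
    // even when the entity or the schema behind it changes.
    public static OrderDto ToDto(OrderEntity e) => new OrderDto
    {
        Id = e.Id,
        CreatedUtc = e.CreatedUtc,
        LineCount = e.Lines.Count
    };
}
```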
I have a web client that calls my WCF business service layer, which in turn calls external WCF services to get the actual data. Initially I thought I would use DTOs and have separate business entities in the different layers... but I'm finding the trivial examples advocating DTOs to be, well, trivial. I see too much duplicate code and not much benefit.
Consider my Domain:
Example Domain
I have a single UI screen (Asp.net MVC View) that shows a patient's list of medications, the adverse reactions (between medications), and any clinical conditions (like depression or hypertension) the patient may have. My domain model starts at the top level with:
MedicationRecord
    List<MedicationProfile> MedicationProfiles
    List<AdverseReactions> Reactions
    List<ClinicalConditions> ClinicalConditions
MedicationProfile is itself a complex object:
    string Name
    decimal Dosage
    Practitioner prescriber
Practioner is itself a complex object:
    string FirstName
    string LastName
    PractionerType PractionerType
    PractionerId Id
    Address Address
    etc.
Further, when making the WCF requests, we have a request/response object, e.g.
MedicationRecordResponse
    MedicationRecord MedicationRecord
    List<ClientMessage> Messages
    QueryStatus Status
and again, these other objects are complex objects
(further complicating matters, they exist in a different, shared common namespace)
At this point, my inclination is that MedicationRecordResponse is my DTO. But with a pure DataContract/DTO separation in the design, am I supposed to do this?
MedicationRecordResponseDto
    MedicationRecordDto
    List<ClientMessageDto>
    QueryStatusDto
and that would mean I then need to do
MedicationProfileDto
PractitionerDto
PractitionerTypeDto
AddressDto
etc.
Because I have to show almost all of this information on the screen, I am effectively creating one DTO for each domain object I have.
My question is -- what would you do? Would you go ahead and create all these DTOs? Or would you just share your domain model in a separate assembly?
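For scale, here is roughly what one slice of that hand-written translation layer looks like, trimmed to a couple of the types listed above (member names such as Prescriber are assumptions):

```csharp
// Minimal, self-contained sketch of the hand-written mapping being weighed up.
public class Practitioner { public string FirstName { get; set; } public string LastName { get; set; } }
public class MedicationProfile { public string Name { get; set; } public decimal Dosage { get; set; } public Practitioner Prescriber { get; set; } }

public class PractitionerDto { public string FirstName { get; set; } public string LastName { get; set; } }
public class MedicationProfileDto { public string Name { get; set; } public decimal Dosage { get; set; } public PractitionerDto Prescriber { get; set; } }

public static class MedicationMapper
{
    // One method per type; the same pattern would repeat for
    // MedicationRecord, Address, ClientMessage and so on.
    public static MedicationProfileDto ToDto(MedicationProfile profile) => new MedicationProfileDto
    {
        Name = profile.Name,
        Dosage = profile.Dosage,
        Prescriber = ToDto(profile.Prescriber)
    };

    public static PractitionerDto ToDto(Practitioner practitioner) => new PractitionerDto
    {
        FirstName = practitioner.FirstName,
        LastName = practitioner.LastName
    };
}
```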
Here's some reading from other questions that seemed relevant:
WCF contract know the domain
Alternatives for Translation Layer in SOA: WCF
SOA Question: Exposing Entities
Take a look at these excellent articles:
Why You Shouldn’t Expose Your Entities Through Your Services
DTO’s Should Transfer Data, Not Entities
The above links don't work; it looks like a domain issue (I hope it'll be fixed). Here is the source:
DTO’s Should Transfer Data, Not Entities
Why You Shouldn’t Expose Your Entities Through Your Services
I've always had an aversion to the duplicate class hierarchy resulting from DTOs. It seems like a flagrant violation of the DRY principle. However, upon closer examination, the DTO and the corresponding entity or entities serve different roles. If you are indeed applying domain-driven design, then your domain entities consist of not only data but behavior. By contrast, DTOs only carry data and serve as an adapter between your domain and WCF.
All of this makes even more sense in the context of a hexagonal architecture (also called ports and adapters), as well as the onion architecture. Your domain is at the core, and WCF is a port which exposes your domain externally. A DTO is part of how WCF functions, and if you agree that it is a necessary evil, your problem shifts from attempting to eliminate DTOs to embracing them and instead focusing on how to facilitate the mapping between DTOs and domain objects. A popular solution is AutoMapper, which reduces the amount of boilerplate mapping code you need to write.
Aside from the drawbacks, DTOs also bring a lot of benefits. One is that they furnish a buffer between your domain entities and the outside world. This can be of great help in refactoring because you can keep your core domain very well encapsulated. Another benefit is that you can design your DTOs so that they fulfill the requirements of the service consumer, requirements which may not always be in full alignment with the shape of your domain objects.
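As a rough sketch of that AutoMapper approach (the configuration API varies slightly between AutoMapper versions, and the types here are cut down from the question's shapes):

```csharp
using AutoMapper;

// Cut-down pair based on the shapes in the question (members trimmed).
public class Practitioner { public string FirstName { get; set; } public string LastName { get; set; } }
public class PractitionerDto { public string FirstName { get; set; } public string LastName { get; set; } }

public static class DtoMapping
{
    public static IMapper Build()
    {
        // AutoMapper matches identically named properties by convention,
        // so most of the per-property mapping code disappears.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Practitioner, PractitionerDto>());
        return config.CreateMapper();
    }
}

// Usage (hypothetical): var dto = DtoMapping.Build().Map<PractitionerDto>(somePractitioner);
```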
Personally, I don't like using MessageContracts as entities. Unfortunately, I have an existing WCF service that uses MessageContracts as entities, i.e. data is filled into the MessageContract directly in the data access layer. There is no translation layer involved.
I have an existing C# console application client using this service. Now I have a new requirement: I need to add a new field to the entity. This is not needed by the client; the new field is only for internal calculations in the service. I had to add a new property named "LDAPUserID" to the MessageContract, which also acts as an entity.
This may or may not break the client depending on whether the client support Lax Versioning. Refer Service Versioning.
It is easy to mistakenly believe that adding a new member will not break existing clients. If you are unsure that all clients can handle lax versioning, the recommendation is to use the strict versioning guidelines and treat data contracts as immutable.
With this experience, I believe it is not good to use MessageContract as entities.
Also, refer to MSDN - Service Layer Guidelines:
Design transformation objects that translate between business entities and data contracts.
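A sketch of what that separation could look like for the LDAPUserID case; apart from LDAPUserID, the class and member names are assumptions:

```csharp
using System.Runtime.Serialization;

// Internal entity: free to grow fields like LDAPUserID that only
// the service's own calculations need.
public class UserEntity
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    public string LDAPUserID { get; set; }   // internal-only field from the question
}

// Wire contract: versioned deliberately, unchanged by the new field.
[DataContract]
public class UserContract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string DisplayName { get; set; }
}

// The transformation object recommended by the service layer guidelines.
public static class UserTranslator
{
    public static UserContract ToContract(UserEntity entity) => new UserContract
    {
        Id = entity.Id,
        DisplayName = entity.DisplayName
    };
}
```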
References:
How do I serialize all properties of an NHibernate-mapped object?
Expose object from class library using WCF
Serialize subset of properties only
I will have an ASP.NET MVC site that connects to a WCF web service. That service defines the database connection.
I noticed I will have 3 different Model/data classes.
First off is the ViewModel guy from MVC. I guess it could be somewhat different from how data is represented in the DB.
Second are the DataModels, POCOs that define how the objects look in the database.
Then there is the DataContract guy that defines how the objects transferred over the WCF service look. I guess it will be pretty much a representation of either a ViewModel or a DataModel.
Is this overkill or a necessary evil? Should I define the DataContracts as the ViewModel guys perhaps, or even as the DataModels?
How would you do it and how would you split it into assemblies?
I would keep them all separate, as you mentioned. They each belong to a different layer and should be separate objects so they can absorb any future changes.
The WCF items will be created by referencing the service, so they belong with your WCF service. Your data model should be in a model project or a data access project, and your ViewModels in your MVC application; you could break them out from there if you want, but since they're fairly tightly coupled to the MVC app, that's debatable.
Should I define the DataContracts as the ViewModel guys perhaps,
Certainly not. The ViewModels are screen (use-case) oriented.
But you might do this in a DataService that is consumed by a SilverLight or jQuery client.
or even the DataModels.
That could make sense. And one of the reasons for POCOs is that they could even be the same classes.
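A small sketch of that option, with hypothetical types: the same POCO can serve as the DataModel and the DataContract, while the ViewModel stays screen-oriented.

```csharp
using System.Runtime.Serialization;

// Data model POCO; with DataContract attributes it can be served over
// WCF directly, which is the "same classes" option mentioned above.
[DataContract]
public class ProductData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public decimal UnitPrice { get; set; }
}

// Screen-oriented ViewModel for one MVC view: flattened and formatted
// for the use case rather than for the database or the wire.
public class ProductListItemViewModel
{
    public string Name { get; set; }
    public string DisplayPrice { get; set; }   // e.g. "$4.99", formatted for the view

    public static ProductListItemViewModel From(ProductData data) => new ProductListItemViewModel
    {
        Name = data.Name,
        DisplayPrice = data.UnitPrice.ToString("C")
    };
}
```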
I'm looking for how to structure the layer of my app between the presentation layer and the model / business object layer. I see examples using Controller classes and others using Service classes. Are these the same things with different names for different methodologies, or is there a more fundamental difference?
Edit:
To put the question in context, this is a PHP app using Doctrine as the ORM.
I would say terms like Controller are basically the same names for potentially very different things, depending on which methodology / framework you are using. At a very high level they may perform the same action, hence the generic name, but their responsibilities and scope within the context of the framework will usually be much more specific and different.
E.g. the Controller in MVC has little or nothing in common with the Controller layer in WCSF.
I think terms like Controller / Service etc. are generic and hence have been used in many frameworks, but they have a special meaning within the framework of reference.
Also, specifically, a controller and a service are, to me, two completely different concepts.
A Controller is something like a layer that is responsible for orchestrating logic within the application, or within an aspect of the application.
A Service, to me, is basically the external API through which you expose aspects of your application in a standard manner.
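To make the distinction concrete, here is a tiny sketch (in C# rather than PHP, to match the rest of this page; all names are hypothetical):

```csharp
// Controller: orchestrates a use case inside the application.
// It decides the flow but owns no business rules itself.
public class CheckoutController
{
    private readonly IPaymentService _payments;
    public CheckoutController(IPaymentService payments) => _payments = payments;

    public string Checkout(decimal amount)
    {
        if (amount <= 0) return "invalid amount";   // presentation-level concern
        bool paid = _payments.Charge(amount);       // delegate to the exposed API
        return paid ? "order confirmed" : "payment failed";
    }
}

// Service in this answer's sense: the standard API through which an
// aspect of the application (here, payments) is exposed to callers.
public interface IPaymentService
{
    bool Charge(decimal amount);
}
```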