ASP.NET Boilerplate misleading naming convention and file structure

ASP.NET Boilerplate's creators claim their framework uses already familiar tools and implements best practices around them to provide you a SOLID development experience.
But I can't understand the basis for some parts of their naming convention and file structure.
For example, this is a part of their official SimpleTaskApp example:
[AutoMapFrom(typeof(Task))]
public class TaskListDto : EntityDto, IHasCreationTime
{
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime CreationTime { get; set; }
    public TaskState State { get; set; }
}
TaskListDto is very misleading! Isn't TaskDto more understandable?
And why should folders containing entities be plural: Tasks/Task.cs or People/Person.cs?
Why are they kept apart at all? Isn't it better to put all entities into one Entities folder in the Domain layer?

I do not have any experience with this framework, but there is a concept in DDD called Domain Aggregates. It says that entities belonging to the same concept should be kept close together. For example, suppose we have a concept called People under which several entities fall. We take Person as the aggregate root, and all the other entities of that concept are accessible via that root. So we keep entities of the same concept under the same folder.
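A minimal sketch of what that folder layout implies in code. The Person and Address types here are hypothetical examples, not classes from ABP; the point is that child entities live next to, and are reached through, their aggregate root:

```csharp
// Domain/People/Person.cs  -- Person is the aggregate root.
// Domain/People/Address.cs -- Address belongs to the same concept/folder.
using System;
using System.Collections.Generic;

public class Address
{
    public string City { get; set; }
}

public class Person
{
    private readonly List<Address> _addresses = new List<Address>();

    public string Name { get; set; }

    // Child entities are exposed read-only; mutation goes through the root.
    public IReadOnlyList<Address> Addresses => _addresses;

    public void AddAddress(Address address)
    {
        if (address == null) throw new ArgumentNullException(nameof(address));
        _addresses.Add(address);
    }
}
```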
As for TaskListDto, I think the name should reflect the purpose of the class, and here we want a list of tasks, not a single task.
Further reading on Domain Aggregates: https://www.jamesmichaelhickey.com/domain-driven-design-aggregates/

Related

Proper design technique

I am creating an asp.net mvc4 application that will ask the user a set of questions based on a particular criteria that they enter. Each question is stored in a table and only those questions that meet the criteria will be displayed to the end user.
I am using a viewmodel that combines information from a couple of different tables. Basically it has a list of Questions and an inspection id to tie all the tests together. My question is: what is the proper OO design technique for populating the viewmodel?
Should the method / methods for populating the viewmodel reside within the viewmodel class itself? Basically passing the entities into the viewmodel and allow it to populate itself.
Should there be a new class that you send in the entities in and it returns the viewmodel?
Or is there a better way to do this?
Yes, your approach is valid.
Consider the following example in your model:
public List<Questions> Questions
{
    get
    {
        // All of the lookup logic lives in the getter.
        var repository = new QuestionRepository();
        return repository.ObtainQuestions(ClientAge, ClientType);
    }
}
public int ClientAge { get; set; }
public ClientTypeEnum ClientType { get; set; }
The getter in the Questions property contains all the logic. As long as the ClientAge and ClientType properties have valid values, the question list will be populated. This avoids having to set the data in every action method where you need the property.
In the example I am getting the data from a repository, but you could get it from an ORM like Entity Framework, or any other source.
You can search for the term "skinny controllers" and read up on the recommendations and best practices.
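As for the second option raised in the question (a separate class that takes the entities in and returns the viewmodel), a minimal sketch might look like the following. All type and member names here are illustrative, not from any framework:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Question
{
    public string Text { get; set; }
    public int MinAge { get; set; }
}

public class InspectionViewModel
{
    public int InspectionId { get; set; }
    public List<Question> Questions { get; set; }
}

// A small builder keeps both the viewmodel and the controller skinny:
// the controller just hands entities in and gets a viewmodel back.
public class InspectionViewModelBuilder
{
    public InspectionViewModel Build(int inspectionId,
                                     IEnumerable<Question> allQuestions,
                                     int clientAge)
    {
        return new InspectionViewModel
        {
            InspectionId = inspectionId,
            // Only questions matching the criteria reach the view.
            Questions = allQuestions.Where(q => clientAge >= q.MinAge).ToList()
        };
    }
}
```

The trade-off versus the self-populating viewmodel is testability: the builder can be handed fake entity lists in a unit test, while a getter that news up a repository cannot.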

How to deserialize data from ApiController

I have some POCO objects that are set up for use with Entity Framework Code First.
I want to return one of those objects from an ApiController in my ASP.NET MVC 4 website, and then consume it in a client application.
I originally had problems with the serialization of the object at the server end, because the Entity Framework was getting in the way (see Can an ApiController return an object with a collection of other objects?), and it was trying to serialize the EF proxy objects rather than the plain POCO objects. So, I turned off proxy generation in my DbContext to avoid this - and now my serialized objects look OK (to my eye).
The objects in question are "tags" - here's my POCO class:
public class Tag
{
    public int Id { get; set; }
    public int ClientId { get; set; }
    public virtual Client Client { get; set; }

    [Required]
    public string Name { get; set; }

    [Required]
    public bool IsActive { get; set; }
}
Pretty standard stuff, but note the ClientId and Client members. Those are EF Code First "navigation" properties. (Every tag belongs to exactly one client).
Here's what I get from my ApiController:
<Tag xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Foo">
<Client i:nil="true"/>
<ClientId>1</ClientId>
<Id>1</Id>
<IsActive>true</IsActive>
<Name>Example</Name>
</Tag>
The Client member is nil because having disabled proxy generation I don't get automatic loading of the referenced objects. Which is fine, in this case - I don't need that data at the client end.
So now I'm trying to de-serialize those objects at the client end. I had hoped that I would be able to re-use the same POCO classes in the client application, rather than create new classes. DRY and all that. So, I'm trying:
XmlSerializer xmlSerializer = new XmlSerializer(typeof(Tag));
var tag = xmlSerializer.Deserialize(stream);
But I've run into two problems, both of which are due to EF Code First conventions:
Problem 1: Because my Tag class has a Client member, the XmlSerializer is complaining that it doesn't know how to de-serialize it. I guess that's fair enough (though I had hoped that because the member was nil in the XML it wouldn't care). I could pass extra types into the XmlSerializer constructor, but when I tried that, it then complained about other classes that Client uses. Since Client references all sorts of other objects, I'd end up having to pass in all of them!
I tried using the [DataContract] and [DataMember] attributes to remove the Client member from the XML (by not marking it as a DataMember). That did remove it from the XML, but didn't stop the XmlSerializer from whining about it. So I guess it's not the fact that it's in the XML that's the problem, but that it's in the class definition.
Problem 2: When I did try passing in typeof(Client) as an extra type, it also complained that it couldn't de-serialize that class because it contains an interface member. That's because - again due to EF Code First conventions - it has a Tags member as follows:
    public virtual ICollection<Tag> Tags { get; set; }
So it looks like even if I get over the referenced-types problem, I'm still not going to be able to use my POCO classes.
Is there a solution to this, or do I have to create new DTO classes purely for use at the client side, and return those from my ApiController?
I just tried using DataContractSerializer instead of XmlSerializer, and for the Tag class that seems to work. I've yet to try it with a class that has a virtual ICollection<T> member...
Update: tried it, and it "works". It still manages to reconstruct the object, and leaves the ICollection member at null.
Update 2: OK, that turned out to be a dead end. Yes, it meant that I could correctly serialize and de-serialize the classes, but as everyone kept telling me, DTO classes were a better way to go. (DTO = Data Transfer Objects - classes created specifically for transferring the data across the wire, probably with a subset of the fields of the original).
I'm now using AutoMapper (thanks Cuong Le) so that I can easily transform my POCO entities into simpler DTO classes for serialization, and that's what I'd recommend to anyone faced with the same problem.
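For anyone who wants to see the shape of that solution without pulling in a library, here is a hand-rolled sketch of the same idea. TagDto and TagMapper are illustrative names I made up; AutoMapper just automates the property copying. The Tag class is repeated here (with the Client navigation property omitted) so the sketch is self-contained:

```csharp
// The EF Code First POCO (trimmed: Client navigation property omitted).
public class Tag
{
    public int Id { get; set; }
    public int ClientId { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// The DTO carries only the fields the client needs -- no EF navigation
// properties, so XmlSerializer has nothing to complain about.
public class TagDto
{
    public int Id { get; set; }
    public int ClientId { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public static class TagMapper
{
    public static TagDto ToDto(Tag tag)
    {
        return new TagDto
        {
            Id = tag.Id,
            ClientId = tag.ClientId,
            Name = tag.Name,
            IsActive = tag.IsActive
        };
    }
}
```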

Choose between one-to-one and component in NHibernate

I am trying to map classes User and Profile, just as the following:
public class User
{
    public long ID { get; set; }
    public string LoginName { get; set; }
    public Profile Profile { get; set; }
}

public class Profile
{
    // The ID is the same as the User's ID.
    public long ID { get; protected set; }
    public string NickName { get; set; }
    public Gender Gender { get; set; }
}
So they can be mapped as either a one-to-one or a component relationship. I find that some people praise the component approach and consider one-to-one bad practice; why? For performance reasons? But they are designed as two separate tables in many scenarios (ASP.NET 2.0 Membership, for example).
How should I choose? Which aspects should I consider? I know a component is a "value object" rather than an entity, but does that imply anything further?
PS: What confuses me even more is the opinion that many-to-one should be used even when it's a one-to-one relationship in the real world!
The key should be in your use cases for this class. Don't take ASP.NET Membership as an example, because its design is terrible.
You need to answer these questions:
Does a Profile make sense as an entity of its own?
Do you have references to the Profile anywhere else in your domain?
Can you have a User without a Profile?
Does it have a behavior of its own?
Would you extend (inherit) Profile for some reason?
Do most use cases just deal with the user (and its LoginName, not just the ID) but not the profile?
If the answer to most of these questions is yes, you have a good case for using one-to-one. (I disagree with #Falcon; this is actually one of the legitimate uses of one-to-one.)
Otherwise, a Component will work fine. It doesn't have an ID, so you can remove that property.
You should use neither.
One-To-One
You have the user and the profile in different database tables, but both rows share the same primary key value:
See http://jagregory.com/writings/i-think-you-mean-a-many-to-one-sir/
Pretty bad design practice for relational databases, it's messy and does not necessarily enforce constraints for the relationship.
Component
You can use a component to get a clean object model from a messy relational database: profile and user data are both stored in the same database table, but they are separated in your object model (as you want, judging from your code). Lazy loading probably isn't supported, which can cause high database traffic.
Reference
Imho, you should use a reference. Conceptually it's somewhat like a one-to-one, but the user references a profile. The profiles can be stored in their own table, can be loaded lazily (performance), and are not dependent on a user.
Regarding your confusion:
Just read the link I supplied. Technically you need a many-to-one for a properly designed database schema, as that is what is actually possible and what will be mapped. I know it's confusing. If you only need to map one side, think of it as a reference instead of a one-to-one.
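To make the "reference mapped as many-to-one" idea concrete, here is a sketch using Fluent NHibernate. It assumes a ProfileId foreign-key column on the User table; the column name and the overall schema are my assumptions, not something from the question:

```csharp
using FluentNHibernate.Mapping;

public class UserMap : ClassMap<User>
{
    public UserMap()
    {
        Id(x => x.ID);
        Map(x => x.LoginName);

        // Mapped as a many-to-one from User to Profile, even though at the
        // domain level each user has at most one profile. The profile row
        // lives in its own table and is loaded lazily on first access.
        References(x => x.Profile)
            .Column("ProfileId")
            .LazyLoad();
    }
}
```

The database cannot itself enforce that only one user points at a given profile; if you need that guarantee, add a unique constraint on the ProfileId column.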

Web Service Contract Design - Single-Responsibility

I'm curious as to see how most developers go about designing the contracts to their web services. I am quite new to service architecture and especially new to WCF.
In short, I'd like to find out what type of objects you are returning in your operations, and does each operation in your service return the same object?
For example consider the following:
Currently, all services I create inherit from a ServiceBase object that looks similar to:
public abstract class AppServiceBase<TDto> : DisposableObjectBase where TDto : IDto
{
    protected IAppRequest Request { get; set; }
    protected IAppResponse<TDto> Response { get; set; }
}
Response represents the return object which composes something like:
public interface IAppResponse<TDto> where TDto : IDto
{
    List<TDto> Data { get; }
    ValidationResults ValidationResults { get; }
    RequestStatus Status { get; }
}
Therefore, any derived service would return a response composed of the same object.
Now initially, I felt this would be a good design, as it forces each service to be responsible for a single object. For the most part this has worked out, but as my services grow, I've found myself questioning this design.
Take this for example:
Say you have a music service you're writing, and one of your services is "Albums".
So you write basic CRUD operations, and they pretty much all return a collection of AlbumDto.
What if you want to write an operation that returns the types of albums (LP, Single, EP, etc.)?
So you have an AlbumTypesDto object. Would you create a new service just for this object, or have your Albums service return many different objects?
I can imagine a complex service with several varying return types being cumbersome and poor design, yet writing a whole new service for what may be only one or two operations seems like overkill.
What do you think?
It is a good idea to design your services around your domain problem. By exposing a CRUD pattern on the service, you are essentially using services for data access. The risk is that your business logic will end up in whatever is consuming your service.
Your service should expose methods relevant to the problem you are trying to solve (which typically maps loosely onto the operations in the UI).
From here you will see your data contracts start to fit more naturally to the problem you are trying to solve, instead of creating "one size fits all" contracts.
For a good starting point, search for "Domain-Driven Design"; there is plenty of reference material on this.
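To contrast with the generic CRUD shape, a task-oriented WCF contract might look like the sketch below. The operation names and DTO types are made up for illustration; the point is that operations are named after business tasks rather than table operations:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Minimal DTOs so the contract sketch is self-contained.
public class AlbumDto
{
    public string Title { get; set; }
}

public class AlbumTypeDto
{
    public string Name { get; set; } // e.g. "LP", "Single", "EP"
}

[ServiceContract]
public interface IAlbumService
{
    // Each operation reflects a use case, not a CRUD verb,
    // so varied return types fall out naturally.
    [OperationContract]
    IList<AlbumDto> FindAlbumsByArtist(string artistName);

    [OperationContract]
    IList<AlbumTypeDto> GetAvailableAlbumTypes();
}
```

With this framing, the "do AlbumTypes belong in the Albums service?" question answers itself: they belong wherever the use case that needs them lives.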

Silverlight 3 Validation MVVM WCF EF

My application is SL2, reading and writing data through an Entity Framework model exposed via WCF. We have resisted writing any UI validation because of the exciting new validation controls coming in SL3.
...However, after doing a trial upgrade of our project yesterday, we realised that most of the standard practices for attaching validation properties to business objects can't readily be applied when the objects are created from the EF model.
Has anyone had any similar experiences yet? If so, how did you work around this?
Thanks,
Mark
You are correct; you have two options.
In your model or viewmodel (depending on your implementation of MVVM), do some validation in the setters of your properties and throw an exception if there is a problem; then use the SL3 ValidatesOnExceptions property in your data binding on the view for each control being validated.
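A minimal sketch of that first option, assuming a hypothetical Age property and range; the view would bind with ValidatesOnExceptions=True so the thrown exception surfaces next to the control:

```csharp
using System;

public class PersonViewModel
{
    private int _age;

    public int Age
    {
        get { return _age; }
        set
        {
            // Throwing from the setter is what lets the
            // ValidatesOnExceptions binding flag the control.
            if (value < 0 || value > 120)
                throw new ArgumentOutOfRangeException(
                    "value", "Age must be between 0 and 120.");
            _age = value;
        }
    }
}
```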
Alternatively, use metadata classes to attach validation to your existing domain model:
// Note: the attribute is MetadataType (from System.ComponentModel.DataAnnotations),
// applied to a partial class so the generated EF code is left untouched.
[MetadataType(typeof(MyMetadataClass))]
public partial class MyClass
{
    public int MyProperty { get; set; }
}

public class MyMetadataClass
{
    [Range(1, 100)]
    public int MyProperty { get; set; }
}