How do abstractions help with DRY? - oop

When we search "Don't repeat yourself" on Wikipedia, the first sentence is:
In software engineering, don't repeat yourself (DRY) is a principle of
software development aimed at reducing repetition of software
patterns, replacing them with abstractions...
I know that abstraction in software engineering means hiding the implementation complexity of how the behaviors of an API are realized, but "abstractions" in this sentence does not seem to match that definition. Could someone explain what abstraction means here? It would be better if you could give me an example.

I know that abstractions in software engineering mean hiding
implementation complexity of how the behaviors of an API are realized
Yes, it means that (see abstraction on Wikipedia), and the very same concept can also be leveraged to reduce repetition. In other words, it can be used to practice DRY.
Let me try to explain that with an example. First I'll show non-DRY code (without abstraction), then I'll use abstraction to reduce the repetition.
Let's assume you want to build an email view model from the application form details filled out by an applicant, and there is an email view class that consumes this EmailViewModel to show all non-null details from the application form. You could write it like the example below (first attempt):
public class ApplicationForm
{
    public AddressDetail AddressDetail { get; set; }
    public CustomerDetail CustomerDetail { get; set; }
    public ProductDetail ProductDetail { get; set; }
}

public class EmailViewModel
{
    public EmailViewModel(ApplicationForm applicationForm)
    {
        Address = GetAddressDetail(applicationForm.AddressDetail);
        Customer = GetCustomerDetail(applicationForm.CustomerDetail);
        Product = GetProductDetail(applicationForm.ProductDetail);
    }
    public string Address { get; set; }
    public string Customer { get; set; }
    public string Product { get; set; }
}

// view code, assume a Razor view
@if (Model.Address != null)
{
    // method for showing address
}
@if (Model.Customer != null)
{
    // method for showing customer
}
// ...and the other properties
I've kept the code above quite simple; there are only three properties and I haven't shown the declarations of the conversion methods. Now imagine there were 50 properties! With this first approach, every new property forces cumbersome changes in three places. Now I'll show a second example of how you could create an interface (one form of abstraction) to implement DRY.
interface IFormDetail
{
    IFormDetailView GetDetail();
}

interface IFormDetailView
{
    string ShowView();
}

public class ApplicationForm
{
    public List<IFormDetail> FormDetails { get; set; }
}

public class EmailViewModel
{
    public EmailViewModel(ApplicationForm applicationForm)
    {
        if (applicationForm.FormDetails != null)
        {
            FormDetails = new List<IFormDetailView>();
            foreach (var detail in applicationForm.FormDetails)
            {
                FormDetails.Add(detail.GetDetail());
            }
        }
    }
    public List<IFormDetailView> FormDetails { get; set; }
}

// view code, assume a Razor view
@if (Model.FormDetails != null)
{
    foreach (var detail in Model.FormDetails)
    {
        detail.ShowView();
    }
}
In this second example, when a new application form section is added, you only have to make one change: create the new detail class that implements IFormDetail.
So while we are hiding the complexity of how each detail is presented, we are also leveraging that abstraction to reduce repetition.
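For instance, here is a minimal sketch of what one such detail could look like (the AddressDetailView class and the formatting inside ShowView are purely illustrative); adding a new kind of detail means writing one new pair of classes, with no change to EmailViewModel or the view:

public class AddressDetail : IFormDetail
{
    public string Street { get; set; }
    public string City { get; set; }

    public IFormDetailView GetDetail()
    {
        // converts the form data into its view representation
        return new AddressDetailView(this);
    }
}

public class AddressDetailView : IFormDetailView
{
    private readonly AddressDetail _detail;

    public AddressDetailView(AddressDetail detail)
    {
        _detail = detail;
    }

    public string ShowView()
    {
        // how the detail is rendered stays hidden behind the interface
        return _detail.Street + ", " + _detail.City;
    }
}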


Does including Collections in Entities violate what an entity is supposed to be?

I am building a Web API using Dapper for .NET Core and trying to adhere to Clean Architecture principles. The API is consumed by an external Angular front-end.
I have repositories that use Dapper to retrieve data from the database, and this data then passes through a service to be mapped into a DTO for display to the user.
It is my understanding that an entity should be an exact representation of the database object, with no extra properties, and that I should use DTOs if I require some additional properties to show the user (or if I wish to obscure certain properties from the user too).
Suppose I have a DTO:
public class StudentDTO
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<Assignment> Assignments { get; set; }
}
and its corresponding Entity:
public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}
With this model, should I want to get a student with all of their assignments, I'd need to have two repository calls, and do something like this in the service:
public StudentDTO GetById(Guid id)
{
    var student = this.studentRepository.GetById(id);
    var assignments = this.assignmentRepository.GetByStudentId(id);
    return SomeMapperClass.Map(student, assignments);
}
But this seems inefficient and unnecessary. My question is, should I not just retrieve the Assignments when I get the student entity in the repository, using a JOIN? Or would this violate what an entity is supposed to be?
I apologise, I do realise this is a rather simple question, but I'd really like to know which method is the best approach, or if they both have their use cases
I think the following would be more efficient, since the mapper uses reflection, which is many times slower:
public StudentDTO GetById(Guid id)
{
    // assumes the repository returns an object that exposes an Assignments property
    var student = this.studentRepository.GetById(id);
    student.Assignments = this.assignmentRepository.GetByStudentId(id);
    return student;
}
but the common way (with Entity Framework) is
return _context.Students.Include(i => i.Assignments).FirstOrDefault(i => i.Id == id);
This is why the generic repository is a bad idea in most cases: it is hard to guess in advance what set of data you will need.
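To address the original question about doing the JOIN in the repository: Dapper supports this via multi-mapping, so a single query can return the student together with its assignments. A rough sketch (the table names, column names, and Assignment properties are assumptions):

public StudentDTO GetByIdWithAssignments(IDbConnection connection, Guid id)
{
    // requires: using Dapper; using System.Collections.Generic; using System.Data; using System.Linq;
    const string sql = @"
        SELECT s.Id, s.Name, a.Id, a.Title, a.StudentId
        FROM Students s
        LEFT JOIN Assignments a ON a.StudentId = s.Id
        WHERE s.Id = @Id;";

    var lookup = new Dictionary<Guid, StudentDTO>();

    connection.Query<StudentDTO, Assignment, StudentDTO>(
        sql,
        (student, assignment) =>
        {
            // one row per assignment: reuse the same StudentDTO instance
            if (!lookup.TryGetValue(student.Id, out var dto))
            {
                dto = student;
                dto.Assignments = new List<Assignment>();
                lookup.Add(dto.Id, dto);
            }
            if (assignment != null)
            {
                dto.Assignments.Add(assignment);
            }
            return dto;
        },
        new { Id = id },
        splitOn: "Id");

    return lookup.Values.FirstOrDefault();
}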

method within model in MVC

I have a model in my MVC app, 'designation'. I have a method getdesig() to simply return all of the designations. I originally had this in my controller but moved it to my model recently, with the aim of decluttering my controller and making it thinner. Is the model a logical place to put such a method?
public class designation
{
    [Key]
    public int DesignationID { get; set; }
    public string DesignationName { get; set; }
    public virtual ICollection<user> users { get; set; }

    private ClaimContext db = new ClaimContext();

    public List<designation> getdesig()
    {
        return (from c in db.designations
                select c).ToList();
    }
}
Yes; however, in more complicated scenarios where conditions exist, the controller is the place to determine what needs to be loaded, or how much needs to be loaded, based on the scenario/arguments. In this very simple example it is fine, as you are just dumping all the data without regard to any context.
It is good practice to keep your ViewModels as simple and as specific to the view as possible; the ViewModel's job is simply to be a common store that drives the view. The view relies on the model being populated with the appropriate data, and it is the job of the controller to determine the context and decide what should be populated in the model.
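As a rough sketch of that division of responsibility (the controller and view-model names below are made up for illustration):

// hypothetical controller: the model supplies the data, the controller decides
// what to load and fills a view-specific ViewModel
public class DesignationController : Controller
{
    public ActionResult Index()
    {
        var model = new DesignationListViewModel
        {
            Designations = new designation().getdesig()
        };
        return View(model);
    }
}

public class DesignationListViewModel
{
    public List<designation> Designations { get; set; }
}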

losing data annotations when updating the model from the database

I have a big existing database to communicate with, and I'm using EF 5.0 database-first. The problem I'm having is that if I add any data annotation like [StringLength(50)] to a class and the database is then updated, when I "update model from database" all the data annotations are gone. How can I keep them?
It's very simple: you can't! Those classes are auto-generated and will be overwritten on each model update or change.
However, you can achieve what you need by extending the models. Suppose EF generated the following entity class for you:
namespace YourSolution
{
    using System;
    using System.Collections.Generic;

    public partial class News
    {
        public int ID { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public int UserID { get; set; }
        public virtual UserProfile User { get; set; }
    }
}
and you want to do some workarounds to preserve your data annotations and attributes. Follow these steps:
First, add two classes somewhere (wherever you want, but preferably in Models) like the following:
namespace YourSolution
{
    using System.ComponentModel.DataAnnotations;

    [MetadataType(typeof(NewsAttribs))]
    public partial class News
    {
        // leave it empty.
    }

    public class NewsAttribs
    {
        // Your attribs will come here.
    }
}
Then add the properties and attributes you want to the second class, NewsAttribs here:
public class NewsAttribs
{
    [Display(Name = "News title")]
    [Required(ErrorMessage = "Please enter the news title.")]
    public string Title { get; set; }

    // and other properties you want...
}
Notes:
1) The namespace of the generated entity class and your classes must be the same - here, YourSolution.
2) Your first class must be partial, and its name must be the same as the EF-generated class.
Go through this and your attributes will never be lost again...
The accepted answer may work for standard data operations, but I am trying to validate the model prior to the call to DbSet.Add using TryValidateObject. With the accepted answer, it is still not picking up on the data annotations.
What did work for me I found in a .NET Runtime GitHub thread, as proposed by what I'm inferring is one of the .NET developers.
Basically, this is a bug, and you have to force the model to recognize the metadata decorations using TypeDescriptor.AddProviderTransparent:
TypeDescriptor.AddProviderTransparent(new AssociatedMetadataTypeTypeDescriptionProvider(typeof(News), typeof(NewsAttribs)), typeof(News));
Once I make this call, TryValidateObject recognizes the data annotations and returns false when any of the constraints are not met.
Here's the link. A little more than halfway down, there's a working code sample in a .zip file.
https://github.com/dotnet/runtime/issues/46678
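For completeness, a minimal sketch of how that registration and Validator.TryValidateObject fit together (reusing the News/NewsAttribs types from the accepted answer; the rest is illustrative):

// requires: using System.Collections.Generic; using System.ComponentModel;
//           using System.ComponentModel.DataAnnotations;

// register the metadata class once, e.g. at application startup
TypeDescriptor.AddProviderTransparent(
    new AssociatedMetadataTypeTypeDescriptionProvider(typeof(News), typeof(NewsAttribs)),
    typeof(News));

// validate an instance that is missing its required Title
var news = new News();
var results = new List<ValidationResult>();
bool isValid = Validator.TryValidateObject(
    news,
    new ValidationContext(news),
    results,
    validateAllProperties: true);
// isValid is now false and results contains "Please enter the news title."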

OO programming issue - State Design Pattern

I have spent the last day trying to work out which pattern best fits my specific scenario and I have been tossing up between the State Pattern & Strategy pattern. When I read examples on the Internet it makes perfect sense... but it's another skill trying to actually apply it to your own problem. I will describe my scenario and the problem I am facing and hopefully someone can point me in the right direction.
Problem: I have a base object that has different synchronization states, i.e. Latest, Old, Never Published, Unpublished, etc. Depending on what state the object is in, the behaviour is different; for example, you cannot get the latest version of a base object that has never been published. At this point it seems the State design pattern is best suited... so I have implemented it, and now each state has methods such as CanGetLatestVersion, GetLatestVersion, CanPublish, Publish, etc.
It all seems good at this point. But let's say you have 10 different child objects that derive from the base class... my solution is broken, because when the "publish" method is executed for each state, it needs properties of the child object to actually carry out the operation, yet each state only has a reference to the base object. I have just spent some time creating a sample project illustrating my problem in C#.
public class BaseDocument
{
    private IDocumentState _documentState;

    public BaseDocument(IDocumentState documentState)
    {
        _documentState = documentState;
    }

    public bool CanGetLatestVersion()
    {
        return _documentState.CanGetLatestVersion(this);
    }

    public void GetLatestVersion()
    {
        if (CanGetLatestVersion())
            _documentState.GetLatestVersion(this);
    }

    public bool CanPublish()
    {
        return _documentState.CanPublish(this);
    }

    public void Publish()
    {
        if (CanPublish())
            _documentState.Publish(this);
    }

    internal void Change(IDocumentState documentState)
    {
        _documentState = documentState;
    }
}
public class DocumentSubtype1 : BaseDocument
{
    public DocumentSubtype1(IDocumentState documentState) : base(documentState) { }
    public string NeedThisData { get; set; }
}

public class DocumentSubtype2 : BaseDocument
{
    public DocumentSubtype2(IDocumentState documentState) : base(documentState) { }
    public string NeedThisData1 { get; set; }
    public string NeedThisData2 { get; set; }
}
public interface IDocumentState
{
    bool CanGetLatestVersion(BaseDocument baseDocument);
    void GetLatestVersion(BaseDocument baseDocument);
    bool CanPublish(BaseDocument baseDocument);
    bool Publish(BaseDocument baseDocument);
    SynchronizationStatus Status { get; }
}
public class LatestState : IDocumentState
{
    public bool CanGetLatestVersion(BaseDocument baseDocument)
    {
        return false;
    }

    public void GetLatestVersion(BaseDocument baseDocument)
    {
        throw new Exception();
    }

    public bool CanPublish(BaseDocument baseDocument)
    {
        return true;
    }

    public bool Publish(BaseDocument baseDocument)
    {
        // ISSUE HERE... I need to access the properties of the DocumentSubtype1 or DocumentSubtype2 class.
        throw new NotImplementedException();
    }

    public SynchronizationStatus Status
    {
        get { return SynchronizationStatus.LatestState; }
    }
}
public enum SynchronizationStatus
{
    NeverPublishedState,
    LatestState,
    OldState,
    UnpublishedChangesState,
    NoSynchronizationState
}
I then thought about implementing the states for each child object... which would work, but I would need to create 50 classes (10 children x 5 states), and that just seems absolutely crazy... hence why I am here!
Any help would be greatly appreciated. If it is confusing please let me know so I can clarify!
Cheers
Let's rethink this entirely.
1) You have a local 'Handle' to some data which you don't really own (some of it is stored, or published, elsewhere).
2) Maybe the Handle is what we called the 'State' before -- a simple common API, without the implementation details.
3) Rather than 'CanPublish' and 'GetLatestVersion' delegating from the BaseDocument to State -- it sounds like the Handle should delegate to the specific DocumentStorage implementation.
4) When representing external states or storage locations, a separate object is ideal for encapsulating the new/existent/deleted state and the identifier in that storage location.
5) I'm not sure if 'Versions' is part of 'Published Location', or if they're two independent storage locations. Our handle needs a 'Storage State' representation for each independent location it will store to/from.
For example:
Handle
- has 1 LocalCopy with states (LOADED, NOT_LOADED)
- has 1 PublicationLocation with Remote URL and states (NEW, EXIST, UPDATE, DELETE)
Handle.getVersions() then delegates to PublicationLocation.
Handle.getCurrent() loads a LocalCopy (cached), from PublicationLocation.
Handle.setCurrent() sets a LocalCopy and sets Publication state to UPDATE.
(or executes the update immediately, whichever.)
Remote Storage Locations/ Transports can be subtyped for different methods of accessing, or LocalCopy/ Document can be subtyped for different types of content.
THIS, I AM PRETTY SURE, IS THE MORE CORRECT SOLUTION.
[Previously] Keep 'State' somewhat separate from your 'Document' object (let's call it Document, since we need to call it something -- and you didn't specify.)
Build your hierarchy from BaseDocument down, have a BaseDocument.State member, and create the State objects with a reference to their Document instance -- so they have access to and can work with the details.
Essentially:
BaseDocument <--friend--> State
Document subtypes inherit from BaseDocument.
protected methods & members in the Document hierarchy enable State to do whatever it needs to.
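A minimal sketch of that earlier idea, with made-up names: the state is constructed with a reference to the concrete document it serves, so it can reach the protected details it needs.

public abstract class Document
{
    protected internal DocumentState State { get; set; }

    public void Publish()
    {
        if (State.CanPublish())
            State.Publish();
    }
}

public class ReportDocument : Document
{
    protected internal string ReportData { get; set; }
}

public abstract class DocumentState
{
    public abstract bool CanPublish();
    public abstract void Publish();
}

public class LatestReportState : DocumentState
{
    private readonly ReportDocument _document;

    public LatestReportState(ReportDocument document)
    {
        _document = document;
    }

    public override bool CanPublish() { return true; }

    public override void Publish()
    {
        // typed access to the subtype's data, no casting required
        System.Console.WriteLine("Publishing " + _document.ReportData);
    }
}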
Hope this helps.
Many design patterns can be used for this kind of architecture problem. It is unfortunate that you do not give an example of how you do the publish. However, I will suggest some workable designs:
1) Put the additional parameters in the base document and make them nullable. If a document does not use one, it stays null; otherwise it has a value. You won't need inheritance here.
2) Do not put the Publish method in the DocumentState; put it in the BaseDocument instead. Logically, the Publish method belongs to BaseDocument rather than DocumentState.
3) Let another service class handle the publishing (a publisher service). You can achieve this with the abstract factory pattern. This way you need a 1:1 document : publisher mapping. It may be a lot of classes, but you have the freedom to modify each document's publisher:
public interface IPublisher<T> where T : BaseDocument
{
    bool Publish(T document);
}

public interface IPublisherFactory
{
    bool Publish(BaseDocument document);
}

public class PublisherFactory : IPublisherFactory
{
    public PublisherFactory(
        IPublisher<BaseDocument> basePublisher,
        IPublisher<SubDocument1> sub1Publisher)
    {
        this.sub1Publisher = sub1Publisher;
        this.basePublisher = basePublisher;
    }

    IPublisher<BaseDocument> basePublisher;
    IPublisher<SubDocument1> sub1Publisher;

    public bool Publish(BaseDocument document)
    {
        if (document is SubDocument1)
        {
            return sub1Publisher.Publish((SubDocument1)document);
        }
        else if (document is BaseDocument)
        {
            return basePublisher.Publish(document);
        }
        return false;
    }
}
public class LatestState : IDocumentState
{
    public LatestState(IPublisherFactory factory)
    {
        this.factory = factory;
    }

    IPublisherFactory factory;

    public bool Publish(BaseDocument baseDocument)
    {
        return factory.Publish(baseDocument);
    }
}
4) Use composition over inheritance. Design an interface for each state behaviour, then compose them in the document. In summary, you might end up with 5 CanGetLatestVersion (and similar) composition classes but 10 publisher composition classes (a rough sketch of this idea follows at the end of this answer).
5) More advanced, and depending on the repository you use: maybe you can use the Visitor pattern. This way you have the freedom to modify each publishing method. It is similar to my point 3, except it is declared in one class. For example:
public class BaseDocument
{
}

public class SubDocument1 : BaseDocument
{
}

public class DocumentPublisher
{
    public void Publish(BaseDocument document)
    {
    }

    public void Publish(SubDocument1 document)
    {
        // do the prerequisite
        Publish((BaseDocument)document);
        // do the postrequisite
    }
}
There may be other designs available, but it depends on how you access your repository.
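As a rough sketch of point 4 (composition over inheritance), with names invented for the example: each document is composed from small behaviour objects instead of inheriting a state implementation.

public interface IPublishBehaviour
{
    bool Publish();
}

public interface ILatestVersionBehaviour
{
    bool CanGetLatestVersion();
}

public class Subtype1PublishBehaviour : IPublishBehaviour
{
    private readonly string _needThisData;

    public Subtype1PublishBehaviour(string needThisData)
    {
        _needThisData = needThisData;
    }

    public bool Publish()
    {
        // uses the subtype-specific data directly
        System.Console.WriteLine("Publishing " + _needThisData);
        return true;
    }
}

public class NeverPublishedLatestVersionBehaviour : ILatestVersionBehaviour
{
    public bool CanGetLatestVersion() { return false; }
}

public class ComposedDocument
{
    private readonly IPublishBehaviour _publish;
    private readonly ILatestVersionBehaviour _latest;

    public ComposedDocument(IPublishBehaviour publish, ILatestVersionBehaviour latest)
    {
        _publish = publish;
        _latest = latest;
    }

    public bool Publish() { return _publish.Publish(); }
    public bool CanGetLatestVersion() { return _latest.CanGetLatestVersion(); }
}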

Class Naming Conventions with Layered Architecture and Entity Framework

I'm designing a layered architecture (Service/Business Logic Layer, Data Access Layer) and am struggling with the intersection of a few problems:
- Entity Framework 4.1 does not support interfaces directly
- My interfaces contain collections of other interfaces with read/write properties
- This means using an implementing class won't work either, since it would still refer to another interface type
Example (please excuse the poorly written code, this is ad-hoc from my brain):
Data Access Layer
public interface IEmployer
{
    string Name { get; set; }
    ICollection<IEmployee> Employees { get; set; }
}

public interface IEmployee
{
    string Name { get; set; }
}

public class Employer : IEmployer
{
    public string Name { get; set; }
    public ICollection<IEmployee> Employees { get; set; }
}

public class Employee : IEmployee
{
    public string Name { get; set; }
}

public class DataManager
{
    public IEmployer GetEmployer(string name) { ... }
    public IEmployee CreateEmployeeObject(string name) { ... }
    public void Save(IEmployer employer) { ... }
    public void Save(IEmployee employee) { ... }
}
Service Layer
[DataContract]
public class Employee
{
    [DataMember]
    public string Name { get; set; }
}

public class HireService
{
    public void HireNewEmployee(Employee newEmployee, string employerName)
    {
        DataManager dm = new DataManager();
        IEmployer employer = dm.GetEmployer(employerName);
        IEmployee employee = dm.CreateEmployeeObject(newEmployee.Name);
        dm.Save(employee);
        employer.Employees.Add(employee);
        dm.Save(employer);
    }
}
Without EF, the above works fine. The IEmployee type is used in the service layer and does not conflict with the Employee data contract type. However, EF cannot use an interface, so I would be required to use a class instead of an interface.
I see a few options:
Change IEmployer/IEmployee to classes, leaving the same names
Change IEmployer/IEmployee to classes, rename to EmployerDAL/EmployeeDAL
Change IEmployer/IEmployee to classes, rename to Employer/Employee, sprinkle using EmployerDL = DataLayer.Employer at the beginning of any service classes using it
What naming convention should I follow for class names which are defined in both the business and data layer?
Similar question to this: What's the naming convention for classes in the DataAccess Project? except that EF causes a problem with interfaces.
Actually, the classes defined in your DAL should be the ones used in your business layer - those are your real domain objects. Classes exposed from your business layer are just data transfer objects, so if you want to establish any convention, you should IMHO rename your data contracts.
Anyway, naming conventions are really subjective. Choose the one that best fits your needs and be consistent with it.
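As a rough sketch of that suggestion (the Dto suffix is only one possible convention, and it assumes DataManager is changed to return the classes directly): keep the plain domain names in the data layer and rename the data contract instead.

// data access layer keeps the plain domain names (EF-compatible classes)
public class Employer
{
    public string Name { get; set; }
    public ICollection<Employee> Employees { get; set; }
}

public class Employee
{
    public string Name { get; set; }
}

// service layer: only the data contract carries a distinguishing suffix
[DataContract]
public class EmployeeDto
{
    [DataMember]
    public string Name { get; set; }
}

public class HireService
{
    public void HireNewEmployee(EmployeeDto newEmployee, string employerName)
    {
        var dm = new DataManager();
        Employer employer = dm.GetEmployer(employerName);
        Employee employee = dm.CreateEmployeeObject(newEmployee.Name);
        dm.Save(employee);
        employer.Employees.Add(employee);
        dm.Save(employer);
    }
}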