Dependency Inversion Principle - Where should the interfaces go? - oop

I've been scratching my head about this for a few months and I still haven't been able to satisfactorily convince myself that I have the right answer. We have a very typical situation: dependencies between multiple layers of our application, where each layer lives in its own assembly. For example, our application layer uses the repository layer to retrieve data; pretty standard. My question is: where should the abstraction (an interface, in this case) live, and why? In the example given, should it go in the Application layer, in the Repository layer, or in a separate abstractions assembly?
Based on the diagram and description in The Clean Architecture (not something we're strictly adhering to), I've placed them in the Application layer so that all of the dependencies point inwards, but I'm not sure this is right. I've read quite a few other articles and looked at countless examples, but there is very little reasoning as to where the abstractions should live.
I've seen this question, but I don't believe it answers mine; unless, of course, the actual answer is that it doesn't matter.

It is called the Dependency Inversion Principle because the classic dependency direction, from a higher-level module to a lower-level one, is inverted as follows:
HigherLevelClass -> RequiredInterface <= LowerLevelClassImplementingTheInterface
So the inverted dependency points from the lower-level module to the abstraction required by your higher-level module.
Since the client module (your application layer) requires certain lower-level functionality, the related abstraction (your repository interface) is placed near the client module.
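As a rough sketch (TypeScript standing in for any OO language, with the layers marked by comments and all names hypothetical), that placement looks like this: the interface lives with the high-level application code, and the repository assembly depends upward on it:

```typescript
// --- Application layer (higher level): owns the abstraction it needs. ---
interface CustomerRepository {
  findName(id: number): string | undefined;
}

class CustomerGreeter {
  constructor(private repo: CustomerRepository) {}
  greet(id: number): string {
    const name = this.repo.findName(id);
    return name ? `Hello, ${name}` : "Hello, stranger";
  }
}

// --- Repository layer (lower level): depends on the Application layer's interface. ---
class InMemoryCustomerRepository implements CustomerRepository {
  private data = new Map<number, string>([[1, "Ada"]]);
  findName(id: number): string | undefined {
    return this.data.get(id);
  }
}

// Composition root: the only place that knows both layers.
const greeter = new CustomerGreeter(new InMemoryCustomerRepository());
```

The arrow that used to point from the application down to the repository now points from the repository assembly up to the interface the application declares.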
All descriptions I know use the package construct for explaining this.
However, I see no reason why this should not be true for modules or layers.
For details, e.g. see: http://en.wikipedia.org/wiki/Dependency_inversion_principle


understanding the model layer and its intricacy

I have been working with MVC frameworks (PHP) for a while now, and I believe I understand the notion of layers separation pretty well.
For whoever is not there yet I'm talking about:
M => Model, data layer;
V => View, the UI of the application;
C => Controller, where business logic and incoming requests are processed;
Recently I came across a few projects that extend this concept by using other layers and extending the model one.
These layers use classes such as services, repositories, transformers, value objects, data mappers, etc.
I also understand the essential idea of DDD, but I'd like to know what this type of architecture is called, whether these additional layers are connected with DDD and/or any design patterns, and whether you can share some resources (blog posts, books, videos, etc.) where I and other users of this community can learn this stuff.
For reference, I found tereško's answer on this question, which is very similar to what I am looking for.
Many thanks
These layers use classes such as services, repositories, transformers, value objects, data mappers, etc.
It's hard to ascribe those to any one specific architecture. For example, value objects (often lumped together with Data Transfer Objects (DTOs) and Plain Old CLR/Java Objects (POCOs/POJOs)) are commonly found in .NET/Java-based OO solutions.
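To make the value-object idea concrete, here is a hypothetical TypeScript sketch: unlike a bare DTO/POCO, which is just a data bag, a value object is typically immutable and compared by its values rather than by identity:

```typescript
// A value object: immutable, compared by value, carrying its own invariants.
class Money {
  constructor(readonly amount: number, readonly currency: string) {
    Object.freeze(this); // no mutation after construction
  }
  equals(other: Money): boolean {
    return this.amount === other.amount && this.currency === other.currency;
  }
  add(other: Money): Money {
    if (other.currency !== this.currency) throw new Error("currency mismatch");
    return new Money(this.amount + other.amount, this.currency); // new instance, not mutation
  }
}
```

Two distinct `Money` instances with the same amount and currency are equal as values, which is the defining property of the pattern.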
More fundamentally, as you might already know, logical layers ('...are merely a way of organizing your code.') are a fundamental concept in software architecture, so you'll find them all over the place; they are not specific to any one architecture.
See Panos's in-depth answer to 'What's the difference between “Layers” and “Tiers”?'.
share some resources (blog post, books, videos, etc)
In terms of architectures, architectural styles that use layers, and into which the concepts you list would fit:
5-Layer Architecture (one I documented in 2011, which I still use)
Ports & Adapters, aka Hexagonal Architecture. There seems to be a lot about hexagonal architecture around at the moment; this post is the best I have seen.
The key concepts behind both of these are actually very similar. You'll find other architectures out there, I'm sure, but how much they substantively differ is another question.
I'll make this a community wiki so others can add any resources they know of.

Domain services seem to require only a fraction of the total queries defined in repositories -- how to address that?

I'm currently facing some doubts about layering and repositories.
I was thinking of creating my repositories in a persistence module. Those repositories would inherit from (or implement/extend) repositories defined in the domain layer module, which is kept "persistence agnostic".
The issue is that, from all I can see, the needs of the domain layer regarding its repositories are quite humble. In general, they tend to be rather CRUDish.
It's generally at the application layer, when solving particular business use cases, that the queries tend to get more complex and contrived (and thus the number of repository methods tends to explode).
So this raises the question of how to deal with this:
1) Should I leave the domain repository interfaces simple and add the extra methods only in the repository implementations (so that the application layer, which does know about the repository implementations, can use them)?
2) Should I just add those methods to the domain-level repository implementations? I think not.
3) Should I create another set of repositories to be used just at the application layer? This would probably mean moving to a more CQRSesque application.
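Option 1 could be sketched roughly like this (hypothetical names, TypeScript standing in for any OO language): the domain keeps its humble, CRUDish contract, a wider query interface extends it for application-layer use, and one concrete class in the persistence module satisfies both:

```typescript
interface Order { id: number; customerId: number; overdue: boolean; }

// Domain layer: the minimal contract the domain itself needs.
interface OrderRepository {
  byId(id: number): Order | undefined;
  save(order: Order): void;
}

// Application layer: richer, use-case-driven queries extend the domain contract.
interface OrderQueries extends OrderRepository {
  overdueForCustomer(customerId: number): Order[];
}

// Persistence module: one implementation serves both contracts.
class InMemoryOrderRepository implements OrderQueries {
  private orders: Order[] = [];
  byId(id: number) { return this.orders.find(o => o.id === id); }
  save(order: Order) { this.orders.push(order); }
  overdueForCustomer(customerId: number) {
    return this.orders.filter(o => o.customerId === customerId && o.overdue);
  }
}
```

Domain code sees only `OrderRepository`; the application layer is handed the same object typed as `OrderQueries`.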
Thanks
I think you should react to the realities of your business / requirements.
That is, if your use cases are clearly not "persistence agnostic", then don't hold on to that particular restriction. Not everything can be reduced to CRUD; in fact, I think most things worth implementing can't be reduced to CRUD persistence. Most database systems, relational or otherwise, have a lot of features nowadays, and it seems quaint to just ignore them. Use them.
If you don't want to mix SQL with other code, there are still plenty of other patterns that let you do that without requiring you to abstract access to something you don't actually need to abstract.
On the flip side, you build a dependency on a particular persistence system. Is that a problem? Most of the time it actually isn't, but you have to decide for yourself.
All in all, I would choose option 4: model the problem. If I need complicated SQL to build a use case, and I don't need database independence (I rarely, if ever, do), then I just write it where it is used, end of story.
You can use other tools like refactoring later to correct design issues.
The Application layer doesn't have to know about the Infrastructure.
Normally it should be fine working with just what the repository interfaces declared in the Domain provide. The concrete implementations are injected at runtime.
Declaring repository interfaces in the Domain layer is not only about using them in domain services but also about using them elsewhere.
Should I create another set of repositories to be used just at the application layer level? This would probably mean moving to a more CQRSesque application.
You could do that, but you would lose some reusability.
It is also not related to CQRS: CQRS is a vertical division of the whole application between queries and commands, not a way of giving horizontal layers different means of fetching data.
Given that a repository is, most of the time, not about querying but about working with full aggregates, perhaps you could elaborate on why you need to create a separate set of repositories that are used only in your application/integration layer?
Perhaps you need a read-specific implementation that is optimised for data retrieval:
This would probably mean moving to a more CQRSesque application
Well, you'd probably want to implement the read-specific bits that make sense. I usually have my data access separated by namespace and, at times, even in a separate assembly. I then use I{Aggregate}Query implementations that return the relevant bits of data in as simple a type as possible. However, it is quite possible to map to a more complex read model that even has relations; it is still only a read model, though, and is not concerned with any command processing. To this end the domain is never even aware of these classes.
I would not go with extending the repositories.
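The I{Aggregate}Query idea in the answer above could look roughly like this (hypothetical names; TypeScript stands in for C#, and the array stands in for a hand-tuned SQL query). Note that the domain never sees these types:

```typescript
// Read side only: a flat read model, as simple a type as possible.
interface CustomerSummary {
  id: number;
  name: string;
  openOrders: number;
}

// Lives in the data-access namespace/assembly, optimised for retrieval.
interface ICustomerQuery {
  summaries(): CustomerSummary[];
}

class CustomerQuery implements ICustomerQuery {
  // In real code this would wrap a read-optimised SQL query, not an array.
  constructor(private rows: CustomerSummary[]) {}
  summaries(): CustomerSummary[] {
    return this.rows;
  }
}
```

The write side keeps working with full aggregates through the domain's repositories; this query class is a separate, read-only path.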

How can a Domain Model interact with UI and Data without being dependent on them?

I have read that there are good design patterns that resolve the following conflicting requirements: 1) a domain model (DM) shouldn't depend on other layers, like the UI and data persistence layers; 2) the DM needs to interact with the UI and data persistence layers. What patterns resolve this conflict?
I'm not sure if you can call it a design pattern or not, but I believe that what you are looking for is the Dependency Inversion Principle (DIP).
The principle states that:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions. (Wikipedia)
When you apply this principle to the traditional layered architecture, you end up pretty much with the widely adopted Onion/Hexagonal/Ports & Adapters/etc. architecture.
For instance, instead of the traditional Presentation -> Application -> Domain -> Infrastructure, where the domain depends on infrastructure details, you invert the dependency and make the Infrastructure layer depend on an interface defined in the Domain layer. This allows the domain to depend on nothing but itself.
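A minimal sketch of that inversion (hypothetical names, TypeScript standing in for any OO language): the port interface sits in the domain, the adapter sits in infrastructure, and only the composition root knows both:

```typescript
// Domain layer: defines the port it needs; imports nothing from other layers.
interface EmailGateway {
  send(to: string, body: string): void;
}

class WelcomeService {
  constructor(private email: EmailGateway) {}
  welcome(address: string): void {
    this.email.send(address, "Welcome aboard!");
  }
}

// Infrastructure layer: depends *on the domain's interface*, not the reverse.
class FakeSmtpGateway implements EmailGateway {
  sent: Array<{ to: string; body: string }> = [];
  send(to: string, body: string): void {
    this.sent.push({ to, body }); // a real adapter would talk SMTP here
  }
}

// Composition root: the only place both layers meet.
new WelcomeService(new FakeSmtpGateway()).welcome("ada@example.com");
```

The domain compiles and tests on its own; swapping the fake gateway for a real SMTP adapter touches only infrastructure and the composition root.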
The DM needs to interact with the UI
About that, I cannot see any scenario where the domain should be aware of the UI.
This all really comes down to the use cases of the software project. Use cases do not specify any sort of implementation; you can do whatever you want, as long as you meet the specific project requirements.
There are fundamental building blocks necessary to meet those requirements. For example, you cannot print a business report of last year's pencil taxes without having the actual number to print. You need that data, no matter what.
Databases then become the next level of implementation. Everything in the database is a fundamental building block required to complete the use case; you simply cannot complete the use cases without it.
We don't want our users to be handed a command-line SQL program and work through all the use cases with that, because it would take forever. Imagine every user having to know and understand the domain model behind your software just to figure out which value to read to determine the font color of your title screen. Nobody is going to buy your software.
We may need more than a simple domain model to satisfy the customer's use cases. So let's build a program that serves as a tool for the user to access and update the data; it reduces the knowledge and time required to perform the use case. For example, we can just provide a button that loads the screen.
While the model, view, and controller are drawn right next to each other in all the diagrams we see, they really belong stacked on top of each other. You can have a database without a view or a controller, but not vice versa: to build a view or controller, you must know what you are interacting with. You still need the fundamental pieces of data required to accomplish the purpose (which you can find in the database).

ASP.NET MVC4 n-Tier Architecture: best approach

I'm developing a 3-tier architecture for an MVC4 web app + Entity Framework 5.
I want to keep the layers separate, so that only the DAL knows I'm using EF, for example.
Actually I have a lot of classes to manage that:
DAL
Entity POCO
Entity DataContext : DbContext
Entity Repository
BL
Entity ViewModel
Entity Service(instantiate Entity Repository)
WEB
Entity Controllers (instantiate Entity Service)
This works, but it's quite hard to maintain. I was thinking of removing the Entity Repository in the DAL and using the DataContext directly (if I'm not wrong, DbContext has, after all, been designed to act as a Repository and a Unit of Work), but that would force me to add a reference to EntityFramework.dll in my BL. It's not a big issue, but I'm not sure it's the best choice.
Any advice?
(I hope I gave enough information; if you need more, just ask.)
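For illustration only (hypothetical names; TypeScript standing in for C#): the alternative to referencing the ORM from the BL is for the BL to own a small unit-of-work/repository abstraction that the DAL implements, so only the DAL references EntityFramework.dll:

```typescript
interface Product { id: number; name: string; }

// BL assembly: references only these abstractions, never the ORM.
interface IRepository<T> {
  add(item: T): void;
  all(): T[];
}
interface IUnitOfWork {
  products: IRepository<Product>;
  commit(): void;
}

class ProductService {
  constructor(private uow: IUnitOfWork) {}
  register(name: string): void {
    this.uow.products.add({ id: Date.now(), name });
    this.uow.commit();
  }
}

// DAL assembly: the only place the ORM (here faked in memory) would be referenced.
class InMemoryRepository<T> implements IRepository<T> {
  private items: T[] = [];
  add(item: T) { this.items.push(item); }
  all() { return this.items; }
}
class InMemoryUnitOfWork implements IUnitOfWork {
  products = new InMemoryRepository<Product>();
  committed = false;
  commit() { this.committed = true; }
}
```

Referencing DbContext directly from the BL trades this extra layer for a hard dependency on the ORM; whether that trade is acceptable depends on how likely you are to ever swap the persistence technology.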
You can use this, this and this article.
An experienced architect does not need to go through every single step in the book to get a reasonable design done for a small web application. Such architects can use their experience to speed up the process. Since I have done similar web applications before and have understood my deliverable, I am going to take the faster approach to get the initial part of our DMS design done. That will hopefully assist me in shortening the length of this article.
For those who do not have that experience, let me briefly outline the general steps involved in architecting a piece of software:
1. Understand the initial customer requirement. Ask questions and do research to further elaborate the requirement.
2. Define the process flow of the system, preferably in visual (diagram) form. I usually draw a process-flow diagram here. I would try to define the manual version of the system first and then convert that into the automated version, identifying the processes and their relations along the way. The process-flow diagram drawn here can also be used to validate the captured requirements with the customer.
3. Identify the software development model that suits your requirements. When the requirements are fully captured and defined before the design starts, you can use the waterfall model; when the requirements are undefined, a variant of the spiral model can be used to deal with that. When requirements are not defined, the system gets defined while it is being designed, so you need to leave adequate space in the modules where later expansion is expected.
4. Decide what architecture to use. In my case, to design our Document Management System (DMS), I will be using a combination of ASP.NET MVC and a multitier architecture (three-tier variant).
5. Analyze the system and identify its modules or subsystems.
6. Pick one subsystem at a time, analyze it further, and identify all granular-level requirements belonging to that part of the system.
7. Recognize the data entities and define the relationships among them (an Entity Relationship, or ER, diagram). That can be followed by identifying the business entities (some business entities map directly to classes of your system) and defining the business process flow.
8. Organize your entities. This is where you normalize your database and decide which OOP concepts and design patterns to use.
9. Make your design consistent. Follow the same standards across all modules and layers. This includes streamlining the concepts (for example, if you have used two different design patterns in two different modules to achieve the same goal, pick the better approach and use it in both places) and the conventions used in the project.
10. Tune the design as the last part of the process. To do this, hold a meeting with the project team, present your design, and have them ask questions about it. Take it as an opportunity to honestly evaluate and adjust your design.

Should I be more concerned with coupling between packages or between units of distribution?

I have been looking at metrics for coupling and also look at DSM.
One of the tools I've been using looks at coupling between 'modules', a module being a unit of distribution (in this case a .NET assembly).
I feel that I should be more interested in looking at coupling between packages (or namespaces) than with units of distribution.
Should I be more concerned with coupling between packages/namespaces (ensuring that abstractions depend only on abstractions, that concrete types depend on abstractions, and that there are no cycles in the dependencies, so that refactoring and extending is easy), or should I be concerned with whether I can deploy new versions without needing to update unchanged units of distribution?
What does anyone else measure?
For what it's worth, my gut feel is that if I focus on the package/namespace coupling then the unit of distribution coupling will come for free or at least be easier.
First, it's easy to go overboard looking at dependencies and coupling. Make sure you aren't overcomplicating it.
With that disclaimer out of the way, here's what I suggest.
There are really three different views of dependency/coupling management:
1) physical structure (i.e. assembly dependencies)
2) logical structure (i.e. namespace dependencies)
3) implementation structure (i.e. class dependencies)
For large apps, you will need to at least examine all 3, but you can usually prioritize.
For client-deployed apps, #1 can be very important (e.g. for things like plug-ins). For apps deployed inside the enterprise (e.g. ASP.NET), #1 usually turns out to be not so important (excluding frameworks reused across multiple apps). You can usually deploy the whole app easily enough not to take on the overhead of a complicated structure for #1.
Item #2 tends to be more of a maintainability issue. Know your layer boundaries and their relationship to namespaces (i.e. are you doing 1 layer per namespace or are you packaged differently at the logical level). Sometimes tools can help you enforce your layer boundaries by looking at the logical dependency structure.
Item #3 is really about doing good class design. Every good developer should put forth a pretty good amount of effort into ensuring he is only taking on the proper dependencies in his classes. This is easier said than done, and is typically a skill that has to be acquired over time.
To get a bit closer to the heart of your question: item #1 is really about how the projects are laid out in the VS solution, so it isn't something to measure; it's more something you set up at the beginning and let run. Item #2 is something you might use a tool to check during builds, to see whether the developers have broken any rules; it's more of a check than a measure, really. Item #3 is the one you'd want to take a good look at measuring. The classes in your codebase with a high amount of coupling are going to be pain points down the road, so ensure the quality of those. Measuring at this level also gives you some insight into the overall quality of the codebase as it evolves, and it can raise a red flag if someone checks some really raunchy code into your codebase.
So, if you want to prioritize, take a quick look at #1 and #2. Know what they should look like. But for most apps, item #3 should be taking the most time.
This answer, of course, excludes huge frameworks (like the .NET BCL). Those babies need very careful attention to #1. :-)
Otherwise, you end up with problems like this:
"Current versions of the .NET Framework include a variety of GUI-based libraries that wouldn't work properly in Server Core"
http://www.winsupersite.com/showcase/win2008_ntk.asp
Where you can't run .NET on a GUI-less install of Windows Server 2008 because the framework takes dependencies on the GUI libraries...
One final thing. Make sure you are familiar with the principles behind good dependency/coupling management. You can find a nice list here:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Coupling and dependency cycles between units of distribution are more "fatal" because they can make it really difficult to deploy your program, and sometimes even to compile it.
You are mostly right: a good top-level design that divides the code into logical packages with clear, predefined dependencies will get you most of the way. The only thing missing is the correct separation of those packages into units of distribution.
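Checking for those cycles can be automated. As a rough sketch (hypothetical package names), a depth-first search over a package-dependency map flags any cycle by detecting a back edge:

```typescript
// Map each package/assembly to the packages it depends on.
type Graph = Record<string, string[]>;

// Depth-first search with a "currently visiting" stack: revisiting a node
// that is still on the stack means we followed a dependency cycle.
function hasCycle(graph: Graph): boolean {
  const visiting = new Set<string>();
  const done = new Set<string>();

  function visit(node: string): boolean {
    if (visiting.has(node)) return true; // back edge: cycle found
    if (done.has(node)) return false;    // already fully explored
    visiting.add(node);
    for (const dep of graph[node] ?? []) {
      if (visit(dep)) return true;
    }
    visiting.delete(node);
    done.add(node);
    return false;
  }

  return Object.keys(graph).some(node => visit(node));
}
```

For example, `hasCycle({ ui: ["bl"], bl: ["dal"], dal: [] })` is false for a cleanly layered design, while `hasCycle({ ui: ["bl"], bl: ["ui"] })` is true. Commercial DSM tools do essentially this over the real assembly or namespace graph.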