Can DDD repositories be aware of user context?

Say you were to develop a system in which the availability of entities and domain logic is highly dependent on user context. Would it make sense to handle the user context sensitivity within repositories by making individual repository instances user-context aware? I'm considering adopting this approach as a way of pulling the reliance on user context away from my entities, but I'm not sure whether there are pitfalls I may not be aware of in going this direction.
The way I'm planning to approach this first is to add a UserContext parameter to the constructors of repositories that need this context information. The other obvious option would be to feed user context information into each query method in my repositories, but this would likely mean that the majority of methods would require such a parameter, which would in turn greatly increase the verbosity of each method call.
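Roughly, the constructor-parameter option would look something like this (UserContext, Document and DocumentRepository are just placeholder names for illustration, not real classes from my codebase):
// Illustrative sketch only.
class UserContext {
    private final String userId;
    private final String tenantId;

    UserContext(String userId, String tenantId) {
        this.userId = userId;
        this.tenantId = tenantId;
    }

    String getUserId()   { return userId; }
    String getTenantId() { return tenantId; }
}

class Document { /* placeholder entity */ }

// Each repository instance is built for one user context, so the individual
// query methods don't need an extra context parameter.
class DocumentRepository {
    private final UserContext context;

    DocumentRepository(UserContext context) {
        this.context = context;
    }

    Document findById(String documentId) {
        // The context silently scopes every query issued by this instance,
        // e.g. filtering on context.getTenantId() and checking visibility
        // for context.getUserId(). Persistence details omitted.
        return null;
    }
}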
I would also like to point out that I'm aware that making repositories user-context aware does not necessarily help when a service or entity needs that same user context information for other reasons, such as determining behavior based on user configuration. I'm interested in solutions for those cases as well, but for now I'm trying to tackle one thing at a time, so I'm focusing on the repositories first.
Any suggestions would be appreciated.

I sense a design smell here :-). By the time things reach the domain layer they should pretty much be translated into domain entities/attributes and should not depend on context. What I mean is that the context should be used to change/represent the new state of an entity. Here it seems more like the context is going to be used to determine how the entity is going to be persisted. Have I understood this correctly?
Having said that, if your dependency on context is more from an infrastructure perspective rather than business functionality perspective then having context sensitive Repositories is the right model you have come up with.
Towards that, could you consider passing the user context through a thread-local, the way Spring does with the Hibernate Session? That way your repository constructors and methods would be less polluted. It does, however, reduce the readability of your code a bit.
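A rough sketch of that idea, reusing a UserContext type like the one in your question (UserContextHolder is an assumed helper name, not a Spring class):
// Sketch only.
final class UserContextHolder {
    private static final ThreadLocal<UserContext> CURRENT = new ThreadLocal<>();

    static void set(UserContext context) { CURRENT.set(context); }
    static UserContext get()             { return CURRENT.get(); }
    static void clear()                  { CURRENT.remove(); }
}

class AccountRepository {
    boolean isVisible(String accountId) {
        // No context parameter in the signature; the repository pulls it
        // from the thread handling the current request.
        UserContext context = UserContextHolder.get();
        // ... scope the check by context.getUserId() / getTenantId() ...
        return context != null;
    }
}

// Typically a servlet filter or interceptor binds the context per request:
//   UserContextHolder.set(contextForCurrentUser);
//   try { chain.doFilter(request, response); } finally { UserContextHolder.clear(); }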
Hope that helps.

Clean Architecture: UseCase Output Port

I have a question regarding the "Use Case Output Port" in Uncle Bob's Clean Architecture.
In the diagram, Uncle Bob describes the port as an interface. I am wondering if it has to be that way, or if the invoked Use Case Interactor could also return a "simple" value. In either case the Application and Business Rules layer would define the interface that the Interface Adapters layer has to use. So I think that for simple invocations, just returning a value would not violate the architectural idea.
Is that true?
Additionally, I think this Output Port Interface implemented by the presenter should work like the Observer pattern. The presenter simply observes the interactor for relevant "events". In the case of .NET where events are first-class citizens, I think using one of these is the same idea.
Are these thoughts compatible with the ideas behind Clean Architecture?
Howzit OP. I see your question is still unanswered after all these years and I hope we can reason about this and provide some clarity. I also hope I am understanding your question correctly. So with that in mind, here is how I see the solution:
The short answer is, a use case interactor should be able to return a simple value (by which I assume string, int, bool etc) without breaking any architectural rules.
If we go over the onion architecture, which is very similar to the clean architecture, the idea is to encapsulate the core business logic in the center of the architecture, the domain. The corresponding concept in the clean architecture is the entities and the use cases on top of it. We do this because we want to dictate our understanding of the business in a consistent way when we write our business rules.
The interface adapters allow us to convert the outside world to our understanding. What we want is a contract in our domain (use cases or entities) that ensures we will get what we need from the outside world, without knowing any implementation details. We also don't care what the outside world calls it, we convert their understanding to ours.
A common way to do this, is to define the interface in the domain to establish a contract that says, we expect to give "x", and you must then tell us what "y" is. The implementation can then sit outside the domain.
Now to get to the core of your question. Let's assume that the core of our application is to track some complicated process with various stages. During one of these stages we need to send data to a couple of external parties, and we want to keep a reference of some sort for auditing purposes. In such a case our interface may sit in the domain and state that we send our complicated object to some party and expect a string reference back. We can then use this string reference and fire some domain event, etc. The implementation can sit completely outside of the domain and call external APIs and do its thing, but our core domain is unaffected. Hence returning a simple value has no impact on the architecture. The reverse of the above scenario may also hold true: we can say that we have a reference id of some sort, and the outside world needs to return us our understanding of some object.
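A rough sketch of that idea (all names are invented for illustration):
class ComplicatedProcess { /* placeholder domain object */ }

// Defined inside the domain/use-case layer: the contract only says what we need.
interface ExternalPartyGateway {
    // We hand over our domain object and expect a simple reference back.
    String submit(ComplicatedProcess process);
}

class SubmitStageInteractor {
    private final ExternalPartyGateway gateway;

    SubmitStageInteractor(ExternalPartyGateway gateway) {
        this.gateway = gateway;
    }

    // Returning a plain value from the use case does not break the dependency rule:
    // the interactor still knows nothing about how the gateway is implemented.
    String execute(ComplicatedProcess process) {
        String reference = gateway.submit(process);
        // ... raise a domain event, store the reference for auditing, etc. ...
        return reference;
    }
}

// The implementation lives outside the domain and may call external APIs.
class HttpExternalPartyGateway implements ExternalPartyGateway {
    public String submit(ComplicatedProcess process) {
        // call the third party and return whatever reference it gives us
        return "REF-12345";
    }
}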
For the second part of your question. I would imagine it depends on the use case itself. If you present some idea out there and need to constantly react to it, domain events will get involved and you will have a structure very similar to the observer pattern. .NET encapsulates events very nicely and fits very well with clean architecture and Domain driven design.
Please let me know if the above makes sense or if I can clarify it in any way.

What criteria should one use to determine if a Dependency Injection framework should be used? [duplicate]

I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). It is often said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing, zero reuse from one project to the next for the majority of the code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject customisations for each client; this could also be described as the GOF Strategy pattern. Many of the GOF patterns can be facilitated with the use of a DI framework.
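For example (hypothetical names), the per-client behaviour can be expressed as a strategy interface, and the container simply decides which implementation gets wired in:
interface PricingStrategy {            // the point of variation per client
    double priceFor(double listPrice);
}

class StandardPricing implements PricingStrategy {
    public double priceFor(double listPrice) { return listPrice; }
}

class ClientXPricing implements PricingStrategy {
    public double priceFor(double listPrice) { return listPrice * 0.9; }  // negotiated discount
}

class QuoteService {
    private final PricingStrategy pricing;

    // The container injects the strategy configured for the deployment/client.
    QuoteService(PricingStrategy pricing) { this.pricing = pricing; }

    double quote(double listPrice) { return pricing.priceFor(listPrice); }
}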
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development phases you will find out you need to access several subsystems from different parts of your program. With DI each of your classes just needs to ask for services and you're free of having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally, it also saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, the test can still instantiate an alternative implementation).
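For illustration, with Spring's Java configuration it might look something like this (AuditLog and FileAuditLog are made-up types, not from any real project):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

interface AuditLog { void record(String message); }

class FileAuditLog implements AuditLog {
    public void record(String message) { /* append to a file */ }
}

@Configuration
class AppConfig {
    // Singleton by default: the container hands out the same instance everywhere,
    // but nothing in AuditLog itself enforces single-instance behaviour.
    @Bean
    AuditLog auditLog() { return new FileAuditLog(); }
}

// A unit test can simply ignore the container and build its own instance:
//   AuditLog log = new FileAuditLog();   // or an in-memory fake
//   SomethingUnderTest subject = new SomethingUnderTest(log);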
By the way, before I was using DI I didn't really understand its worth, but trying it was a real eye-opener: my designs are a lot more object-oriented than they were before.
By the way, with my current application I DON'T unit-test (bad, bad me) but I STILL couldn't live without DI anymore. It is so much easier to move things around and keep classes small and simple.
While I semi-agree with you on the DB example, one of the big things I found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing, or possibly for demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
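A rough sketch of that separation (illustrative names, Java used just for the example):
import java.util.List;

class Customer {
    private final boolean active;
    Customer(boolean active) { this.active = active; }
    boolean isActive() { return active; }
}

interface CustomerDataAccess {
    List<Customer> loadAll();
}

class SqlCustomerDataAccess implements CustomerDataAccess {
    public List<Customer> loadAll() { /* query the database */ return List.of(); }
}

class XmlCustomerDataAccess implements CustomerDataAccess {
    public List<Customer> loadAll() { /* parse a bundled XML file */ return List.of(); }
}

class CustomerReport {
    private final CustomerDataAccess dataAccess;

    // The business logic only sees the interface, so tests and demos can
    // inject the XML-backed (or an in-memory) implementation instead.
    CustomerReport(CustomerDataAccess dataAccess) { this.dataAccess = dataAccess; }

    long activeCustomerCount() {
        return dataAccess.loadAll().stream().filter(Customer::isActive).count();
    }
}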
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods of the objects you inject into it. While it may not truly "process" them, it runs methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
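A minimal sketch of that shape (names invented for illustration):
import java.util.Map;

interface Persistable {
    void configure(Map<String, String> settings);
    void save();   // each injected object supplies its own implementation
}

class Processor {
    private final Map<String, String> settings;

    Processor(Map<String, String> settings) { this.settings = settings; }

    // The processor only knows the contract; the injected objects decide
    // what "save" actually means for them.
    void process(Persistable item) {
        item.configure(settings);
        item.save();
    }
}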
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource its properties are set to. What we would like to do is have some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking the instance of Bar passed to Foo, such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect it to do, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object: mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
Martin Fowler has a great article on the difference between mocks and stubs.
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I only just understood this tonight.
For me, dependency injection is a method for instantiating objects that require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection when you instantiate an object in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file, and you only need the XML file, you would have to instantiate the object and configure a lot of things if you didn't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.

DDD: Where to put persistence logic, and when to use ORM mapping

We are taking a long, hard look at our (Java) web application patterns. In the past, we've suffered from an overly anaemic object model and overly procedural separation between controllers, services and DAOs, with simple value objects (basically just bags of data) travelling between them. We've used declarative (XML) managed ORM (Hibernate) for persistence. All entity management has taken place in DAOs.
In trying to move to a richer domain model, we find ourselves struggling with how best to design the persistence layer. I've spent a lot of time reading and thinking about Domain Driven Design patterns. However, I'd like some advice.
First, the things I'm more confident about:
We'll have "thin" controllers at the front that deal only with HTTP and HTML - processing forms, validation, UI logic.
We'll have a layer of stateless business logic services that implements common algorithms or logic, unaware of the UI, but very much aware of (and delegating to) the domain model.
We'll have a richer domain model which contains state, relationships, and logic inherent to the objects in that domain model.
The question comes around persistence. Previously, our services would be injected (via Spring) with DAOs, and would use DAO methods like find() and save() to perform persistence. However, a richer domain model would seem to imply that objects should know how to save and delete themselves, and perhaps that higher level services should know how to locate (query for) domain objects.
Here, a few questions and uncertainties arise:
Do we want to inject DAOs into domain objects, so that they can do "this.someDao.save(this)" in a save() method? This is a little awkward since domain objects are not singletons, so we'll need factories or post-construction setting of DAOs. When loading entities from a database, this gets messy. I know Spring AOP can be used for this, but I couldn't get it to work (using Play! framework, another line of experimentation) and it seems quite messy and magical.
Do we instead keep DAOs (repositories?) completely separate, on par with stateless business logic services? This can make some sense, but it means that if "save" or "delete" are inherent operations of a domain object, the domain object can't express those.
Do we just dispense with DAOs entirely and use JPA to let entities manage themselves?
Herein lies the next subtlety: It's quite convenient to map entities using JPA. The Play! framework gives us a nice entity base class, too, with operations like save() and delete(). However, this means that our domain model entities are quite closely tied to the database structure, and we are passing objects around with a large amount of persistence logic, perhaps all the way up to the view layer. If nothing else, this will make the domain model less re-usable in other contexts.
If we want to avoid this, then we'd need some kind of mapping DAO - either using simple JDBC (or at least Spring's JdbcTemplate), or using a parallel hierarchy of database entities and "business" entities, with DAOs forever copying information from one hierarchy to another.
What is the appropriate design choice here?
Martin
Your questions and doubts ring an interesting alarm here; I think you went a bit too far in your interpretation of a "rich domain model". Richness doesn't go as far as implying that persistence logic must be handled by the domain objects. In other words, no, they shouldn't know how to save and delete themselves (at least not explicitly, though Hibernate actually adds some persistence logic transparently). This is often referred to as persistence ignorance.
I suggest that you keep the existing DAO injection system (a nice thing to have for unit testing) and leave the persistence layer as is, while trying to move some business logic into your entities where it fits. A good starting point is to identify Aggregates and establish your Aggregate Roots. They'll often contain more business logic than the other entities.
However, this is not to say domain objects should contain all logic (especially not logic needed by many other objects across the application, which often belongs in Services).
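As a rough illustration (invented names): the aggregate root carries behaviour but stays persistence-ignorant, while loading and saving remain the repository's job:
import java.util.ArrayList;
import java.util.List;

// Aggregate root: it owns its invariants and knows nothing about persistence.
class Order {
    private final List<OrderLine> lines = new ArrayList<>();
    private boolean submitted;

    void addLine(OrderLine line) {
        if (submitted) {
            throw new IllegalStateException("Cannot modify a submitted order");
        }
        lines.add(line);
    }

    void submit() { submitted = true; }
}

class OrderLine { /* product, quantity, price ... */ }

// The repository stays on a par with the services: it loads and stores whole
// aggregates, and the aggregate itself never calls save() or delete().
interface OrderRepository {
    Order findById(long id);
    void save(Order order);
}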
I am not a Java expert, but I use NHibernate in my .NET code so my experience should be directly translatable to the Java world.
When using an ORM (like the Hibernate you mentioned) to build a Domain-Driven Design application, one good practice (I won't say best practice) is to create so-called application services between the UI and the domain. They are similar to the stateless business services you mentioned, but should contain almost no logic. They should look like this:
public void sayHello(int id, String helloString)
{
    SomeDomainObject target = domainObjectRepository.findById(id); // uses Hibernate to load the object
    target.sayHello(helloString); // there is a single domain object method invocation per application service method
    domainObjectRepository.save(target); // this one is optional; Hibernate should already know that this object needs saving because it tracks changes
}
Any changes to objects contained by DomainObject (also adding objects to collections) will be handled by Hibernate.
You will also need some kind of AOP to intercept application service method invocations, open Hibernate's session before the method executes, and save changes after the method finishes without exceptions.
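With Spring, for example, that interception is usually declarative rather than hand-rolled AOP. A sketch, assuming a SomeDomainObjectRepository interface for the repository used above:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class SomeDomainObjectApplicationService {
    private final SomeDomainObjectRepository domainObjectRepository;

    SomeDomainObjectApplicationService(SomeDomainObjectRepository domainObjectRepository) {
        this.domainObjectRepository = domainObjectRepository;
    }

    // Spring's transaction proxy opens the session/transaction before the method
    // runs and commits afterwards, flushing the changes Hibernate has tracked.
    @Transactional
    public void sayHello(int id, String helloString) {
        SomeDomainObject target = domainObjectRepository.findById(id);
        target.sayHello(helloString);
    }
}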
There is a really good sample of how to do DDD in Java here. It is based on the sample problem from Eric Evans' 'Blue Book'. The application logic class sample code is here.

Should I encapsulate my IoC container?

I'm trying to decide whether or not it makes sense to go through the extra effort to encapsulate my IoC container. Experience tells me that I should put a layer of encapsulation between my apps and any third-party component. I just don't know if this is bordering on overkill.
I can think of situations where I might want to switch containers. For instance, my current container ceases to be maintained, or a different container is proven to be more light-weight/performant and better fits my needs. If this happens, then I'll potentially have a lot of re-wiring to do.
To be clear, I'm considering encapsulation of the registration and resolution of types. I think it's a no-brainer to encapsulate resolution - I'd hope it's common practice to have a helper/util class delegating to the container.
EDIT:
The assumption is that I prefer to wire up my types programmatically for type safety, compile-time checking and refactorability. It's this code and its dependency on the container that I'm looking to protect myself from.
I've also been using an IoC container for several other projects that share a lot of the same relationships, but the container is a pain to work with, so I want to change. But a change means I lose the reusability of the registration code, hence why I'm contemplating encapsulation. It's not a huge burden, but one that I'd nevertheless like to mitigate.
I'm looking to:
Minimize the impact of change in containers / versions of containers
Provide some level of type-registration consistency across projects that may use different containers
Provide interface methods that make sense to me (RegisterSingleton<T,T> rather than RegisterType<T,T>( SomeLifetimeProvider ) - using Unity as an example).
Augment the container as conditions/scalability requirements change e.g. adding better caching, logging, etc during resolution/registration.
Provide my own model for registering type mappings.
Say I want to create a bunch of RegistrationHandler objects in an assembly/package so that I can easily segregate registration responsibilities across multiple classes and automatically pick up these handlers without changing code anywhere else.
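Roughly what I have in mind, sketched in Java (the ContainerFacade and RegistrationHandler interfaces are my own invention, not from any particular container):
// Sketch only.
interface ContainerFacade {
    <T> void registerSingleton(Class<T> contract, Class<? extends T> implementation);
    <T> T resolve(Class<T> contract);
}

interface RegistrationHandler {
    void register(ContainerFacade container);
}

interface CustomerRepository { }                              // hypothetical contract
class SqlCustomerRepository implements CustomerRepository { } // hypothetical implementation

// Each handler owns the registrations for one area of the application.
class PersistenceRegistrations implements RegistrationHandler {
    public void register(ContainerFacade container) {
        container.registerSingleton(CustomerRepository.class, SqlCustomerRepository.class);
    }
}

// At startup the handlers are discovered (e.g. by scanning the package)
// and invoked, so adding a new handler requires no changes anywhere else.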
I realize this is a bit subjective, so pros/cons might be helpful
Thanks!
Do it later, and only if you actually have the need to change IOC containers.
Pick an IOC container that is non-invasive. That is, one where the objects being connected to each other don't have any dependencies on the IOC container. In this case, there's nothing to encapsulate.
If you have to pick an IOC container that requires that you have dependencies on the container, choose one with the simplest dependencies/API you can. If you need to replace this IOC container (and you probably won't), implement adapters that bridge the new API to the old one.
In other words, let the first IOC container be the one that defines the interfaces for any future container so that you don't have to invent your own, and you can delay any of this sort of work until you absolutely need it.
EDIT:
I don't see a way of guaranteeing type-safety short of either:
Designing a relatively complex implementation of the Builder pattern along with visitor implementations that would write IOC configuration files, or something equivalent.
Implementing a type-safe IOC configuration DSL. (My choice if I had multiple apps that required swappable IOC containers.)
Yeah go for it. It's not a whole lot of extra effort and like you say, it gives you better isolation from third party components.
It also means that you can easily switch out the IoC container if you find something that's better. I recently did this by swapping out the Spring.NET IoC container for StructureMap.
The ASP.NET MVC Contrib project on CodePlex is a pretty good place to start; this is what I based my implementation on.
It's best practice to do something only if there's an actual need for it, and never to code something that you merely guess will be required sometime in the future (the so-called YAGNI principle). If your architecture is okay, you can easily change the container if it should actually become necessary...
If you think you need this kind of flexibility, have a look at the Common Service Locator project on CodePlex. It does exactly what you're looking for: providing a common facade for various IoC containers.
HTH!
Rather than encapsulating the IOC container itself, I prefer to isolate the locus of interaction with it. For example, in ASP.NET MVC, I generally limit exposure to the container to the controller factory and the Global.asax.cs file, where it's usually set up.
In my mind, having a lot of code that knows about the IOC container is an antipattern that increases complexity. I've seen a fair amount of code in which objects feel free to ask the IOC container for their dependencies, and then they've basically reduced the IOC container to a high-maintenance Service Locator.
Since IOC containers can resolve dependencies to an arbitrary depth, it's pretty easy to make the controller factory the one component responsible for interacting with the inversion of control container. The constructor for each controller simply specifies the services/repositories/gateways it needs.
For any of my apps, swapping the IOC container would essentially be a matter of rewriting the code that configures the container (specifies the bindings, etc.) and hooking up the controller factory. For apps exposed as services, the same basic idea should be reasonably manageable, though depending on the constraints of your runtime you might have to use setter injection rather than constructor injection.

Should I inject things into my entities?

When using an IoC container, is it considered good design to inject other classes into your entities? For example, a persistence class.
Generally I advise against it. Entities are just that, and should represent some identifiable and important part of your core domain. They should have one responsibility and be very, very good at doing it. If the entity requires additional services in order to complete a task (say, persist itself), you're starting to let things like infrastructure creep into your domain. Even the notion of an Invoice being able to calculate its billing value isn't necessarily the responsibility of the Invoice class: it may require things like sales tax, shipping costs and customer discounts. Once you open those doors and try to start bringing those items into your Invoice entity, it becomes an everything class. Domain services are better suited for coordination of entities and for providing services to them. Infrastructure services are better suited for things like persistence and external communications. Both of those are fine to inject other services into via IoC (and it's encouraged, so they themselves don't become bloatware).
This is the Spreadsheet Conundrum: do you write repository.store(entity) or entity.storeIn(repository)?
Each has its merits. I generally tend to favor repository.store(entity) for the main reason that I keep the methods of my entities domain-focused. It makes sense to write pen.dispenseInkOnto(Surface) because that is what pens do. It makes a little less sense to write pen.storeIn(penRepository).
The downside is that you need to give the persistence class access to the internals of the entity class. Aside from getters, which introduce the same problem as entity.storeIn(), I'd go with package-protected access, internal access, or a friend-class pattern of some kind to restrict access to the internals to only those who need it.
As far as general injection of classes goes: in the pen.dispenseInkOnto(Surface) example, you could very well make Surface an interface and use injection. I see no problem with this as long as you inject other entities, value objects, or services.
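To make the two options concrete, a small sketch (invented names):
// Option 1: the repository stores the entity; Pen's methods stay domain-focused.
interface PenRepository {
    void store(Pen pen);
}

// Option 2: the entity stores itself in a repository handed to it.
class Pen {
    void dispenseInkOnto(Surface surface) { /* what pens actually do */ }

    void storeIn(PenRepository repository) {
        repository.store(this);   // reads less naturally as a domain behaviour
    }
}

interface Surface { }   // an injected abstraction; implementations can vary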
I advise against it too, but would recommend reading the DDD forum, as there are many posts about it on there. It's questionable whether you should even inject into domain services; in more complex domains I think not.
As Bil said services are great for cross-aggregate coordination and especially any co-ordination with anything outside the domain.
I'd generally recommend against it.
It generally keeps your domain cleaner when your entities are given the things they need in order to perform their duties. When they have to look things up, they are often taking shortcuts, and those shortcuts can be avoided by doing more analysis of the domain and the relationships between its members.
Application and Domain services are generally a better place to allow injection in my opinion. They can also be responsible for creating/persisting entities.
Absolutely. That's how you avoid tying the class to a specific persistence implementation. Sometimes I write mock DAO classes that "persist" to in-memory structures only, and I inject these when unit testing.