How do I gracefully avoid dependencies on infrastructure services from domain entities using DDD?

Background
Suppose I am tasked with building a system in the domain of notification sending using Domain Driven Design (DDD). One of the key requirements of this system is that it needs to support various "types" of notifications, such as SMS, email, etc.
After several iterations on developing the domain model, I continue to land on having a Notification base class as an entity, with subclasses SMSNotification, EmailNotification, etc. as child classes (each being an entity as well).
Notification
public abstract class Notification extends Entity<UUID> {
    // ...fields...
    public abstract void send();
}
SMSNotification
public class SMSNotification extends Notification {
    public void send() {
        // logic for sending the SMS notification using an infrastructure service.
    }
}
EmailNotification
public class EmailNotification extends Notification {
    public void send() {
        // logic for sending the email notification using an infrastructure service.
    }
}
Problem(s)
With this current design approach, each subclass of Notification is interacting with an infrastructure service, where the infrastructure is tasked with interfacing with some external system.
Eric Evans dedicates a little page space to this on page 107 of his book Domain-Driven Design, when introducing the concept of domain services:
..., in most development systems, it is awkward to make a direct interface between a domain object and external resources. We can dress up such external services with a facade that takes inputs in terms of the model, ... but whatever intermediaries we may have, and even though they don't belong to us, those services are carrying out the domain responsibility...
If, following Evans' advice, I instead introduce a SendNotificationService in my domain model rather than having a send method on each subclass of Notification, I am not sure how I can avoid needing to know what type of notification was provided, so that the appropriate infrastructure action can be taken:
SendNotificationService (Domain Service)
public class SendNotificationService {
    public void send(Notification notification) {
        // if notification is an SMS notification...
        //     utilize infrastructure services for SMS sending.
        // if notification is an email notification...
        //     utilize infrastructure services for email sending.
        //
        // (╯°□°)╯︵ ┻━┻
    }
}
What am I missing here?
Object oriented design principles are pushing me in favor of having the model first suggested, with the Notification, SMSNotification, and EmailNotification classes. Implementing the send method on each subclass of Notification makes sense, as all notifications need to be sent (justifies its placement in Notification) and each "type" or subclass of Notification will have specialized behavior in how the notification is sent (justifies making send abstract in Notification). This approach also honors Open/Closed Principle (OCP), since the Notification class will be closed to modification, and as new notification types are supported, a new subclass of Notification can be created to extend functionality. Regardless, there seems to be consensus on not having entities interface with external services, as well as not having subclasses of entities at all in DDD.
If the behavior of sending notifications is removed from Notification, then where it is placed must be aware of the "type" of notification, and act accordingly, which I can only conceptualize as chain of if...else... statements, which directly contradicts OCP.

TL;DR: If you need some infrastructure logic to be executed against your domain, and that logic needs some input from the domain, don't build it in; just declare intentions with appropriate data/markers. You then process these declared intentions later, in the infrastructure layer.
Do notifications of various kinds differ in any way other than the delivery mechanism? If not, it may be enough to use a Notification value object (or Entity, if your domain model requires it) with an additional field (an enum, if the list is known, or some kind of marker) to store the delivery method name. There could even be several such methods per notification instance.
Then you have business logic, a domain service, to fire a notification. A domain service should depend only on domain vocabulary, e.g. a NotificationDeliveryMethodProvider.
In your adapters layer you can implement the various delivery method providers that interact with infrastructure, plus a factory that returns a provider according to the value of the DeliveryMethod enum (or marker).
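A minimal Java sketch of that shape, assuming the NotificationDeliveryMethodProvider and DeliveryMethod names from the answer; the value object's fields and the factory are invented for illustration:

```java
import java.util.EnumMap;
import java.util.Map;

// Declared intention: what to send and how it should be delivered.
enum DeliveryMethod { SMS, EMAIL }

// Domain-level value object carrying the delivery-method marker.
record Notification(String recipient, String body, DeliveryMethod method) {}

// Domain vocabulary only -- no infrastructure types leak in.
interface NotificationDeliveryMethodProvider {
    void deliver(Notification notification);
}

// Adapter-layer factory: maps the declared intention to an implementation.
class DeliveryProviderFactory {
    private final Map<DeliveryMethod, NotificationDeliveryMethodProvider> providers =
            new EnumMap<>(DeliveryMethod.class);

    void register(DeliveryMethod method, NotificationDeliveryMethodProvider provider) {
        providers.put(method, provider);
    }

    NotificationDeliveryMethodProvider forMethod(DeliveryMethod method) {
        NotificationDeliveryMethodProvider p = providers.get(method);
        if (p == null) throw new IllegalArgumentException("No provider for " + method);
        return p;
    }
}
```

Adding a new delivery method then means registering one new provider in the adapters layer; no domain code and no if/else chain changes.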
Basically, it's not an aggregate's responsibility to "send" itself or manipulate itself in any way. Its responsibility is to maintain its state, execute state transitions consistently, coordinate the states of its enclosed entities/values, and fire events about its state changes.
In one of my projects I used the following subpackages under my domain package:
provides - interfaces of domain services provided to clients
consumes - interfaces of upstream dependencies
businesslogic - implementation of domain services
values - value objects with code to enforce their invariants
...
Besides domain package there were also:
adapters package dealing with infrastructure
App object, where all interfaces were bound to implementations.
[There could also be] config package, but in my case it was very light.
These domain, adapters, App and config pieces could be deployed as separate jar files with a clear dependency structure, if you need to enforce that structure for others.

I agree with you that the main responsibility of a Notification should be that it can send itself. That is the whole reason it exists, so it's a good abstraction.
public interface Notification {
    void send();
}
The implementations of this interface are the infrastructure services you are looking for. They will not (should not) be referenced directly by other "business" or "core" classes.
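As a rough sketch of what such an infrastructure implementation might look like, assuming a hypothetical EmailGateway collaborator standing in for a real mail client (the gateway and all field names are invented):

```java
// The interface from the answer: a Notification knows how to send itself.
interface Notification {
    void send();
}

// Hypothetical infrastructure collaborator; stands in for a real mail API.
interface EmailGateway {
    void sendMail(String to, String subject, String body);
}

// Infrastructure implementation of the domain-facing interface.
class EmailNotification implements Notification {
    private final EmailGateway gateway;
    private final String to;
    private final String subject;
    private final String body;

    EmailNotification(EmailGateway gateway, String to, String subject, String body) {
        this.gateway = gateway;
        this.to = to;
        this.subject = subject;
        this.body = body;
    }

    @Override
    public void send() {
        // All infrastructure detail stays behind the gateway.
        gateway.sendMail(to, subject, body);
    }
}
```

Core code only ever sees the Notification interface; the wiring of gateway to implementation happens at the composition root.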
Note about making it an Entity: My own takeaway from reading the blue book is that DDD is not about using Entities, Services, Aggregate Roots, and things like that. The main points are the Ubiquitous Language, Contexts, and how to work the Domain itself. Eric Evans himself says that this thinking can be applied in different paradigms; it does not always have to involve the same technical building blocks.
Note about the "conventional" design from the other comment (#VoiceOfUnreason): In object-orientation at least, "holding state" is not a real responsibility. Responsibilities can only directly come from the Ubiquitous Language, in other words from the business. "Conventional" (i.e. procedural) design separates data and function, object-orientation does exactly the opposite. So be sure to decide which paradigm you are aiming for, then it may be easier to choose a solution.

After several iterations on developing the domain model, I continue to land on having a Notification base class as an entity, with subclasses SMSNotification, EmailNotification, etc. as child classes
That's probably an error.
public abstract class Notification extends Entity<UUID> {
    public abstract void send();
}
That almost certainly is. You can make it work, if you twist enough, but you are going the wrong way around.
The responsibility of the entities in your domain model is the management of state. To also have the entity be responsible for the side effect of dispatching a message across your process boundary violates separation of concerns. So there should be a collaborator.
For Evans, as you will have noted, the collaboration takes the form of a domain service, that will itself collaborate with an infrastructure service to produce the desired result.
The most straightforward way to give the entity access to the domain service is simply to pass the domain service as an argument.
public class SMSNotification extends Notification {
    public void send(SMSNotificationService sms) {
        // logic for sending the SMS notification using an infrastructure service.
    }
}
The SMSNotification supports a collaboration with an SMSNotificationService provider, and we make that explicit.
The interface you've offered here looks more like the Command pattern. If you wanted to make that work, you would normally wire up the specific implementations in the constructor:
public class SMSCommand extends NotificationCommand {
    private final SMSNotificationService sms;
    private final SMSNotification notification;

    public SMSCommand(SMSNotificationService sms, SMSNotification notification) {
        this.sms = sms;
        this.notification = notification;
    }

    public final void send() {
        notification.send(sms);
    }
}
There are some things you can do with generics (depending on your language choice) that make the parallels between these different services more apparent. For example
public abstract class Notification<SERVICE> extends Entity<UUID> {
    public abstract void send(SERVICE service);
}

public class SMSNotification extends Notification<SMSNotificationService> {
    public void send(SMSNotificationService service) {
        // logic for sending the SMS notification using an infrastructure service.
    }
}

public class NotificationCommand<SERVICE> {
    private final SERVICE service;
    private final Notification<SERVICE> notification;

    public NotificationCommand(SERVICE service, Notification<SERVICE> notification) {
        this.service = service;
        this.notification = notification;
    }

    public final void send() {
        notification.send(service);
    }
}
That's the main approach.
An alternative that sometimes fits is to use the poor man's pattern match. Instead of passing in the specific service needed by a particular type of entity, you pass them all in....
public abstract class Notification extends Entity<UUID> {
    public abstract void send(SMSNotificationService sms, EmailNotificationService email, ...);
}
and then let each implementation choose precisely what it needs. I wouldn't expect this pattern to be a good choice here, but it's an occasionally useful club to have in the bag.
Another approach that you will sometimes see is to have the required services injected into the entity when it is constructed
class SMSNotificationFactory {
    private final SMSNotificationService sms;

    SMSNotificationFactory(SMSNotificationService sms) {
        this.sms = sms;
    }

    SMSNotification create(...) {
        return new SMSNotification(sms, ...);
    }
}
Again, a good club to have in the bag, but not a good fit for this use case -- you can do it, but suddenly a lot of extra components need to know about the notification services to get them where they need to be.
What's best between notification.send(service) and service.send(notification)?
Probably
notification.send(service)
using "Tell, don't ask" as the justification. You pass the collaborator to the domain entity, and it decides (a) whether or not to collaborate, (b) what state to pass to the domain service, and (c) what to do with any state that gets returned.
// SMSNotification
public void send(SMSNotificationService service) {
    State currentState = this.getCurrentState();
    Message m = computeMessageFrom(currentState);
    service.sendMessage(m);
}
At the boundaries, applications are not object oriented; I suspect that as we move from the core of the domain toward the boundary, we see entities give way to values, which give way to more primitive representations.
after reading a bit on pure domain models and the fact there shouldn't be any IO in there I'm not sure anymore
It is, in truth, a bit of a tangle. One of the motivations of domain services is to decouple the domain model from the IO -- all of the IO concerns are handled by the domain service implementation (or more likely, by an application/infrastructure service that the domain service collaborates with). As far as the entity is concerned, the method involved is just a function.
An alternative approach is to create more separation between the concerns; you make the orchestration between the two parts explicit
List<SMSRequest> messages = domainEntity.getMessages();
List<SMSResult> results = sms.send(messages);
domainEntity.onSMS(results);
In this approach, all of the IO happens within the sms service itself; the interactions with the model are constrained to in memory representations. You've effectively got a protocol that's managing the changes in the model and the side effects at the boundary.
I feel that Evans is suggesting that service.send(notification) be the interface.
Horses for courses, I think -- passing an entity to a domain service responsible for orchestration of multiple changes within the model makes sense. I wouldn't choose that pattern for communicating state to/from the boundary within the context of a change to an aggregate.

Related

Web service coordination

We are creating a WCF infrastructure to allow other systems in the organization to consume our business logic. Some of this logic has to do with user authentication, so securing the services is of high concern. The transport layer is secured by certificates. I am more concerned with securing the business layer.
One of our clients calls these services in a certain sequence in order to support a business process. What I would like to do is put in place some mechanism to verify that the sequence is indeed kept. The sequence can be disrupted by developer errors on the consuming side or by attackers trying to compromise the system. I do not want to put the logic of the process inside the services themselves, since this would couple them to this specific client's process. I would like to put the logic for coordinating the different services in a separate layer, which will be client-specific (or maybe something more generic to support any process?).
Can someone point me to specific patterns or resources which discuss this issue?
I have been searching Google for half a day, and I can't seem to find any resource discussing this specific issue.
Most web services should be designed to be called independently, since there's no guarantee what order the caller will compose them.
That having been said, one way to encourage them to be called in order is to use a design akin to a Fluent Interface, in which Service A returns an object that is an input parameter to Service B.
[DataContract]
public class ServiceAResult
{
    // ...
}

[DataContract]
public class ServiceBResult
{
    // ...
}

[ServiceContract]
public interface IServiceA
{
    [OperationContract]
    ServiceAResult OperationA();
}

[ServiceContract]
public interface IServiceB
{
    [OperationContract]
    ServiceBResult OperationB(ServiceAResult input);
}
Here, the easiest way to create a ServiceAResult to pass to ServiceB.OperationB is to call ServiceA.OperationA.
I recommend you separate your concerns.
Have a web service whose operations are called in order to perform your business processes.
Have a second service which orchestrates your business processes and which calls the operations of the first service in the required order.
Do not make it the responsibility of the first service to ensure that the second service calls things in the correct order. The responsibility of the order of calls should belong to a different service.
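As a rough sketch of that separation of concerns, written here in Java rather than WCF and with all names invented: the first service exposes independent operations and knows nothing about any sequence, while a separate orchestrator owns the client-specific process and the call order.

```java
import java.util.ArrayList;
import java.util.List;

// First service: independent operations, no knowledge of any sequence.
class BusinessService {
    String stepA() { return "a-result"; }
    String stepB(String aResult) { return aResult + "->b-result"; }
}

// Second service: owns the client-specific process and the call order.
class ProcessOrchestrator {
    private final BusinessService service;
    private final List<String> audit = new ArrayList<>();

    ProcessOrchestrator(BusinessService service) {
        this.service = service;
    }

    String runProcess() {
        String a = service.stepA();   // step 1
        audit.add("A");
        String b = service.stepB(a);  // step 2, needs step 1's result
        audit.add("B");
        return b;
    }

    List<String> audit() { return audit; }
}
```

Because stepB takes stepA's result as input, the "fluent" data flow also nudges callers toward the correct order even without the orchestrator.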

WCF Multiple Interface

I am really wanting to get my head around this WCF technology, and it seems the last months of information cramming have somewhat distorted my overall concept of how I should build my client/server application.
If someone out there could shed some light on the best practices for developing my app and implementing a duplex WCF service with multiple interfaces, I'd appreciate it.
General outline: I am wanting to develop an app where users connect to a server and, let's say, add contacts to an SQL database. I have discovered many ways of doing this, but would ultimately like to know I'm heading down the right path when it comes time to develop the app further.
Some models I have discovered are:
The client has its own LINQ to SQL classes and handles all data access itself. Bad: really slow, with LINQ and SQL connection overheads on top of a poor implementation of the LINQ Select command.
Another model was to have the service implement the LINQ to SQL commands used for CRUD operations; however, this still doesn't provide live data updates to other clients connected to the service.
So I made a basic app in which, when a client logs in to the service, its callback channel gets added to the callback list. When a client feeds a new contact to the service, it invokes a callback to all channel clients with the new contact, and the client-side function takes care of adding the contact in the right spot.
So now I want to implement a User object and perhaps two more business objects, say Project and Item. My idea is to create my service like this:
[Serializable]
[DataContract]
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public class Project : IProject
{
    [DataMember()]
    public int projectID;

    public int Insert(objSubItem _objSubItem)
    {
        // code here
    }
}
etc., and
[ServiceContract(
    Name = "Project",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IProjectCallback))]
public interface IProject
{
    /// <summary>
    /// Inserting a Project record to the database
    /// </summary>
    /// <param name="_project">Project from Client</param>
    /// <returns>ProjectID back to the client; if -1 then fail</returns>
    [OperationContract()]
    int Insert(Project _project);
}
and
public interface IProjectCallback
{
    /// <summary>
    /// Notifies the clients that a Project has been added
    /// </summary>
    /// <param name="_project">Inserted Project</param>
    [OperationContract(IsOneWay = true)]
    void NotifyProjectInserted(Project _project);
}
Obviously I have other CRUD functions, and functions to ensure that both client and server data records are read-only while being edited.
Now, if I have multiple objects, what is the best way to lay it out?
I'm thinking of creating a Service.cs, an IService.cs and an IServiceCallback to negotiate the client channel population. Should I also use partial classes of the service to implement IProject and IUser, to properly invoke the service callbacks as well as invoking the objects' Insert methods?
Would I do it like this:
[ServiceContract(
    Name = "Service",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IServiceCallBack))]
[ServiceKnownType(typeof(Project))]
[ServiceKnownType(typeof(User))]
public interface IService
{
    // code here
}
and also
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public partial class Service : IUser
{
    public int Insert(User _User)
    {
        // code here
    }
}

public partial class Service : IProject
{
    public int Insert(Project _project)
    {
        // code here
    }
}

public partial class Service : IService
{
    // functions here
}
It feels as though the approach would be right for a single interface, but I feel I need some "best practice" assistance.
Many thanks in advance,
Chris Leach
Hi Richard,
I appreciate your response. As you can see, this is my first post, and third ever on any forum related to programming. I have lived my programming life very close to Google, as shown by my Google autofill history, but it's time to start asking questions of my own, so I thank you for your assistance so far.
I really want to understand an overall approach to managing data consistency across a distributed client/service application. I am looking into Telerik ORM and also Entity Framework as solutions, exposing the entities through a WCF service, but I lack the understanding to implement data consistency amongst the clients. I have managed to develop a netDualTcp chat application, and have used a list of client callback contexts to send join/leave and chat functions.
I lack the overall picture, however. It seems that if I have an in-memory (static) version of all of the tables in my SQL database, I could either have the clients bind directly to these lists, if that is possible, or, better, have my custom user controls handle the connections, so the server is aware of who has a particular user control open and can direct changes to those clients who are registered to the callback contract. That way the clients aren't having to load the entire project every time they open the application.
I am thinking of a multi-purpose application, such as a contact/grant application program, where users will be using different parts of the application and do not always need to access all of the information at one time. When a user first logs in, I am hoping that the service will attach a callback contract for the client, and several bits of information are loaded back to the client on authentication, such as a basic state (e.g. if they are an admin they get notifications). Once logged in, they are presented with a blank canvas, but then begin to load custom user controls into a docking-panel-type interface.
I guess this is where I become a little stuck about how best to manage concurrency and consistency while minimizing load/data transfer times and freeing up CPU processing time on the client. I know in programming there are multiple ways of doing this, but I would like to know from the people on this forum what they feel the best approach to this type of solution is. I understand it's a deep topic, but I feel I have come this far and a guiding hand would be appreciated. Thanks again.
Generally I find taking a non-abstract view of a service gets me to the right place. What is it that consumers of my service are going to need to do?
I obviously have internal domain objects that are used by my business layer to create and manipulate the data. However, the way the business layer does things isn't necessarily the best way to partition functionality for my service.
So for example, if any project should have at least one user in it then when you create the project you should send over at least one user at the same time. The service operations need to encapsulate all of the data required to carry out a self contained business transaction.
Similarly, the death knell of many distributed systems is latency: they require lots of round trips to complete something. So, for example, you want to be able to add a user to a project; in reality you probably want to add a number of users to a project. Therefore, you should model the operation to accept a list of users, not a single user via an operation that must be invoked multiple times.
So a project service should allow you to do all the things related to a project, or projects, through a service contract. If users can live independently of projects, then also have a user service. If they cannot, then don't have a user service, as everything needs to be project-focused.
Business transactions are often more than straightforward CRUD operations on domain entities, and the service should model them rather than reflecting the data model.
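A sketch of the coarse-grained, batch-oriented contract shape described above, in Java rather than a WCF contract, with all names invented and a trivial in-memory implementation standing in for the real service:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Coarse-grained contract: one round trip carries a whole business transaction.
interface ProjectService {
    // A project needs at least one user, so the initial users travel
    // in the same call that creates the project.
    String createProject(String name, List<String> initialUsers);

    // Accepts a list rather than forcing one call per user.
    int addUsers(String projectId, List<String> users);
}

// Purely illustrative in-memory implementation.
class InMemoryProjectService implements ProjectService {
    private final Map<String, List<String>> members = new HashMap<>();

    @Override
    public String createProject(String name, List<String> initialUsers) {
        if (initialUsers.isEmpty())
            throw new IllegalArgumentException("a project needs at least one user");
        members.put(name, new ArrayList<>(initialUsers));
        return name; // the name doubles as the id in this sketch
    }

    @Override
    public int addUsers(String projectId, List<String> users) {
        members.get(projectId).addAll(users);
        return members.get(projectId).size();
    }
}
```

The list-taking signature is the latency point from the answer: adding three users costs one round trip instead of three.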

Domain Model – Repositories – Communication across Sub-Systems

I am currently in the process of designing a system which will use multiple data sources to consume the required data. I am attempting to model the concepts shown below (I would post an image but don't have enough points!) where a customer can have an association with a number of products. The Customer would be stored in the "Customer subsystem" and the Product and CustomerProduct would be stored in the "Product subsystem".
public class Customer
{
    public string FirstName { get; set; }
    public Guid ID { get; set; }
}

public class CustomerProduct
{
    public Guid ID { get; set; }
    public Customer Customer { get; set; }
    public Product Product { get; set; }
}

public class Product
{
    public string Name { get; set; }
    public double Price { get; set; }
    public Guid ID { get; set; }
}
The “Customer” entity will be physically persisted in a system which must be accessed via a web service. The “CustomerProduct” and “Product” entities will be persisted in a SQL database, using NHibernate as the ORM.
As part of the design I was planning to use Repositories to abstract the data persistence technologies away from the domain model. Therefore I would have 3 repository interfaces, ICustomerRepository, ICustomerProductRepository and IProductRepository. I would then create a concrete NHibernate implementation for the CustomerProduct and Product repositories and a concrete web service access implementation for the Customer repository.
What I am struggling with is how the entities, which are persisted in different sub-systems will interact. Ideally I would like a rich domain model, where the CustomerProduct entity would have a physical “Customer” property which returns a Customer object. However I have no idea how this would work as the Customer entity would need to be accessed from a different data store.
The only way I can see to solve this issue is to not maintain a full reference to Customer in the CustomerProduct entity, and instead just hold the customer's ID; then, every time I need the Customer itself, I would go via the Customer repository.
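A minimal Java sketch of that approach: CustomerProduct holds only the customer's ID, and the full Customer is resolved through the repository. The type names mirror the question; the lookup helper is invented for illustration.

```java
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.UUID;

// Mirrors the question's Customer, reduced to the fields used here.
record Customer(UUID id, String firstName) {}

// The repository abstracts the web-service-backed store.
interface CustomerRepository {
    Optional<Customer> findById(UUID id);
}

// CustomerProduct keeps only the customer's id, not the object itself.
class CustomerProduct {
    private final UUID customerId;
    private final String productName;

    CustomerProduct(UUID customerId, String productName) {
        this.customerId = customerId;
        this.productName = productName;
    }

    UUID customerId() { return customerId; }
}

// Wherever the full Customer is needed, go through the repository.
class CustomerLookup {
    static Customer customerOf(CustomerProduct cp, CustomerRepository repo) {
        return repo.findById(cp.customerId())
                   .orElseThrow(() -> new NoSuchElementException("unknown customer"));
    }
}
```

The trade-off is exactly the one the question raises: the model is less "rich", but no entity in the product subsystem ever holds a live reference into the customer subsystem.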
I would be grateful for any suggestions anyone could put forward on how to solve this issue.
Hi, I haven't been in your exact situation before, but I have designed domains that communicate with other subsystems.
I don't have the whole picture, but it seems like the Customer entity is more isolated from the others, CustomerProduct and Product. Am I guessing correctly that you will present the model in a common GUI, and it's only the data sources that are separated?
First, you can solve this in different ways, and you should also ask yourself about non-functional requirements such as maintenance, uptime and support. Will both systems always be up and running simultaneously, or can it happen that you take one system down? What I'm fishing for is whether you should communicate synchronously or asynchronously (message queuing?) with the subsystems. This can be achieved by using NServiceBus.
But to focus on your domain: you should aim to make the domain look as if it has only one model. This can be accomplished in different ways:
1) Have your ICustomerRepository (an interface contract that acts as if it works against a collection of objects) be implemented by an infrastructure-related repository that consumes the web service in your subsystem. A hint: use GUIDs as keys so that no key conflicts occur. This approach will not let you have direct relationships/associations to Customer from your other entities; they can reach it only through the repository. (This is a solution Jimmy Nilsson uses in his book (http://jimmynilsson.com/blog/) to avoid tightening the model with too many bidirectional relationships.)
2) Depending on how your use cases will target/use the model, you can create an application-wide service layer that resides in one physical place but uses a CustomerService, a CustomerProductService and a ProductService. To prevent domain logic from leaking into the application layer, some of the coordination between these entities can be encapsulated in domain event handlers that coordinate events between the different services.
3) You can also create a CustomerAdapter class that has the other subsystem's customer GUID as its key (it cannot generate keys, since the Customer web service controls that). You can map it in NHibernate and have a relationship between CustomerProduct and CustomerAdapter, but when you map CustomerAdapter you load only the GUID. Then make sure you have an ICustomerAdapterService injected into a property using Spring.NET, Windsor or some other DI tool, and do not map properties (like customer name, address, etc.) for CustomerAdapter in NHibernate. When you get/read the address from CustomerAdapter, it will fetch it from ICustomerAdapterService and set all the other values as well.
This is not a recommended solution, since it breaks some DDD rules, like not having services in the domain model. But seen from another perspective, it can actually be considered a domain service, since it solves a problem within your distributed domain. However, it includes infrastructure-related things like a WCF service implementation, and therefore the service implementation should live in a separate infrastructure layer/assembly.
Simplest is solution 2, if you can accept that the Customer entity will be accessed only by a service in the application layer.
However, this application service layer can also be a good anti-corruption layer between the two subsystems; there is probably a reason why you have two subsystems today.
Here is an example interaction flow (without detailed knowledge of your domain):
GUI calls Application Service CustomerProductService method BuyNewProduct(CustomerDTO customer, ProductDTO newProduct)
CustomerProductService has ICustomerProductRepository and IProductRepository injected into its constructor. It also has an infrastructure service ICustomerFacadeService (change the name now :-)) injected into a CustomerFacadeService property. This service is created by a factory with two creation methods, Create() and CreateExtendedWithCustomerService(); the latter also injects the customer service facade.
The BuyNewProduct(...) method now assembles the CustomerDTO and uses the customer GUID to load the Customer from CustomerFacadeService, which calls the web service in the other subsystem.
Loading the customer ensures that it actually exists; now we load the Product with IProductRepository.
With both the customer GUID and the Product entity, we create a new CustomerProduct entity (which is actually just a mapping class between Products and customer GUIDs) and save it through ICustomerProductRepository.
Now you can call another infrastructure service to send an email notifying the customer that it has access to the new product. Or you can raise domain events in the CustomerProduct entity that delegate this notification to an event handler (in the application service layer) that has IEmailService injected in its constructor. Then you have encapsulated the domain knowledge of sending notifications when you connect a new customer to a product.
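A rough Java sketch of that domain-event variant. The answer assumes C# and an IEmailService; every type here is invented to show the shape: the entity records an event instead of calling infrastructure, and an application-layer handler with the email service injected does the sending.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain event.
record CustomerLinkedToProduct(String customerId, String productName) {}

// Stand-in for the answer's IEmailService.
interface EmailService {
    void send(String customerId, String message);
}

// The entity records events instead of calling infrastructure itself.
class CustomerProduct {
    private final List<Object> events = new ArrayList<>();

    CustomerProduct(String customerId, String productName) {
        events.add(new CustomerLinkedToProduct(customerId, productName));
    }

    List<Object> pendingEvents() { return events; }
}

// Application-layer handler: the only place that knows about email.
class NotifyCustomerHandler {
    private final EmailService email;

    NotifyCustomerHandler(EmailService email) {
        this.email = email;
    }

    void handle(CustomerLinkedToProduct event) {
        email.send(event.customerId(),
                   "You now have access to " + event.productName());
    }
}
```

Dispatching pendingEvents() to registered handlers after the unit of work commits keeps the "send the email" knowledge out of the entity entirely.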
Hope this helps you model your domain with less pain, because doing DDD is painful. It requires a lot of discussion with colleagues, domain experts and yourself in front of the mirror :) Is this the right path?
Look at the DDDsample.net for Domain Events or search for Udi Dahan and domain events.
I'll write an answer here, where there is more space:
Regarding CustomerAdapter, also referred to as CustomerFacadeService in the interaction flow example, here is my opinion: how to implement it depends on your application. Will most use cases consist of the main system calling your "cloud subsystem", which will have good uptime? Then you may not need a queue, and a WCF service in the cloud will do. Your CustomerFacadeService will be a service wrapper that just exposes the methods your application layer needs and assembles all the necessary DTO objects.
If your cloud system will also call back into your main system, then you need to expose some of your methods as a service. Then you have the option to expose an NServiceBus endpoint as a WCF service, which gives you the possibility of taking the main system down without losing information.
But there are always a lot of buts...
You will of course need the WCF service on another machine if your infrastructure guys want to install hotfixes or reboot the main system's web server.
If you have clients waiting for a response while the main system is down, how long will they wait? Not too long, I guess.
One scenario where I can see benefits is if you have batches/reports that need to be carried out: if one part of the system is down, the reporting will continue once it's back up.
Here is an example of NServiceBus exposed as a WCF service; I have no experience doing exactly that, just the knowledge that it can be done:
http://docs.particular.net/nservicebus/architecture/nservicebus-and-wcf

Request/Response pattern in SOA implementation

In an enterprise-like project (.NET, WCF) I saw that all service contracts accept a single Request parameter and always return a Response:
[DataContract]
public class CustomerRequest : RequestBase
{
    [DataMember]
    public long Id { get; set; }
}

[DataContract]
public class CustomerResponse : ResponseBase
{
    [DataMember]
    public CustomerInfo Customer { get; set; }
}
where RequestBase/ResponseBase contain common stuff like ErrorCode, Context, etc. The bodies of both the service methods and the proxies are wrapped in try/catch, so the only way to check for errors is to look at ResponseBase.ErrorCode (which is an enumeration).
I want to know what this technique is called, and why it's better than passing what's needed as method parameters and using the standard WCF context-passing/fault mechanisms?
The pattern you are talking about is based on Contract First development. It is, however, not necessary to use the error-block pattern in WCF; you can still throw FaultExceptions back to the client instead of using the error XML block. The error block has been used for a very long time, so a lot of people are accustomed to its use. Also, developers on other platforms (Java, for example) are not as familiar with FaultExceptions, even though they are an industry standard.
http://docs.oasis-open.org/wsrf/wsrf-ws_base_faults-1.2-spec-os.pdf
The Request/Response pattern is very valuable in SOA (Service Oriented Architecture), and I would recommend using it rather than creating methods that take in parameters and pass back a value or object. You will see the benefits when you start creating your messages. As stated previously, the pattern evolved from Contract First development, where one would create the messages first as XSDs and generate classes from them. This process was used in classic web services to ensure all of your datatypes would serialize properly in SOAP. With the advent of WCF, the DataContractSerializer is more intelligent and knows how to serialize types that would previously not serialize properly (e.g., ArrayList, List<T>, and so on).
The benefits of Request-Response Pattern are:
You can inherit all of your request and responses from base objects where you can maintain consistency for common properties (error block for example).
Web services should by nature require as little documentation as possible, and this pattern allows just that. Take for instance a method like public BusScheduleResponse GetBusScheduleByDateRange(BusDateRangeRequest request); the client knows by default what to pass in and what they are getting back, and when they build the request they can see what is required and what is optional. Say this request has properties like Carriers [flags enum] (required), StartDate (required), EndDate (required), PriceRange (optional), MinSeatsAvailable (optional), etc.; you get the point.
When the client receives the response, it can contain a lot more data than just the usual return object: an error block, tracking information, whatever; use your imagination.
In the BusScheduleResponse example, this could return multiple arrays of bus schedule information for multiple carriers.
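The "required vs. optional is visible in the contract" point above can be sketched in Java. Everything here (BusDateRangeRequest, Carrier, the field names) is a hypothetical rendering of the answer's example, making required fields constructor arguments and optional ones `Optional` fields:

```java
import java.time.LocalDate;
import java.util.EnumSet;
import java.util.Optional;

// Illustrative carrier set; values are assumptions.
enum Carrier { GREYHOUND, MEGABUS, FLIXBUS }

class BusDateRangeRequest {
    // Required: the constructor refuses to build an invalid request.
    final EnumSet<Carrier> carriers;
    final LocalDate startDate;
    final LocalDate endDate;

    // Optional: absent unless the caller explicitly sets them.
    Optional<int[]> priceRange = Optional.empty();
    Optional<Integer> minSeatsAvailable = Optional.empty();

    BusDateRangeRequest(EnumSet<Carrier> carriers, LocalDate start, LocalDate end) {
        if (carriers == null || carriers.isEmpty()) {
            throw new IllegalArgumentException("at least one carrier is required");
        }
        this.carriers = carriers;
        this.startDate = start;
        this.endDate = end;
    }
}
```

A caller building this request sees immediately which fields are mandatory, which is the self-documenting quality the answer describes.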
Hope this helps.
One word of caution: don't get confused and think I am talking about generating your own [MessageContract]s. Your requests and responses are DataContracts. I just want to make sure I am not confusing you. No one should create their own MessageContracts in WCF unless they have a really good reason to do so.

Does dependency injection increase my risk of doing something foolish?

I'm trying to embrace widespread dependency injection/IoC. As I read more and more about the benefits, I can certainly appreciate them. However, I am concerned that in some cases embracing the dependency injection pattern might lead me to create flexibility at the expense of being able to limit risk, by giving up encapsulated controls on what the system is capable of doing and what mistakes I or another programmer on the project are capable of making. I suspect I'm missing something in the pattern that addresses my concerns and am hoping someone can point it out.
Here's a simplified example of what concerns me. Suppose I have a method NotifyAdmins on a Notification class and that I use this method to distribute very sensitive information to users that have been defined as administrators in the application. The information might be distributed by fax, email, IM, etc. based on user-defined settings. This method needs to retrieve a list of administrators. Historically, I would encapsulate building the set of administrators in the method with a call to an AdminSet class, or a call to a UserSet class that asks for a set of user objects that are administrators, or even via direct call(s) to the database. Then, I can call the method Notification.NotifyAdmins without fear of accidentally sending sensitive information to non-administrators.
I believe dependency injection calls for me to take an admin list as a parameter (in one form or another). This does facilitate testing, however, what's to prevent me from making a foolish mistake in calling code and passing in a set of NonAdmins? If I don't inject the set, I can only accidentally email the wrong people with mistakes in one or two fixed places. If I do inject the set aren't I exposed to making this mistake everywhere I call the method and inject the set of administrators? Am I doing something wrong? Are there facilities in the IoC frameworks that allow you to specify these kinds of constraints but still use dependency injection?
Thanks.
You need to reverse your thinking.
If you have a service/class that is supposed to mail out private information to admins only, instead of passing a list of admins to this service, instead you pass another service from which the class can retrieve the list of admins.
Yes, you still have the possibility of making a mistake, but this code:
AdminProvider provider = new AdminProvider();
Notification notify = new Notification(provider);
notify.Execute();
is harder to get wrong than this:
String[] admins = new String[] { "joenormal@hotmail.com" };
Notification notify = new Notification(admins);
notify.Execute();
In the first case, the methods and classes involved would clearly be named in such a way that it would be easy to spot a mistake.
Internally in your Execute method, the code might look like this:
List<String> admins = _AdminProvider.GetAdmins();
...
If, for some reason, the code looks like this:
List<String> admins = _AdminProvider.GetAllUserEmails();
then you have a problem, but that should be easy to spot.
No, dependency injection does not require you to pass the admin list as a parameter. I think you are slightly misunderstanding it. However, in your example, it would involve you injecting the AdminSet instance that your Notification class uses to build its admin list. This would then enable you to mock out this object to test the Notification class in isolation.
Dependencies are generally injected at the time a class is instantiated, using one of these methods: constructor injection (passing dependent class instances in the class's constructor), property injection (setting the dependent class instances as properties), or something else (e.g., making all injectable objects implement a particular interface that allows the IoC container to call a single method that injects its dependencies). They are not generally injected into each method call as you suggest.
Other good answers have already been given, but I'd like to add this:
You can be both open for extensibility (following the Open/Closed Principle) and still protect sensitive assets. One good way is by using the Specification pattern.
In this case, you could pass in a completely arbitrary list of users, but then filter those users by an AdminSpecification so that only administrators receive the notification.
Perhaps your Notification class would have an API similar to this:
public class Notification
{
    private readonly string message;

    public Notification(string message)
    {
        this.message = message;
        this.AdminSpecification = new AdminSpecification();
    }

    public ISpecification<User> AdminSpecification { get; set; }

    public void SendTo(IEnumerable<User> users)
    {
        foreach (var u in users.Where(this.AdminSpecification.IsSatisfiedBy))
        {
            this.Notify(u);
        }
    }

    // more members
}
You can still override the filtering behavior for testing purposes by assigning a different Specification, but the default value is secure, so you would be less likely to make mistakes with this API.
For even better protection, you could wrap this whole implementation behind a Facade interface.
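For comparison, here is the same Specification idea rendered in Java, the question's original language. The User shape, the `notifyUser` side effect, and returning the recipient list are illustrative assumptions added so the filtering is observable:

```java
import java.util.List;

interface Specification<T> {
    boolean isSatisfiedBy(T candidate);
}

class User {
    final String email;
    final boolean admin;

    User(String email, boolean admin) {
        this.email = email;
        this.admin = admin;
    }
}

// Encapsulates the "administrators only" rule in one place.
class AdminSpecification implements Specification<User> {
    public boolean isSatisfiedBy(User u) {
        return u.admin;
    }
}

class Notification {
    private final String message;
    // Secure default; a test may assign a different specification.
    Specification<User> adminSpecification = new AdminSpecification();

    Notification(String message) {
        this.message = message;
    }

    List<User> sendTo(List<User> users) {
        // An arbitrary user list is filtered down to admins before any send.
        List<User> recipients = users.stream()
            .filter(adminSpecification::isSatisfiedBy)
            .toList();
        recipients.forEach(this::notifyUser);
        return recipients;
    }

    private void notifyUser(User u) {
        System.out.println("Notifying " + u.email + ": " + message);
    }
}
```

The caller can hand over any user set without risking a leak, because the filter runs inside the class with a secure default.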