Currently I am implementing Clean Architecture using MediatR, with handlers and requests of the shape:
IRequestHandler<IRequest<ResponseMessage>, ResponseMessage>
IRequest<ResponseMessage>
The implementation now separates the business logic layer, the infrastructure, and the controllers; they rely on dependency injection and are decoupled.
The implementation is currently in ASP.NET Core, and this framework supports response code generation in the controller, as in the example below.
[HttpGet]
[ProducesResponseType(200)]
[ProducesResponseType(404)]
public async Task<IActionResult> GetObject([FromQuery] int id)
{
    ...
    return Ok(some_result_to_show); // This generates code 200
}
I wonder which Clean Architecture layer should translate business rule decisions into the correct response codes, and what a good practice or established methodology for this adaptation would be.
It seems the quick implementation would be to keep doing it in the controller; however, I wonder whether this decision belongs to the enterprise business rules or the application business rules, and should therefore be handled in a different layer before being translated into a response code in the presentation layer.
If so, does ASP.NET Core or MediatR (or any other library) have built-in features to support such a design?
One way I found is shown below.
In the business layer:
class RequestMessageValidator : AbstractValidator<RequestMessage>
{
    public RequestMessageValidator()
    {
        RuleFor(r => r).{ConditionSyntax}().WithErrorCode(ResponseCodeString);
    }
}
Then in the controller:
return StatusCode(
    Convert.ToInt32(ValidationResult.Errors[0].ErrorCode),
    ValidationResult.Errors[0].ErrorMessage);
It is definitely the application / presentation layer, which is MVC or your controllers. I would suggest that you throw specific exceptions that each serve only one purpose.
For instance, when you query data you will have a request handler that looks for the data using a given id (a GUID, or whatever). Now suppose that identifier is valid but nonexistent in your database. You don't want to access the HttpContext in your handler (even though you could), because then your business logic would be coupled to ASP.NET. Instead, throw an exception that tells you a resource was not found (call it ResourceNotFoundException, for instance) to produce a 404 error response. For that exception you can then use an exception filter instead of the data annotations above your controller endpoints. You can have many exception filters; each serves its own purpose.
If you do it like that, your business logic will be decoupled from any application / presentation layer and can easily be reused wherever you want.
If you want to see an example of how an exception-filter works, I've created a simple solution for that a while ago. You can find it here.
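In outline, such a filter could look like this in ASP.NET Core (a minimal sketch; the exception type, the filter name, and the registration shown are assumptions, not a prescribed API):

using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

// Hypothetical domain exception thrown from a MediatR handler.
public class ResourceNotFoundException : Exception
{
    public ResourceNotFoundException(string message) : base(message) { }
}

// Exception filter that translates the domain exception into a 404 response.
public class ResourceNotFoundExceptionFilter : IExceptionFilter
{
    public void OnException(ExceptionContext context)
    {
        if (context.Exception is ResourceNotFoundException ex)
        {
            context.Result = new NotFoundObjectResult(ex.Message); // 404
            context.ExceptionHandled = true;
        }
    }
}

// Registered globally, e.g.:
// services.AddControllers(options =>
//     options.Filters.Add<ResourceNotFoundExceptionFilter>());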
Not directly related to your question, but instead of returning an IActionResult you could also return a data transfer object when you want to send data to the client. It will also produce an HTTP 200 (OK) status code.
Related
I would like to ask whether it's a good approach to redirect from within a model instead of a controller.
The reason I want to do that is that it is easier to unit test redirection from a model (I just pass a mock redirector object to the model in my tests) than from a controller, which is more difficult to unit test. It also keeps the controller thinner: all I do in the controller is create an instance of the model and pass it parameters from the request object. There is not a single if/else in the controller this way.
Is it a bad practice?
Most often in web applications - MVC or not - redirects are implemented in a high-level layer. In non-OOP code this is often a high-level global function that knows a lot about the global static state and what represents a request and a response therein.
In more OOP-driven sites, you often find this as a method on the "response" object (compare Symfony2 and HTTP Fundamentals; note that Symfony2 is not MVC); however, it then often also has some similar methods (e.g. see Difference between $this->render and $this->redirect in Symfony2).
As those "response" objects are most often central in the web application, this also qualifies as a high-level layer in my eyes.
From a testing standpoint - especially with integration testing - you normally do not need to test for redirects specifically. You should test that your redirect functionality generally works on the HTTP layer, so that parts of your application that make use of it can rely on it. A common problem is not really following the suggestions given in the HTTP/1.1 spec, like providing a response body with redirects. A well-working web application should honor that. The same goes for using fully qualified URIs.
So how does all of this fit into MVC? In an HTTP context it could be simplified as follows:
The response is to tell the user to go somewhere else.
If the user were not important, the application could forward directly - that is, execute a different command; the client would not be used for that, and the redirect would not be necessary (a sub-command).
The response is to say: Done. And then: See next, this command (in the form of a URI).
This pretty much sounds like the actual redirect is just some output you send to the client at the protocol level of client communication. It belongs in the interface of the protocol you want to support - so not in the model, but in the client interface, and the boundary of the client interface inside an MVC application is the controller, AFAIK.
So the redirect is probably a view value object with a special meaning. To get that working in an HTTP MVC you need a full URL abstraction, which most PHP frameworks and libraries steer well clear of because it's not well understood how that works. So in the end I'd say: do as Symfony2 has done, place it in a high-level layer component, drop MVC, and live with the deficiencies.
Everything else is hard to achieve; if you try otherwise, there is a high risk that you will never stop abstracting.
Neither controller nor model should be redirecting anything anywhere. The HTTP Location header is a form of response, which is strictly the purview of views.
The model layer deals with business logic; it should be completely oblivious even to the existence of a presentation layer.
Basically, it comes down to this: controllers handle input, views handle output. HTTP headers are part of the output.
Note: when dealing with Rails clones, it is common to see redirects performed in the "controller". That is because what they call a "controller" is actually a merger of view and controller responsibilities. This is a side effect of opting to replace real views with simple templates as the third side of the triad.
I would say yes, it is wrong. As far as I understand, while models manage data and views manage layouts (i.e. how data should be displayed), controllers are only and exclusively in charge of HTTP request/response management (in the case of a web app), and redirections typically belong to that tier, in my opinion.
Examples in common frameworks
Symfony:
return $this->redirect($this->generateUrl('homepage'));
Spring MVC:
return "redirect:/appointments";
I think you could have a model of your application's workflow or navigation (in your model layer) and then have your controller translate the different concepts in that workflow/navigation model into which views are to be displayed.
Your workflow class/module would know about the different activities/phases/steps that are available to the user, modeling your application somewhat like a state machine. Your controller would make calls to this module to update the state and would receive a response telling it which activity/phase/step to go to.
That way your workflow model is easy to test, and it still doesn't know about your view technology.
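In sketch form (all names here are hypothetical):

// Hypothetical workflow model: knows the navigation steps,
// but nothing about views, HTTP, or any other view technology.
public enum WorkflowStep { Search, Details, Confirmation }

public class Workflow
{
    public WorkflowStep Current { get; private set; }

    public Workflow()
    {
        Current = WorkflowStep.Search;
    }

    // State-machine style transition: the controller reports what
    // happened, and the workflow decides which step comes next.
    public WorkflowStep Advance(bool inputWasValid)
    {
        if (!inputWasValid)
            return Current; // invalid input keeps the user on the same step
        if (Current == WorkflowStep.Search)
            Current = WorkflowStep.Details;
        else if (Current == WorkflowStep.Details)
            Current = WorkflowStep.Confirmation;
        return Current;
    }
}

// The controller then maps steps to views, e.g.:
// var step = workflow.Advance(ModelState.IsValid);
// return View(step.ToString());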
Many people mentioned these thoughts in the comments, so here is a summary:
The logic to figure out whether you need a redirect, and what the redirect should be, belongs in the controller. The model simply provides the data a view needs. This happens AFTER you have decided which view to render. Think of the redirect as an instruction to perform a different controller action.
I use ASP.NET MVC, and the controllers generate a RedirectResult for this purpose, which is completely unit testable. I don't know what is supported in your framework, but this is what MVC would do:
public class MyController : Controller
{
    public ActionResult ShowInfo(string id)
    {
        if (id == null)
        {
            return new RedirectResult("searchpage");
        }
        else
        {
            return View("displayInfo"); // returns a ViewResult
        }
    }
}
In your unit tests, you instantiate MyController and check the type of the result and optionally, the url or view name.
Whether your redirect is actually performed is not a unit test issue - that's essentially making sure your framework is working right. What you need to test is that you are giving the correct instruction (i.e. the redirect) and the correct url.
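For example, a test along these lines (MSTest shown; any test framework works):

[TestClass]
public class MyControllerTests
{
    [TestMethod]
    public void ShowInfo_WithNullId_RedirectsToSearchPage()
    {
        var controller = new MyController();

        var result = controller.ShowInfo(null);

        // Assert the instruction (a redirect to the right URL), not that
        // the framework actually performs the redirect over HTTP.
        var redirect = result as RedirectResult;
        Assert.IsNotNull(redirect);
        Assert.AreEqual("searchpage", redirect.Url);
    }
}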
Long story as brief as possible...
I have an existing application into which I'm trying to get ServiceStack, to create our new API. The app is currently an MVC3 app and uses the Unit of Work pattern, with attribute injection on MVC routes to create/finalize a transaction wherever the attribute is applied.
I'm trying to accomplish something similar using ServiceStack.
This gist shows the relevant ServiceStack configuration settings. What I am curious about is the global request/response filters - these will create a new unit of work for each request and close it before sending the response to the client (there is a check in there so that if an error occurs writing to the DB, we return an appropriate response to the client, and not a false "success" message).
My questions are:
1. Is this a good idea, or is there a better way to do this with ServiceStack?
2. In the MVC site we only create a new unit of work for an action that will add/update/delete data - should we do something similar here, or is it fine to create a transaction only to retrieve data?
As mentioned in ServiceStack's IoC wiki, the Funq IoC registers dependencies as singletons by default. So to register it with RequestScope you need to specify it, as done here:
container.RegisterAutoWiredAs<NHibernateUnitOfWork, IUnitOfWork>()
    .ReusedWithin(ReuseScope.Request);
The following, however, is likely not what you want, as it registers with the default scope, i.e. as a singleton - the same instance is returned for every request:
container.Register<ISession>(c => {
    var uow = (INHibernateUnitOfWork)c.Resolve<IUnitOfWork>();
    return uow.Session;
});
You probably want to change that to one of:
.ReusedWithin(ReuseScope.Request); // per request
.ReusedWithin(ReuseScope.None);    // executed each time it's injected
Using a RequestScope also works for Global Request/Response filters which will get the same instance as used in the Service.
1) Whether you are using ServiceStack, MVC, WCF, Nancy, or any other web framework, the most common approach is the session-per-request pattern. In web terms, this means creating a new unit of work at the beginning of the request and disposing of it at the end of the request. Almost all web frameworks have hooks for these events.
Resources:
https://stackoverflow.com/a/13206256/670028
https://stackoverflow.com/search?q=servicestack+session+per+request
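In outline, the hooks might look like this in ServiceStack (a sketch only: the filter collection names vary between ServiceStack versions, and IUnitOfWork with Begin/Commit/Rollback is an assumption based on your gist, not a ServiceStack API):

// In AppHost.Configure(Funq.Container container): open a unit of work
// per request and close it when the response is about to be sent.
GlobalRequestFilters.Add((req, res, requestDto) =>
{
    var uow = container.Resolve<IUnitOfWork>(); // request-scoped (ReuseScope.Request)
    uow.Begin();                                // opens the session + transaction
});

GlobalResponseFilters.Add((req, res, responseDto) =>
{
    var uow = container.Resolve<IUnitOfWork>(); // same instance within this request
    if (res.StatusCode >= 400)
        uow.Rollback();                         // don't report a false "success"
    else
        uow.Commit();
});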
2) You should always interact with NHibernate within a transaction.
Please see any of the following for an explanation of why:
http://ayende.com/blog/3775/nh-prof-alerts-use-of-implicit-transactions-is-discouraged
http://www.hibernatingrhinos.com/products/nhprof/learn/alert/DoNotUseImplicitTransactions
Note that when switching to using transactions with reads, be sure to make yourself aware of NULL behavior: http://www.zvolkov.com/clog/2009/07/09/why-nhibernate-updates-db-on-commit-of-read-only-transaction/#comments
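For reference, the explicit-transaction shape looks like this (a sketch; sessionFactory, Customer, and customerId are assumed to come from your own configuration and model):

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Even a pure read runs inside an explicit transaction,
    // avoiding the implicit-transaction problems linked above.
    var customer = session.Get<Customer>(customerId);
    tx.Commit();
}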
If you have a decent layered ASP.NET MVC 3 web application with a data service class pumping out view models pulled from a repository, sending JSON to an Ajax client,
[taking a breath]
what's a good way to add data filtering based on ASP.NET logins and roles without really messing up our data service class with these concerns?
We have a repository that kicks out Entity Framework 4.1 POCOs and accepts lambda expressions for where clauses (or specification objects).
The data service class creates query objects (like IQueryable) then returns them with .ToList() in the return statement.
I'm thinking maybe a specification that handles security roles could be passed to the data service class, or a lambda expression could somehow be injected in just the right place in the data service class?
I am sure there is a fairly standardized pattern to implement something like this. Links to examples or books on the subject would be most appreciated.
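Something like the following is what I have in mind (a sketch; all names here are made up, and it uses System.Linq and System.Linq.Expressions):

// The caller supplies a security predicate, so the data service itself
// stays free of role-specific concerns.
public List<OrderViewModel> GetOrders(Expression<Func<Order, bool>> securityFilter)
{
    return _repository.Query<Order>()   // IQueryable<Order> from the repository
        .Where(securityFilter)          // role-based restriction applied here
        .Select(o => new OrderViewModel { Id = o.Id, Total = o.Total })
        .ToList();
}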
If you've got a single-tiered application (as in, your web layer and service/data layer all run in the same process) then it's common to use a custom principal to achieve what you want.
You can use a custom principal to store extra data about a user (have a watch of this: http://www.asp.net/security/videos/use-custom-principal-objects), but the trick is to also set this custom principal as the current thread's principal, by doing Thread.CurrentPrincipal = myPrincipal.
This effectively means that you can get access to your user/role information from deep within your service layer without adding extra parameters to your methods (which is bad design). You do this by querying Thread.CurrentPrincipal and casting it to your own implementation.
If your service/data layer exists in a different process (perhaps you're using web services), then you can still pass your user information separately from your method calls, by sending custom data headers along with the service request.
Edit: to relate back to your querying of data, obviously any queries you write that are influenced by some aspect of the currently logged-in user or their role can pick that information up from your custom principal, again without passing special data through your method calls.
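A sketch of that approach (the CustomPrincipal class and its extra properties are assumptions, not a framework API):

using System;
using System.Security.Principal;
using System.Threading;

// Custom principal carrying extra, app-specific user data.
public class CustomPrincipal : IPrincipal
{
    private readonly string[] roles;

    public IIdentity Identity { get; private set; }
    public int TenantId { get; private set; } // example of extra data

    public CustomPrincipal(IIdentity identity, string[] roles, int tenantId)
    {
        Identity = identity;
        this.roles = roles;
        TenantId = tenantId;
    }

    public bool IsInRole(string role)
    {
        return Array.IndexOf(roles, role) >= 0;
    }
}

// Set once per request after authentication, e.g. in
// Application_PostAuthenticateRequest:
//   Thread.CurrentPrincipal = myPrincipal;
//   HttpContext.Current.User = myPrincipal;

// Deep in the data service class, without extra method parameters:
//   var user = Thread.CurrentPrincipal as CustomPrincipal;
//   query = query.Where(x => x.TenantId == user.TenantId);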
Hopefully this at least points you in the right direction.
It is not clear from your question whether you are using DI; since you mentioned you have your layers split up properly, I am presuming so. Then again, this should be possible without DI, I think...
Create an interface called IUserSession or something similar and implement it inside your ASP.NET MVC application. The interface can contain something like GetUser(); from this info I am sure you can filter data inside your middle tier. Otherwise you can simply use IUserSession inside your web application and do the filtering in that tier...
See: https://gist.github.com/1042173
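A small sketch of that abstraction (names assumed):

// Defined in the middle tier; no ASP.NET references here.
public interface IUserSession
{
    string GetUserName();
}

// Implemented inside the ASP.NET MVC application.
public class AspNetUserSession : IUserSession
{
    public string GetUserName()
    {
        return System.Web.HttpContext.Current.User.Identity.Name;
    }
}

// The middle tier can then filter data by the current user without
// knowing anything about the web layer, e.g.:
//   var data = orders.Where(o => o.Owner == userSession.GetUserName());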
My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road, we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought, regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I was finding that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we should maintain the "single entry point" into the system by offering an object that contains commands, and have one web service to take care of everything. So the web service would have one method in it - let's call it MakeRequest - and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure but still inherit the base commands. What is passed back from the service is not clear right now, but it could be that "message object" with an object attached to it representing the data. I don't know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanatory and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}
Let me first apologise for the length of the entire topic. It will be fairly long, but I wish to be sure that the message comes across clearly and without errors.
Here at the company we have an existing ASP.NET web application, written in C# on the .NET Framework 3.5 SP1. Some time ago an initial API was developed for this web application using WCF and SOAP, to allow external parties to communicate with the application without relying on browsers.
This API survived for some time, but eventually a request came to create a new API that was RESTful and relied on newer technologies. I was given this assignment, and I created the initial API using the Microsoft MVC 2 framework, running inside our ASP.NET web application. It initially took quite some time to get it running properly, but at the moment we're able to make REST calls on the application to receive XML detailing our resources.
I attended a Microsoft WebCamp and was immediately sold on the OData concept. It was very similar to what we were already doing, but it is a protocol supported by many players rather than our own implementation. Currently I'm working on a PoC (proof of concept) to recreate the API I developed using the OData protocol and the WCF Data Services technology.
After searching the Internet for how to get NHibernate 2 to work with Data Services, I succeeded in creating a read-only version of the API that allows us to read the entities from the internal business layer by mapping the incoming query requests to that layer.
However, we wish to have a functional API that also allows the creation of entities using the OData protocol. So now I'm a bit stuck on how to proceed. I've been reading the following article: http://weblogs.asp.net/cibrax/default.aspx?PageIndex=3
The above article nicely explains how to map a custom DataService to the NHibernate layer. I've used it as a base to continue from, but I have the "problem" that I don't want to map my requests directly to the database using NHibernate; I wish to map them to our business layer (a separate DLL) that performs a large batch of checks, constraints, and updates based upon access rights, privileges, and triggers.
So what I want to ask is: if I, for example, create my own NhibernateContext class as in the above article, but rely on our business layer instead of NHibernate sessions, could it work? I'd probably have to rely heavily on reflection to figure out the type of object I'm working with at runtime and call the correct business classes to perform the updates and deletes.
To demonstrate with a small ASCII picture:
*-----------------*
* Database *
*-----------------*
*------------------------*
* DAL(Data Access Layer) *
*------------------------*
*------------------------*
* BUL (Business Layer)   *
*------------------------*
*---------------* *-------------------*
* My OData stuff* * Internal API *
*---------------* *-------------------*
*------------------*
* Web Application *
*------------------*
So, would this work, or would the performance make it useless?
Or am I just missing the ball here?
The idea is that I wish to reuse whatever logic is stored in the BUL & DAL layers from the OData WCF DataService.
I was thinking about creating new classes that inherit from the EntityModel classes in the Data.Services namespace and creating a new DataService object that maps all calls to the BUL & DAL & API layers. I'm not sure, however, where/how to intercept the requests for creating and deleting resources.
I hope it's a bit clear what I'm trying to explain, and I hope someone can help me on this.
The devil is in the details, but it sounds like the design you're proposing should work.
The DataService class is where you get to define the access rights applicable to everyone, configuration settings, and custom operations. In this scenario, I think you will be focusing more on the data context instead (the 'T' in DataService<T>).
For the context, there are really two interesting paths: reads and writes. Reads happen through the IQueryable entry points. Writing a LINQ provider is a good chunk of work, but NHibernate already supports this, although it would return what I imagine we're calling DAL entities. You can use query interceptors to do access checks here, if you can express them in terms the database would understand.
The update path is, from what I understand, where you are trying to run more business logic (you mentioned validation, extra updates, etc.). To do this, you'll want to focus on the IUpdatable implementation (IDataServiceUpdateProvider if you're using the latest version). Here you can use whichever objects you want - they could be DAL objects or business objects. You can do everything in the DAL and then run validation on SaveChanges(), or do everything on business objects if they validate as they go.
There are two places where you might 'jump' from one kind of object to another. One is in the GetResource() API, where you get an IQueryable, presumably in terms of DAL entities. The other is in ResolveResource(), where the runtime is asking for an object to serialize, just like it would get from an IQueryable, so it's presumably also a DAL entity.
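To make the write path concrete, here is a rough, incomplete skeleton of an IUpdatable that delegates to a business layer facade (BulFacade and its members are hypothetical; only the members relevant to this discussion are fleshed out):

using System;
using System.Data.Services; // IUpdatable (WCF Data Services)
using System.Linq;

public class BusinessLayerUpdateProvider : IUpdatable
{
    private readonly BulFacade bul = new BulFacade(); // hypothetical facade

    // 'Jump' point #1: materialize the target of an update/delete.
    public object GetResource(IQueryable query, string fullTypeName)
    {
        return query.Cast<object>().Single();
    }

    // 'Jump' point #2: the object handed back for serialization.
    public object ResolveResource(object resource)
    {
        return resource;
    }

    public object CreateResource(string containerName, string fullTypeName)
    {
        return bul.CreateNew(fullTypeName); // creation rules run in the BUL
    }

    public void SetValue(object targetResource, string propertyName, object propertyValue)
    {
        bul.SetProperty(targetResource, propertyName, propertyValue);
    }

    public void DeleteResource(object targetResource)
    {
        bul.Delete(targetResource); // access-right checks live in the BUL
    }

    public void SaveChanges()
    {
        bul.Commit(); // validation, constraints, and triggers fire here
    }

    // Remaining members elided for brevity; a real implementation
    // must provide all of them.
    public object ResetResource(object resource) { throw new NotSupportedException(); }
    public object GetValue(object targetResource, string propertyName) { throw new NotSupportedException(); }
    public void SetReference(object targetResource, string propertyName, object propertyValue) { throw new NotSupportedException(); }
    public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) { throw new NotSupportedException(); }
    public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) { throw new NotSupportedException(); }
    public void ClearChanges() { }
}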
Hope this helps - doing uniform access over non-uniform APIs can be hard, but often well worth it!