I have an AWS API that proxies to Lambda functions. I currently use different endpoints with separate Lambda functions:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
Managing all the endpoints and functions becomes cumbersome. Is there any disadvantage to using a single endpoint mapped to one Lambda function that decides what to do based on the query string?
api.com/exec?func=getData --> exec --> if (params.func === 'getData') { ... }
It's perfectly valid to map multiple methods to a single Lambda function, and many people use this methodology today, as opposed to creating an API Gateway resource and Lambda function for each discrete method.
You might consider proxying all requests to a single function. Take a look at the following documentation on creating an API Gateway => Lambda proxy integration:
http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html
Their example is great here. A request like the following:
POST /testStage/hello/world?name=me HTTP/1.1
Host: gy415nuibc.execute-api.us-east-1.amazonaws.com
Content-Type: application/json
headerName: headerValue

{
  "a": 1
}
Will wind up sending the following event data to your AWS Lambda function:
{
  "message": "Hello me!",
  "input": {
    "resource": "/{proxy+}",
    "path": "/hello/world",
    "httpMethod": "POST",
    "headers": {
      "Accept": "*/*",
      "Accept-Encoding": "gzip, deflate",
      "cache-control": "no-cache",
      "CloudFront-Forwarded-Proto": "https",
      "CloudFront-Is-Desktop-Viewer": "true",
      "CloudFront-Is-Mobile-Viewer": "false",
      "CloudFront-Is-SmartTV-Viewer": "false",
      "CloudFront-Is-Tablet-Viewer": "false",
      "CloudFront-Viewer-Country": "US",
      "Content-Type": "application/json",
      "headerName": "headerValue",
      "Host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
      "Postman-Token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
      "User-Agent": "PostmanRuntime/2.4.5",
      "Via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
      "X-Amz-Cf-Id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
      "X-Forwarded-For": "54.240.196.186, 54.182.214.83",
      "X-Forwarded-Port": "443",
      "X-Forwarded-Proto": "https"
    },
    "queryStringParameters": {
      "name": "me"
    },
    "pathParameters": {
      "proxy": "hello/world"
    },
    "stageVariables": {
      "stageVariableName": "stageVariableValue"
    },
    "requestContext": {
      "accountId": "12345678912",
      "resourceId": "roq9wj",
      "stage": "testStage",
      "requestId": "deef4878-7910-11e6-8f14-25afc3e9ae33",
      "identity": {
        "cognitoIdentityPoolId": null,
        "accountId": null,
        "cognitoIdentityId": null,
        "caller": null,
        "apiKey": null,
        "sourceIp": "192.168.196.186",
        "cognitoAuthenticationType": null,
        "cognitoAuthenticationProvider": null,
        "userArn": null,
        "userAgent": "PostmanRuntime/2.4.5",
        "user": null
      },
      "resourcePath": "/{proxy+}",
      "httpMethod": "POST",
      "apiId": "gy415nuibc"
    },
    "body": "{\r\n\t\"a\": 1\r\n}",
    "isBase64Encoded": false
  }
}
Now you have access to all headers, URL parameters, the body, etc., and you could use that to handle requests differently in a single Lambda function (basically implementing your own routing).
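For illustration, here is a minimal sketch (assuming Node.js, with hypothetical route keys and handler bodies) of what that hand-rolled routing could look like inside a single proxy-integration function:

// Minimal routing sketch for a single proxy-integration Lambda (Node.js).
// The route table and handler bodies below are hypothetical placeholders.
const routes = {
  'GET /getData': async (event) => ({ data: 'stub' }),
  'POST /addData': async (event) => ({ added: true }),
  'POST /signUp': async (event) => ({ signedUp: true }),
};

exports.handler = async (event) => {
  // httpMethod and path are taken from the proxy event shown above
  const key = `${event.httpMethod} ${event.path}`;
  const route = routes[key];
  if (!route) {
    return { statusCode: 404, body: JSON.stringify({ error: `no route for ${key}` }) };
  }
  return { statusCode: 200, body: JSON.stringify(await route(event)) };
};

Returning a 404 for unknown keys keeps unrecognized paths from silently succeeding.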
In my opinion there are some advantages and disadvantages to this approach, many of which depend on your specific use case:
Deployment: if each Lambda function is discrete then you can deploy them independently, which might reduce the risk from code changes (a microservices strategy). Conversely, you may find that needing to deploy functions separately adds complexity and is burdensome.
Self-description: API Gateway's interface makes it extremely intuitive to see the layout of your RESTful endpoints -- the nouns and verbs are all visible at a glance. Implementing your own routing could come at the expense of this visibility.
Lambda sizing and limits: if you proxy everything to one function, then you'll wind up needing to choose a memory size, timeout, etc. that accommodates all of your RESTful endpoints. If you create discrete functions, you can more carefully choose the memory footprint, timeout, dead-letter behavior, etc. that best meets the needs of the specific invocation.
I would have commented to add a couple of points to Dave Maple's great answer, but I don't have enough reputation points yet, so I'll add them here.
I started down the path of multiple endpoints pointing to one Lambda function that could treat each endpoint differently by accessing the 'resource' property of the event. After trying it, I have now separated them into separate functions, for the reasons Dave suggested, plus:
I find it easier to go through logs and monitoring when the functions are separated.
One nuance that I didn't pick up on at first as a beginner is that you can have one code base and deploy the exact same code as multiple Lambda functions. This gives you the benefits of function separation along with the benefits of a consolidated code base.
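As a sketch of that nuance (file name, exports, and the db module are hypothetical): a single bundle can export several handlers, and each Lambda function's handler setting simply points at a different export, e.g. handlers.getData for one function and handlers.addData for another.

// handlers.js -- one code base deployed unchanged to several Lambda functions.
const db = require('./db'); // shared module, hypothetical

exports.getData = async () => ({
  statusCode: 200,
  body: JSON.stringify(await db.fetchAll()),
});

exports.addData = async (event) => ({
  statusCode: 201,
  body: JSON.stringify(await db.insert(JSON.parse(event.body))),
});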
You can use the AWS CLI to automate tasks across the multiple functions and reduce or eliminate the downside of managing separate functions. For example, I have a script that updates 10 functions with the same code.
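For instance, here is a sketch of such a script using the AWS SDK for JavaScript v3 instead of the CLI (the function names and bundle path are placeholders):

// Deploy the same zip artifact to several Lambda functions (AWS SDK v3).
const { readFileSync } = require('fs');
const { LambdaClient, UpdateFunctionCodeCommand } = require('@aws-sdk/client-lambda');

const client = new LambdaClient({ region: 'us-east-1' });
const functionNames = ['getData', 'addData', 'signUp']; // placeholder names

async function deployAll() {
  const ZipFile = readFileSync('./bundle.zip'); // prebuilt artifact, assumed to exist
  for (const FunctionName of functionNames) {
    await client.send(new UpdateFunctionCodeCommand({ FunctionName, ZipFile }));
    console.log(`updated ${FunctionName}`);
  }
}

deployAll().catch(console.error);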
I've been building five or six microservices with Lambda and API Gateway, and have been through several rounds of trial, failure, and success.
In short, from my experience, it's better to delegate all the API calls to Lambda with just one API Gateway wildcard mapping, such as:
/api/{proxy+} -> Lambda
If you've ever used a framework like Grape, you know that when building APIs, features like middleware, global exception handling, cascade routing, and parameter validation are really crucial.
As your API grows, it becomes almost impossible to manage all the routes with API Gateway mappings, and API Gateway supports none of those features either.
Furthermore, it's not really practical to break out a Lambda for each endpoint, for either development or deployment.
From your example:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
imagine you have a data ORM, user authentication logic, and a common view file (such as data.erb): how are you going to share those?
You might be able to break it up like:
api/auth/{proxy+} -> AuthServiceLambda
api/data/{proxy+} -> DataServiceLambda
but not per endpoint. You might want to look up the concept of microservices and best practices for how to split up a service.
For those web-framework-like features, check out this web framework for Lambda that we just built, since I needed one at my company.
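The core idea behind such frameworks can be sketched as a middleware chain wrapped around the proxy event. The following is a hand-rolled, hypothetical sketch in Node.js, not any particular framework's API:

// Hypothetical middleware chain for a Lambda proxy handler.
const middlewares = [
  async (event, next) => { // global exception handling
    try {
      return await next(event);
    } catch (err) {
      return { statusCode: 500, body: JSON.stringify({ error: err.message }) };
    }
  },
  async (event, next) => { // parameter validation (example rule)
    if (!event.queryStringParameters) {
      return { statusCode: 400, body: JSON.stringify({ error: 'missing query params' }) };
    }
    return next(event);
  },
];

// The final handler runs only if every middleware calls next().
const endpoint = async (event) => ({ statusCode: 200, body: 'ok' });

exports.handler = (event) =>
  middlewares.reduceRight((next, mw) => (e) => mw(e, next), endpoint)(event);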
A similar scenario is addressed in the official AWS blog post Best practices for organizing larger serverless applications.
The general recommendation is to split "monolithic Lambdas" into separate Lambdas and move the routing to API Gateway.
This is what the blog writes about the "monolithic lambda" approach:
This approach is generally unnecessary, and it's often better to take advantage of the native routing functionality available in API Gateway.
...
API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features more suited to be handled at the service level.
(The post illustrates the change with before-and-after diagrams: going from a single monolithic Lambda behind a proxy route to several single-purpose Lambdas with the routing in API Gateway.)
In AWS, the responsibility for mapping API requests to Lambda is handled through an API Gateway's API specification.
Mapping of URL paths and HTTP methods, as well as data validation, SHOULD be left up to the Gateway. There is also the question of permissions and API scope; you won't be able to leverage API scopes and IAM permission levels in the normal way.
In terms of coding, replicating this mechanism inside a Lambda handler is an anti-pattern. Going down that route, one will soon end up with something that looks like the routing for a Node Express server, not a Lambda function.
After having set up 50+ Lambdas behind API Gateway, I can say that function handlers should be kept as dumb as possible, allowing them to be reusable independently of the context from which they're being invoked.
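As a sketch of what "dumb" can mean in practice (the function names and the db module are hypothetical): keep the handler a thin adapter over plain functions that know nothing about API Gateway, so the same logic can be reused from queues, scheduled jobs, or tests.

// Plain business logic with no HTTP concerns (hypothetical).
async function getUserById(id, deps) {
  return deps.db.findUser(id);
}

// The handler only unwraps the event, delegates, and wraps the result.
exports.handler = async (event) => {
  const user = await getUserById(event.pathParameters.id, { db: require('./db') });
  return { statusCode: 200, body: JSON.stringify(user) };
};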
As far as I know, AWS allows only one handler per Lambda function. That's why I created a little "routing" mechanism with Java generics (for stronger type checks at compile time). In the following example you can call multiple methods and pass different object types to the Lambda and back via one Lambda handler:
Lambda class with handler:
public class GenericLambda implements RequestHandler<LambdaRequest<?>, LambdaResponse<?>> {

    @Override
    public LambdaResponse<?> handleRequest(LambdaRequest<?> lambdaRequest, Context context) {
        switch (lambdaRequest.getMethod()) {
            case WARMUP:
                context.getLogger().log("Warmup");
                LambdaResponse<String> lambdaResponseWarmup = new LambdaResponse<String>();
                lambdaResponseWarmup.setResponseStatus(LambdaResponse.ResponseStatus.IN_PROGRESS);
                return lambdaResponseWarmup;
            case CREATE:
                User user = (User) lambdaRequest.getData();
                context.getLogger().log("insert user with name: " + user.getName()); // insert user in db
                LambdaResponse<String> lambdaResponseCreate = new LambdaResponse<String>();
                lambdaResponseCreate.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
                return lambdaResponseCreate;
            case READ:
                context.getLogger().log("read user with id: " + (Integer) lambdaRequest.getData());
                user = new User(); // create user object for test, instead of reading from db
                user.setName("name");
                LambdaResponse<User> lambdaResponseRead = new LambdaResponse<User>();
                lambdaResponseRead.setData(user);
                lambdaResponseRead.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
                return lambdaResponseRead;
            default:
                LambdaResponse<String> lambdaResponseIgnore = new LambdaResponse<String>();
                lambdaResponseIgnore.setResponseStatus(LambdaResponse.ResponseStatus.IGNORED);
                return lambdaResponseIgnore;
        }
    }
}
LambdaRequest class:
public class LambdaRequest<T> {

    private Method method;
    private T data;
    private int languageID;

    public static enum Method {
        WARMUP, CREATE, READ, UPDATE, DELETE
    }

    public LambdaRequest() {
    }

    public Method getMethod() {
        return method;
    }

    public void setMethod(Method method) {
        this.method = method;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public int getLanguageID() {
        return languageID;
    }

    public void setLanguageID(int languageID) {
        this.languageID = languageID;
    }
}
LambdaResponse class:
public class LambdaResponse<T> {

    private ResponseStatus responseStatus;
    private T data;
    private String errorMessage;

    public LambdaResponse() {
    }

    public static enum ResponseStatus {
        IGNORED, IN_PROGRESS, COMPLETE, ERROR, COMPLETE_DUPLICATE
    }

    public ResponseStatus getResponseStatus() {
        return responseStatus;
    }

    public void setResponseStatus(ResponseStatus responseStatus) {
        this.responseStatus = responseStatus;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public String getErrorMessage() {
        return errorMessage;
    }

    public void setErrorMessage(String errorMessage) {
        this.errorMessage = errorMessage;
    }
}
Example POJO User class:
public class User {

    private String name;

    public User() {
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
JUnit test method:
@Test
public void GenericLambda() {
    GenericLambda handler = new GenericLambda();
    Context ctx = createContext();

    // test WARMUP
    LambdaRequest<String> lambdaRequestWarmup = new LambdaRequest<String>();
    lambdaRequestWarmup.setMethod(LambdaRequest.Method.WARMUP);
    LambdaResponse<String> lambdaResponseWarmup = (LambdaResponse<String>) handler.handleRequest(lambdaRequestWarmup, ctx);

    // test READ user
    LambdaRequest<Integer> lambdaRequestRead = new LambdaRequest<Integer>();
    lambdaRequestRead.setData(1); // db id
    lambdaRequestRead.setMethod(LambdaRequest.Method.READ);
    LambdaResponse<User> lambdaResponseRead = (LambdaResponse<User>) handler.handleRequest(lambdaRequestRead, ctx);
}
P.S.: if you have deserialization problems (LinkedTreeMap cannot be cast to ...) in your Lambda function (because of the generics/Gson), use the following statement:
YourObject yourObject = (YourObject)convertLambdaRequestData2Object(lambdaRequest, YourObject.class);
Method:
private <T> Object convertLambdaRequestData2Object(LambdaRequest<?> lambdaRequest, Class<T> clazz) {
    Gson gson = new Gson();
    String json = gson.toJson(lambdaRequest.getData());
    return gson.fromJson(json, clazz);
}
The way I see it, choosing a single API vs. multiple APIs is a function of the following considerations:
Security: I think this is the biggest challenge of a single API structure. You may need different security profiles for different parts of the requirement.
Think of the microservice model from a business perspective:
The whole purpose of any API is to serve requests, so it must be well understood and easy to use. Related endpoints should therefore be combined. For example, if you have a mobile client that needs to pull 10 things in and out of the DB, it makes sense to have those 10 endpoints in a single API.
But this should be within reason and seen in the context of the overall solution design. For example, if you design a payroll product, you may want separate modules for leave management and user details management. Even if they are often used by a single client, they should still be different APIs, because their business meanings are different.
Reusability: this applies to both code and functionality reusability. Code reusability is the easier problem to solve, i.e., build common modules for shared requirements and package them as libraries.
Functionality reusability is harder to solve. In my mind, most cases can be solved by redesigning the way endpoints/functions are laid out, because if you need duplicated functionality, that means your initial design was not detailed enough.
I just found a link in another SO post which summarizes this better.
Related
I'm very new to Project Reactor.
Until now I've only used Mono from WebClient's .bodyToMono() steps, and mostly block() those Monos or .zip() multiple of them.
But this time I have a use case where I need to asynchronously call methods in multiple service classes, and those service classes each call multiple backend APIs.
I understand Project Reactor doesn't make the flow asynchronous by default, but we can put the publishing and/or subscribing on different threads to make the code asynchronous, and that's what I am trying to do.
I tried to read the documentation here (reactor reference), but it's still not clear to me.
For the purpose of this question, I'm making up an imaginary scenario that is a little closer to my use case.
Let's assume we need to get a search response from Google for some texts searched under Images.
Example Scenario
Let's have an endpoint in a Controller
This endpoint accepts the following object in the request body:
class MultimediaSearchRequest {
    Set<String> searchTexts; // many texts
    boolean isAddContent;
    boolean isAddMetadata;
}
In the controller, I'll break the above single request object into multiple objects of the type below:
class MultimediaSingleSearchRequest {
    String searchText;
    boolean isAddContent;
    boolean isAddMetadata;
}
This Controller talks to 3 Service classes.
Each of the service classes has a method searchSingleItem.
Each service class uses a few different backend APIs, but finally combines those APIs' responses into the same type of response class; let's call it MultimediaSearchResult.
class JpegSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            getImageUrlApi(req),
            getContentApi(req) // don't call if req.isAddContent is false
        );
    }
}

class GifSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            gifPartApi(req),
            someRandomApi(req),
            someOtherRandomApi(req)
        );
    }
}

class VideoSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            codecApi(req),
            commentsApi(req),
            anotherApi(req)
        );
    }
}
In the end, my controller returns the response as a List of MultimediaSearchResult
class MultimediaSearchResponse {
    List<MultimediaSearchResult> results;
}
If I want to do all of this asynchronously using Project Reactor, how do I achieve it?
That is, calling the searchSingleItem method of each service for each searchText asynchronously.
And even within the services, calling each backend API asynchronously (I'm already using WebClient and converting the response with bodyToMono for the backend API calls).
First, I will outline a solution for the upper "layer" of your scenario.
The code (a simple simulation of the scenario):
public class ChainingAsyncCallsInSpring {

    public Mono<MultimediaSearchResponse> controllerEndpoint(MultimediaSearchRequest req) {
        return Flux.fromIterable(req.getSearchTexts())
                .map(searchText -> new MultimediaSingleSearchRequest(searchText, req.isAddContent(), req.isAddMetadata()))
                .flatMap(multimediaSingleSearchRequest -> Flux.merge(
                        classOneSearchSingleItem(multimediaSingleSearchRequest),
                        classTwoSearchSingleItem(multimediaSingleSearchRequest),
                        classThreeSearchSingleItem(multimediaSingleSearchRequest)
                ))
                .collectList()
                .map(MultimediaSearchResponse::new);
    }

    private Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("1"));
    }

    private Mono<MultimediaSearchResult> classTwoSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("2"));
    }

    private Mono<MultimediaSearchResult> classThreeSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("3"));
    }
}
Now, some rationale.
In the controllerEndpoint() function, first we create a Flux that will emit every single searchText from the request. We map these to MultimediaSingleSearchRequest objects, so that the services can consume them with the additional metadata that was provided with the original request.
Then, we Flux::flatMap the created MultimediaSingleSearchRequest objects into a merged Flux, which (as opposed to Flux::concat) ensures that all three publishers are subscribed to eagerly, i.e. they don't wait for one another. This works best in this exact scenario, where several independent publishers need to be subscribed to at the same time and their order is not important.
After the flat map, at this point, we have a Flux<MultimediaSearchResult>.
We continue with Flux::collectList, thus collecting the emitted values from all publishers (we could also use Flux::reduceWith here).
As a result, we now have a Mono<List<MultimediaSearchResult>>, which can easily be mapped to a Mono<MultimediaSearchResponse>.
The results list of the MultimediaSearchResponse will have 3 items for each searchText in the original request.
Hope this was helpful!
Edit
Extending the answer with a point of view from the service classes as well. Assuming that each inner (optionally skipped) call returns a different type of result, this would be one way of going about it:
public class MultimediaSearchResult {
    private Details details;
    private ContentDetails content;
    private MetadataDetails metadata;
}

public Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
    return Mono.zip(getSomeDetails(req), getContentDetails(req), getMetadataDetails(req))
            .map(tuple3 -> new MultimediaSearchResult(
                    tuple3.getT1(),
                    tuple3.getT2().orElse(null),
                    tuple3.getT3().orElse(null)
            ));
}

// Always wanted
private Mono<Details> getSomeDetails(MultimediaSingleSearchRequest req) {
    return Mono.just(new Details("details")); // api call etc.
}

// Wanted if isAddContent is true
private Mono<Optional<ContentDetails>> getContentDetails(MultimediaSingleSearchRequest req) {
    return req.isAddContent()
            ? Mono.just(Optional.of(new ContentDetails("content-details"))) // api call etc.
            : Mono.just(Optional.empty());
}

// Wanted if isAddMetadata is true
private Mono<Optional<MetadataDetails>> getMetadataDetails(MultimediaSingleSearchRequest req) {
    return req.isAddMetadata()
            ? Mono.just(Optional.of(new MetadataDetails("metadata-details"))) // api call etc.
            : Mono.just(Optional.empty());
}
Optionals are used for the requests that might be skipped, since Mono::zip will complete empty if any of the zipped publishers completes without a value.
If the results of each inner call extend the same base class or are the same wrapped return type, then the original answer applies as to how they can be combined (Flux::merge etc.)
I am using ASP.NET Core 6 and Microsoft.AspNetCore.OData version 7.5.12. I realize this is an older version of the package, but I'm pinned to it for reasons out of my control.
My situation is that I have a single controller that can return a couple of different kinds of entities. Let's say I have dogs, and each dog can have walks, so I have the routes:
/api/v1/dogs
/api/v1/dogs/{dog_id}/walks
The first returns a list of dogs, and the second returns a list of walks for the dog with the id dog_id.
To build my EDM model I am doing:
private IEdmModel GetEdmModel()
{
    var builder = new ODataConventionModelBuilder();
    builder.EntitySet<Dog>("dogs");
    builder.EntitySet<Walk>("walks");
    return builder.GetEdmModel();
}
To map my routes I am doing:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    var edmModel = GetEdmModel();
    app.UseEndpoints((Action<IEndpointRouteBuilder>)(endpoints =>
    {
        endpoints.Select().Filter().Count().OrderBy().Expand().MaxTop(new int?());
        endpoints.MapODataRoute("dogs", "api/v1", edmModel);
        endpoints.MapODataRoute("walks", "api/v1/dogs/{dogId}/", edmModel);
        ....
On my controllers I have the [EnableQuery] attribute on each of the action methods:
[Route("api/v1/dogs")]
public class DogController
...

[HttpGet]
[EnableQuery]
public async Task<IActionResult> GetDogs()
{
    var dogs = await _dogRepo.GetAllAsync(); // returns IEnumerable<Dog>
    return Ok(dogs);
}

[HttpGet("{dogId}/walks")]
[EnableQuery]
public async Task<IActionResult> GetWalksAsync([FromRoute] Guid dogId)
{
    var walks = await _walkRepo.GetForDog(dogId); // returns IEnumerable<Walk>
    return Ok(walks);
}
Both of these endpoints seem to be working; I can pass OData query params like $count and $top and see them take effect. However, the payloads from these two endpoints differ and I'm not sure why. The request to /api/v1/dogs returns:
{
  "@odata.context": "http://localhost/api/v1/$metadata#dogs",
  "value": [
    {
      "Id": "bc576d83-dc33-4c31-988d-53543f4f01f0",
      "Name": "Dog1"
    },
    {
      "Id": "baec93fa-344c-11ed-a261-0242ac120002",
      "Name": "Dog2"
    }
  ]
}
which is fine. However, the request to /api/v1/dogs/{dog_id}/walks returns the fully qualified class name for Walk in the @odata.context attribute and also includes an @odata.type property that contains the fully qualified class name:
"#odata.context": "http://localhost/api/v1/$metadata#Collection(My.Project.My.Model.Namespace.Walk)",
"value": [
{
"#odata.type": "#My.Project.My.Model.Namespace.Walk",
"Id": "03ebcaf6-f94b-48e6-854b-c156a993fcc9",
"DogId": "bc576d83-dc33-4c31-988d-53543f4f01f0",
"WalkTime": "2022-09-14T08:59:57.765216-03:00"
},
I'm really unsure what I am missing here. I feel like I am configuring these similarly, and the fact that I can perform OData operations on both endpoints suggests they are both configured appropriately. I can't find a reason for this discrepancy and am not seeing anything in the documentation, especially for this version of Microsoft.AspNetCore.OData; I feel like I am missing something simple. I did see the post Why does my OData return data contain "@odata.type"?, in which the advice is to modify the client's request, but I don't think that applies to me: I am using the same client and the same headers for each endpoint request and still get differing responses. I would also prefer my API to be consistent.
If anyone could suggest what I'm doing wrong here I'd really appreciate the help. I've been banging my head against this for days. Thanks very much!
In the app I'm working on, I'm using MediatR and its pipelines to handle database interaction, some minor business logic, validation, etc.
There are a few checks, for things like access control, that I can handle in the pipeline, since I'm using a context object as described here https://jimmybogard.com/sharing-context-in-mediatr-pipelines/ to go from ASP.NET Identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check. It seems to me that validation shouldn't have side effects, so I don't want to rely on it to populate the context object. But that pushes a bunch of validation down into the MediatR handlers as they check for object existence and so on, leading to a lot of repeated code. I don't really want to query the database multiple times, since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fails I throw a ValidationException, which is caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, as well as needing to decorate the requests somehow (or use reflection) so the pipeline knows to attempt to load these objects.
Have the queries in the validator, but use some sort of cache aware repository so when the same object is queried later, it's served from the cache, and not the database. The handlers would also use this cache aware repository (Currently the handlers interact directly with the EF Core DbContext to query). This then adds the issue of cache invalidation, which I'm going to have to handle at some point, anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one size fits all for problems like this, but I would like to know if there's additional options I'm missing, or any long term maintainability issues anyone with a similar pipeline has encountered if they went with one of these listed options.
We use a MediatR IRequestPreProcessor for fetching data that we need both in the RequestHandler and in the FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(
            IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
I see nothing wrong with handling business logic validation in the handler layer.
Moreover, I do not think it is right to throw exceptions for them; as you said, that is exceptions as flow control.
Introducing a cache seems like overkill for this use case, too. The third option is the most reasonable, IMHO.
Instead of implementing an interface, you can use the nifty OneOf library and have something like:
using HandlerResponse = OneOf<Success, NotFound, ValidationResponse>;

public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        Resource resource = await _userRepository.GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API layer like:
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(new Command(request.Id));

    return response.Match<IActionResult>(
        success => Created(),
        notFound => NotFound(),
        failed => new UnprocessableEntityResult(failed.ProblemDetails));
}
I have a very loosely coupled system that takes just about any JSON payload and saves it in a Mongo collection.
There are no entities to expose as resources, only controller endpoints,
e.g.:
@RequestMapping(method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<Map<String, Object>> publish(@RequestBody Map<String, Object> jsonBody) {
    // ... save the body in Mongo
}
I still want to build a hypermedia-driven app, with links for navigation and paging.
The controller therefore implements ResourceProcessor:
public class PublicationController implements ResourceProcessor<RepositoryLinksResource> {

    // ...

    @Override
    public RepositoryLinksResource process(RepositoryLinksResource resource) {
        resource.add(linkTo(methodOn(PublicationController.class).getPublications()).withRel("publications"));
        return resource;
    }
The problem is that the processor never gets called.
Putting @EnableWebMvc on a configuration class solves it (the processor gets called), but firstly that should not be necessary, and secondly the format of the HAL links then seems broken,
e.g., they get formatted as a list:
links: [
    {
        "links": [
            {
                "rel": "self",
                "href": "http://localhost:8080/api/publications/121212"
            },
            {
                "rel": "findByStartTimeBetween",
                "href": "http://localhost:8080/api/publications/search/findStartTimeBetween?timeStart=2015-04-10T13:44:56.437&timeEnd=2015-04-10T13:44:56.439"
            }
        ]
    }
]
Are there alternatives to @EnableWebMvc so that the processor gets called?
Currently I'm running Spring Boot v1.2.3.
Well, it turns out that the answer was quite simple.
The problem was that I had static content (resources/static/index.html), which suppresses the hypermedia links from the root.
Moving the static content made everything work great.
Laravel 4: in the context of consume-your-own-API, my XyzController uses my custom InternalApiDispatcher class to create a Request object, push it onto a stack (per this consideration), then dispatch the Route:
class InternalApiDispatcher {

    // ...

    public function dispatch($resource, $method)
    {
        $this->request = \Request::create($this->apiBaseUrl . '/' . $resource, $method);
        $this->addRequestToStack($this->request);
        return \Route::dispatch($this->request);
    }
}
To start with, I'm working on a basic GET for a collection, and would like the Response content to be in the format of an Eloquent model, or whatever is ready to be passed to a View (perhaps a repository thingy later on when I get more advanced). It seems inefficient to have the framework create a JSON response and then decode it back into something else just to display it in a view. What is a simple/efficient/elegant way to direct the Request to return the Response in the format I desire, wherever I am in my code?
Also, I've looked at this post a lot, and although I'm handling query-string stuff in the BaseController (thanks to this answer to my previous question), it all seems to be getting far too convoluted and I feel I'm getting lost in the trees.
EDIT: could the following be relevant (from laravel.com/docs/templates)?
"By specifying the layout property on the controller, the view specified will be created for you and will be the assumed response that should be returned from actions."
Feel free to mark this as OT if you like, but I'm going to suggest that you might want to reconsider your problem in a different light.
If you are "consuming your own API", which is delivered over HTTP, then you should stick to that method of consumption.
For all that it might seem weird, the upside is that you could actually replace that part of your application with some other server altogether. You could run different parts of your app on different boxes, you could rewrite the HTTP part completely, etc, etc. All the benefits of "web scale".
The route you're going down is coupling the publisher and the subscriber. Now, since they are both you, or more accurately your single app, this is not necessarily a bad thing. But if you want the benefits of being able to access your own "stuff" without resorting to HTTP (or at least "HTTP-like") requests, then I wouldn't bother with faking it. You'd be better off defining a different internal non-web Service API, and calling that.
This Service could be the basis of your "web api", and in fact the whole HTTP part could probably be a fairly thin controller layer on top of the core service.
It's not a million miles away from where you are now, but instead of taking something that is meant to output HTTP requests and mangling it, make something that can output objects, and wrap that for HTTP.
Here is how I solved the problem so that there is no JSON encoding or decoding on an internal request to my API. This solution also demonstrates the use of route model binding on the API layer, and the use of a repository by the API layer as well. This is all working nicely for me.
Routes:
Route::get('user/{id}/thing', array(
    'uses' => 'path\to\Namespace\UserController@thing',
    'as' => 'user.thing'));

//...

Route::group(['prefix' => 'api/v1'], function()
{
    Route::model('thing', 'Namespace\Thing');
    Route::model('user', 'Namespace\User');

    Route::get('user/{user}/thing', [
        'uses' => 'path\to\api\Namespace\UserController@thing',
        'as' => 'api.user.thing']);

    //...
Controllers:
UI: UserController@thing
public function thing()
{
    $data = $this->dispatcher->dispatch('GET', 'api/v1/user/1/thing')
        ->getOriginalContent(); // dispatcher also sets config flag...

    // use $data in a view;
}
API: UserController@thing
public function thing($user)
{
    $rspns = $this->repo->thing($user);

    if ($this->isInternalCall()) { // refs config flag
        return $rspns;
    }

    return Response::json([
        'error' => false,
        'thing' => $rspns->toArray()
    ], 200);
}
Repo:
public function thing($user)
{
    return $user->thing;
}
Here is how I achieved it in Laravel 5.1. It requires some fundamental changes to the controllers to work.
Instead of outputting response with return response()->make($data), do return $data.
This allows the controller methods to be called from other controllers with App::make('apicontroller')->methodname(). The return will be an object/array, not JSON.
To do processing for the external API, your existing routing stays the same. You'll probably need a middleware to do some massaging of the response. Here is a basic example that camel-cases key names for the JSON.
<?php

namespace App\Http\Middleware;

use Closure;

class ResponseFormer
{
    public function handle($request, Closure $next)
    {
        $response = $next($request);

        if ($response->headers->get('content-type') == 'application/json')
        {
            if (is_array($response->original)) {
                $response->setContent(camelCaseKeys($response->original));
            }
            else if (is_object($response->original)) {
                // Laravel ORM returns objects; it is a huge time saver to handle that case here
                $response->setContent(camelCaseKeys($response->original->toArray()));
            }
        }

        return $response;
    }
}