I'd like to build a JAX-RS method that allows clients to specify a batch of operations to perform on my resources.
For example, I have a resource, Book, that exposes create, delete, and update methods. If these are the only methods on the Book resource, then when my client needs to perform a lot of updates over a lot of Book resources, it has to send a separate request for each one.
I'd like to expose a JAX-RS operation that offers this functionality: for example, a batch method that receives the operations to perform.
However, I've no idea how to do this.
I'm using JAX-RS 2.0.
Thanks for any help.
This isn't a JAX-RS-specific question by any means, but more of a design question. The JAX-RS implementation of this is relatively straightforward.
From what you're describing, I would create a batch POST endpoint that has a specific data structure for the job. You accept a JSON blob (the serialized form of this data structure) instructing the endpoint what to do. Once the endpoint receives the data, a thread is spawned off in the background to "do work" and an identifier for "the job" is returned (assuming this is possibly a long-running task). If you do return a "job id", you should also have an endpoint to "get status", which will return the current status and presumably some sort of output once the job has completed.
Example data structure you may want to accept as JSON:
{
  "job_name": "Some job name",
  "requests": [
    {
      "tasks": ["UPDATE"],
      "book_id": 122,
      "data": {
        "pages": 155,
        "last_published_date": "2015-09-01"
      }
    },
    {
      "tasks": ["DELETE"],
      "book_id": 957
    }
  ]
}
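For illustration, here is a sketch of a matching data structure Gson could deserialize that blob into (the class and field names are mine, simply mirroring the JSON above):

import java.util.List;
import java.util.Map;

// Sketch of a container matching the JSON structure above.
// Field names match the JSON keys exactly, so plain Gson mapping works.
public class BatchRequest {
    public String job_name;
    public List<BookTask> requests;

    public static class BookTask {
        public List<String> tasks;        // e.g. "UPDATE", "DELETE"
        public long book_id;
        public Map<String, Object> data;  // optional payload for updates
    }
}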
Your actual endpoint in JAXRS may look something like this:
@POST
@Path(value = "batch")
@Produces(MediaType.APPLICATION_JSON)
public String batchRequest(String batchRequest) {
    BatchRequest requestObj = null;
    if (!StringUtils.isBlank(batchRequest))
    {
        Gson gson = new Gson();
        requestObj = gson.fromJson(batchRequest, BatchRequest.class);
        // This is the class that would possibly spawn off a thread
        // and return some sort of details about the job
        JobDetails jobDetails = JobRunner.run(requestObj);
        return gson.toJson(jobDetails);
    }
    return "error";
}
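To complement it, a rough sketch of the "get status" endpoint mentioned above (JobRunner.getStatus is a hypothetical lookup, not an existing API):

@GET
@Path("batch/{jobId}")
@Produces(MediaType.APPLICATION_JSON)
public String batchStatus(@PathParam("jobId") String jobId) {
    Gson gson = new Gson();
    // Hypothetical lookup of the job's current state
    // (and its output, once the job has completed)
    JobDetails jobDetails = JobRunner.getStatus(jobId);
    return gson.toJson(jobDetails);
}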
Hopefully this makes sense and helps out. Please feel free to reply with additional questions and I'll try to help as much as I can!
I'm very new to the Spring Reactor project.
Until now I've only used Mono from WebClient .bodyToMono() steps, and mostly block() those Monos or .zip() multiple of them.
But this time I have a use case where I need to asynchronously call methods in multiple service classes, and those service classes each call multiple backend APIs.
I understand Project Reactor doesn't provide an asynchronous flow by default, but we can make the publishing and/or subscribing happen on a different thread to make the code asynchronous, and that's what I am trying to do.
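What I have in mind is something like this minimal sketch (someBlockingCall is just a placeholder for one of my service methods):

// Wrap a blocking call and subscribe to it on a worker thread,
// so the calling thread is not blocked.
Mono<String> async = Mono.fromCallable(() -> someBlockingCall())
        .subscribeOn(Schedulers.boundedElastic());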
I tried reading the Reactor reference documentation, but it's still not clear to me.
For the purpose of this question, I'm making up an imaginary scenario that is a little closer to my use case.
Let's assume we need to get search responses from Google for some texts searched under images.
Example Scenario
Let's have an endpoint in a Controller
This endpoint accepts the following object in the request body:
MultimediaSearchRequest{
Set<String> searchTexts; //many texts.
boolean isAddContent;
boolean isAddMetadata;
}
In the controller, I'll break this single request object into multiple objects of the following type:
MultimediaSingleSearchRequest{
String searchText;
boolean isAddContent;
boolean isAddMetadata;
}
This Controller talks to 3 Service classes.
Each of the service classes has a method searchSingleItem.
Each service class uses a few different backend Apis, but finally combines the results of those APIs responses into the same type of response class, let's call it MultimediaSearchResult.
class JpegSearchHandleService {
public MultimediaSearchResult searchSingleItem
(MultimediaSingleSearchRequest req){
return combineAllImageData(
getNameApi(req),
getImageUrlApi(req),
getContentApi(req) // don't call if req.isAddContent is false
)
}
}
class GifSearchHandleService {
public MultimediaSearchResult searchSingleItem
(MultimediaSingleSearchRequest req){
return combineAllImageData(
getNameApi(req),
gitPartApi(req),
someRandomApi(req),
soemOtherRandomApi(req)
)
}
}
class VideoSearchHandleService {
public MultimediaSearchResult searchSingleItem
(MultimediaSingleSearchRequest req){
return combineAllImageData(
getNameApi(req),
codecApi(req),
commentsApi(req),
anotherApi(req)
)
}
}
In the end, my controller returns the response as a List of MultimediaSearchResult
class MultimediaSearchResponse {
    List<MultimediaSearchResult> results;
}
If I want to do all of this asynchronously using Project Reactor, how do I achieve it?
For example, calling the searchSingleItem method in each service for each searchText asynchronously,
and even within the services, calling each backend API asynchronously (I'm already using WebClient and converting responses with bodyToMono for the backend API calls).
First, I will outline a solution for the upper "layer" of your scenario.
The code (a simple simulation of the scenario):
public class ChainingAsyncCallsInSpring {
public Mono<MultimediaSearchResponse> controllerEndpoint(MultimediaSearchRequest req) {
return Flux.fromIterable(req.getSearchTexts())
.map(searchText -> new MultimediaSingleSearchRequest(searchText, req.isAddContent(), req.isAddMetadata()))
.flatMap(multimediaSingleSearchRequest -> Flux.merge(
classOneSearchSingleItem(multimediaSingleSearchRequest),
classTwoSearchSingleItem(multimediaSingleSearchRequest),
classThreeSearchSingleItem(multimediaSingleSearchRequest)
))
.collectList()
.map(MultimediaSearchResponse::new);
}
private Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
return Mono.just(new MultimediaSearchResult("1"));
}
private Mono<MultimediaSearchResult> classTwoSearchSingleItem(MultimediaSingleSearchRequest req) {
return Mono.just(new MultimediaSearchResult("2"));
}
private Mono<MultimediaSearchResult> classThreeSearchSingleItem(MultimediaSingleSearchRequest req) {
return Mono.just(new MultimediaSearchResult("3"));
}
}
Now, some rationale.
In the controllerEndpoint() function, first we create a Flux that will emit every single searchText from the request. We map these to MultimediaSingleSearchRequest objects, so that the services can consume them with the additional metadata that was provided with the original request.
Then, Flux::flatMap the created MultimediaSingleSearchRequest objects into a merged Flux, which (as opposed to Flux::concat) ensures that all three publishers are subscribed to eagerly, i.e. they don't wait for one another. It works best in this exact scenario, where several independent publishers need to be subscribed to at the same time and their order is not important.
At this point, we have a Flux<MultimediaSearchResult>.
We continue with Flux::collectList, thus collecting the emitted values from all publishers (we could also use Flux::reduceWith here).
As a result, we now have a Mono<List<MultimediaSearchResult>>, which can easily be mapped to a Mono<MultimediaSearchResponse>.
The results list of the MultimediaSearchResponse will have 3 items for each searchText in the original request.
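For completeness, here is a sketch of how the Flux::reduceWith alternative mentioned above could look (the same collection step written by hand; needs java.util.ArrayList):

// Equivalent to collectList() + map: accumulate the results into a list ourselves.
private Mono<MultimediaSearchResponse> collectResults(Flux<MultimediaSearchResult> results) {
    return results
            .reduceWith(() -> new ArrayList<MultimediaSearchResult>(), (list, item) -> {
                list.add(item);
                return list;
            })
            .map(MultimediaSearchResponse::new);
}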
Hope this was helpful!
Edit
Extending the answer with a point of view from the service classes as well. Assuming that each inner (optionally skipped) call returns a different type of result, this would be one way of going about it:
public class MultimediaSearchResult {
private Details details;
private ContentDetails content;
private MetadataDetails metadata;
}
public Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
return Mono.zip(getSomeDetails(req), getContentDetails(req), getMetadataDetails(req))
.map(tuple3 -> new MultimediaSearchResult(
tuple3.getT1(),
tuple3.getT2().orElse(null),
tuple3.getT3().orElse(null)
)
);
}
// Always wanted
private Mono<Details> getSomeDetails(MultimediaSingleSearchRequest req) {
return Mono.just(new Details("details")); // api call etc.
}
// Wanted if isAddContent is true
private Mono<Optional<ContentDetails>> getContentDetails(MultimediaSingleSearchRequest req) {
return req.isAddContent()
? Mono.just(Optional.of(new ContentDetails("content-details"))) // api call etc.
: Mono.just(Optional.empty());
}
// Wanted if isAddMetadata is true
private Mono<Optional<MetadataDetails>> getMetadataDetails(MultimediaSingleSearchRequest req) {
return req.isAddMetadata()
? Mono.just(Optional.of(new MetadataDetails("metadata-details"))) // api call etc.
: Mono.just(Optional.empty());
}
Optionals are used for the requests that might be skipped, since Mono::zip will fail if any of the zipped publishers emits an empty value.
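As a variation (my own preference, producing the same result), you can keep the Optional wrapping at the edge of the method and let defaultIfEmpty supply the fallback when the underlying publisher completes empty. Here, fetchContentDetails is a hypothetical WebClient-based call:

// Wanted if isAddContent is true; fetchContentDetails may complete empty.
private Mono<Optional<ContentDetails>> getContentDetails(MultimediaSingleSearchRequest req) {
    if (!req.isAddContent()) {
        return Mono.just(Optional.empty());
    }
    return fetchContentDetails(req)        // Mono<ContentDetails>
            .map(Optional::of)
            .defaultIfEmpty(Optional.empty());
}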
If the results of each inner call extend the same base class or are the same wrapped return type, then the original answer applies as to how they can be combined (Flux::merge etc.).
I have an AWS API that proxies Lambda functions. I currently use different endpoints with separate Lambda functions:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
Managing all the endpoints and functions becomes cumbersome. Is there any disadvantage to using a single endpoint mapped to one Lambda function that decides what to do based on the query string?
api.com/exec?func=getData --> exec --> if (params.func === 'getData') { ... }
It's perfectly valid to map multiple methods to a single lambda function and many people are using this methodology today as opposed to creating an api gateway resource and lambda function for each discrete method.
You might consider proxying all requests to a single function. Take a look at the following documentation on creating an API Gateway => Lambda proxy integration:
http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html
Their example is great here. A request like the following:
POST /testStage/hello/world?name=me HTTP/1.1
Host: gy415nuibc.execute-api.us-east-1.amazonaws.com
Content-Type: application/json
headerName: headerValue
{
"a": 1
}
Will wind up sending the following event data to your AWS Lambda function:
{
  "message": "Hello me!",
  "input": {
    "resource": "/{proxy+}",
    "path": "/hello/world",
    "httpMethod": "POST",
    "headers": {
      "Accept": "*/*",
      "Accept-Encoding": "gzip, deflate",
      "cache-control": "no-cache",
      "CloudFront-Forwarded-Proto": "https",
      "CloudFront-Is-Desktop-Viewer": "true",
      "CloudFront-Is-Mobile-Viewer": "false",
      "CloudFront-Is-SmartTV-Viewer": "false",
      "CloudFront-Is-Tablet-Viewer": "false",
      "CloudFront-Viewer-Country": "US",
      "Content-Type": "application/json",
      "headerName": "headerValue",
      "Host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
      "Postman-Token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
      "User-Agent": "PostmanRuntime/2.4.5",
      "Via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
      "X-Amz-Cf-Id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
      "X-Forwarded-For": "54.240.196.186, 54.182.214.83",
      "X-Forwarded-Port": "443",
      "X-Forwarded-Proto": "https"
    },
    "queryStringParameters": {
      "name": "me"
    },
    "pathParameters": {
      "proxy": "hello/world"
    },
    "stageVariables": {
      "stageVariableName": "stageVariableValue"
    },
    "requestContext": {
      "accountId": "12345678912",
      "resourceId": "roq9wj",
      "stage": "testStage",
      "requestId": "deef4878-7910-11e6-8f14-25afc3e9ae33",
      "identity": {
        "cognitoIdentityPoolId": null,
        "accountId": null,
        "cognitoIdentityId": null,
        "caller": null,
        "apiKey": null,
        "sourceIp": "192.168.196.186",
        "cognitoAuthenticationType": null,
        "cognitoAuthenticationProvider": null,
        "userArn": null,
        "userAgent": "PostmanRuntime/2.4.5",
        "user": null
      },
      "resourcePath": "/{proxy+}",
      "httpMethod": "POST",
      "apiId": "gy415nuibc"
    },
    "body": "{\r\n\t\"a\": 1\r\n}",
    "isBase64Encoded": false
  }
}
Now you have access to all headers, URL params, body etc., and you could use that to handle requests differently in a single Lambda function (basically implementing your own routing).
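For illustration, a minimal sketch of such routing with the aws-lambda-java-events types (the route names come from the question; the handler itself is my own sketch, not a definitive implementation):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// Hand-rolled routing inside a single proxied Lambda function.
public class RouterHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        String route = event.getHttpMethod() + " " + event.getPath();
        switch (route) {
            case "GET /getData":
                return respond(200, "...data...");
            case "POST /addData":
                return respond(200, "...added...");
            case "POST /signUp":
                return respond(200, "...signed up...");
            default:
                return respond(404, "Not found");
        }
    }

    private APIGatewayProxyResponseEvent respond(int status, String body) {
        return new APIGatewayProxyResponseEvent().withStatusCode(status).withBody(body);
    }
}

Note how quickly this starts to look like a miniature web framework, which is exactly the trade-off discussed below.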
In my opinion, there are some advantages and disadvantages to this approach. Many of them depend on your specific use case:
Deployment: if each lambda function is discrete then you can deploy them independently, which might reduce the risk from code changes (microservices strategy). Conversely you may find that needing to deploy functions separately adds complexity and is burdensome.
Self Description: API Gateway's interface makes it extremely intuitive to see the layout of your RESTful endpoints -- the nouns and verbs are all visible at a glance. Implementing your own routing could come at the expense of this visibility.
Lambda sizing and limits: If you proxy all requests to a single function, then you'll wind up needing to choose an instance size, timeout etc. that will accommodate all of your RESTful endpoints. If you create discrete functions, then you can more carefully choose the memory footprint, timeout, dead-letter behavior etc. that best meets the needs of the specific invocation.
I would have commented to just add a couple of points to Dave Maple's great answer but I don't have enough reputation points yet so I'll add the comments here.
I started down the path of multiple endpoints pointing to one Lambda function that could treat each endpoint differently by accessing the 'resource' property of the event. After trying it, I have now separated them into separate functions, for the reasons that Dave suggested, plus:
I find it easier to go through logs and monitors when the functions are separated.
One nuance that as a beginner I didn't pick up on at first is that you can have one code base and deploy the exact same code as multiple Lambda functions. This allows you to have the benefits of function separation and the benefits of a consolidated approach in your code base.
You can use the AWS CLI to automate tasks across the multiple functions to reduce/eliminate the downside of managing separate functions. For example, I have a script that updates 10 functions with the same code.
I've been building 5-6 microservices with Lambda and API Gateway, and have been through several tries, failures, and successes.
In short, from my experience, it's better to delegate all the API calls to Lambda with just one API Gateway wildcard mapping, such as
/api/{proxy+} -> Lambda
If you've ever used a framework like Grape, you know that when making APIs, features like
"middleware"
"global exception handling"
"cascade routing"
"parameter validation"
are really crucial.
As your API grows, it's almost impossible to manage all the routes with API Gateway mappings, and API Gateway doesn't support any of those features either.
Furthermore, it's not really practical to break out a Lambda for each endpoint, for either development or deployment.
From your example:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
Imagine you have a data ORM, user authentication logic, and a common view file (such as data.erb). Then how are you going to share those?
You might break it up like:
api/auth/{proxy+} -> AuthServiceLambda
api/data/{proxy+} -> DataServiceLambda
but not per endpoint. You might look up the concept of microservices and best practices for how to split up a service.
For those web-framework-like features, check out this web framework we just built for Lambda, since I needed it at my company.
A similar scenario is addressed in the official AWS blog post named Best practices for organizing larger serverless applications.
The general recommendation is to split "monolithic lambdas" into separate lambdas and move the routing to the API Gateway.
This is what the blog writes about the "monolithic lambda" approach:
This approach is generally unnecessary, and it’s often better to take
advantage of the native routing functionality available in API
Gateway.
...
API Gateway is also capable of validating parameters, reducing the
need for checking parameters with custom code. It can also provide
protection against unauthorized access, and a range of other features
more suited to be handled at the service level.
The blog post illustrates the change with before-and-after architecture diagrams: the routing logic moves out of the single Lambda function and into API Gateway, which maps each route to its own function.
The responsibility of mapping API requests to Lambda in AWS is handled through a Gateway's API specification.
Mapping of URL paths and HTTP methods as well as data validation SHOULD be left up to the Gateway. There is also the question of permissions and API scope; you'll not be able to leverage API scopes and IAM permission levels in a normal way.
In terms of coding, to replicate this mechanism inside of a Lambda handler is an anti-pattern. Going down that route one will soon end up with something that looks like the routing for a node express server, not a Lambda function.
After having set up 50+ Lambdas behind API Gateway I can say that
function handlers should be kept as dumb as possible, allowing them to be reusable independently of the context from which they're being invoked.
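As an illustration of that principle, here is a sketch (all names are hypothetical) where the handler only unpacks the event and delegates to a plain class that can be reused and unit-tested outside of Lambda entirely:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// The handler is a thin adapter; the business logic knows nothing about AWS.
public class SignUpHandler implements RequestHandler<Map<String, String>, String> {

    private final SignUpService service = new SignUpService();

    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        return service.signUp(input.get("email"));
    }

    // Hypothetical service class, free of any Lambda-specific types.
    static class SignUpService {
        String signUp(String email) {
            // ... validation, persistence, etc. ...
            return "signed up: " + email;
        }
    }
}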
As far as I know, AWS allows only one handler per Lambda function. That’s why I have created a little "routing" mechanism with Java Generics (for stronger type checks at compile time). In the following example you can call multiple methods and pass different object types to the Lambda and back via one Lambda handler:
Lambda class with handler:
public class GenericLambda implements RequestHandler<LambdaRequest<?>, LambdaResponse<?>> {
@Override
public LambdaResponse<?> handleRequest(LambdaRequest<?> lambdaRequest, Context context) {
switch (lambdaRequest.getMethod()) {
case WARMUP:
context.getLogger().log("Warmup");
LambdaResponse<String> lambdaResponseWarmup = new LambdaResponse<String>();
lambdaResponseWarmup.setResponseStatus(LambdaResponse.ResponseStatus.IN_PROGRESS);
return lambdaResponseWarmup;
case CREATE:
User user = (User)lambdaRequest.getData();
context.getLogger().log("insert user with name: " + user.getName()); //insert user in db
LambdaResponse<String> lambdaResponseCreate = new LambdaResponse<String>();
lambdaResponseCreate.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
return lambdaResponseCreate;
case READ:
context.getLogger().log("read user with id: " + (Integer)lambdaRequest.getData());
user = new User(); //create user object for test, instead of read from db
user.setName("name");
LambdaResponse<User> lambdaResponseRead = new LambdaResponse<User>();
lambdaResponseRead.setData(user);
lambdaResponseRead.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
return lambdaResponseRead;
default:
LambdaResponse<String> lambdaResponseIgnore = new LambdaResponse<String>();
lambdaResponseIgnore.setResponseStatus(LambdaResponse.ResponseStatus.IGNORED);
return lambdaResponseIgnore;
}
}
}
LambdaRequest class:
public class LambdaRequest<T> {
private Method method;
private T data;
private int languageID;
public static enum Method {
WARMUP, CREATE, READ, UPDATE, DELETE
}
public LambdaRequest(){
}
public Method getMethod() {
return method;
}
public void setMethod(Method method) {
this.method = method;
}
public T getData() {
return data;
}
public void setData(T data) {
this.data = data;
}
public int getLanguageID() {
return languageID;
}
public void setLanguageID(int languageID) {
this.languageID = languageID;
}
}
LambdaResponse class:
public class LambdaResponse<T> {
private ResponseStatus responseStatus;
private T data;
private String errorMessage;
public LambdaResponse(){
}
public static enum ResponseStatus {
IGNORED, IN_PROGRESS, COMPLETE, ERROR, COMPLETE_DUPLICATE
}
public ResponseStatus getResponseStatus() {
return responseStatus;
}
public void setResponseStatus(ResponseStatus responseStatus) {
this.responseStatus = responseStatus;
}
public T getData() {
return data;
}
public void setData(T data) {
this.data = data;
}
public String getErrorMessage() {
return errorMessage;
}
public void setErrorMessage(String errorMessage) {
this.errorMessage = errorMessage;
}
}
Example POJO User class:
public class User {
private String name;
public User() {
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
JUnit test method:
@Test
public void GenericLambda() {
GenericLambda handler = new GenericLambda();
Context ctx = createContext();
//test WARMUP
LambdaRequest<String> lambdaRequestWarmup = new LambdaRequest<String>();
lambdaRequestWarmup.setMethod(LambdaRequest.Method.WARMUP);
LambdaResponse<String> lambdaResponseWarmup = (LambdaResponse<String>) handler.handleRequest(lambdaRequestWarmup, ctx);
//test READ user
LambdaRequest<Integer> lambdaRequestRead = new LambdaRequest<Integer>();
lambdaRequestRead.setData(1); //db id
lambdaRequestRead.setMethod(LambdaRequest.Method.READ);
LambdaResponse<User> lambdaResponseRead = (LambdaResponse<User>) handler.handleRequest(lambdaRequestRead, ctx);
}
PS: if you have deserialization problems (LinkedTreeMap cannot be cast to ...) in your Lambda function (because of the generics/Gson), use the following statement:
YourObject yourObject = (YourObject)convertLambdaRequestData2Object(lambdaRequest, YourObject.class);
Method:
private <T> Object convertLambdaRequestData2Object(LambdaRequest<?> lambdaRequest, Class<T> clazz) {
Gson gson = new Gson();
String json = gson.toJson(lambdaRequest.getData());
return gson.fromJson(json, clazz);
}
The way I see it, choosing a single vs. multiple APIs is a function of the following considerations:
Security: I think this is the biggest challenge of having a single API structure. It may be possible to have a different security profile for different parts of the requirement.
Think of the microservice model from a business perspective:
The whole purpose of any API should be serving some requests, hence it must be well understood and easy to use. So related APIs should be combined. For example, if you have a mobile client and it requires 10 things to be pulled in and out of the DB, it makes sense to have those 10 endpoints in a single API.
But this should be within reason, and should be seen in the context of the overall solution design. For example, if you design a payroll product, you may think of having separate modules for leave management and user details management. Even if they are often used by a single client, they should still be different APIs, because their business meanings are different.
Reusability: Applies to both code and functionality reusability. Code reusability is an easier problem to solve, i.e. build common modules for shared requirements and package them as libraries.
Functionality reusability is harder to solve. In my mind, most of the cases can be solved by redesigning the way endpoints/functions are laid out, because if you need duplication of functionality that means your initial design is not detailed enough.
Just found a link in another SO post which summarizes this better.
I'm working on a Web API RESTful service that on a request needs to perform a task. We're using Hangfire to execute that task as a job, and on failure, will attempt to retry the job up to 10 times.
If the job eventually succeeds I want to run an additional job (to send an event to another service). If the job fails even after all of the retry attempts, I want to run a different additional job (to send a failure event to another service).
However, I can't figure out how to do this. I've created the following JobFilterAttribute:
public class HandleEventsAttribute : JobFilterAttribute, IElectStateFilter
{
public IBackgroundJobClient BackgroundJobClient { get; set; }
public void OnStateElection(ElectStateContext context)
{
var failedState = context.CandidateState as FailedState;
if (failedState != null)
{
BackgroundJobClient.Enqueue<MyJobClass>(x => x.RunJob());
}
}
}
The one problem I'm having is injecting the IBackgroundJobClient into this attribute. I can't pass it as a property to the attribute (I get a "Cannot access non-static field 'backgroundJobClient' in static context" error). We're using autofac for dependency injection, and I tried figuring out how to use property injection, but I'm at a loss. All of this leads me to believe I may be on the wrong track.
I'd think it would be a fairly common pattern to run some additional cleanup code if a Hangfire job fails. How do most people do this?
Thanks for the help. Let me know if there's any additional details I can provide.
Hangfire can build execution chains. If you want to schedule the next job after the first one succeeds, you need to use ContinueWith(string parentId, Expression<Action> methodCall, JobContinuationOptions options) with JobContinuationOptions.OnlyOnSucceededState to run it only after success.
But you can create a Hangfire extension like a JobExecutor and run tasks inside it to get more flexibility.
Something like this:
public static JobResult<T> Enqueue<T>(Expression<Action> a, string name)
{
var exprInfo = GetExpressionInfo(a);
Guid jGuid = Guid.NewGuid();
var jobId = BackgroundJob.Enqueue(() => JobExecutor.Execute(jGuid, exprInfo.Method.DeclaringType.AssemblyQualifiedName, exprInfo.Method.Name, exprInfo.Parameters, exprInfo.ParameterTypes));
JobResult<T> result = new JobResult<T>(jobId, name, jGuid, 0, default(T));
JobRepository.WriteJobState(new JobResult<T>(jobId, name, jGuid, 0, default(T)));
return result;
}
You can find more detailed information here: https://indexoutofrange.com/Don%27t-do-it-now!-Part-5.-Hangfire-job-continuation,-ContinueWith/
I haven't been able to verify this will work, but BackgroundJobClient has no static methods, so you would need a reference to an instance of it.
When I enqueue tasks, I use the static Hangfire.BackgroundJob.Enqueue which should work without a reference to the JobClient instance.
Steve
Laravel 4: In the context of consume-your-own-API, my XyzController uses my custom InternalApiDispatcher class to create a Request object, push it onto a stack (per this consideration), then dispatch the Route:
class InternalApiDispatcher {
    // ...
    public function dispatch($resource, $method)
    {
        $this->request = \Request::create($this->apiBaseUrl . '/' . $resource, $method);
        $this->addRequestToStack($this->request);
        return \Route::dispatch($this->request);
    }
}
To start with, I'm working on a basic GET for a collection, and would like the Response content to be in the format of an Eloquent model, or whatever is ready to be passed to a View (perhaps a repository thingy later on when I get more advanced). It seems inefficient to have the framework create a JSON response that I then decode back into something else to display in a view. What is a simple/efficient/elegant way to direct the Request to return the Response in the format I desire, wherever I am in my code?
Also, I've looked at this post a lot, and although I'm handling query string stuff in the BaseController (thanks to this answer to my previous question), it all seems to be getting far too convoluted and I feel I'm getting lost in the trees.
EDIT: could the following be relevant (from laravel.com/docs/templates)?
"By specifying the layout property on the controller, the view specified will be created for you and will be the assumed response that should be returned from actions."
Feel free to mark this as OT if you like, but I'm going to suggest that you might want to reconsider your problem in a different light.
If you are "consuming your own API", which is delivered over HTTP, then you should stick to that method of consumption.
For all that it might seem weird, the upside is that you could actually replace that part of your application with some other server altogether. You could run different parts of your app on different boxes, you could rewrite the HTTP part completely, etc, etc. All the benefits of "web scale".
The route you're going down is coupling the publisher and the subscriber. Now, since they are both you, or more accurately your single app, this is not necessarily a bad thing. But if you want the benefits of being able to access your own "stuff" without resorting to HTTP (or at least "HTTP-like") requests, then I wouldn't bother with faking it. You'd be better off defining a different internal non-web Service API, and calling that.
This Service could be the basis of your "web api", and in fact the whole HTTP part could probably be a fairly thin controller layer on top of the core service.
It's not a million miles away from where you are now, but instead of taking something that is meant to output HTTP requests and mangling it, make something that can output objects, and wrap that for HTTP.
Here is how I solved the problem so that there is no json encoding or decoding on an internal request to my API. This solution also demonstrates use of route model binding on the API layer, and use of a repository by the API layer as well. This is all working nicely for me.
Routes:
Route::get('user/{id}/thing', array(
    'uses' => 'path\to\Namespace\UserController@thing',
    'as' => 'user.thing'));
// ...
Route::group(['prefix' => 'api/v1'], function()
{
    Route::model('thing', 'Namespace\Thing');
    Route::model('user', 'Namespace\User');
    Route::get('user/{user}/thing', [
        'uses' => 'path\to\api\Namespace\UserController@thing',
        'as' => 'api.user.thing']);
    // ...
});
Controllers:
UI: UserController@thing
public function thing()
{
    $data = $this->dispatcher->dispatch('GET', "api/v1/user/1/thing")
        ->getOriginalContent(); // dispatcher also sets config flag...
    // use $data in a view;
}
API: UserController@thing
public function thing($user)
{
    $rspns = $this->repo->thing($user);
    if ($this->isInternalCall()) { // refs config flag
        return $rspns;
    }
    return Response::json([
        'error' => false,
        'thing' => $rspns->toArray()
    ], 200);
}
Repo:
public function thing($user)
{
return $user->thing;
}
Here is how I achieved it in Laravel 5.1. It requires some fundamental changes to the controllers to work.
Instead of outputting response with return response()->make($data), do return $data.
This allows the controller methods to be called from other controllers with App::make('apicontroller')->methodname(). The return will be object/array and not a JSON.
To do processing for the external API, your existing routing stays the same. You probably need a middleware to do some massaging to the response. Here is a basic example that camel cases key names for the JSON.
<?php
namespace App\Http\Middleware;
use Closure;
class ResponseFormer
{
public function handle($request, Closure $next)
{
$response = $next($request);
if($response->headers->get('content-type') == 'application/json')
{
if (is_array($response->original)) {
$response->setContent(camelCaseKeys($response->original));
}
else if (is_object($response->original)) {
//laravel orm returns objects, it is a huge time saver to handle the case here
$response->setContent(camelCaseKeys($response->original->toArray()));
}
}
return $response;
}
}
Updated: 09/02/2009 - Revised question, provided better examples, added bounty.
Hi,
I'm building a PHP application using the data mapper pattern between the database and the entities (domain objects). My question is:
What is the best way to encapsulate a commonly performed task?
For example, one common task is retrieving one or more site entities from the site mapper, and their associated (home) page entities from the page mapper. At present, I would do that like this:
$siteMapper = new Site_Mapper();
$site = $siteMapper->findById(1);
$pageMapper = new Page_Mapper();
$site->addPage($pageMapper->findHome($site->getId()));
Now that's a fairly trivial example, but it gets more complicated in reality, as each site also has an associated locale, and the page actually has multiple revisions (although for the purposes of this task I'd only be interested in the most recent one).
I'm going to need to do this (get the site and associated home page, locale, etc.) in multiple places within my application, and I can't think of the best way/place to encapsulate this task so that I don't have to repeat it all over the place. Ideally I'd like to end up with something like this:
$someObject = new SomeClass();
$site = $someObject->someMethod(1); // or
$sites = $someObject->someOtherMethod();
Where the resulting site entities already have their associated entities created and ready for use.
The same problem occurs when saving these objects back. Say I have a site entity and associated home page entity, and they've both been modified, I have to do something like this:
$siteMapper->save($site);
$pageMapper->save($site->getHomePage());
Again, trivial, but this example is simplified. Duplication of code still applies.
In my mind it makes sense to have some sort of central object that could take care of:
Retrieving a site (or sites) and all nessessary associated entities
Creating new site entities with new associated entities
Taking a site (or sites) and saving it and all associated entities (if they've changed)
So back to my question, what should this object be?
The existing mapper object?
Something based on the repository pattern?*
Something based on the unit of work pattern?*
Something else?
* I don't fully understand either of these, as you can probably guess.
Is there a standard way to approach this problem, and could someone provide a short description of how they'd implement it? I'm not looking for anyone to provide a fully working implementation, just the theory.
Thanks,
Jack
Using the repository/service pattern, your Repository classes would provide a simple CRUD interface for each of your entities, then the Service classes would be an additional layer that performs additional logic like attaching entity dependencies. The rest of your app then only utilizes the Services. Your example might look like this:
$site = $siteService->getSiteById(1); // or
$sites = $siteService->getAllSites();
Then inside the SiteService class you would have something like this:
function getSiteById($id) {
$site = $siteRepository->getSiteById($id);
foreach ($pageRepository->getPagesBySiteId($site->id) as $page)
{
$site->pages[] = $page;
}
return $site;
}
I don't know PHP that well so please excuse if there is something wrong syntactically.
[Edit: this entry attempts to address the fact that it is oftentimes easier to write custom code to directly deal with a situation than it is to try to fit the problem into a pattern.]
Patterns are nice in concept, but they don't always "map". After years of high end PHP development, we have settled on a very direct way of handling such matters. Consider this:
File: Site.php
class Site
{
    public static function Select($ID)
    {
        // Ensure current user has access to ID
        // Look up and return data
    }

    public static function Insert($aData)
    {
        // Validate $aData
        // In the event of errors, raise a ValidationError($ErrorList)
        // Do whatever it is you are doing
        // Return new ID
    }

    public static function Update($ID, $aData)
    {
        // Validate $aData
        // In the event of errors, raise a ValidationError($ErrorList)
        // Update necessary fields
    }
}
Then, in order to call it (from anywhere), just run:
$aData = Site::Select(123);
Site::Update(123, array('FirstName' => 'New First Name'));
$ID = Site::Insert(array(...));
One thing to keep in mind about OO programming and PHP... PHP does not keep "state" between requests, so creating an object instance just to have it immediately destroyed does not often make sense.
I'd probably start by extracting the common task to a helper method somewhere, then waiting to see what the design calls for. It feels like it's too early to tell.
What would you name this method? The name usually hints at where the method belongs.
class Page {
public $id, $title, $url;
public function __construct($id=false) {
$this->id = $id;
}
public function save() {
// ...
}
}
class Site {
public $id = '';
public $pages = array();
function __construct($id) {
$this->id = $id;
foreach ($this->getPages() as $page_id) {
$this->pages[] = new Page($page_id);
}
}
private function getPages() {
// ...
}
public function addPage($url) {
$page = ($this->pages[] = new Page());
$page->url = $url;
return $page;
}
public function save() {
foreach ($this->pages as $page) {
$page->save();
}
// ..
}
}
$site = new Site($id);
$page = $site->addPage('/');
$page->title = 'Home';
$site->save();
Make your Site object an Aggregate Root to encapsulate the complex association and ensure consistency.
Then create a SiteRepository that has the responsibility of retrieving the Site aggregate and populating its children (including all Pages).
You will not need a separate PageRepository (assuming that you don't make Page a separate Aggregate Root), and your SiteRepository should have the responsibility of retrieving the Page objects as well (in your case by using your existing Mappers).
So:
$siteRepository = new SiteRepository($myDbConfig);
$site = $siteRepository->findById(1); // will have Page children attached
And then the findById method would be responsible for also finding all Page children of the Site. This will have a similar structure to the answer CodeMonkey1 gave; however, I believe you will benefit more by using the Aggregate and Repository patterns rather than creating a specific Service for this task. Any other retrieval/querying/updating of the Site aggregate, including any of its child objects, would be done through the same SiteRepository.
Edit: Here's a short DDD Guide to help you with the terminology, although I'd really recommend reading Evans if you want the whole picture.