Pipeline design - WebLogic

I have a question about pipeline design.
Can you please help me understand what we should be doing inside the request pipeline and the response pipeline?
Say I am designing a proxy service that calls two separate business services and sends the response back to the caller.
The proxy service will have Assign, Service Callout, Routing, and Reply actions. I can do everything inside the request pipeline, and I don't understand the purpose of the response pipeline.
Can anyone help me understand what we should be doing inside the response pipeline?
Thank you

A typical scenario in an OSB pipeline is to validate, transform, and enrich the request in the Request Pipeline; route to a business service (which appears after the pipeline pair); and then validate, transform, and enrich the response in the Response Pipeline. In that case, after the Pipeline Pair node, your pipeline will have a Route node.
Often people use a Service Callout instead of a Route node. In that case, nothing (such as a Route node) appears after the pipeline pair, and you can do everything in the Request Pipeline.
You should understand the difference between a Route node and a Service Callout: routing to a business service uses non-blocking I/O, whereas a Service Callout blocks a thread, so a Route node is the much more scalable option.

Related

How do I enrich LogContext with a tracking id when using a hosted service and ServiceScopeFactory

We have a .NET 6 WebApi and we are using Serilog. For regular HTTP requests, we use custom middleware to enrich the log context with a custom request tracing id. The middleware inspects the HTTP request headers for a tracing header; if it is not present, the middleware adds it and then pushes it onto the LogContext. Within that same request, whenever any service makes a logging call, the same tracing id is logged, allowing us to tie together log entries across service calls.
But we also have a hosted service under the same WebApi, which reads requests from a Channel and processes them. Within the hosted service, we use IServiceScopeFactory and resolve the required services within that scope. I am wondering how I can apply a concept similar to the LogContext enrichment middleware, so that I can add a tracing id linking all log entries across services without having to explicitly add that id in each service's log statements.
I saw that there is a Serilog.Enrichers.CorrelationId package, but it depends on HttpContext, and in a hosted service there is no HttpContext. So what options are available to link the different log entries written within the same scope? To add to this, the services called from the HTTP request action methods and from the hosted service are the same.
Thanks!
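One possible approach, sketched below under stated assumptions, is to push a tracing id onto Serilog's `LogContext` around each unit of work the hosted service pulls off the channel, mirroring what the middleware does for HTTP requests. The `WorkItem`, `IWorkItemHandler`, and `TracingId` names are illustrative placeholders, not part of any library API.

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Serilog.Context;

public record WorkItem(string? TracingId);

public interface IWorkItemHandler
{
    Task HandleAsync(WorkItem item, CancellationToken ct);
}

// Sketch: a hosted service that pushes a tracing id onto the Serilog
// LogContext for each item it reads from the channel, so every log call
// made by services resolved inside the scope carries the same id.
public class ChannelWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly ChannelReader<WorkItem> _reader;

    public ChannelWorker(IServiceScopeFactory scopeFactory,
                         ChannelReader<WorkItem> reader)
    {
        _scopeFactory = scopeFactory;
        _reader = reader;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var item in _reader.ReadAllAsync(stoppingToken))
        {
            // Reuse an id stamped on the work item by the HTTP side, if
            // present; otherwise mint a fresh one for this unit of work.
            var tracingId = item.TracingId ?? Guid.NewGuid().ToString("N");

            // Everything logged while this disposable is alive (on this
            // async flow) gets the TracingId property attached.
            using (LogContext.PushProperty("TracingId", tracingId))
            using (var scope = _scopeFactory.CreateScope())
            {
                var handler = scope.ServiceProvider
                                   .GetRequiredService<IWorkItemHandler>();
                await handler.HandleAsync(item, stoppingToken);
            }
        }
    }
}
```

This relies on `Enrich.FromLogContext()` being configured on the logger, which your HTTP middleware presumably already requires. To link background entries back to the originating HTTP request, carry the tracing id on the channel message itself when the HTTP side enqueues it, as the `TracingId` property above assumes.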

Flowable task vs Http Task

I want to understand the main difference between a Flowable task and an HTTP task. Why should we prefer one over the other, and when is it better to use a Flowable task versus an HTTP task?
What exactly do you mean by Flowable task? If you mean the Service Task, then the HTTP Task is just an extension of the Service Task with specific functionality provided out of the box for easily sending HTTP requests. You could achieve the same thing by implementing the logic in a Service Task yourself. The difference is that the HTTP Task is specialized for sending HTTP requests: you can simply use it and provide the endpoint you want to call.
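For illustration, a Flowable HTTP Task is declared in the BPMN XML as a service task with `flowable:type="http"` plus field extensions; the id, name, and URL below are made-up placeholders.

```xml
<serviceTask id="callMetricsApi" name="Call metrics API" flowable:type="http">
  <extensionElements>
    <flowable:field name="requestMethod">
      <flowable:string>GET</flowable:string>
    </flowable:field>
    <flowable:field name="requestUrl">
      <flowable:string>https://example.com/api/metrics</flowable:string>
    </flowable:field>
  </extensionElements>
</serviceTask>
```

A plain Service Task doing the same thing would instead reference your own Java delegate via `flowable:class` and implement the HTTP call by hand.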

Why replace Ocelot API gateway with RabbitMQ

We are building a cloud-native enterprise business application on the .NET Core MVC platform. The default API gateway between the frontend application and the backend microservices is Ocelot, used in async mode.
We have been advised to use the RabbitMQ message broker instead of Ocelot. The reasoning given for this shift is asynchronous request-response exchange between the frontend and the microservices. For context, our application will have a few hundred cshtml pages spanning several frontend modules, and we expect over a thousand users using the application concurrently.
Our concern is whether this is the right suggestion. Our development team feels we should continue using the Ocelot API gateway for general request-response exchange between the frontend and microservices, and use RabbitMQ only for events that trigger background processing and respond after a delay, once the job completes.
If you feel we can replace Ocelot, then our further concern is reliable session-based request and response: we should not have to programmatically correlate responses to session requests. Note that with RabbitMQ we are testing with the .NET MassTransit library. The Ocelot API gateway is designed to handle session-based request-response communication.
With RabbitMQ, should we create a reply queue per request, or should the client maintain a single reply queue for all requests? Should the reply queue be exclusive or durable?
Can a single reply queue per client serve all requests, or would it be better to create multiple receive endpoints based on application modules/cshtml pages to serve all our concurrent users efficiently?
Thank you all; we eagerly await your replies.
I recommend implementing RabbitMQ. You would need to replace Ocelot with RabbitMQ.

Is using RPC with MassTransit best practice if you are trying to get a response from a queue

I thought using RPC was bad practice, but all the resources I am finding point to using RPC to get a response from a queue after publishing a request. Are there other ways of doing it? Is it best practice?
Thanks
MassTransit has built-in support for producing requests (which can be published, or sent directly to a specific endpoint). The request client can be created manually or added to a dependency injection container, and one or more response types can be handled.
MassTransit uses the bus endpoint to receive responses by default.
To register the request client in the container, the AddRequestClient method is used as shown below.
services.AddMassTransit(x =>
{
    // configure transport/host/etc.
    x.AddRequestClient<CheckOrderStatus>();
});
RPC is a common pattern, and producing a request when a response is required is a regularly used approach. Another option is combining a command with an event and observing the event separately from the request producer. However, if a linear programmatic flow is required, using RPC via the request client is an easy solution.
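Consuming the registered client typically looks like the sketch below; `CheckOrderStatus` and `OrderStatusResult` are illustrative message types, not types MassTransit provides.

```csharp
using System.Threading;
using System.Threading.Tasks;
using MassTransit;

public record CheckOrderStatus(string OrderId);
public record OrderStatusResult(string OrderId, string Status);

// Sketch: a service that takes the injected request client (registered
// via AddRequestClient above) and awaits the correlated response.
public class OrderStatusService
{
    private readonly IRequestClient<CheckOrderStatus> _client;

    public OrderStatusService(IRequestClient<CheckOrderStatus> client)
        => _client = client;

    public async Task<string> GetStatusAsync(string orderId,
                                             CancellationToken ct = default)
    {
        // Sends the request and awaits the matching response. MassTransit
        // correlates the reply via the RequestId, so no manual
        // correlation code is needed.
        var response = await _client.GetResponse<OrderStatusResult>(
            new CheckOrderStatus(orderId), ct);

        return response.Message.Status;
    }
}
```

If no response arrives within the request timeout (30 seconds by default), the awaited task faults with a `RequestTimeoutException`, so callers get a linear flow with normal exception handling.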

Configure RabbitMQ to route to queue via HTTP endpoint, therefore not needing the normal JSON data

For my deployment, I have a number of 3rd-party systems that can only send HTTP POST requests containing the metrics I need in the queue, and they cannot be reconfigured. My goal is to have specific endpoints (or vhosts) that, when POSTed to, automatically route to the correct queue without the sender needing to supply the routing key and the other standard RabbitMQ JSON fields, since that modification is not possible in the 3rd-party systems.
I can't find any way to do this natively as of now, but I believe it may be possible to put an HTTP reverse proxy in front, so that any data sent to the specific endpoint is redirected to the correct RabbitMQ HTTP endpoint, where I could then bolt on the necessary JSON data so it can be parsed by RabbitMQ and placed in the relevant queue. I wanted to check whether this is the only logical solution, or whether I am missing something obvious that can be done within RabbitMQ's administration page or via config files.
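A reverse proxy alone can rewrite URLs but cannot synthesize the JSON body that RabbitMQ's management API expects, so a small translation shim is one way out. Below is a sketch, not a production design, of a minimal ASP.NET Core endpoint using the RabbitMQ.Client package: it accepts the 3rd-party POST body as-is and publishes it to a queue, supplying the routing key itself. The `/metrics` path, the `metrics` queue name, and the `localhost` host are assumptions.

```csharp
using System.Text;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using RabbitMQ.Client;

// Sketch: a tiny shim service the 3rd-party systems POST to. It takes
// the raw request body and publishes it to a fixed RabbitMQ queue, so
// the sender never has to know about routing keys or AMQP at all.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var factory = new ConnectionFactory { HostName = "localhost" }; // assumed host
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();
channel.QueueDeclare("metrics", durable: true,
                     exclusive: false, autoDelete: false);

app.MapPost("/metrics", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    var body = await reader.ReadToEndAsync();

    // Publish the raw payload to the default exchange; the routing key
    // (the queue name) is owned by the shim, not the sender.
    channel.BasicPublish(exchange: "",
                         routingKey: "metrics",
                         basicProperties: null,
                         body: Encoding.UTF8.GetBytes(body));
    return Results.Accepted();
});

app.Run();
```

Caveat: a RabbitMQ `IModel` is not safe for concurrent use, so a real version would use a channel per request or serialize publishes; the sketch only shows the shape of the translation.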