I made a Spring Batch application that produces AMQP (RabbitMQ) messages, each of which consists of a list of JSON objects. The messages have headers with some metadata. A Spring Cloud Stream app consumes the messages, and I used the functional approach. How can I access the headers?
Is it a bad approach to use message headers for anything but routing?
If your function signature in the Spring Cloud Stream application accepts Message (for example):
@Bean
public Consumer<Message<?>> consume() {
    return message -> {
        message.getHeaders()....
    };
}
. . . then you can simply access the message headers.
We're also working on simplifying it a bit.
No, it's not a bad approach to use message headers for anything but routing.
Think of message headers as meta information that is only relevant to the current message . . . anything that is important for the current message but not after it.
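For example, a minimal sketch of attaching metadata as a header on the producer side and reading it back in a functional consumer (the header name "x-batch-id" and the payload type are hypothetical):

import java.util.List;
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
public class HeaderExample {

    // Producer side (e.g. in the batch job): attach metadata as a message header
    public Message<List<String>> buildMessage(List<String> jsonObjects) {
        return MessageBuilder.withPayload(jsonObjects)
                .setHeader("x-batch-id", "job-42") // hypothetical metadata header
                .build();
    }

    // Consumer side: read the same header back from the incoming Message
    @Bean
    public Consumer<Message<List<String>>> consume() {
        return message -> {
            Object batchId = message.getHeaders().get("x-batch-id");
            System.out.println("processing batch " + batchId);
        };
    }
}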
Related
I know that if I implement a Spring Cloud Stream function in the reactive way, I can't send messages to a DLQ the way the traditional imperative approach can.
To work around this, I am trying to manually change the destination in the message headers so that the message goes to the DLQ topic.
If an error occurs, I want to send the message to the zombie topic using the onErrorContinue method, but that doesn't work.
Shouldn't I produce it in onErrorContinue?
input.flatMap { sellerDto ->
    // do something..
}.onErrorContinue { throwable, source ->
    log.error("error occurred!! ${throwable.message}")
    source as Message<*>
    Flux.just(
        MessageBuilder.withPayload(String(source.payload as ByteArray))
            .setHeader("spring.cloud.stream.sendto.destination", "zombie")
            .build()
    )
}
No, since with a reactive function the unit of work is the entire stream (not an individual item). I provide more details here.
That said, you can inject StreamBridge, which is always available as a bean, and use it inside your filter or any other applicable reactive operator. Basically, streamBridge.send("destination", message).
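A minimal sketch of that idea in Java, assuming a reactive consumer, onErrorResume as the applicable operator, and a hypothetical output binding named "zombie":

import java.util.function.Consumer;

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@Configuration
public class ZombieRoutingConfig {

    @Bean
    public Consumer<Flux<Message<byte[]>>> consume(StreamBridge streamBridge) {
        return flux -> flux
                .flatMap(message -> process(message) // hypothetical processing step
                        .onErrorResume(ex -> {
                            // route the failed message to the "zombie" destination ourselves
                            streamBridge.send("zombie", message);
                            return Mono.empty(); // and drop it from the main stream
                        }))
                .subscribe(); // with a reactive Consumer you subscribe yourself
    }

    private Mono<Void> process(Message<byte[]> message) {
        return Mono.empty(); // placeholder for the real business logic
    }
}

The key point is that the error is handled per element with onErrorResume around the inner publisher, so the outer stream keeps running.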
I need to calculate some kind of digest of the request body using the WebClient of WebFlux, and this digest must be set as an HTTP header. With the good old Spring MVC ClientHttpRequestInterceptor this is easy because the request body is provided as an array of bytes.
The ExchangeFilterFunction does not provide access to the request body.
The body is sent as JSON and Spring uses Jackson to serialize Java objects, so an option would be to serialize my object into JSON myself and calculate the digest on that, but this strategy has two drawbacks:
my code would repeat what Spring will do when the request is actually sent
there's no guarantee that the actual bytes sent by Spring in the request are equal to what I've passed to the digest function
I suppose I should use some low-level Netty API, but I can't find any examples.
I implemented the solution proposed by @rewolf and it worked, but I encountered an issue because of the multi-threaded nature of WebFlux.
In fact, it's possible that the client request is saved into the thread-local map by one thread while a different thread tries to get it, so a null value is returned.
For example, this happens if the request to be signed is created inside a REST controller method that takes a Mono as its request body parameter:
@PostMapping
public String execute(@RequestBody Mono<MyBody> body) {
    Mono<OtherBody> otherBody = body.map(this::transformBodyIntoOtherBody);
    ...
    webClient.post()
        .body(otherBody)
        .exchange();
    ...
}
According to the Reactor specs, the Reactor Context should be used instead of a ThreadLocal.
I forked @rewolf's project and implemented a solution based on the Reactor Context: https://github.com/taxone/blog-hmac-auth-webclient
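For illustration only, a minimal sketch of carrying a per-request value through the Reactor Context instead of a ThreadLocal (the key and values here are hypothetical):

import reactor.core.publisher.Mono;

public class ContextExample {

    private static final String REQUEST_KEY = "currentRequest"; // hypothetical context key

    public static void main(String[] args) {
        Mono<String> pipeline = Mono
                .deferContextual(ctx ->
                        // read the value downstream, regardless of which thread runs this step
                        Mono.just("signing request for " + ctx.get(REQUEST_KEY)))
                // write the value once, at subscription time
                .contextWrite(ctx -> ctx.put(REQUEST_KEY, "POST /orders"));

        System.out.println(pipeline.block()); // prints: signing request for POST /orders
    }
}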
This is not currently easy to do with WebClient, but there are ways to do so by intercepting the body post-serialization. This can be done by registering a custom encoder that intercepts the data after encoding and then passes it to a custom HttpConnector to inject it as a header.
This blog post explains one way to achieve it: https://andrew-flower.com/blog/Custom-HMAC-Auth-with-Spring-WebClient
Edit: Currently this blog post doesn't take into account concurrent requests. See the accepted answer by Claodio for the modified approach.
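For reference, a minimal sketch of the connector-side idea, as a decorator-based variant rather than the exact code from the blog post or the fork: a ClientHttpRequestDecorator buffers the already-serialized body, computes a digest over the exact bytes to be sent, and sets it as a header before writing. The header name "X-Content-Digest" and the SHA-256 algorithm are assumptions.

import java.net.URI;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.function.Function;

import org.reactivestreams.Publisher;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.http.HttpMethod;
import org.springframework.http.client.reactive.ClientHttpConnector;
import org.springframework.http.client.reactive.ClientHttpRequest;
import org.springframework.http.client.reactive.ClientHttpRequestDecorator;
import org.springframework.http.client.reactive.ClientHttpResponse;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import reactor.core.publisher.Mono;

// Sketch: digest the exact bytes that go on the wire and add the digest as a header.
public class DigestingConnector implements ClientHttpConnector {

    private final ClientHttpConnector delegate = new ReactorClientHttpConnector();

    @Override
    public Mono<ClientHttpResponse> connect(HttpMethod method, URI uri,
            Function<? super ClientHttpRequest, Mono<Void>> requestCallback) {

        return delegate.connect(method, uri, request ->
                requestCallback.apply(new ClientHttpRequestDecorator(request) {
                    @Override
                    public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
                        // Join the serialized body so we digest exactly what will be sent
                        return DataBufferUtils.join(body).flatMap(buffer -> {
                            byte[] bytes = new byte[buffer.readableByteCount()];
                            buffer.read(bytes);
                            DataBufferUtils.release(buffer);
                            getHeaders().set("X-Content-Digest", sha256(bytes)); // hypothetical header
                            return super.writeWith(Mono.just(bufferFactory().wrap(bytes)));
                        });
                    }
                }));
    }

    private static String sha256(byte[] bytes) {
        try {
            return Base64.getEncoder().encodeToString(
                    MessageDigest.getInstance("SHA-256").digest(bytes));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}

You would plug it in with WebClient.builder().clientConnector(new DigestingConnector()).build(); a real implementation would also need to cover requests without a body and the writeAndFlushWith path.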
Can anyone explain what a Service Data Object (SDO) and a Service Message Object (SMO) are?
Questions:
1. What is the purpose of SDO and SMO?
2. How do they work?
These concepts aren't used with Mule; they seem to come from IBM. https://www.ibm.com/support/knowledgecenter/SSFTN5_8.5.7/com.ibm.wbpm.main.doc/topics/cwesb_sca_smo2.html
The equivalent of the SMO in Mule is the Mule Event which you can read about here: https://docs.mulesoft.com/mule-runtime/4.1/about-mule-event
A Mule event contains the core information processed by the runtime. It travels through components inside your Mule app following the configured application logic.
It’s basically an abstraction layer so you don’t have to deal with different protocols and transports.
A Mule Event is composed of these objects:
A Mule Message contains a message payload and its associated attributes.
Variables are Mule event metadata that you use in your flow.
An HTTP POST, for example, would be represented as an event.
The event payload would be the body data of the HTTP request, whereas the HTTP headers, such as content-type, would be attributes on the event.
The same goes for JMS: the message body would be the payload and the JMS headers would be attributes.
As for SDO, each SMO has an SDO. This is very specific to that IBM article and not relevant in Mule, but from what I understand it basically allows you to access your heterogeneous business data in a common way. I guess DataWeave in Mule accomplishes this: DataWeave is the transformation and expression language in Mule, and it allows you to query and transform data in the same way regardless of the data type (XML, JSON, CSV and so on).
Just getting my head around message queues and Redis MQ, excellent framework.
I understand that you have to use .RegisterHandler(...) to determine which handler will process the type of message/event that is in the message queue.
So if I have EventA, EventB, etc., should I have one Service which handles each of those Events, like:
public class DomainService : Service {
    public object Any(EventA eventA) {...}
    public object Any(EventB eventB) {...}
}
So should these be the only queues/Redis lists created?
Also, what if I want a chain of events to happen? For example, a message of type EventA also has a handler that sends an email, provided the handlers earlier in the chain were successful.
ServiceStack makes no distinction between services created for MQ, REST, HTML or SOAP endpoints; they're the same thing. That is, each accepts a Request DTO and optionally returns a Response DTO, and the same service can handle calls from any endpoint or format, e.g. HTML, REST, SOAP or MQ.
Refer to ServiceStack's Architecture diagram to see how MQ fits in.
Limitations
The only things you need to keep in mind are:
Like SOAP, MQs only support one verb, so your methods need to be named Post or Any
Only Action Filters are executed (i.e. not Global or Attribute filters)
You get MqRequest and MqResponse stubs in place of IHttpRequest and IHttpResponse. You can still use .Items to pass data through the request pipeline, but any HTTP actions like setting cookies or HTTP headers are benign (no-ops)
Configuring a Redis MQ Host
The MQ host itself is completely decoupled from the rest of the ServiceStack framework, which doesn't know the MQ exists until you pass the message into ServiceStack yourself, which is commonly done inside your registered handler, e.g.:
var redisFactory = new PooledRedisClientManager("localhost:6379");
var mqHost = new RedisMqServer(redisFactory, retryCount:2);
mqHost.RegisterHandler<Hello>(m => {
    return this.ServiceController.ExecuteMessage(m);
});
//shorter version:
//mqHost.RegisterHandler<Hello>(ServiceController.ExecuteMessage);
mqHost.Start(); //Starts listening for messages
In your RegisterHandler<T> you specify the type of Request you want it to listen for.
By default you can only register a single handler for each message, and in ServiceStack a Request is tied to a known Service implementation. In the case of MQs, it first looks for a method signature matching Post(Hello), and if that doesn't exist it falls back to Any(Hello).
Can add multiple handlers per message yourself
If you want to invoke multiple handlers, you would maintain your own List<Handler> and go through and execute them all when a request comes in.
Calling different services
If you want to call a different service, just translate it to a different Request DTO and pass that to the ServiceController instead.
When an MQ request is sent by anyone, e.g.:
mqClient.Publish(new Hello { Name = "Client" });
Your handler is invoked with an instance of type IMessage where the Request DTO is contained in the Body property. At that point you can choose to discard the message, validate it or alter it.
MQ Requests are the same as any other Service requests
In most cases you would typically just forward the message on to the ServiceController to process, the implementation of which is:
public object ExecuteMessage<T>(IMessage<T> mqMsg)
{
    return Execute(mqMsg.Body, new MqRequestContext(this.Resolver, mqMsg));
}
The implementation just extracts the Request DTO from mqMsg.Body and, from that point on, processes the message as a normal service call with a C# Request DTO, using an MqRequestContext that contains the MQ IHttpRequest and IHttpResponse stubs.
I am still confused about when it is appropriate to use the Message type in WCF, like below:
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    Message GetData();

    [OperationContract]
    void PutData(Message m);
}
Why would you want to use it?
Can you use it for streaming?
Thanks
MSDN lists the following reasons for using the Message class directly:
When you need an alternative way of creating outgoing message contents (for example, creating a message directly from a file on disk) instead of serializing .NET Framework objects.
When you need an alternative way of using incoming message contents (for example, when you want to apply an XSLT transformation to the raw XML contents) instead of deserializing into .NET Framework objects.
When you need to deal with messages in a general way regardless of message contents (for example, when routing or forwarding messages when building a router, load-balancer, or a publish-subscribe system).
See Using the Message Class for more detailed information.
Edit to address the streaming question
I didn't find a definitive answer in my quick scan via Google, but the article above states "All communication between clients and services ultimately results in Message instances being sent and received", so I would assume it can be used directly in streaming.
While the reasons listed by Tim are valid, we use messages directly in our services to create one uber routing service.
We have one service that can take any method call you throw at it; clients are generated from WSDLs supplied from multiple sources.
This service would take the message, examine its content and route it accordingly.
So in my opinion, if you want to get closer to the wire, or when you don't know the type of incoming messages, you can use Message in the signature directly.
Streaming is a separate concept from the message signature. Streaming is supported by WCF only under very specific bindings and security mechanisms, and the method signature has to be very specific (i.e., it should return/accept a Stream). Also, with streaming, the actual stream of data travels outside the scope of the SOAP message.