I have three microservices: Service A, B, and C.
Service A should call B and C asynchronously and then build its response from B's and C's responses.
I am using RabbitMQ for the asynchronous IPC.
I tried RabbitTemplate's convertSendAndReceive with the direct reply-to option, but it makes the current processing thread wait/block until the async call completes, which effectively makes it synchronous.
I would rather not use convertAndSend and have Service A listen on a reply queue, matching replies by correlation ID, because there would be thousands of responses in the reply queue and mapping messages by correlation ID performs poorly.
Creating a separate queue for each session is not an option either, due to its own caveats (getting acknowledgement from all cluster nodes on every new queue creation also hurts performance).
Sorry if this problem has been solved before; I couldn't find much help on it on the internet. Any help would be appreciated.
There is the AsyncRabbitTemplate for exactly this purpose: it does not block the caller while waiting for the reply. See https://docs.spring.io/spring-amqp/docs/2.0.0.RELEASE/reference/html/_reference.html#async-template:
Version 1.6 introduced the AsyncRabbitTemplate. This has similar sendAndReceive (and convertSendAndReceive) methods to those on the AmqpTemplate but instead of blocking, they return a ListenableFuture.
RabbitConverterFuture<String> future = this.template.convertSendAndReceive("foo");
future.addCallback(new ListenableFutureCallback<String>() {

    @Override
    public void onSuccess(String result) {
        ...
    }

    @Override
    public void onFailure(Throwable ex) {
        ...
    }

});
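Applied to the question's scenario, Service A could fan out to B and C and combine both replies without blocking the caller. The following is only a rough, untested sketch: the asyncTemplate bean, the request payload, the routing keys and buildResponse() are assumptions, not from the original, and completable() assumes Spring AMQP 2.x on Spring Framework 5:

// Sketch only: "service.b.rpc"/"service.c.rpc" and buildResponse() are placeholders.
RabbitConverterFuture<String> fromB = asyncTemplate.convertSendAndReceive("service.b.rpc", request);
RabbitConverterFuture<String> fromC = asyncTemplate.convertSendAndReceive("service.c.rpc", request);

fromB.completable()
     .thenCombine(fromC.completable(), (b, c) -> buildResponse(b, c)) // assemble A's reply from both
     .whenComplete((result, ex) -> {
         if (ex != null) {
             // a call to B or C failed or timed out
         } else {
             // complete Service A's own (asynchronous) response with 'result'
         }
     });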
Currently I can see the streaming values exposed by the code below, but only one HTTP client receives the continuous stream of values; the others do not.
The code, a modified version of the Quarkus quickstart for Kafka reactive streaming, is:
@Path("/migrations")
public class StreamingResource {

    private volatile Map<String, String> counterBySystemDate = new ConcurrentHashMap<>();

    @Inject
    @Channel("migrations")
    Flowable<String> counters;

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS) // denotes that server-sent events (SSE) will be produced
    @SseElementType("text/plain") // denotes that the data contained within this SSE is just regular text/plain data
    public Publisher<String> stream() {
        Flowable<String> mainStream = counters.doOnNext(dateSystemToCount -> {
            String key = dateSystemToCount.substring(0, dateSystemToCount.lastIndexOf("_"));
            counterBySystemDate.put(key, dateSystemToCount);
        });
        return fromIterable(counterBySystemDate.values().stream().sorted().collect(Collectors.toList()))
                .concatWith(mainStream)
                .onBackpressureLatest();
    }
}
Is it possible to make any modification that would allow multiple clients to consume the same data, in a broadcast fashion?
I guess this implies letting go of backpressure, because backpressure would require keeping state per consumer?
I saw that Observable is not accepted as a return type in resteasy-rxjava2 for the Server-Sent Events media type.
Please let me know any ideas. Thank you.
Please find the full code in "Why in multiple connections to PricesResource Publisher, only one gets the stream?"
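One direction that might be worth trying (a rough, untested sketch, not from the original post): multicast the injected channel with RxJava's share() so that every SSE client attaches to the same upstream subscription. The class and field names here are made up, and the replay of counterBySystemDate from the original code is omitted for brevity:

// Sketch only: share() (publish().refCount()) subscribes to the channel once
// and broadcasts its elements to all currently connected SSE clients.
@Path("/migrations")
public class BroadcastStreamingResource {

    @Inject
    @Channel("migrations")
    Flowable<String> counters;

    private Flowable<String> shared;

    @PostConstruct
    void init() {
        shared = counters.share().onBackpressureLatest();
    }

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType("text/plain")
    public Publisher<String> stream() {
        return shared;
    }
}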
Let's say I have a situation where I need to wait up to 1 minute for some action to be performed. If that time expires, I try a different action.
My current solution proposal is based on RabbitMQ features. I would create the following resources:
@Bean
DirectExchange exchangeDirect() {
    return new DirectExchange("exchange.direct");
}

@Bean
Queue bufferQueue() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-message-ttl", amqpProperties.getTimeToLive().toMillis());
    args.put("x-dead-letter-exchange", "exchange.direct");
    // must match the routing key that timedOutQueue is bound with
    args.put("x-dead-letter-routing-key", "timed.out.queue");
    return new Queue("buffer.queue", true, false, false, args);
}

@Bean
Queue timedOutQueue() {
    return new Queue("timed.out.queue", true);
}

@Bean
Binding bufferQueueToExchangeDirect() {
    return bind(bufferQueue())
            .to(exchangeDirect())
            .with("buffer.queue");
}

@Bean
Binding timedOutQueueToExchangeDirect() {
    return bind(timedOutQueue())
            .to(exchangeDirect())
            .with("timed.out.queue");
}
When I add an action to bufferQueue and I don't receive any delivery update within 1 minute, the message is moved to timedOutQueue thanks to bufferQueue's TTL and dead-letter settings.
I can attach an application Rabbit listener to timedOutQueue and take the alternative action there; see the sketch below.
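For example, something along these lines (ActionEvent and the fallback logic are just placeholders):

// Rough sketch only; ActionEvent and the alternative action are placeholders.
@RabbitListener(queues = "timed.out.queue")
public void onTimedOutAction(ActionEvent event) {
    // the original action was not confirmed within the TTL window,
    // so trigger the alternative action here
}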
When I add an action to bufferQueue and I then receive confirmation that the action was performed successfully, I'd like to remove that action event from bufferQueue.
I couldn't find such a feature in RabbitMQ, i.e. being able to consume selectively.
I also found some articles saying that selective consumption is an antipattern.
Is it possible to selectively consume messages from a RabbitMQ queue?
What is the proper way to implement this pattern in RabbitMQ?
There is no concept of message selection in RabbitMQ.
The "proper" way for an application that wants to selectively receive messages is to use multiple queues/routing keys with a consumer on each specific queue he expresses interest in.
However, there is no way to "remove" a message from the middle of a queue; only the head.
As for "When I add action to bufferQueue and I receive confirmation that action was successfully performed, I'd like to remove given action event from bufferQueue": that makes no sense to me; once the message has timed out in bufferQueue due to the TTL and been moved to timedOutQueue, it no longer exists in bufferQueue, so there is nothing to remove.
There is also no mechanism to detect that you "don't receive any delivery update within 1 minute", because each message in a queue is independent.
It doesn't sound like your application is suitable for a message broker at all.
I am moving an old app from MSMQ to RabbitMQ. The app uses MassTransit 2.10, and I need a function that returns the number of messages in the queue for a specific message type.
In the current implementation there is this line of code that returns the message types:
var messages = MsmqEndpointManagement.New(endpoint.Address).MessageTypes();
Is it possible to replace this instruction with something similar when using RabbitMQ?
When moving to RabbitMQ, queue management is different. Since RabbitMQ is a broker (unlike MSMQ, which is, well, different), it was designed with a separate management API and console. There are other libraries that can be used to get message counts, but none that will get you the message types, since that would require reading every message to find its type, which is what the MSMQ method above is doing, by the way.
I'd suggest looking at HareDu to manage your broker from the application/API.
With HareDu 2 Broker and Autofac APIs you can do the following:
var result = _container.Resolve<IBrokerObjectFactory>()
    .Object<Queue>()
    .GetAll()
    .Select(x => x.Data)
    .Select(x => new
    {
        QueueName = x.Name,
        x.TotalMessages
    });
I have solved the issue using the following function, with EasyNetQ:
public static int GetMessageCount(string queueName)
{
    IQueue queue;
    IBus bus = getBusFromName(queueName);

    if (queues.TryGetValue(queueName, out queue))
        return (int)bus.Advanced.MessageCount(queue);

    return 0;
}
getBusFromName() is a function that retrieves the IBus instance for the queue from a dictionary in which I store all the queues used by the software.
I am having a great experience with ServiceStack & Redis, but I'm confused by the ThreadPool and pub/sub within a thread, and by an apparent limitation on accessing Redis within a message callback. The actual error states that I can only call "Subscribe" or "Publish" within the "current context". It happens when I try to perform another Redis action from the message callback.
I have a process that must run continuously. In my case I can't just service a request once; I must keep a thread alive the whole time doing calculations (and controlling these threads from a REST API route is ideal). Data must come into the process on a regular basis, and data must be published. The process must also store and retrieve data from Redis. I am using routes and services to take data in and store it in Redis, so this must happen asynchronously from the "calculation" process. I thought pub/sub would be the answer to glue the pieces together, but so far that does not seem possible.
Here is how my code is currently structured (the code that produces the above error). This is the handler for the route that starts the long-running "calculation" thread:
public object Get(SystemCmd request)
{
    object ctx = new object();
    TradingSystemCmd SystemCmd = new TradingSystemCmd(request, ctx);

    ThreadPool.QueueUserWorkItem(x =>
    {
        SystemCmd.signalEngine();
    });

    return (retVal); // retVal defined elsewhere
}
Here is the SystemCmd.signalEngine():
public void signalEngine()
{
    using (var subscription = Redis.CreateSubscription())
    {
        subscription.OnSubscribe = channel =>
        {
        };

        subscription.OnUnSubscribe = channel =>
        {
        };

        subscription.OnMessage = (channel, msg) =>
        {
            TC_CalcBar(channel, redisTrade);
        };

        subscription.SubscribeToChannels(dmx_key); // blocking
    }
}
The "TC_CalcBar" call does processing on data as it becomes available. Within this call is a call to Redis for a regular database accesses (and the error). What I could do would be to remove the Subscription and use another method to block on data being available in Redis. But the current approach seemed quite nice until it failed to work. :-)
I also don't know if the ThreadPool has anything to do with the error, or not.
As per the Redis documentation:
"Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands."
Source: http://redis.io/commands/subscribe
I have a Windows service using NServiceBus to handle incoming messages.
While processing a message, I would like to check to see if there are any other remaining messages on the queue to process.
What is the best way to approach this?
For this specific scenario I'd say that a saga could be appropriate: it is created by the first message received, opens a timeout (of, say, one minute), collects all messages during that period, and then Bus.SendLocal's a message containing all rows, for which another handler creates the spreadsheet and uploads it.
Since NServiceBus uses MSMQ, you can use the methods from System.Messaging.
Below is a modified method I'm currently working on to do a kind of batch processing.
using System.Linq;
using System.Messaging;

public int PeekAtQueue()
{
    const string QUEUE_NAME = "private$\\your_precious_queuename";

    if (!MessageQueue.Exists(".\\" + QUEUE_NAME))
        return 0;

    var messageQueues = MessageQueue.GetPrivateQueuesByMachine(Environment.MachineName);
    var queue = messageQueues.Single(x => x.QueueName == QUEUE_NAME);

    return queue.GetAllMessages().Count();
}
I modified it here in the editor itself, so I hope it still compiles. :)
Found a similar discussion here, by the way:
http://jopinblog.wordpress.com/2008/03/12/counting-messages-in-an-msmq-messagequeue-from-c/