I am confused about Flow. Is there any way to have a Flow that behaves in this fashion?
It buffers data until delivered; with no subscriber, it buffers until full, which in practice will never happen.
When a subscriber comes in, it delivers everything in the buffer, removing items from the buffer as they are delivered.
A subscriber can unsubscribe and resubscribe, and it won't replay: it only delivers items that were emitted but not yet delivered to a subscriber.
It doesn't have to multicast; there should only ever be one subscriber. I tried MutableSharedFlow(extraBufferCapacity = 10), but from what I have just found, if there is no subscriber when an event comes in, it is simply dropped.
I am collecting with x.onEach{}.collect(), if that is the correct way.
You can have this behaviour by using a single-consumer hot flow - a Flow based on a Channel.
Declare a Channel(BUFFERED) as the variable/property you will use to send elements. Then you can expose it as a flow using channel.receiveAsFlow(), so a consumer can start and stop collecting as it sees fit. Elements will be buffered while no consumer is collecting.
If you want to avoid confusion for users of your API, you may want to expose a ReceiveChannel directly. In some cases however, this makes things non-composable, so using receiveAsFlow() allows you to declare a Flow API despite the hotness, which is more composable.
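A minimal sketch of that setup (class and type names are illustrative):

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.receiveAsFlow

class EventBuffer<T> {
    // Buffers elements while nobody collects; use Channel.UNLIMITED if the
    // buffer really must never fill up (Channel.BUFFERED defaults to 64).
    private val channel = Channel<T>(Channel.UNLIMITED)

    // Each element is delivered to at most one collector. Collection can stop
    // and resume; elements that were never delivered stay in the buffer.
    val events: Flow<T> = channel.receiveAsFlow()

    suspend fun send(event: T) = channel.send(event)
}

A collector can then call eventBuffer.events.onEach { ... }.collect() exactly as in the question, cancel, and later collect again without losing undelivered elements.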
Let's say I have one ActiveMQ Broker and an undefined number of consumers.
Problem:
To process a message, consumers need an external service which is either "DATA1" or "DATA2" (specified in the message)
Each server, "DATA1" and "DATA2", can only handle 20 connections
So at most 20 "DATA1" and 20 "DATA2" messages may be dispatched at any time
Because of prioritization, the messages must be enqueued in the same queue
Even if message A has a higher priority than message B, if A can't be processed because the external service has no free slots, message B needs to be processed instead
How can this be solved? As long as I was using message pulling (prefetch of 0), I was able to do this with a BrokerPlugin that, on messagePull, used semaphores and selectors; if the limits were reached, the pull returned null.
However, due to performance issues I had to set prefetch to 1 and use push instead. Therefore, my messagePull hack no longer works (it's never called).
So far I'm considering implementing a custom Cursor, but I was wondering if someone knows a better solution.
Update: the custom cursor worked, but it broke features like message removal. I tried a custom Queue and QueueDispatchSelector (which is a pain to configure, since there isn't a proper API to do so), and it mostly works, but I still have synchronisation issues.
Also, a very suitable API seems to be DispatchPolicy; however, while it is referenced by Queue, it is never used.
Queues give you buffering for system processing time for free, and messages are delivered on demand, so prefetch=0 or prefetch=1 should effectively get you there. Messages will only be delivered to a consumer when the consumer is ready (i.e. during the consumer.receive() method).
consumer.receive() is a blocking call, so you should not need any custom plugin or other machinery to delay delivery until the consumer process (and its required downstream services) is ready to handle it.
This behavior should work out of the box; if it doesn't, there are details of your use case that haven't been provided that would shed more light on the scenario.
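As a rough illustration of that pull-based flow (the 'service' message property, queue name, and one-consumer-per-service layout are assumptions, not part of the original setup):

import org.apache.activemq.ActiveMQConnectionFactory
import java.util.concurrent.Semaphore
import javax.jms.Session
import kotlin.concurrent.thread

// One consumer per external service, each gated by its own 20-slot semaphore.
// With prefetch 0 the broker dispatches a message only when receive() is
// called, so a stalled DATA1 consumer never holds up DATA2 messages.
fun startConsumer(service: String) {
    val slots = Semaphore(20)
    val factory = ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.all=0")
    val conn = factory.createConnection().apply { start() }
    val session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE)
    val consumer = session.createConsumer(session.createQueue("jobs"), "service = '$service'")
    while (true) {
        slots.acquire()                  // wait until the external service has a free slot
        val msg = consumer.receive()     // pull exactly one message, in priority order
        thread {
            try { /* call the external service with msg */ } finally { slots.release() }
        }
    }
}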
I have a Java application which publishes events to RabbitMQ. It has one very important characteristic: message order must be preserved at all times. The consumer can handle duplicates, but it cannot handle message 2 arriving before message 1, so to speak.
I have been reading a lot about RabbitMQ lately, and I feel there is only one way to do this: set the channel in confirm mode (https://www.rabbitmq.com/confirms.html - basically, it forces the broker to acknowledge the publication) and publish one by one. By one by one I mean that message 2 is only published after RabbitMQ has confirmed (via an asynchronous ACK response) that message 1 was actually received and persisted.
I tried this in a conceptual implementation, and while it works fine, it is extremely slow, without exaggerating. Which makes sense: after all, we are now limiting our message rate to one message at a time.
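For reference, the one-by-one publishing looks roughly like this (a minimal sketch using the RabbitMQ Java client from Kotlin; the queue name is illustrative):

import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.MessageProperties

fun publishOneByOne(messages: List<ByteArray>) {
    val conn = ConnectionFactory().newConnection()
    val channel = conn.createChannel()
    channel.confirmSelect()                          // enable publisher confirms
    channel.queueDeclare("ordered-events", true, false, false, null)
    for (body in messages) {
        channel.basicPublish("", "ordered-events", MessageProperties.PERSISTENT_BASIC, body)
        channel.waitForConfirmsOrDie(5_000)          // block until the broker confirms
    }
    channel.close()
    conn.close()
}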
So this leads me to my question: are there other, more performant, ways to ensure that message ordering is always preserved (either in RabbitMQ or via different approaches)?
Although my concern is RabbitMQ, I believe this question might be applied to any kind of asynchronous message queue service.
RabbitMQ enqueues messages in the same order you sent them. It's when subscribers go down, you get network splits, or the subscriber NACKs messages that they can get re-ordered; and even then RabbitMQ tries to keep them in approximately the same order by re-queueing at the same position, or as close to it as possible.
You can do it like you suggest: take one message at a time, because if you take a message but crash before you've ACKed it to the broker, it will pop up again at the same position when your service comes back up.
This assumes you only have a single service instance consuming from the queue at any given time, which is a distributed-systems problem of its own if you have a scheduler like Kubernetes or Mesos spawning your service instances.
Another solution would be to ensure ordering of processing in the receiving service, by "resequencing" the messages based on their logical timestamps/sequence numbers.
I've written a much more thorough guide as annotated code here https://github.com/haf/rmq-publisher-confirms-hopac/blob/master/src/Server/Shared/RabbitMQ.fs — with batching you can resequence. Furthermore, if your idempotence builds the consecutive sequence numbers into its logic, you can start taking batches and each event will be idempotent, despite being re-consumed.
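A generic sketch of such a resequencer (broker-agnostic; the starting sequence number and names are assumptions):

// Buffers out-of-order messages and releases them in consecutive
// sequence-number order.
class Resequencer<T>(private var nextSeq: Long = 0) {
    private val pending = sortedMapOf<Long, T>()

    // Feed every incoming (seq, message) pair in; the returned list contains
    // the messages that became deliverable, in order.
    fun accept(seq: Long, message: T): List<T> {
        if (seq < nextSeq) return emptyList()  // duplicate of an already-released message
        pending[seq] = message
        val ready = mutableListOf<T>()
        while (pending.containsKey(nextSeq)) {
            ready += pending.remove(nextSeq)!!
            nextSeq++
        }
        return ready
    }
}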
When using RabbitMQ as a message broker, I have a scenario where multiple concurrent consumers pull messages from a queue using the basic.get AMQP method and use explicit acknowledgements to delete messages from the queue. Assume the following setup:
Q has messages M1, M2, M3 and has consumers C1, C2 and C3 (each having its own connection and channel) connected to it.
How is concurrency handled in the basic.get method? Is the call to basic.get method synchronized to handle concurrent consumers each using its own connection and channel? C1, C2 and C3 issue a basic.get call to receive a message at the same time (assume the server receives all 3 requests simultaneously).
C1 requests a message using basic.get and gets M1. When C2 requests a message, since it's using a different connection, does it get M1 again?
How can consumers pull messages in batches of a predefined size?
Your questions really hit at the heart of queuing and process theory, so I will answer from that standpoint (as far as my answers are concerned, RabbitMQ is a generic message broker, and this applies to any message broker).
How is concurrency handled in the basic.get method? Is the call to basic.get method synchronized to handle concurrent consumers each using its own connection and channel? C1, C2 and C3 issue a basic.get call to receive a message at the same time (assume the server receives all 3 requests simultaneously).
Answer 1: RabbitMQ is designed to be a reliable message broker. It contains internal processes and controls to ensure that the same message does not get passed out multiple times to different consumers. Now, given the impracticality of testing the scenario you describe, does it work perfectly? Who knows. That is why properly designed applications using a message-based architecture employ idempotent transactions, such that if the same transaction is processed multiple times, the result is the same as if it had been processed once.
Takeaway: Design your application so that the answer to this question is unimportant.
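For illustration, a minimal idempotent-handler sketch (in-memory only; a real implementation would keep the processed-ID set in durable storage):

import java.util.concurrent.ConcurrentHashMap

object IdempotentHandler {
    private val processed = ConcurrentHashMap.newKeySet<String>()

    fun handle(messageId: String, body: ByteArray) {
        if (!processed.add(messageId)) return  // duplicate delivery: a no-op
        // ... actual processing goes here ...
    }
}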
C1 requests a message using basic.get and gets M1. When C2 requests a message, since it's using a different connection, does it get M1 again?
Answer 2: No. Subject to the assumptions of my previous answer, the RabbitMQ broker will not serve the same message back once it has been delivered. Depending on the settings of the channel and queue, the message may be automatically acknowledged upon delivery and never redelivered. Other settings will have the message requeued automatically upon the "death" of the processing thread/channel or a negative acknowledgement from your processing thread. This is important functionality, since a "poison" message could repeatedly wreak havoc in your application if it could be served up to multiple consumers. Takeaway: you may safely rely on this assumption in designing your application.
How can consumers pull messages in batches of a predefined size?
Answer 3: They can't, nor would it make sense for them to. In any queuing system, the fundamental assumption is that items are removed from the queue in single file. Attempts to violate this assumption result in unpredictable behavior; furthermore, single-piece flow is commonly the most efficient method of processing. However, in the real world there are cases where batch sizes > 1 are necessary. In such cases, it makes sense to load the batch into its own single message, which may require a separate processing thread that pulls messages from the queue and batches them together (or they can be batched at the source). Keep in mind that once you have multiple consumers, there is no way to guarantee that single messages will be processed in order. Takeaway: batching should be avoided wherever possible; where it is not practical to avoid, you may not assume that batches will contain individual messages in any particular order.
You might want to read the RabbitMQ API guide and the introduction to AMQP.
First of all, avoid consuming messages using basicGet in your consumers. Rather, use the Consumer interface via basicConsume. This allows RabbitMQ to push messages to you as they arrive on the queue. Everything else is a waste of resources here, as it boils down to busy polling.
When using basicConsume, RabbitMQ will even push more messages to you in the background, up to a certain prefetch count. This allows you to process multiple messages concurrently, as well as minimizing the time you have to wait for your next message to process (if one is available).
Concurrency is not an issue at all; that's what you're using a queue for!
When multiple consumers are subscribed to one queue, a message will always be delivered to only one consumer (as long as the message is ACKed). Otherwise you need private queues for each consumer and have to route your messages accordingly.
By the way, if you're able to share the connection among your consumers, you should do so; just make sure to use one channel per thread.
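A minimal push-consumer sketch (RabbitMQ Java client from Kotlin; the queue name and prefetch value are illustrative):

import com.rabbitmq.client.CancelCallback
import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.DeliverCallback

fun startConsumer() {
    val channel = ConnectionFactory().newConnection().createChannel()
    channel.basicQos(20)  // broker pushes at most 20 unacknowledged messages to this channel
    val onDeliver = DeliverCallback { _, delivery ->
        println(String(delivery.body))                          // handle the message
        channel.basicAck(delivery.envelope.deliveryTag, false)  // ack exactly this delivery
    }
    channel.basicConsume("work-queue", false, onDeliver, CancelCallback { })
}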
There is no special configuration required for that scenario. Each client will atomically fetch and receive one message from the queue, just as you would like to happen.
Question
I want to pass data between applications in a publish-subscribe manner. Data may be produced at a much higher rate than it is consumed, and messages may get lost, which is not a problem. Imagine a fast sensor and a slow sensor-data processor. For that, I use Redis pub/sub and wrote a class which acts as a subscriber, receives every message, and puts it into a buffer. The buffer is overwritten when a new message comes in, or nullified when the message is requested by the "real" function. So when I ask this class, I either immediately get a response (a hint that my function is slower than the data comes in) or I have to wait (a hint that my function is faster than the data).
This works pretty well when data comes in fast. But for data which comes in relatively seldom, say every five seconds, it does not: imagine my consumer gets launched slightly after the producer; the first message is lost, and my consumer needs to wait nearly five seconds until it can start working.
I think I have to solve this with Redis tools. Instead of pub/sub, I could simply use the get/set methods, putting the cache functionality into Redis directly. But then my consumer would have to poll the database instead of using the event magic I have at the moment. Keys could look like "key:timestamp", and my consumer would have to get key:* and compare the timestamps constantly, which I think would cause a lot of load. There is no natural opportunity to sleep: although I don't care about dropped messages (there is nothing I can do about them), I do care about delay.
Does someone use Redis for a similar thing and could give me a hint about clever use of Redis tools and data structures?
edit
Ideally, my program flow would look like this:
start the program
retrieve key from Redis
tell Redis, "hey, notify me on changes of key".
launch something asynchronously, with a callback for new messages.
While writing this, an idea came up: the publisher not only publishes the message on topic key, but also does set key message. This way, an application could initially get the value and then subscribe.
Good idea or not really?
What I did after I got the answer below (the accepted one)
Keyspace notifications are really what I need here. Redis acts as the primary source of information, and my client subscribes to keyspace notifications, which notify subscribers about events affecting specific keys. In the asynchronous part of my client, I subscribe to notifications about my key of interest. Those notifications set a key_has_updates flag. When I need the value, I get it from Redis and unset the flag. With an unset flag, I know there is no new value for that key on the server. Without keyspace notifications, this would have been the part where I needed to poll the server. The advantage is that I can use all sorts of data structures, not only the pub/sub mechanism, and a slow joiner which misses the first event is still able to get the initial value, which with pub/sub would have been lost.
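A minimal sketch of this scheme (using Jedis from Kotlin; the key name and database index are assumptions, and notify-keyspace-events must be configured to emit keyspace events, e.g. "K$" for string commands):

import redis.clients.jedis.Jedis
import redis.clients.jedis.JedisPubSub
import java.util.concurrent.atomic.AtomicBoolean
import kotlin.concurrent.thread

val keyHasUpdates = AtomicBoolean(true)  // start true so the initial GET happens

// Background subscriber: any event touching the key raises the flag.
fun watch(key: String) = thread {
    Jedis("localhost", 6379).use { sub ->
        sub.psubscribe(object : JedisPubSub() {
            override fun onPMessage(pattern: String, channel: String, message: String) {
                keyHasUpdates.set(true)
            }
        }, "__keyspace@0__:$key")
    }
}

// Returns the current value only if something changed since the last call.
fun latest(jedis: Jedis, key: String): String? =
    if (keyHasUpdates.getAndSet(false)) jedis.get(key) else null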
One idea is to push the data to a list (LPUSH) and trim it (LTRIM), so it doesn't grow forever if there are no consumers. On the other end, the consumer would grab items from that list and process them. You can also use keyspace notifications, and be alerted each time an item is added to that queue.
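A sketch of that capped-list idea (Jedis from Kotlin; key name and size are illustrative):

import redis.clients.jedis.Jedis

// Producer: push, then trim so the list never holds more than 100 items,
// even when no consumer is running.
fun produce(jedis: Jedis, value: String) {
    jedis.lpush("sensor:buffer", value)
    jedis.ltrim("sensor:buffer", 0, 99)
}

// Consumer: drain from the other end; rpop returns null when the list is empty.
fun consumeOne(jedis: Jedis): String? = jedis.rpop("sensor:buffer")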
I pass data between applications using two native Redis commands: rpush and blpop.
"blpop blocks the connection when there are no elements to pop from any of the given lists".
Data is passed in JSON format between applications, using a list as a queue.
The application that wants to send data (acting as publisher) does an rpush onto a list.
The application that wants to receive data (acting as subscriber) does a blpop on the same list.
The code would be (in Perl):
Sender (we assume a hash reference is passed):
use Redis;
use JSON;

# Encode the hash in JSON format
my $json_text = encode_json($hash_ref);

# Connect to Redis and push onto the list
my $r = Redis->new(server => "127.0.0.1:6379");
$r->rpush("shared_queue", $json_text);
$r->quit;
Receiver (in an infinite loop):
use Redis;
use JSON;

# Connect once, outside the loop
my $r = Redis->new(server => "127.0.0.1:6379");
while (1) {
    # Block until an element is available on the list
    my @elem = $r->blpop("shared_queue", 0);
    # blpop returns (list name, value); decode the JSON value
    my $hash_ref = decode_json($elem[1]);
    # make some stuff
}
I find this approach very useful, for several reasons:
Elements are stored in a list, so temporarily disabling the receiver causes no information loss; when the receiver restarts, it can process all items in the list.
A high send rate can be handled with multiple instances of the receiver.
Multiple senders can push data onto a single list; in this case a data collector is easily implemented.
A receiver process that runs as a daemon can be monitored with specific tools (e.g. pm2).
Since Redis 5, there is a new data type called "Streams", an append-only data structure. Redis Streams can be used as a reliable message queue, with both point-to-point and multicast communication using the consumer-group concept: Redis_Streams_MQ
We have a bunch of requests that we plan to publish to the queue.
There will be several different subscriber types, each in their own round robin pool.
For example, Request1 is pushed onto the queue.
LoggingSubscriber1 and LoggingSubscriber2 both subscribe with the "LoggingSubscriber" subscriptionId so that only one of them gets the request.
There will be other groups like DoProcessSubscriber1, DoProcessSubscriber2, and DoProcessSubscriber3
And another DoOtherProcessSubscriber1, DoOtherProcessSubscriber2
We need some way to know that all three subscriber groups (Logging, DoProcess, and DoOtherProcess) have completed, so that we can perform some action, like sending a message to the client that the entire request has completed.
How would we aggregate responses like this? We were thinking of having each subscriber put a response object on the queue, but we still aren't sure how to know that they are all done.
Ideally you'd use the Request/Response pattern built into EasyNetQ, but that's designed for a single (potentially farmed) consumer; it doesn't allow you to bind to multiple queues. In your case, you should probably have your client set up a subscription for replies and have all three services publish a message when they complete. The client can then wait until it has a response from all three before updating.
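A broker-agnostic sketch of that aggregation (the group names and the onCompletion hook are assumptions; wire it up to your reply subscription):

import java.util.concurrent.ConcurrentHashMap

class CompletionTracker(
    private val expected: Set<String> = setOf("Logging", "DoProcess", "DoOtherProcess")
) {
    private val done = ConcurrentHashMap<String, MutableSet<String>>()

    // Call for every completion message; returns true once all expected
    // subscriber groups have reported in for the given request.
    fun onCompletion(requestId: String, group: String): Boolean {
        val set = done.computeIfAbsent(requestId) { ConcurrentHashMap.newKeySet() }
        set.add(group)
        return set.containsAll(expected).also { if (it) done.remove(requestId) }
    }
}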
However, I'd encourage you to re-think your design. By making the client responsible for acknowledging the completion of the subscribers, you're building a very tightly coupled system. Messaging-system design works far better if you adopt the notion of eventual consistency: allow your client to fire and forget, and have some audit process ensure that all the expected processing did eventually occur.