I want to stream real-time sensor data (webcam frames, laser point clouds, etc.) from one robot to multiple observers.
In this use case, only the newest data is useful: when a new point-cloud frame arrives, the older ones become useless.
Redis has nice publish/subscribe support, but it buffers messages for each subscriber (see Redis Pubsub and Message Queueing).
So are there better alternatives? Something like ROS's publishers/subscribers, which have a message queue size parameter:
/**
* The subscribe() call is how you tell ROS that you want to receive messages
* on a given topic.
*
* The second parameter to the subscribe() function is the size of the message
* queue. If messages are arriving faster than they are being processed, this
* is the number of messages that will be buffered up before beginning to throw
* away the oldest ones.
*/
ros::Subscriber sub = n.subscribe("chatter", 1000, chatterCallback);
Maybe you can use the Redis list data structure for your purpose, like a queue. Redis lists are implemented as linked lists, so adding a new item is O(1). Whenever your robot produces data, it can put it in a list with the LPUSH command, and when you want to get the latest item from the list, use LRANGE "key-name" 0 0. This command retrieves the most recently pushed item. Also, if you don't want data to accumulate in the queue, you can run LTRIM before LRANGE to keep only the latest records. For example, LTRIM "key-name" 0 9 will keep the 10 most recent elements. This trim interval should be set according to your observers' processing speeds. ref: https://redis.io/docs/data-types/lists/
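For illustration, here is a minimal sketch of that pattern with the redis-py client (the key name "pointcloud" is made up for the example):

import redis

r = redis.Redis()

# Producer (robot): push the newest frame, then cap the list length.
def publish(frame_bytes):
    r.lpush("pointcloud", frame_bytes)
    r.ltrim("pointcloud", 0, 9)  # keep only the 10 newest frames

# Consumer (observer): peek at the most recently pushed frame.
def latest():
    items = r.lrange("pointcloud", 0, 0)
    return items[0] if items else None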
Assume that there is a key K in Redis that is holding a list of values.
Many producer clients are adding elements to this list, one by one using LPUSH or RPUSH.
On the other hand, another set of consumer clients are popping elements from the list, with one restriction: a consumer will attempt to pop N items only if the list contains at least N items. This ensures that the consumer holds exactly N items once the popping finishes.
If the list contains fewer than N items, consumers shouldn't attempt to pop from the list at all, because they wouldn't end up with N items.
If there is only one consumer client, it can simply run the LLEN command to check whether the list contains at least N items, and then remove N of them using LPOP/RPOP.
However, if there are many consumer clients, there can be a race condition: after each reads LLEN >= N, they can simultaneously pop items from the list. We might then end up in a state where each consumer pops fewer than N elements and no items are left in the list.
Using a separate locking system seems to be one way to tackle this issue, but I was curious whether this type of operation can be done using only Redis commands, such as MULTI/EXEC/WATCH.
I checked the MULTI/EXEC approach, and it seems transactions do not support rollback. Also, all commands executed inside a MULTI/EXEC transaction return 'QUEUED', so I won't be able to know whether the N LPOPs I execute in the transaction will all return elements.
So all you need is an atomic way to check the list length and pop conditionally.
This is what Lua scripts are for; see the EVAL command.
Here is a Lua script to get you started:
local n = tonumber(ARGV[1])
if redis.call('LLEN', KEYS[1]) >= n then
  local res = {}
  for i = 1, n do
    res[i] = redis.call('LPOP', KEYS[1])
  end
  return res
else
  return false
end
Use it as:
EVAL "local n = tonumber(ARGV[1]) \n if redis.call('LLEN', KEYS[1]) >= n then \n local res = {} \n for i=1,n do \n res[i] = redis.call('LPOP', KEYS[1]) \n end \n return res \n else \n return false \n end" 1 list 3
This pops exactly ARGV[1] elements (the number after the key name, 3 here) from the list, and only if the list contains at least that many elements.
Lua scripts are run atomically, so there is no race condition between competing clients.
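From application code, the same script can be loaded once and reused. A small sketch with the redis-py client (the key name "list" is just the example key from above):

import redis

r = redis.Redis()

script = """
local n = tonumber(ARGV[1])
if redis.call('LLEN', KEYS[1]) >= n then
  local res = {}
  for i = 1, n do
    res[i] = redis.call('LPOP', KEYS[1])
  end
  return res
else
  return false
end
"""
pop_n = r.register_script(script)  # uses SCRIPT LOAD / EVALSHA under the hood

items = pop_n(keys=["list"], args=[3])  # list of 3 elements, or None if fewer are available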
As the OP pointed out in the comments, there is a risk of data loss, say because of a power failure between the LPOPs and the script's return. You can use RPOPLPUSH instead of LPOP, storing the elements on a temporary list. Then you also need some tracking, deletion, and recovery logic. Note your client could also die, leaving some elements unprocessed.
You may want to take a look at Redis Streams. This data structure is ideal for distributing load among many clients. When you use it with consumer groups, it maintains a pending entries list (PEL) that acts as that temporary list: clients call XACK to remove entries from the PEL once they are processed, so you are also protected from client failures.
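As a rough sketch of the consumer-group flow with redis-py (the stream, group, and consumer names are made up, and process() stands for your handler):

import redis

r = redis.Redis()

# Create the group once; mkstream creates the stream if it doesn't exist yet.
try:
    r.xgroup_create("mystream", "workers", id="0", mkstream=True)
except redis.ResponseError:
    pass  # group already exists

# Read up to 3 new entries; until acknowledged, they sit in the PEL.
for stream, messages in r.xreadgroup("workers", "consumer-1", {"mystream": ">"}, count=3):
    for msg_id, fields in messages:
        process(fields)                        # your handler
        r.xack("mystream", "workers", msg_id)  # remove from the PEL once done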
Redis Streams are very useful for the complex problem you are trying to solve. You may want to take the free course about them.
You could use a prefetcher.
Instead of each consumer greedily picking an item from the queue, which leads to the problem of 'water, water everywhere, but not a drop to drink', you could have a prefetcher that builds a packet of size = 6. When the prefetcher has a full packet, it places the packet in a separate packet queue (another Redis key holding a list of packets) and pops the items from the main queue in a single transaction. Essentially, what you wrote:
If there is only 1 Consumer client, the client can simply run LLEN
command to check if the list contains at least N items, and subtract N
using LPOP/RPOP.
If the prefetcher doesn't have a full packet, it does nothing and keeps waiting for the main queue size to reach 6.
On the consumer side, consumers just query the prefetched packet queue, pop the top packet, and go. A consumer always gets one pre-built packet (size = 6 items). If no packets are available, they wait.
On the producer side, no changes are required. They can keep inserting into the main queue.
BTW, there can be more than one prefetcher task running concurrently, and they can synchronize access to the main queue among themselves.
Implementing a scalable prefetcher
A prefetcher implementation can be described with a buffet-table analogy. Think of the main queue as a restaurant buffet table where guests pick up their food and leave. Etiquette demands that the guests form a queue and wait for their turn. Prefetchers follow something analogous. Here's the algorithm:
Algorithm Prefetch
Begin
    while true
        check = main queue has 6 items or more   // this is a queue read; no locks required
        if check == true
            obtain an exclusive lock on the main queue
            if lock successful
                begin a transaction
                    create a packet and fill it with the top 6 items
                    from the queue, popping them as you go
                    add the packet to the prefetch queue
                    if packet added to prefetch queue successfully
                        commit the transaction
                    else
                        rollback the transaction
                    end if
                release the lock
            else
                // someone else has the exclusive lock, we should just wait
                sleep for xx millisecs
            end if
        end if
    end while
End
I am just showing an infinite polling loop here for simplicity, but this could also be implemented in a pub/sub style using Redis keyspace notifications: the prefetcher waits for a notification that the main queue key has received an LPUSH and then executes the logic inside the while-loop body above.
There are other ways you could do this. But this should give you some ideas.
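For concreteness, here is a minimal sketch of such a prefetcher in Python with redis-py. It is only an illustration, with made-up key names ("mainq", "packetq"): instead of an explicit lock plus transaction, it folds the check-pop-repack step into a single Lua script, which Redis runs atomically, so several prefetchers can run concurrently without extra synchronization.

import json
import time

import redis

r = redis.Redis()

# Atomically: if the main queue holds at least n items, pop n of them.
PREFETCH = r.register_script("""
local n = tonumber(ARGV[1])
if redis.call('LLEN', KEYS[1]) < n then return false end
local items = {}
for i = 1, n do
  items[i] = redis.call('RPOP', KEYS[1])
end
return items
""")

def prefetcher(packet_size=6):
    while True:
        items = PREFETCH(keys=["mainq"], args=[packet_size])
        if items:
            # Store a pre-built packet; a consumer grabs one with BRPOP.
            r.lpush("packetq", json.dumps([i.decode() for i in items]))
        else:
            time.sleep(0.05)  # main queue too short; poll again shortly

A consumer then simply calls r.brpop("packetq") and decodes one packet.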
The timestamp part's minimum is 0, and the sequence part starts at 0. So why is the minimum Redis Streams message ID '0-1' and not '0-0'?
Is '0-0' used internally? Is this why you can have 'empty' streams?
It appears to be a bug - there is an open pull request to fix it at https://github.com/antirez/redis/pull/6574
This makes perfect sense. If ID 0-0 were allowed, there would be no way to start fetching a stream from the very beginning when using an explicit ID (as opposed to -), because the ID you pass to commands like XREAD is exclusive: only entries with strictly greater IDs are returned. Sometimes it is convenient to use an explicit ID and not -.
In other words, it would be confusing and undesirable if simply passing 0 resulted in skipping the first element.
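A quick way to convince yourself with redis-py (the stream name "s" is arbitrary):

import redis

r = redis.Redis()
r.xadd("s", {"v": "first"}, id="0-1")  # the smallest ID Redis accepts

# XREAD returns entries with IDs strictly greater than the given one,
# so reading from '0' (i.e. 0-0) yields everything, including entry 0-1.
print(r.xread({"s": "0"}))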
To make a queue in Redis using LPUSH and LTRIM, in Python I do it like this:
if str(key) not in r.lrange('myq', 0, -1):
    r.lpush("myq", key)
    r.ltrim("myq", 0, MYQ_LENGTH)
But how do I store key:value pairs in a Redis queue?
Suppose that keys and values can be any strings (so they cannot be split on ':'). What is the best way to push
Key1:Val1
Key2:Val2
Key3:Val3
Key4:Val4
into a queue of 4 items, so that when the 5th pair is pushed into the queue, Key1:Val1 pops out?
You can encode your pairs using something like MessagePack or JSON, and push the encoded string into the list. Upon popping, perform the matching decode in the client (or write a Lua script that does it server-side).
Pseudo-code example (Python with redis-py):
import json

r.lpush("myq", json.dumps({"key1": "val1"}))
...
ele = r.lpop("myq")
pair = json.loads(ele)
I set the ordered option to true; however, when many (1000 or more) messages are sent in a short period of time (< 1 second), the messages received are not all in the same order.
rtcPeerConnection.createDataChannel("app", {
ordered: true,
maxPacketLifeTime: 3000
});
I could provide a minimal example to reproduce this strange behavior if necessary.
I also use bufferedAmountLowThreshold and the associated event to delay sending when the buffered amount is too big. I chose 2000, but I don't know what the optimal number is. The reason I have so many messages in a short period of time is that I don't want to overflow the maximum amount of data that can be sent at once, so I split the data into 800-byte chunks and send those. Again, I don't know what the maximum size of one message can be.
const SEND_BUFFERED_AMOUNT_LOW_THRESHOLD = 2000; //Bytes
rtcSendDataChannel.bufferedAmountLowThreshold = SEND_BUFFERED_AMOUNT_LOW_THRESHOLD;
const MAX_MESSAGE_SIZE = 800;
Everything works fine for small data that is not split into too many messages. The error occurs randomly for big files only.
As of 2016-11-01, there is a bug that lets the dataChannel.bufferedAmount value change during the execution of an event-loop task. Relying on this value can thus cause unexpected results. It is possible to manually cache dataChannel.bufferedAmount and rely on the cached value to prevent this issue.
See https://bugs.chromium.org/p/webrtc/issues/detail?id=6628
I'm trying to use ActiveMQPrefetchPolicy but cannot quite understand how to use it.
I'm using a queue, and there are 3 parameters I can define on PrefetchPolicy:
queuePrefetch, queueBrowserPrefetch, inputStreamPrefetch
I don't really get the meaning of queueBrowserPrefetch and inputStreamPrefetch, so I don't know how to use them.
I assume that you have seen the ActiveMQ page on prefetch limits.
queueBrowserPrefetch sets the maximum number of messages sent to an
ActiveMQQueueBrowser until acks are received.
inputStreamPrefetch sets the maximum number of messages sent
through a JMS stream until acks are received.
Both the queue browser and the JMS stream are specialized consumers. You can read more about each of them, but if you are not using them, it won't matter what you assign to their prefetch limits.
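For what it's worth, the prefetch values can also be set on the broker connection URI instead of through an ActiveMQPrefetchPolicy object, as described on that page; for example (host and port here are placeholders):

tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1

Setting queuePrefetch to 1 is a common choice for queues with slow consumers.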