I am very intrigued by Redis streams. (Looks like the potential to build little systems powered by append-logs, like Kafka, but without all the overhead of Kafka.)
It looks straightforward to XADD to a log/stream and to consume an entry from a log/stream. But what if you want to join across two streams?
Kafka Streams, Flink, Spark, etc. provide means for doing this. Is there an equivalent in the Redis universe?
If not, I guess I'll just need to implement my own thing that consumes from two streams, does its own join logic from the messages, and publishes back out to a new stream. If others have experience doing this with Redis Streams, please do share your pointers or warnings.
If I am correct, you are looking for a way to join two Redis streams.
It seems there is a connector available for Spark that allows you to consume the streams: https://github.com/RedisLabs/spark-redis/blob/master/doc/streaming.md
From there, Spark's join logic should be easy to use.
I have attended the Redis University streams course, and it does not show any built-in method to join messages from two streams.
I guess one work-around would be to handle the IDs in a proper way.
Let's make an example:
You produce messages to stream A with even IDs.
You produce messages to stream B with odd IDs.
You consume from stream A and stream B and produce to a new stream C, being careful to respect the ordering, because you are not allowed to produce an ID equal to or smaller than the largest one already present in the stream.
In this way you achieve the join.
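To make this concrete, here is a minimal sketch of that merge, assuming a Jedis 3.x-style client; the key names streamA, streamB and streamC are made up, and a real implementation would track the last merged ID instead of re-reading everything from the start.

// Hedged sketch: merge entries from streams A and B into stream C,
// preserving global ID order (Jedis 3.x-style API assumed).
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StreamMergeSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Read a batch from each source stream (null start/end = full range).
            List<StreamEntry> a = jedis.xrange("streamA", null, null, 100);
            List<StreamEntry> b = jedis.xrange("streamB", null, null, 100);

            // Merge the two batches and sort by entry ID, so we never try to
            // XADD an ID smaller than one already present in streamC.
            List<StreamEntry> merged = new ArrayList<>(a);
            merged.addAll(b);
            merged.sort(Comparator
                    .comparingLong((StreamEntry e) -> e.getID().getTime())
                    .thenComparingLong(e -> e.getID().getSequence()));

            // Re-publish to the joined stream, reusing the original IDs.
            for (StreamEntry entry : merged) {
                jedis.xadd("streamC", entry.getID(), entry.getFields());
            }
        }
    }
}

This keeps stream C's IDs monotonically increasing, which is the constraint mentioned above.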
Looking forward to better answers that do not use external libraries.
Related
The https://redis.io/topics/streams-intro#capped-streams documentation mentions the capped streams to prevent memory overload:
...Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory...
However, it only explains the Redis capabilities for trimming the stream. I was not able to find any concept or proven way to actually move the data out of Redis. I understand I can create a consumer to copy all events to an unbounded store, but the statement quoted above suggests that I should be able to move only old events in an efficient manner. Could you please share an idea for a solution?
IIUC you're looking to move consumed messages out of the stream; the use case could be replay, or storing them as historical data.
Redis as such does not offer a clean way to move data out of any Redis collection; a capped stream just means you can trim the stream, since unbounded growth could otherwise lead to running out of memory.
The easiest way would be to add a consumer in an archive group that consumes from the stream and writes the data somewhere else. This consumer has to run against every Redis stream where archival is required; that way you will always have the data in secondary storage.
You would then need a trim policy for the stream; the easiest approach is to trim it periodically, e.g. daily. See my other answer:
How to define TTL for redis streams?
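A minimal sketch of that archive-then-trim pattern, assuming a Jedis 3.x-style client; the stream key "events", the batch size and the retention count are made up, and a real job would checkpoint the last archived ID so it never trims entries it has not copied yet.

// Hedged sketch: copy old stream entries to secondary storage, then trim.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;

import java.util.List;

public class StreamArchiverSketch {
    private static final long KEEP = 1_000_000L; // newest entries to keep in Redis

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // 1. Read a batch of the oldest entries and copy them elsewhere.
            //    A real job would checkpoint the last archived ID and resume from it.
            List<StreamEntry> batch = jedis.xrange("events", null, null, 10_000);
            for (StreamEntry entry : batch) {
                archive(entry); // e.g. write to a database, object storage, a file, ...
            }

            // 2. Trim the stream so only the newest KEEP entries stay in memory.
            //    The boolean makes the trim approximate (MAXLEN ~), which is cheaper.
            jedis.xtrim("events", KEEP, true);
        }
    }

    private static void archive(StreamEntry entry) {
        // Placeholder for the real secondary-storage write.
        System.out.println(entry.getID() + " -> " + entry.getFields());
    }
}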
Am I missing something, or is there no way to generate backpressure with Redis streams? If a producer is pushing data to a stream faster than consumers can consume it, there's no obvious way to signal to the producer that it should stop or slow down.
I expected that there would be a blocking version of XADD, that would block the client until room became available in a capped stream (similar to the blocking version of XREAD that allows consumers to wait until data becomes available), but this doesn't seem to be the case.
How do people deal with the above scenario — signaling to a producer that it should hold off on adding more items to a stream?
I understand that some data streaming systems, such as Kafka, avoid the need for backpressure by design, but Redis doesn't appear to have a comparable mechanism, and it seems like this would be a relatively common problem for many Redis streams use cases.
If you have persistence (either RDB or AOF) turned on, your stream messages will be persisted, hence there's no need for backpressure.
And if you use replicas, you have another level of redundancy.
Backpressure is needed only when Redis does not have enough memory (or enough network bandwidth to the replicas) to hold the messages.
And, honestly, I have never seen this scenario.
Why would you want to? Unless you run out of memory it is not an issue, and every consumer, slow or fast, can read at its leisure.
Note that we are not using consumer groups, just publishing via XADD, and readers read via XRANGE from a position stored in a key, which is closer to the Kafka model. We use one stream per partition.
The producer can check every 1K messages (via XLEN) whether the stream is getting too big and slow down if that is an issue, and you can always throw hardware at it: 5 nodes with 20 GB each is pretty easy, with the streams spread across the cluster. I don't understand the problem; this should be easy, so I'm probably missing something.
There is also an XADD variant that trims the stream (MAXLEN) to ensure you don't overfill it, but hitting that would require something pretty extreme. For us this works out to about 2 days' worth of data for frequent streams that send the latest state, and 9 months for others.
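A rough sketch of what that producer-side check could look like, assuming a Jedis 3.x-style client; the key "mystream", the cap and the back-off delay are made-up numbers.

// Hedged sketch: XADD with MAXLEN as a safety net, plus a periodic XLEN check
// to slow the producer down ("poor man's backpressure").
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntryID;

import java.util.Map;

public class ProducerSketch {
    private static final long MAX_LEN = 1_000_000L; // soft cap on stream size
    private long sent = 0;

    public void publish(Jedis jedis, Map<String, String> message) throws InterruptedException {
        // XADD with MAXLEN (~) as a hard safety net: Redis itself trims old
        // entries so the stream can never grow without bound.
        jedis.xadd("mystream", StreamEntryID.NEW_ENTRY, message, MAX_LEN, true);

        // Every 1K messages, check the stream length and back off if the
        // consumers are falling behind.
        if (++sent % 1_000 == 0 && jedis.xlen("mystream") > MAX_LEN * 0.9) {
            Thread.sleep(100);
        }
    }
}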
Another thing: don't store large messages in the stream; use a blob or a separate key/value store.
Redis can be used for real-time pub/sub just like Kafka.
I am confused which one to use when.
Any use case would be a great help.
Redis pub/sub is mostly a fire-and-forget system where all the messages you produce are delivered to all connected consumers at once, and the data is kept nowhere. With Redis you are limited by memory. Also, the number of producers and consumers can affect performance.
Kafka, on the other hand, is a high-throughput, distributed log that can be used as a queue. Any number of producers can publish, and consumers can consume whenever they want. It also provides persistence for the messages sent through the queue.
Final Take:
Use Redis:
If you want a fire and forget kind of system, where all the messages that you produce are delivered instantly to consumers.
If speed is your main concern.
If you can live with data loss.
If you don't want your system to hold on to messages after they have been sent.
If the amount of data to be handled is not huge.
Use Kafka:
If you want reliability.
If you want your system to keep a copy of messages that have been sent, even after consumption.
If you can't live with data loss.
If speed is not a big concern.
If the data size is huge.
Redis 5.0+ provides the Stream data structure. It can be considered a log data structure with delivery guarantees. It offers a set of blocking operations allowing consumers to wait for new data added to a stream by producers, and in addition a concept called consumer groups.
Basically, the Stream structure provides the same capabilities as Kafka.
Here is the documentation https://redis.io/topics/streams-intro
The two most popular Java clients that support this feature are Redisson and Jedis.
Redisson provides the ReliableTopic object if reliability of delivery is required: https://github.com/redisson/redisson/wiki/6.-distributed-objects/#613-reliable-topic
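For illustration, a minimal producer/consumer sketch with Jedis; exact method signatures vary between Jedis versions (this assumes the 3.x-style API), and the key "orders" is made up.

// Hedged sketch: produce with XADD and block-read with XREAD via Jedis.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;
import redis.clients.jedis.StreamEntryID;

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class StreamsQuickstart {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Producer side: append a message; Redis assigns the entry ID.
            jedis.xadd("orders", StreamEntryID.NEW_ENTRY,
                    Collections.singletonMap("item", "book"));

            // Consumer side: wait up to 5 seconds for up to 10 entries newer
            // than the given ID (0-0 here, i.e. read from the beginning).
            List<Map.Entry<String, List<StreamEntry>>> result = jedis.xread(
                    10, 5_000L,
                    new SimpleImmutableEntry<>("orders", new StreamEntryID()));

            if (result != null) {
                for (Map.Entry<String, List<StreamEntry>> perStream : result) {
                    for (StreamEntry entry : perStream.getValue()) {
                        System.out.println(entry.getID() + " " + entry.getFields());
                    }
                }
            }
        }
    }
}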
I use RabbitMQ as an integration/distribution system, a kind of ETL: pollers query tables from source databases and publish the results on RabbitMQ, and the results are consumed according to their source (one queue per source application) to be saved in another form.
I'm asking whether it would be better to split queues per query AND source (application); currently it's done only by source, and messages are "post-routed" using a custom payload header.
The only advantage I see, which could also be a defect, is that there would be as many consumers as there are queries. But that could become a problem...
Thanks.
I would say that one queue per query could get out of hand quickly in terms of managing and monitoring them.
I find it works well to have one queue per destination, and to then use the routing key to specify how things should be handled within your consumer code (i.e. for the type). That way, you get RabbitMQ to do the multiplexing for you, and the consumer code can run independently for each destination.
There are, of course, always many different ways, but I find that this tends to work well for ETL applications. If you have tons of destinations, perhaps you would want to move towards adding the destination to the routing key as well. If you don't have any ordering requirements (i.e. due to RDBMS foreign key constraints), you could also consider adding multiple consumers to the same queue to improve throughput. (For cases where you do have such ordering requirements, that's where the one queue per destination and the multiplexing it provides proves especially useful.)
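A small sketch of that layout with the RabbitMQ Java client; the exchange, queue and routing-key names are made up for illustration.

// Hedged sketch: one topic exchange, one durable queue per destination,
// routing keys carrying the source/query type.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class EtlTopologySketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection()) {
            Channel channel = conn.createChannel();

            // One topic exchange for all ETL traffic.
            channel.exchangeDeclare("etl", "topic", true);

            // One durable queue per destination application.
            channel.queueDeclare("dest.warehouse", true, false, false, null);

            // Bind on a routing-key pattern; the key carries the source and
            // query type, so the consumer for this destination can branch on
            // it (RabbitMQ does the multiplexing).
            channel.queueBind("dest.warehouse", "etl", "appA.*");

            // A poller publishes a result for query "orders" from source appA.
            channel.basicPublish("etl", "appA.orders", null,
                    "{\"rows\": 42}".getBytes(StandardCharsets.UTF_8));
        }
    }
}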
I have a problem with a topology. I'll try to explain the workflow...
I have a source that emits ~500k tuples every 2 minutes; these tuples must be read by a spout and processed exactly once as a single unit (I think a batch in Trident).
After that, a bolt/function/whatever else must append a timestamp and save the tuples into Redis.
I tried to implement a Trident topology with a Function that saves all the tuples into Redis using a Jedis object (a Redis library for Java) inside this Function class, but when I deploy it I receive a NotSerializableException on this object.
My question is: how can I implement a Function that writes this batch of tuples to Redis? Reading on the web, I could not find any example that writes to Redis from a Function, or any example using the State object in Trident (which I probably have to use...).
My simple topology:
TridentTopology topology = new TridentTopology();
topology.newStream("myStream", new mySpout()).each(new Fields("field1", "field2"), new myFunction("redis_ip", "6379"));
Thanks in advance
(replying about state in general since the specific issue related to Redis seems solved in other comments)
The concepts of DB updates in Storm becomes clearer when we keep in mind that Storm reads from distributed (or "partitioned") data sources (through Storm "spouts"), processes streams of data on many nodes in parallel, optionally perform calculations on those streams of data (called "aggregations") and saves the results to distributed data stores (called "states"). Aggregation is a very broad term that just means "computing stuff": for example computing the minimum value over a stream is seen in Storm as an aggregation of the previously known minimum value with the new values currently processed in some node of the cluster.
With the concepts of aggregations and partitions in mind, we can look at the two main primitives in Storm that allow you to save something in a state: partitionPersist and persistentAggregate. The first one runs at the level of each cluster node, without coordination with the other partitions, and feels a bit like talking to the DB through a DAO. The second one involves "repartitioning" the tuples (i.e. re-distributing them across the cluster, typically along some groupBy logic) and doing some calculation (an "aggregate") before reading/saving something to the DB; it feels a bit like talking to a HashMap rather than a DB (Storm calls the DB a "MapState" in that case, or a "Snapshot" if there's only one key in the map).
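For illustration, here is a sketch of persistentAggregate along the lines of the canonical Trident word-count example; MemoryMapState stands in for a real (e.g. Redis-backed) StateFactory, the spout is assumed to emit a single "word" field, and the package names assume a recent Apache Storm (older releases use the storm.trident / backtype.storm prefixes).

// Hedged sketch: grouped persistentAggregate writing counts to a MapState.
import org.apache.storm.trident.TridentState;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.spout.IBatchSpout;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.tuple.Fields;

public class PersistentAggregateSketch {
    // Assumes wordSpout emits tuples with a single "word" field.
    public static TridentState wordCounts(TridentTopology topology, IBatchSpout wordSpout) {
        return topology.newStream("words", wordSpout)
                // repartition so that each node owns a disjoint set of keys
                .groupBy(new Fields("word"))
                // read-aggregate-write against the state (the "MapState")
                .persistentAggregate(new MemoryMapState.Factory(),
                        new Count(), new Fields("count"));
    }
}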
One more thing to keep in mind is that the exactly-once semantics of Storm are not achieved by processing each tuple exactly once: that would be too brittle, since there are potentially several read/write operations per tuple defined in our topology, we want to avoid 2-phase commits for scalability reasons, and at large scale network partitions become more likely. Rather, Storm will typically keep replaying the tuples until it is sure they have been completely and successfully processed at least once. The important relationship of this to state updates is that Storm gives us a primitive (OpaqueMap) that allows idempotent state updates, so that those replays do not corrupt previously stored data. For example, if we are summing the numbers [1,2,3,4,5], the result saved in the DB will always be 15, even if some of them are replayed and processed in the "sum" operation several times due to a transient failure. OpaqueMap has a slight impact on the format used to save data in the DB. Note that this replay and opaque logic is only present if we tell Storm to behave that way, but we usually do.
If you're interested in reading more, I posted 2 blog articles here on the subject.
http://svendvanderveken.wordpress.com/2013/07/30/scalable-real-time-state-update-with-storm/
http://svendvanderveken.wordpress.com/2014/02/05/error-handling-in-storm-trident-topologies/
One last thing: as hinted by the replay behavior above, Storm is a very asynchronous mechanism by nature: we typically have some data producer that posts events to a queueing system (e.g. Kafka or 0MQ) and Storm reads from there. As a result, assigning a timestamp from within Storm as suggested in the question may or may not have the desired effect: this timestamp will reflect the "latest successful processing time", not the data ingestion time, and of course it will not be identical in case of replayed tuples.
Have you tried trident-state for Redis? There is code on GitHub that does it already:
https://github.com/kstyrc/trident-redis.
Let me know if this answers your question or not.
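Independently of trident-redis, the NotSerializableException from the question typically comes from creating the Jedis client in the Function's constructor, so that it gets serialized together with the topology. A hedged sketch of the usual workaround is to mark the client transient and create it lazily in prepare(), which runs on the worker after deserialization; the key scheme below is hypothetical, and package/method signatures assume a recent Apache Storm and a Jedis client with close(), so adjust for your versions.

// Hedged sketch: lazily create the Jedis connection on the worker.
import java.util.Map;
import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.operation.TridentOperationContext;
import org.apache.storm.trident.tuple.TridentTuple;
import redis.clients.jedis.Jedis;

public class RedisWriteFunction extends BaseFunction {
    private final String host;
    private final int port;
    private transient Jedis jedis; // not serialized with the topology

    public RedisWriteFunction(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
        // Runs on the worker after deserialization, so Jedis itself never
        // needs to be serializable.
        jedis = new Jedis(host, port);
    }

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        // Hypothetical key scheme: one hash per tuple, keyed by the first field.
        jedis.hset("tuple:" + tuple.getString(0), "field2", tuple.getString(1));
        // Nothing emitted: this acts as a terminal write, nothing is downstream.
    }

    @Override
    public void cleanup() {
        if (jedis != null) {
            jedis.close();
        }
    }
}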