Two issues
Do lua scripts really solve all cases for redis transactions?
What are best practices for asynchronous transactions from one client?
Let me explain, starting with the first issue.
Redis transactions are limited: you cannot unwatch specific keys, all keys are unwatched upon EXEC, and a given client connection is limited to a single ongoing transaction.
I've seen threads where many Redis users claim that Lua scripts are all they need. Even the official Redis docs state that transactions may eventually be removed in favour of Lua scripts. However, there are cases where this is insufficient, including the most standard case: using Redis as a cache.
Let's say we want to cache some data from a persistent data store, in redis. Here's a quick process:
Check cache -> miss
Load data from database
Store in redis
However, what if the data is updated by another client between step 2 (loading the data) and step 3 (storing it in Redis)?
The data stored in Redis would be stale. So... we use a Redis transaction, right? We WATCH the key before loading from the database, and if the key is updated somewhere else before storage, the storage fails. Great! However, an atomic Lua script cannot load data from an external database, so Lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
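To make the intended flow concrete, here is a rough sketch in Node.js of what we are doing today, using ioredis; the loadFromDatabase helper and the key/TTL values are just stand-ins for our real code:

import Redis from "ioredis";

const redis = new Redis(); // assumes a local Redis instance

// Hypothetical stand-in for the real (slow) database read.
async function loadFromDatabase(id: string): Promise<string> {
  return JSON.stringify({ id, loadedAt: Date.now() });
}

async function getWithCacheFill(id: string): Promise<string> {
  const key = `cache:${id}`;

  const cached = await redis.get(key);
  if (cached !== null) return cached; // cache hit

  // Watch the key before the database read, so a concurrent update aborts our write.
  await redis.watch(key);
  const fresh = await loadFromDatabase(id);

  // exec() resolves to null if a watched key changed; in that case we skip the
  // cache write, but still return the fresh value to the caller.
  const result = await redis.multi().set(key, fresh, "EX", 60).exec();
  if (result === null) {
    console.log("watched key changed before EXEC; cache write skipped");
  }
  return fresh;
}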
Moving on to the 2nd issue (asynchronous transactions)
Let's say we have a socket.io cluster which processes various messages and requests for a game, providing high-speed communication between server and client. This cluster is written in Node.js, with appropriate use of promises and other asynchronous concepts.
Say two requests hit a server in our cluster, each requiring data to be loaded and cached in Redis. Using the transaction from above, multiple keys could be WATCHed, and multiple MULTI->EXEC transactions would overlap on a single Redis connection. Once the first EXEC runs, all watched keys are unwatched, even if the other transaction is still in progress. This may allow the second transaction to succeed when it should have failed.
These overlaps could happen in totally separate requests happening on the same server, or even sometimes in the same request if multiple data types need to load at the same time.
What is best practice here? Do we need to create a separate Redis connection for every individual transaction? It seems like we would lose a lot of speed, and we would see many connections created from just one server if this is the case.
As an alternative we could use redlock / mutex locking instead of redis transactions, but this is slow by comparison.
Any help appreciated!
I received the following after my query was escalated to Redis engineers:
Hi Jeremy,
Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.
Using LUA is not a good fit for this problem.
Best Regards,
The Redis Labs Team
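For reference, a minimal sketch of the approach they describe (one backend connection per optimistic transaction), assuming ioredis; duplicate() simply opens another connection with the same options, and the key names are made up:

import Redis from "ioredis";

const base = new Redis();

// One connection per optimistic transaction, so one EXEC can never clear the
// WATCH state belonging to another, concurrent transaction.
async function withIsolatedTransaction<T>(fn: (conn: Redis) => Promise<T>): Promise<T> {
  const conn = base.duplicate(); // fresh connection with the same options
  try {
    return await fn(conn);
  } finally {
    conn.disconnect();
  }
}

// Usage sketch: two overlapping cache fills no longer interfere with each other.
async function demo() {
  await Promise.all([
    withIsolatedTransaction(async (conn) => {
      await conn.watch("cache:player:1");
      // ... load from the database, then: await conn.multi().set(...).exec()
    }),
    withIsolatedTransaction(async (conn) => {
      await conn.watch("cache:player:2");
      // ... load from the database, then: await conn.multi().set(...).exec()
    }),
  ]);
}

demo().catch(console.error);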
Related
I have three microservices that need to communicate with each other. Microservice-1 is in charge of the data and the database (it reads from and writes to it). I will add a Redis cache store for Microservice-1 to cache data there. I want to put a Redis slave in front of the other two microservices to reduce communication with the actual microservice when the data is already in the cache store. Since all updates to the data have to go through Microservice-1, and it will always update the cache, Redis replication will make sure the other two microservices get the updates too. Of course, if the data is not in the cache, they will call Microservice-1 for the data, which will update the cache.
Am I missing something with this approach?
This will definitely work in the "sunny day" case.
But sometimes there are storms, and in storms there's a chance of losing cache coherency (i.e. the DB and Redis disagree on the data).
For example, let's say that you have Microservice-1 update the DB and then update Redis. What happens if there's a crash between updating the DB and updating Redis?
On the other hand, what if you reverse the ordering (update Redis and then the DB)? Now Redis could be updated and not the DB.
Neither of these is insurmountable, but absent a means of having a transaction which ensures that either both or neither of Redis and the DB are updated, there will always be a time window where the change is in one but not the other. In that situation, it's probably worth embracing eventual consistency (e.g. periodically scan the DB and update Redis with recently updated records).
As an elaboration on that, a Command Query Responsibility Segregation with Event Sourcing (CQRS/ES) approach may prove useful: Microservice-1 gets split into two services, one which takes commands (requests to update) and another which handles queries. Instead of updating a row in a DB, the command service now appends (in a typical DB, an INSERT) an event which describes what changed. The query service can subscribe to those events and update Redis. Other microservices can also subscribe to the stream of events and update their own views (which can be remixed in any way they want) of Microservice-1's state.
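One hedged way to sketch that split with Redis itself is to append events to a Redis Stream on the command side and have the query side tail it; the stream, field, and key names below are purely illustrative, and a real projector would also persist its read position:

import Redis from "ioredis";

const commands = new Redis();   // command side
const projector = new Redis();  // query-side projector (separate connection, since XREAD blocks)

// Command side: append an event describing the change instead of mutating state in place.
async function recordPriceChange(productId: string, price: number) {
  await commands.xadd("events:product", "*",
    "type", "PriceChanged", "productId", productId, "price", String(price));
}

// Query side: tail the stream and keep a Redis view (a hash per product) up to date.
async function projectEvents() {
  let lastId = "$"; // start with new events only
  for (;;) {
    const batch = (await projector.xread(
      "BLOCK", 0, "STREAMS", "events:product", lastId
    )) as [string, [string, string[]][]][] | null;
    if (!batch) continue;
    for (const [, entries] of batch) {
      for (const [id, fields] of entries) {
        lastId = id;
        // fields is a flat [key, value, key, value, ...] array
        const event: Record<string, string> = {};
        for (let i = 0; i < fields.length; i += 2) event[fields[i]] = fields[i + 1];
        await projector.hset(`product:${event.productId}`, "price", event.price);
      }
    }
  }
}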
I've found different ZooKeeper definitions across multiple resources. Maybe some of them are taken out of context, but please look at them:
A canonical example of Zookeeper usage is distributed-memory computation...
ZooKeeper is an open source Apache™ project that provides a centralized infrastructure and services that enable synchronization across a cluster.
Apache ZooKeeper is an open source file application program interface (API) that allows distributed processes in large systems to synchronize with each other so that all clients making requests receive consistent data.
I've worked with Redis and Hazelcast, so it would be easier for me to understand ZooKeeper by comparing it with them.
Could you please compare Zookeeper with in-memory-data-grids and Redis?
If it's distributed-memory computation, how does ZooKeeper differ from in-memory data grids?
If it's synchronization across a cluster, then how does it differ from all other in-memory storages? The same in-memory data grids also provide cluster-wide locks. Redis also has some kind of transactions.
If it's only about in-memory consistent data, then there are other alternatives. IMDGs let you achieve the same, don't they?
https://zookeeper.apache.org/doc/current/zookeeperOver.html
By default, ZooKeeper replicates all your data to every node and lets clients watch the data for changes. Changes are sent very quickly (within a bounded amount of time) to clients. You can also create "ephemeral nodes", which are deleted within a specified time if a client disconnects. ZooKeeper is highly optimized for reads, while writes are very slow (since they are generally replicated to every node, and watching clients notified, as soon as the write takes place). Finally, the maximum size of a "file" (znode) in ZooKeeper is 1MB, but typically they'll be single strings.
Taken together, this means that ZooKeeper is not meant to store much data, and definitely not to be a cache. Instead, it's for managing heartbeats/knowing what servers are online, storing/updating configuration, and possibly message passing (though if you have large numbers of messages or high throughput demands, something like RabbitMQ will be much better for that task).
Basically, ZooKeeper (and Curator, which is built on it) helps in handling the mechanics of clustering -- heartbeats, distributing updates/configuration, distributed locks, etc.
It's not really comparable to Redis, but for the specific questions...
It doesn't support any computation, and for most data sets it won't be able to store the data with reasonable performance.
It's replicated to all nodes in the cluster (there's nothing like Redis clustering where the data can be distributed). All messages are processed atomically in full and are sequenced, so there are no real transactions. It can be USED to implement cluster-wide locks for your services (it's very good at that, in fact), and there are a lot of locking primitives on the znodes themselves to control which nodes access them.
Sure, but ZooKeeper fills a niche. It's a tool for making distributed applications play nice with multiple instances, not for storing/sharing large amounts of data. Compared to using an IMDG for this purpose, ZooKeeper will be faster, manages heartbeats and synchronization in a predictable way (with a lot of APIs to make this part easy), and has a "push" paradigm instead of "pull", so nodes are notified very quickly of changes.
The quotation from the linked question...
A canonical example of Zookeeper usage is distributed-memory computation
... is, IMO, a bit misleading. You would use it to orchestrate the computation, not to provide the data. For example, let's say you had to process rows 1-100 of a table. You might put up 10 ZK nodes, with names like "1-10", "11-20", "21-30", etc. Client applications would be notified of this change automatically by ZK, and the first one would grab "1-10" and set an ephemeral node clients/192.168.77.66/processing/rows_1_10.
The next application would see this and go for the next group to process. The actual data to compute would be stored elsewhere (i.e. Redis, an SQL database, etc.). If a node failed partway through the computation, another node could see this (after 30-60 seconds) and pick the job up again.
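As a rough sketch of that claiming pattern, assuming the node-zookeeper-client package (which has no official TypeScript typings, hence the require) and using the example paths above:

// node-zookeeper-client is callback-based and has no bundled type definitions.
const zookeeper = require("node-zookeeper-client");

const client = zookeeper.createClient("localhost:2181");

client.once("connected", () => {
  const chunkPath = "/clients/192.168.77.66/processing/rows_1_10";
  // Ensure the parent znodes exist, then claim the chunk with an ephemeral node;
  // it disappears automatically if this worker's session dies, so another worker
  // can pick the chunk up again.
  client.mkdirp("/clients/192.168.77.66/processing", (mkdirErr: Error | null) => {
    if (mkdirErr) return console.error(mkdirErr);
    client.create(
      chunkPath,
      Buffer.from("claimed"),
      zookeeper.CreateMode.EPHEMERAL,
      (err: Error | null, createdPath: string) => {
        if (err) {
          console.log("chunk already claimed (or error):", err);
        } else {
          console.log("claimed", createdPath, "- now fetch rows 1-10 from the real data store");
        }
      }
    );
  });
});

client.connect();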
I'd say the canonical example of ZooKeeper is leader election, though. Let's say you have 3 nodes -- one is master and the other 2 are slaves. If the master goes down, a slave node must become the new leader. This type of thing is perfect for ZK.
Consistency Guarantees
ZooKeeper is a high performance, scalable service. Both reads and write operations are designed to be fast, though reads are faster than writes. The reason for this is that in the case of reads, ZooKeeper can serve older data, which in turn is due to ZooKeeper's consistency guarantees:
Sequential Consistency
Updates from a client will be applied in the order that they were sent.
Atomicity
Updates either succeed or fail -- there are no partial results.
Single System Image
A client will see the same view of the service regardless of the server that it connects to.
Reliability
Once an update has been applied, it will persist from that time forward until a client overwrites the update. This guarantee has two corollaries:
If a client gets a successful return code, the update will have been applied. On some failures (communication errors, timeouts, etc.) the client will not know if the update has been applied or not. We take steps to minimize the failures, but the guarantee is only present with successful return codes. (This is called the monotonicity condition in Paxos.)
Any updates that are seen by the client, through a read request or successful update, will never be rolled back when recovering from server failures.
Timeliness
The clients' view of the system is guaranteed to be up-to-date within a certain time bound (on the order of tens of seconds). Either system changes will be seen by a client within this bound, or the client will detect a service outage.
Good day!
Suppose we have a Redis master and several slaves. The master's goal is to store all the data, while the slaves are used for querying data for users. However, querying is a bit complex and some temporary data needs to be stored. I also want to cache the query results for a couple of minutes.
How should I configure replication to save temporary data and caches?
Redis slaves have optional support for accepting writes; however, you have to understand a few limitations of writable slaves before using them, since they have non-trivial issues.
Keys created on the slaves will not support expires. Actually, in recent versions of Redis they appear to work, but they are actually leaked instead of expired, until the next time you resynchronize the slave with the master from scratch or issue FLUSHALL or the like. There are deep reasons for this issue... it is currently not clear whether we'll deprecate writable slaves entirely, find a solution, or deny expires for writable slaves.
You may want, anyway, to use a different Redis numerical DB (SELECT command) in order to store your intermediate data. You may use a MULTI/.../MOVE/EXEC transaction to generate your intermediate results in the currently selected DB where the data belongs, and MOVE the keys off to some other DB, so it will be clear if keys are accumulating and you can FLUSHDB from time to time (a minimal sketch of this pattern is shown after this answer).
The keys you create on your slave are volatile; they may go away at any moment when the master resynchronizes with the slave. That does not look like an issue for you, since if the key is no longer there you can recompute it, but care should be taken.
If you promote this slave to a master, it will contain these additional keys.
So there are definitely things to keep in mind with this setup; however, it is doable in some way. You may also want to consider alternative strategies:
Lua scripts on the slave side, in order to filter your data inside Lua (often not as fast as native Redis commands).
Precomputation of data directly in the actual data set, in order to make your queries possible using only read-only commands.
MIGRATE, in order to move interesting keys from a slave to an instance (another master) designed specifically to perform post-computations.
It's hard to tell what the best strategy is without an in-depth analysis of the actual use case / problem, but I hope these general guidelines help.
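To make the MULTI/.../MOVE/EXEC idea from the second point above a bit more concrete, here is a minimal sketch assuming ioredis and a slave configured to accept writes (replica-read-only no); the key names and the target DB number are made up:

import Redis from "ioredis";

// Connects to the writable slave; DB 0 holds the replicated data set.
const slave = new Redis({ host: "slave-host", port: 6379, db: 0 });

// Build a temporary result in DB 0, then MOVE it to DB 1 in the same
// MULTI/EXEC block, so intermediates never linger among the replicated keys.
async function cacheQueryResult(queryId: string, members: string[]) {
  const tmpKey = `tmp:query:${queryId}`;
  await slave
    .multi()
    .del(tmpKey)
    .sadd(tmpKey, ...members)
    .move(tmpKey, 1) // park the result in DB 1
    .exec();
  // Per the answer above, do not rely on EXPIRE for these keys;
  // FLUSHDB the scratch DB from time to time instead.
}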
I'm using Redis as a session store in my app. Can I use the same instance (and db) of Redis for my job queue? If it's of any significance, it's hosted with redistogo.
It is perfectly fine to use the same redis for multiple operations.
We had a similar use case where we used Redis as a key value store as well as a job queue.
However, you may want to consider other aspects, like the performance requirements of your application. Redis can ideally handle around 70k operations per second, and if you think you may hit that benchmark at some point in the future, it's much better to split your operations across multiple Redis instances based on the kind of operations you perform. This will let you make decisions about availability and replication at a finer level, depending on the requirements. As a simple example, once your key space grows you may be able to flush your session-store Redis, or shard your keys using Redis Cluster, without affecting the job-queuing infrastructure.
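As a minimal sketch of that separation, assuming ioredis and made-up environment variable names: both clients can point at the same instance today and be repointed at separate instances later without touching the calling code.

import Redis from "ioredis";

// One client per concern; today both URLs may resolve to the same instance.
const sessionStore = new Redis(process.env.SESSION_REDIS_URL ?? "redis://localhost:6379");
const jobQueue = new Redis(process.env.QUEUE_REDIS_URL ?? "redis://localhost:6379");

export async function saveSession(sessionId: string, data: Record<string, string>) {
  await sessionStore.hset(`session:${sessionId}`, data);
  await sessionStore.expire(`session:${sessionId}`, 60 * 30); // 30-minute TTL
}

export async function enqueueJob(payload: object) {
  await jobQueue.rpush("jobs", JSON.stringify(payload));
}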
We have a big shopping and product-dealing system. We faced lots of problems with MySQL, so after some R&D we planned to use Redis, and we started integrating Redis into our system.
The following data, which previously hit the database directly, has now been moved to the Redis system:
User shopping cart details
Affiliate click tracking records
Product dealing user data
Other site stats
I am not only storing the data in the Redis system; I have also written cron jobs which move the Redis data into MySQL at intervals. This is the main point where I am facing issues.
I am looking for solutions to the points below:
Is there any other way to dump big data from Redis to MySQL?
If Redis fails, our data is stored in a file, so is it possible to store that data directly in the MySQL database?
Does Redis have any trigger system, like a queue system, that I can use to avoid the cron jobs?
Is there any other way to dump big data from Redis to MySQL?
Redis has the ability (using BGSAVE) to generate a dump of the data in a non-blocking and consistent way.
https://github.com/sripathikrishnan/redis-rdb-tools
You could use Sripathi Krishnan's well-known package to parse a redis dump file (RDB) in Python, and populate the MySQL instance offline. Or you can convert the Redis dump to JSON format, and write scripts in any language you want to populate MySQL.
This solution is only interesting if you want to copy the complete data of the Redis instance into MySQL.
Does Redis have any trigger system, like a queue system, that I can use to avoid the cron jobs?
Redis has no trigger concept, but nothing prevents you from posting events to Redis queues each time something must be copied to MySQL. For instance, instead of:
# Add an item to a user shopping cart
RPUSH user:<id>:cart <item>
you could execute:
# Add an item to a user shopping cart
MULTI
RPUSH user:<id>:cart <item>
RPUSH cart_to_mysql <id>:<item>
EXEC
The MULTI/EXEC block makes it atomic and consistent. Then you just have to write a little daemon waiting on items of the cart_to_mysql queue (using BLPOP commands). For each dequeued item, the daemon has to fetch the relevant data from Redis, and populate the MySQL instance.
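A minimal sketch of such a daemon, assuming ioredis and the mysql2 driver; the table and column names are illustrative only:

import Redis from "ioredis";
import mysql from "mysql2/promise";

// Dedicated connection for the blocking pop, so it doesn't stall other Redis traffic.
const queue = new Redis();
const data = new Redis();

async function runCartDaemon() {
  const db = await mysql.createConnection({
    host: "localhost",
    user: "app",
    password: "secret",
    database: "shop",
  });

  for (;;) {
    // Blocks until an item lands on the queue (timeout 0 = wait forever).
    const popped = await queue.blpop("cart_to_mysql", 0);
    if (!popped) continue;

    const [, message] = popped;              // e.g. "42:sku-123", as pushed above
    const sep = message.indexOf(":");
    const userId = message.slice(0, sep);
    const item = message.slice(sep + 1);

    // Fetch whatever extra state is needed from Redis, then persist to MySQL.
    const cart = await data.lrange(`user:${userId}:cart`, 0, -1);
    await db.execute(
      "INSERT INTO cart_events (user_id, item, cart_snapshot) VALUES (?, ?, ?)",
      [userId, item, JSON.stringify(cart)]
    );
  }
}

runCartDaemon().catch(console.error);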
If Redis fails, our data is stored in a file, so is it possible to store that data directly in the MySQL database?
I'm not sure I understand the question here. But if you use the above solution, the latency between Redis updates and MySQL updates will be quite limited. So if Redis fails, you will only lose the very last operations (contrary to a solution based on cron jobs). It is of course not possible to have 100% consistency in the propagation of data, though.