Problem:
I am getting junk values like "OK" back from Redis GET calls.
The issue reproduces frequently over a particular period of time, irrespective of which keys I try to fetch with the GET command.
I'm using:
Redis version 2.8
Jedis client 2.5.1 to connect to Redis
Please suggest a solution to this issue.
The problem is outlined on this page. From the article:
I learned a hard lesson when enabling Redis transactions in the Spring RedisTemplate class (redisTemplate.setEnableTransactionSupport(true)): Redis started returning junk data after running for a few days, causing serious data corruption. A similar case was reported on StackOverflow.
By running a MONITOR command, my team discovered that after a Redis operation or RedisCallback, Spring doesn't close the Redis connection automatically, as it should. Reusing an unclosed connection may return junk data from an unexpected key in Redis. Interestingly, this issue doesn't show up when transaction support is set to false in RedisTemplate.
We discovered that we could make Spring close Redis connections automatically by configuring a PlatformTransactionManager (such as DataSourceTransactionManager) in the Spring context, then using the @Transactional annotation to declare the scope of Redis transactions.
Based on this experience, we believe it's good practice to configure two separate RedisTemplates in the Spring context: one with transaction support set to false, used for most Redis operations; the other with transaction support enabled, applied only to Redis transactions. Of course, PlatformTransactionManager and @Transactional must be declared to prevent junk values from being returned.
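For illustration, a minimal sketch of that two-template setup, assuming Spring Data Redis with a RedisConnectionFactory and a DataSource already defined elsewhere (bean and class names here are made up):

    import javax.sql.DataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.RedisConnectionFactory;
    import org.springframework.data.redis.core.StringRedisTemplate;
    import org.springframework.jdbc.datasource.DataSourceTransactionManager;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.annotation.EnableTransactionManagement;

    @Configuration
    @EnableTransactionManagement
    public class RedisConfig {

        // Template for most Redis operations; transaction support stays off (the default).
        @Bean
        public StringRedisTemplate redisTemplate(RedisConnectionFactory factory) {
            return new StringRedisTemplate(factory);
        }

        // Template reserved for Redis transactions, used only inside @Transactional methods.
        @Bean
        public StringRedisTemplate transactionalRedisTemplate(RedisConnectionFactory factory) {
            StringRedisTemplate template = new StringRedisTemplate(factory);
            template.setEnableTransactionSupport(true);
            return template;
        }

        // The transaction manager that lets Spring release the Redis connection
        // when the @Transactional scope completes.
        @Bean
        public PlatformTransactionManager transactionManager(DataSource dataSource) {
            return new DataSourceTransactionManager(dataSource);
        }
    }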
Moreover, we learned the downside of mixing a Redis transaction with a relational database transaction, in this case JDBC. Mixed transactions do not behave as you would expect.
Related
Redisson provides support for locking, backed by Redis. It also provides an implementation for working with the Spring Cache framework. But from what I saw, locking is not invoked by default when updating a key in a cache through the Spring Cache framework; Redisson has separate APIs for locking a particular key. Is that correct?
Also, the locking APIs seem to take only a key as input, so I am not clear how locking works. For locking, I would assume you need both the cache name and the key.
I am new to Redis, so any help shedding some light on this is really appreciated. Thanks.
Firstly, locking in Redisson is implemented on top of Redis, but it is not only used for Redis updates.
For example if you want to implement an atomic operation like this:
Get key value from Redis
Calculate a new value based on some logic
Save the new value to Redis and MySQL
You can use a Redisson lock to make the whole operation atomic.
Secondly, in Redis the SET/update commands are atomic on their own, so you don't need to lock the key if you are only updating its value.
As for the locking API: Redisson implements a lock as a Redis key/value, so you only need to provide the lock key, which generally encodes a resource type and resource ID (like "lock:user:31352").
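For illustration, a minimal sketch of that pattern using Redisson's RLock; the lock key, the timeouts, and the update steps are placeholders:

    import java.util.concurrent.TimeUnit;
    import org.redisson.Redisson;
    import org.redisson.api.RLock;
    import org.redisson.api.RedissonClient;
    import org.redisson.config.Config;

    public class LockExample {
        public static void main(String[] args) throws InterruptedException {
            Config config = new Config();
            config.useSingleServer().setAddress("redis://127.0.0.1:6379");
            RedissonClient redisson = Redisson.create(config);

            // The lock is addressed by its key alone; no cache name is involved.
            RLock lock = redisson.getLock("lock:user:31352");
            // Wait up to 5 seconds for the lock; auto-release after 30 seconds.
            if (lock.tryLock(5, 30, TimeUnit.SECONDS)) {
                try {
                    // 1. Get the key's value from Redis
                    // 2. Calculate a new value based on some logic
                    // 3. Save the new value to Redis and MySQL
                } finally {
                    lock.unlock();
                }
            }
            redisson.shutdown();
        }
    }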
I am running Redis in Sentinel mode. It has happened many times that I write data to Redis, but when reading the same key back I don't get the expected value.
I am wondering if the following is possible: the write goes to the master, while the read goes to a slave; since replication in Redis is asynchronous, not all slaves have been updated yet, and hence I don't get the updated/valid value.
I am using the Redisson client and three servers in the Sentinel configuration.
Yes, that can happen: replication is asynchronous, so a read served by a slave may return a stale value. To overcome this you may choose from the following options:
set the readMode config parameter to MASTER
use an RBatch object with a syncSlaves setting defined: BatchOptions.defaults().syncSlaves(2, 10, TimeUnit.SECONDS)
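A sketch of both options, assuming Redisson 3.x; the master name, sentinel addresses, and key are placeholders:

    import java.util.concurrent.TimeUnit;
    import org.redisson.Redisson;
    import org.redisson.api.BatchOptions;
    import org.redisson.api.RBatch;
    import org.redisson.api.RedissonClient;
    import org.redisson.config.Config;
    import org.redisson.config.ReadMode;

    public class SyncReadExample {
        public static void main(String[] args) {
            Config config = new Config();
            config.useSentinelServers()
                  .setMasterName("mymaster")
                  .addSentinelAddress("redis://sentinel1:26379",
                                      "redis://sentinel2:26379",
                                      "redis://sentinel3:26379")
                  .setReadMode(ReadMode.MASTER); // option 1: route all reads to the master
            RedissonClient redisson = Redisson.create(config);

            // Option 2: wait until at least 2 slaves acknowledge each write,
            // timing out after 10 seconds.
            BatchOptions options = BatchOptions.defaults()
                    .syncSlaves(2, 10, TimeUnit.SECONDS);
            RBatch batch = redisson.createBatch(options);
            batch.getBucket("key").setAsync("value");
            batch.execute();

            redisson.shutdown();
        }
    }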
Two issues
Do Lua scripts really solve all the cases Redis transactions cover?
What are best practices for asynchronous transactions from one client?
Let me explain. First issue:
Redis transactions are limited: there is no way to unwatch specific keys, all keys are unwatched upon EXEC, and we are limited to a single ongoing transaction per client connection.
I've seen threads where many Redis users claim that Lua scripts are all they need. Even the official Redis docs state they may remove transactions in favour of Lua scripts. However, there are cases where this is insufficient, such as the most standard case of all: using Redis as a cache.
Let's say we want to cache some data from a persistent data store, in redis. Here's a quick process:
Check cache -> miss
Load data from database
Store in redis
However, what if, between step 2 (loading data) and step 3 (storing in Redis), the data is updated by another client?
The data stored in Redis would be stale. So... we use a Redis transaction, right? We WATCH the key before loading from the DB, and if the key is updated somewhere else before storage, the store fails. Great! However, within an atomic Lua script we cannot load data from an external database, so Lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
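To make the race concrete, here is a sketch of that WATCH-based cache fill, in Java with Jedis for illustration; loadFromDatabase is a hypothetical stand-in for the persistent store:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Transaction;

    public class CacheFill {
        static String loadFromDatabase(String key) {
            return "value-from-db"; // placeholder for a real database query
        }

        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                String key = "cache:item:42";
                String cached = jedis.get(key);           // 1. check cache
                if (cached == null) {                     // miss
                    jedis.watch(key);                     // watch before loading
                    String fresh = loadFromDatabase(key); // 2. load from database
                    Transaction tx = jedis.multi();
                    tx.setex(key, 300, fresh);            // 3. store with a TTL
                    // exec() returns null if the watched key was modified in the
                    // meantime, i.e. the stale store was aborted.
                    if (tx.exec() == null) {
                        System.out.println("key changed concurrently; cache fill aborted");
                    }
                }
            }
        }
    }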
Moving on to the 2nd issue (asynchronous transactions)
Let's say we have a socket.io cluster which processes various messages and requests for a game, for high-speed communication between server and client. This cluster is written in node.js with appropriate use of promises and asynchronous concepts.
Say two requests hit a server in our cluster, each requiring data to be loaded and cached in Redis. Using our transaction from above, multiple keys could be watched, and multiple MULTI->EXEC transactions would run, overlapping, on one Redis connection. Once the first EXEC runs, all watched keys are unwatched, even if the other transaction is still in progress. This may allow the second transaction to succeed when it should have failed.
These overlaps could happen in totally separate requests hitting the same server, or even within the same request if multiple data types need to be loaded at the same time.
What is best practice here? Do we need to create a separate Redis connection for every individual transaction? It seems we would lose a lot of speed, and we would see many connections created from just one server if that is the case.
As an alternative we could use Redlock / mutex locking instead of Redis transactions, but that is slow by comparison.
Any help appreciated!
I received the following after my query was escalated to Redis engineers:
Hi Jeremy,
Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.
Using LUA is not a good fit for this problem.
Best Regards,
The Redis Labs Team
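In Java/Jedis terms, the endorsed approach looks roughly like this: borrow a dedicated connection from a pool for each optimistic transaction, so concurrent WATCHes never share a client (the compareAndSet helper is illustrative):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.Transaction;

    public class PerTransactionConnection {
        private static final JedisPool POOL = new JedisPool("localhost", 6379);

        // Each call borrows its own connection, so one transaction's EXEC can
        // never unwatch another transaction's keys.
        static boolean compareAndSet(String key, String expected, String newValue) {
            try (Jedis jedis = POOL.getResource()) {
                jedis.watch(key);
                if (!expected.equals(jedis.get(key))) {
                    jedis.unwatch();
                    return false;
                }
                Transaction tx = jedis.multi();
                tx.set(key, newValue);
                return tx.exec() != null; // null means a concurrent write won
            } // the connection returns to the pool here rather than being destroyed
        }
    }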
I am currently implementing a server system which has both an SQL database and a Redis datastore. The data written to Redis heavily depends on the SQL data (cache, objects representing logic entities defined by a number of SQL models and their relationships).
While looking for an error-handling methodology to wrap client requests, something similar to SQL's transaction commit & rollback (Redis doesn't support rollbacks), I came up with a mechanism which can serve this purpose, and I'd appreciate input on it.
Basically, I intend to wrap (with before/after middleware) every client request with an SQL transaction and a Redis MULTI command (which queues commands until EXEC or DISCARD is invoked), and allow both transactions to commit only if the request was processed successfully.
The problem is that once you start a Redis MULTI command, you cannot perform any reads/writes and actually use their values while processing a request. I reduced the problem to reads only, since depending on just-written values can usually be designed away.
My (simple) solution: split the Redis connection in two - a writer and a reader. The writer connection is the one initialized with the MULTI command and executed/discarded at the end. All writing is performed through it, while reading is done through the reader (and executes instantly).
The downside: as opposed to SQL, you can't rely on values written within the same request (transaction). Then again, that is usually quite easy to overcome.
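A minimal sketch of the writer/reader split with Jedis (key names and the request logic are illustrative):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Transaction;

    public class SplitConnections {
        public static void main(String[] args) {
            try (Jedis writer = new Jedis("localhost", 6379);
                 Jedis reader = new Jedis("localhost", 6379)) {

                Transaction tx = writer.multi(); // writer: commands queue until exec/discard
                boolean requestSucceeded = true;
                try {
                    String current = reader.get("counter"); // reader: executes instantly
                    int next = (current == null ? 0 : Integer.parseInt(current)) + 1;
                    tx.set("counter", String.valueOf(next)); // queued write
                    // ... process the rest of the request, alongside the SQL transaction ...
                } catch (Exception e) {
                    requestSucceeded = false;
                }
                if (requestSucceeded) {
                    tx.exec();    // commit the queued Redis writes
                } else {
                    tx.discard(); // "roll back" by discarding the queue
                }
            }
        }
    }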
We're having a severe config/product issue with our installation. We've been experiencing concurrency-related errors that we've been blaming on our Jedis usage, but it now seems it might be a product/config issue.
This is a single Redis installation with over 4M keys. Whenever we run a long-running command from redis-cli, such as KEYS *, our client code (Jedis-based) starts to throw errors, such as trying to cast a String into a binary value (the typical symptom of concurrent use of a single Jedis connection). The worst part is that sometimes it seems to return the wrong keys. We were using a Jedis instance in each actor instance, so that shouldn't be an issue, but we switched to JedisPool nevertheless. The problem remained (we are using Jedis 2.6.2).
The main finding came when testing from different redis-cli sessions. We ran KEYS *, which keeps running for a long time, and then a GET command, which returned immediately. It was our understanding that KEYS * should block everyone, but the GET command keeps working. The same happens with a DEBUG SLEEP command.
Is this related to a config setting, is it something that shouldn't happen, or is the KEYS command not actually blocking and my problem lies elsewhere?
Redis.io documentation for KEYS clearly states that KEYS is a debug command and should not be used in production:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.
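For reference, a minimal sketch of the SCAN-based alternative the docs recommend, using Jedis (the match pattern and count hint are illustrative):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.ScanParams;
    import redis.clients.jedis.ScanResult;

    public class ScanExample {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                ScanParams params = new ScanParams().match("user:*").count(500);
                String cursor = ScanParams.SCAN_POINTER_START; // "0"
                do {
                    // Each SCAN call returns a small page of keys, so the server
                    // is never blocked the way a full KEYS * blocks it.
                    ScanResult<String> page = jedis.scan(cursor, params);
                    for (String key : page.getResult()) {
                        System.out.println(key);
                    }
                    cursor = page.getCursor();
                } while (!"0".equals(cursor)); // cursor returns to "0" when done
            }
        }
    }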