The documentation and Internet at large don't seem to provide any examples of setting values into a JSON object stored as a hash value, yet I know that the Redis collection types are often just hacks that allow their underlying storage to be managed similarly to scalar keys.
Can someone confirm whether RedisJSON supports this or doesn't, and, if it does, provide an example?
Thank you.
https://redis.io/commands/json.set
RedisJSON introduces a new JSON data type to Redis.
Therefore, JSON.SET can only act on keys of that JSON type, not on hashes or other built-in Redis types.
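For what it's worth, here is a minimal sketch of setting one field inside a JSON document, using node-redis (v4+) against a server with the RedisJSON module loaded; the key name user:1 is illustrative:

    import { createClient } from 'redis';

    const client = createClient();
    await client.connect();

    // Store a whole JSON document at the root path "$".
    await client.json.set('user:1', '$', { fieldA: 'val0', fieldB: 'val1' });

    // Update a single field in place; fieldB is left untouched.
    await client.json.set('user:1', '$.fieldA', 'newVal0');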
Hello! I've recently been diving into Cloudflare Workers, especially Durable Objects. I was able to make a simple request that puts a JS object at an assigned key. Let's say the key is key0, and the put object value is {"fieldA": "val0", "fieldB": "val1"}. In this case, how can I update the value of fieldA without removing fieldB? I've tried simply executing put("key0", {"fieldA": "newVal0"}), but it keeps removing {"fieldB": "val1"}.
Of course this is normal behaviour for JS objects, but I can't find anything like ~["key0"]["fieldA"] = "newVal0" in the docs (maybe I'm missing something). OTL
I hope this question reaches the gurus in the community! Thanks in advance [:
EDIT after the answers:
In theory, it would be wonderful if Cloudflare Durable Objects supported working with stored values just like normal JS objects. Such a Workers feature feels like a killer app among cloud DB services, since the average CPU time is quite fast and Cloudflare's pricing is very low compared to the big players. If it happens, I would be eager to migrate everything to the Cloudflare platform [:
Durable Objects' KV storage only supports get and put operations -- it doesn't have any sort of "update". So, you have two options:
get() the key, modify it, and then write the modified version back. This may sound inefficient, but keep in mind that commonly-accessed keys will likely be in the in-memory cache. In fact, this get/modify/put implemented in your JavaScript is probably about as fast as any built-in modification operation Durable Objects itself could offer. That said, you probably don't want to use this approach with large objects, since the whole object has to be written to disk again after every update.
Split your object across multiple keys. E.g. instead of having the key foo map to {"fieldA": "val0", "fieldB": "val1"}, you could have separate keys foo:fieldA and foo:fieldB. Note that you can fetch all the keys at once using storage.list({prefix: "foo:"}). This approach is not as convenient but allows each field to be written to disk separately. (Both options are sketched below.)
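A minimal sketch of both options, assuming the class-based Durable Object API (this.state.storage); the class name MyObject and the key and field names are illustrative:

    export class MyObject {
      constructor(state) {
        this.storage = state.storage;
      }

      async fetch(request) {
        // Option 1: get/modify/put -- read the whole object, change one
        // field, and write the whole object back.
        const obj = (await this.storage.get("key0")) ?? {};
        obj.fieldA = "newVal0"; // fieldB, if present, is preserved
        await this.storage.put("key0", obj);

        // Option 2: one key per field -- each field is then written
        // (and persisted) independently.
        await this.storage.put("foo:fieldA", "newVal0");
        const fields = await this.storage.list({ prefix: "foo:" }); // Map of key -> value

        return new Response(JSON.stringify(obj));
      }
    }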
get and put deal with whole JS objects, so if you want to change part of the object you should get it, update it using normal JS, and then put the entire object back.
I'd like to use sails-redis to track all kinds of events.
Therefore I need the ability to increment model attributes in a performant way.
I already found the Model.native function, which allows me to access the native Redis methods.
But since sails-redis is based on strings and not on hashes, I cannot use any native increment methods (as far as I know).
Is there any clean and performant way to solve this issue?
What sails-redis does is create a database with CRUD methods on top of the Redis key-value store, using strings.
Therefore, do not see sails-redis as a wrapper for Redis. Forget about that. It is just another database which has almost nothing to do with Redis itself.
Use the right tool for the right job!
If you have a job like event tracking where you want to use Redis because of its speed, use node-redis and implement it yourself. sails-redis is just not made for such things.
I simply created a new service and used node-redis (see the sketch below). There might be a more elegant way, but mine works and improved performance a whole lot.
https://github.com/balderdashy/sails-redis/issues/34
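For reference, a minimal sketch of such a service, assuming the classic callback API of node-redis; the service name EventCounter and the key scheme are made up:

    // api/services/EventCounter.js
    var redis = require('redis');
    var client = redis.createClient();

    module.exports = {
      // Atomically increment a plain counter for one event type.
      track: function (eventType, done) {
        client.incr('events:' + eventType, done);
      },

      // Atomically increment one field of a hash, e.g. per-day counts.
      trackDaily: function (eventType, day, done) {
        client.hincrby('events:' + eventType, day, 1, done);
      }
    };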
I need to find the best implementation to send a byte array to the key space of a Redis Server with Booksleeve.
I tried different implementations, like UTF-8 encoding, but I don't know which is the most memory-efficient on the Redis server (I will be working with millions of keys like this, so I really need the smallest possible in-memory key).
Has anyone already had this requirement?
In the current build, for simplicity, I've stuck to string keys; however, the code would handle binary fine - it uses the binary API. IIRC I received a patch in my inbox just this week that adds binary key support.
Since it seems to be in demand I'll look at that this week.
Edit: a week came and went; the reason being that I'm also doing some work on redis-cluster support, which is going to need some new interfaces anyway, because:
not all operations are supported
parallel (numbered) databases aren't supported
So basically my plan is to roll both pieces of work into the same branch, giving:
a new set of interfaces
which use a struct for the key parameter, with an implicit conversion operator from string and byte[], allowing either to be used interchangeably
with the redis-cluster and redis-server commands on separate APIs
and a new method on the old connection to get one of the new APIs on a per-DB basis, i.e. Database(3).Keys.Remove(key); or something like that
ETA is still imaginary, but I wanted to explain why I hadn't simply thrown in the existing patch - I think the advent of redis-cluster makes it a good time to revisit the entire API (but obviously in a way that doesn't break existing code).
So, I've come to a place where I want to segment the data I store in Redis into separate databases, as I sometimes need to use the KEYS command on one specific kind of data and want to separate it to make that faster.
If I segment into multiple databases, everything is still single threaded, and I still only get to use one core. If I just launch another instance of Redis on the same box, I get to use an extra core. On top of that, I can't name Redis databases, or give them any sort of more logical identifier. So, with all of that said, why/when would I ever want to use multiple Redis databases instead of just spinning up an extra instance of Redis for each extra database I want? And relatedly, why doesn't Redis try to utilize an extra core for each extra database I add? What's the advantage of being single threaded across databases?
You don't want to use multiple databases in a single Redis instance. As you noted, multiple instances let you take advantage of multiple cores. If you use database selection you will have to refactor when upgrading. Monitoring and managing multiple instances is neither difficult nor painful.
Indeed, you would get far better metrics on each DB by segregating based on instance. Each instance would have stats reflecting that segment of data, allowing for better tuning and more responsive and accurate monitoring. Use a recent version and separate your data by instance.
As Jonaton said, don't use the KEYS command. You'll find far better performance if you simply create a key index: whenever you add a key, add its name to a set (a sketch follows below). The KEYS command is not terribly useful once you scale up, since it takes significant time to return.
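A minimal sketch of that key-index pattern with node-redis (v4+); the key names are made up:

    import { createClient } from 'redis';

    const client = createClient();
    await client.connect();

    // When writing a key, also record its name in an index set;
    // MULTI keeps the two writes together.
    await client.multi()
      .set('page:home', '<html>...</html>')
      .sAdd('index:pages', 'page:home')
      .exec();

    // Later, enumerate the keys via the index instead of KEYS page:*
    const pageKeys = await client.sMembers('index:pages');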
Let the access pattern determine how to structure your data rather than store it the way you think works and then working around how to access and mince it later. You will see far better performance and find the data consuming code often is much cleaner and simpler.
Regarding single threading, consider that Redis is designed for speed and atomicity. Sure, actions modifying data in one DB need not wait on another DB, but what if that action is saving to the dump file, or processing transactions on slaves? At that point you start getting into the weeds of concurrency programming.
By using multiple instances you turn multithreading complexity into a simpler message-passing style system.
In principle, Redis databases on the same instance are no different than schemas in RDBMS database instances.
So, with all of that said, why/when would I ever want to use multiple Redis databases instead of just spinning up an extra instance of Redis for each extra database I want?
There's one clear advantage of using redis databases in the same redis instance, and that's management. If you spin up a separate instance for each application, and let's say you've got 3 apps, that's 3 separate redis instances, each of which will likely need a slave for HA in production, so that's 6 total instances. From a management standpoint, this gets messy real quick because you need to monitor all of them, do upgrades/patches, etc. If you don't plan on overloading redis with high I/O, a single instance with a slave is simpler and easier to manage provided it meets your SLA.
Even Salvatore Sanfilippo (creator of Redis) thinks it's a bad idea to use multiple DBs in Redis. See his comment here:
https://groups.google.com/d/topic/redis-db/vS5wX8X4Cjg/discussion
I understand how this can be useful, but unfortunately I consider Redis multiple databases my worst decision in Redis design at all... without any kind of real gain, it makes the internals a lot more complex. The reality is that databases don't scale well for a number of reasons, like active expire of keys and VM. If the DB selection could be performed with a string, I could see this feature being used as a scalable O(1) dictionary layer, but instead it is not. With DB numbers, with a default of a few DBs, we are communicating better what this feature is and how it can be used, I think. I hope that at some point we can drop the multiple DBs support at all, but I think it is probably too late, as there are a number of people relying on this feature for their work.
I don't really know any benefits of having multiple databases on a single instance. I guess it's useful if multiple services use the same database server(s), so you can avoid key collisions.
I would not recommend building around using the KEYS command, since it's O(n) and that doesn't scale well. What are you using it for that you can accomplish in another way? Maybe redis isn't the best match for you if functionality like KEYS is vital.
I think they mention the benefits of a single-threaded server in their FAQ, but the main thing is simplicity - you don't have to bother with concurrency in any real way. Every action is blocking, so no two things can alter the database at the same time. Ideally you would have one (or more) instances per core on each server, and use a consistent hashing algorithm (or a proxy) to divide the keys among them. Of course, you'll lose some functionality - piping will only work for things on the same server, sorts become harder, etc.
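As an illustration of the dividing idea only, here is a minimal sketch that shards keys across instances with a plain hash-modulo scheme (note this is simpler than true consistent hashing, which you would want in production so that adding a node does not remap most keys); the addresses are placeholders:

    const crypto = require('crypto');

    const instances = ['127.0.0.1:6379', '127.0.0.1:6380', '127.0.0.1:6381'];

    // Map a key deterministically onto one instance.
    function instanceFor(key) {
      const digest = crypto.createHash('md5').update(key).digest();
      return instances[digest.readUInt32BE(0) % instances.length];
    }

    console.log(instanceFor('user:42')); // same instance every time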
Redis databases can be used in the rare cases of deploying a new version of the application, where the new version requires working with different entities.
I know this question is years old, but there's another reason multiple databases may be useful.
If you use a "cloud Redis" from your favourite cloud provider, you probably have a minimum memory size and will pay for what you allocate. If however your dataset is smaller than that, then you'll be wasting a bit of the allocation, and so wasting a bit of money.
Using databases you could use the same Redis cloud-instance to provide service for (say) dev, UAT and production, or multiple instances of your application, or whatever else - thus using more of the allocated memory and so being a little more cost-effective.
A use-case I'm looking at has several instances of an application which use 200-300K each, yet the minimum allocation on my cloud provider is 1M. We can consolidate 10 instances onto a single Redis without really making a dent in any limits, and so save about 90% of the Redis hosting cost. I appreciate there are limitations and issues with this approach, but thought it worth mentioning.
I am using Redis to implement a blacklist of email addresses, and I have different TTL values for different levels of blacklisting, so having different DBs on the same instance helps me a lot.
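A minimal sketch of that setup with node-redis (v4+); the database numbers and TTLs are made up:

    import { createClient } from 'redis';

    const softList = createClient({ database: 1 }); // short-term blacklist
    const hardList = createClient({ database: 2 }); // long-term blacklist
    await softList.connect();
    await hardList.connect();

    // Same key shape, different databases, different TTLs.
    await softList.set('bad@example.com', '1', { EX: 3600 });             // 1 hour
    await hardList.set('worse@example.com', '1', { EX: 30 * 24 * 3600 }); // 30 days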
Using multiple databases in a single instance may be useful in the following scenario:
Different copies of the same database could be used for production, development, or testing with real-time data. People may use a replica to clone a Redis instance to achieve the same purpose. However, the former approach makes it easier for existing running programs to simply select the right database to switch to the intended mode.
Our motivation has not been mentioned above. We use multiple databases because we routinely need to delete a large set of a certain type of data, and FLUSHDB makes that easy. For example, we can clear all cached web pages, using FLUSHDB on database 0, without affecting all of our other use of Redis.
There is some discussion here, but I have not found definitive information about the performance of this vs. scan-and-delete (a minimal sketch of the FLUSHDB pattern follows the link):
https://github.com/StackExchange/StackExchange.Redis/issues/873
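A minimal sketch of the pattern with node-redis (v4+), assuming the cached pages live alone in one numbered database (1 here; the number is arbitrary):

    import { createClient } from 'redis';

    const cache = createClient({ database: 1 }); // cached pages only
    await cache.connect();

    await cache.set('page:home', '<html>...</html>');

    // Clear every cached page in one command; the other databases
    // on the instance are untouched.
    await cache.flushDb();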