I want to remove all keys matching SomePrefix* from my Redis. Is it possible?
I see only m_connectionMultiplexer.GetDatabase().KeyDelete() in the library, but no KeyMatch() or GetAllKeys().
Preferably without Lua scripting, such as the approach linked by Leonid Beschastny.
I want to run this on initialization of the web application, during the development stage of the application.
SE.Redis directly mimics the features exposed by the server. The server does not have a "delete keys matching this pattern" feature. It does have "scan for keys matching this pattern" (via GetServer().Keys(...)), and it has "delete this key / these keys" (via GetDatabase().KeyDelete(...)). You could iterate in batches over the matching keys, deleting each batch in turn. Because you can pipeline requests, you don't pay latency per batch.
As an alternative implementation: partition the data by numeric database (SELECT) or by server, and use FLUSHDB / FLUSHALL.
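The scan-in-batches-then-delete loop can be sketched as follows (Python for brevity; scan_iter and delete mirror the redis-py client's method names, and FakeRedis is a toy in-memory stand-in so the sketch runs without a server):

```python
from itertools import islice

class FakeRedis:
    """Minimal in-memory stand-in for a Redis client (illustration only)."""
    def __init__(self, keys):
        self.store = set(keys)
    def scan_iter(self, match):
        # SCAN with a glob pattern; here we only handle a trailing "*"
        prefix = match.rstrip("*")
        return iter([k for k in self.store if k.startswith(prefix)])
    def delete(self, *keys):
        hit = self.store & set(keys)
        self.store -= hit
        return len(hit)

def batched(iterable, size):
    """Yield lists of up to `size` items at a time."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def delete_by_prefix(client, prefix, batch_size=500):
    """SCAN for keys matching `prefix*`, then DEL them batch by batch."""
    deleted = 0
    for chunk in batched(client.scan_iter(match=prefix + "*"), batch_size):
        deleted += client.delete(*chunk)
    return deleted
```

Batching keeps each DEL command bounded in size; with a real pipelined client the per-batch round trips overlap, as the answer notes.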
Related
I have a database system which processes read operations against logical objects that are identified by OIDs (object identifiers). Example is 1.2.4.3.544. There are users identified by GUIDs. API gets results from DB and puts them into Redis. Key example is
SomePrefix_<oid>1.2.4.3.544</oid>_somedetails_<user>1f0c6cfe-ee9d-472c-b320-190df55f9527</user>
There are a couple of hundred unique OIDs in the system and about a hundred registered users. Keys may also lack the user part, for anonymous requests. The number of keys could vary from 80K to, I suppose, 500K.
I need to provide per-OID and per-user statistics to the UI and implement the possibility to delete per OID (and the same per user), so the task is split in two. The first version I implemented was unsuccessful: I used the .Keys("*") method in the C# app, which turns into a Redis SCAN over all keys, to pull every key into the app, run through them, collect distinct OIDs/users and show them in the UI. Extracting all keys into the app took too much time, so I switched to another approach: on every key save/delete I incremented/decremented counters stored in another Redis DB. The counters are simply integers with keys. This was almost okay, but then I got a requirement to set a TTL on every cached request, so I face a dilemma: how do I store/track my statistics so they stay up to date with the keys actually stored? Options I see:
A) Run a Lua script to scan all keys, collect the unique OIDs and unique users with counts, and return them to the app. For the deletion option, run SCAN inside the Lua script and DEL all keys matching the pattern. Pros: no need for separate stats tracking. Cons: every call has to scan all keys, and since I have a Zabbix dashboard that requests statistics through my app every N seconds, scanning on each request could be painful.
B) In a separate Redis DB, store sets where the key is the OID (or, for users, the GUID) and the members are the keys of the cached requests. But how could I delete the members when the cached-request keys expire? Can I somehow link a value stored in a set with a key from another DB and make that value disappear from the set when the key expires?
My Confusion regarding Redis
If I install Redis on my server and 4 different clients connect to that same Redis server, how will the data between them be kept separate, so that one client does not overwrite the key-value pair that another client has saved?
E.g.:
client-1 set name="adam"
client-2 set name="henry"
Since the Redis server is shared between these clients, the name key set by client-1 will be overwritten by client-2. So when
client-1 executes get name ==> henry (which is wrong; it was expecting adam, but the key was updated by client-2)
So how does Redis separate multiple user instances running on the same server? Does it create separate databases internally, store data per user, or what?
Redis itself doesn't separate your data. You'd have to separate it yourself. There are several options to do that.
Using Redis databases: Redis supports multiple databases. Each application (in your case, client) can be allocated one specific database. This allocation has to be done on the application end, not in Redis.
The limitations of this approach are: i) Redis supports 16 databases by default (numbered 0 to 15; the count is configurable via the databases directive in redis.conf). ii) Redis Cluster mode supports only database 0.
Note: SELECT command is used to select a specific database.
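The per-client allocation above can be sketched with a toy in-memory model of numbered databases (an illustration only, standing in for a real client issuing SELECT):

```python
class MultiDbRedis:
    """Toy model of Redis's numbered databases (16 by default)."""
    def __init__(self, db_count=16):
        self._dbs = [dict() for _ in range(db_count)]
    def select(self, n):
        """Like the SELECT command: return a handle on database n."""
        return self._dbs[n]

server = MultiDbRedis()
client1_db = server.select(0)   # client-1 is allocated database 0
client2_db = server.select(1)   # client-2 is allocated database 1
client1_db["name"] = "adam"
client2_db["name"] = "henry"    # does not touch client-1's key
```

Because each client only ever writes through its own database handle, identical key names never collide.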
Namespacing: Each application can be (for example) assigned a unique prefix, and prefixes all its keys with it.
Use separate Redis instance per application.
Since you have key-value pairs and use the very same key from multiple clients, you will need to differentiate your clients. One way to do so is to prepend the identifier of the client to each key: instead of set name, you could do something like set client1_name. You would do well to implement helper functions in your application, e.g. setName and getName, that prepend the client identifier to the name under the hood. That way you implement the helpers once, ensure they build keys correctly for both getters and setters, and never again worry about the client.
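Such a helper wrapper might look like this (a hypothetical sketch: the class name, the underscore separator, and the dict standing in for the shared Redis server are all my assumptions):

```python
class ClientStore:
    """Prepends a client identifier to every key, so "name" never
    collides across clients sharing one store."""
    def __init__(self, shared, client_id):
        self.shared = shared          # stand-in for the shared Redis server
        self.client_id = client_id
    def _key(self, key):
        return f"{self.client_id}_{key}"   # e.g. "client1_name"
    def set_name(self, value):
        self.shared[self._key("name")] = value
    def get_name(self):
        return self.shared.get(self._key("name"))

shared = {}                            # one "server" shared by all clients
c1 = ClientStore(shared, "client1")
c2 = ClientStore(shared, "client2")
c1.set_name("adam")
c2.set_name("henry")
```

Each client reads back its own value, because the prefixing happens in one place for both getters and setters.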
I'm new to Redis and use Redis 2.8 with the StackExchange.Redis library.
How can I write a KEYS pattern to get all keys with specific Hashed member value?
As I use StackExchange.Redis and want to get Keys with a pattern like this (when username is a member for a key): KEYS "username:*AAA*".
database.HashKeys("suggest me a pattern :) ")
I will call this method many times, on each HTTP user request, to find the user's session data stored in the Redis database. Do you suggest a better alternative to this approach?
This simply isn't a direct fit for any Redis feature. You certainly shouldn't use KEYS for this: in addition to being expensive (you should prefer SCAN, by the way), it matches key names, not the values stored under them.
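One common workaround (my sketch, not part of the answer above) is to maintain your own index at write time, so looking up a user's sessions is a single set read instead of a server-wide key scan. Plain dicts and sets stand in for Redis hashes and SETs here, and all names are illustrative:

```python
sessions = {}   # session key -> session data (a Redis hash in practice)
by_user = {}    # username -> set of session keys (a Redis SET in practice)

def save_session(session_key, username, data):
    sessions[session_key] = data
    by_user.setdefault(username, set()).add(session_key)  # index on write

def sessions_for(username):
    """Lookup proportional to the result size: no KEYS/SCAN needed."""
    return [sessions[k] for k in sorted(by_user.get(username, ()))]

save_session("sess:1", "AAA", {"cart": 3})
save_session("sess:2", "AAA", {"cart": 1})
save_session("sess:3", "BBB", {"cart": 9})
```

In real Redis the index entries would need the same TTL handling as the sessions themselves, which is an extra cost of this approach.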
I'm creating a mobile app and it requires a API service backend to get/put information for each user. I'll be developing the web service on ServiceStack, but was wondering about the storage. I love the idea of a fast in-memory caching system like Redis, but I have a few questions:
I created a sample schema of what my data store should look like. Does this seems like it's a good case for using Redis as opposed to a MySQL DB or something like that?
schema http://www.miles3.com/uploads/redis.png
How difficult is the setup for persisting the Redis store to disk or is it kind of built-in when you do writes to the store? (I'm a newbie on this NoSQL stuff)
I currently have my setup on AWS using a Linux micro instance (because it's free for a year). I know many factors go into this answer, but in general will this be enough for my web service and Redis? Since Redis is in-memory will that be enough? I guess if my mobile app skyrockets (hey, we can dream right?) then I'll start hitting the ceiling of the instance.
What to think about when designing a NoSQL Redis application
1) To develop correctly in Redis you should think more about how you would structure the relationships in your C# program, i.e. with the C# collection classes, rather than in terms of a relational model meant for an RDBMS. The better mindset is to think of data storage like a document database rather than RDBMS tables. Essentially everything gets blobbed in Redis via a key (index), so you just need to work out which of your entities are primary (i.e. aggregate roots),
which get kept in their own 'key namespace', and which are non-primary entities, i.e. simply metadata that should get persisted with its parent entity.
Examples of Redis as a primary Data Store
Here is a good article that walks through creating a simple blogging application using Redis:
http://www.servicestack.net/docs/redis-client/designing-nosql-database
You can also look at the source code of RedisStackOverflow for another real world example using Redis.
Basically you would need to store and fetch the items of each type separately.
var redisUsers = redis.As<User>();
var user = redisUsers.GetById(1);
var userIsWatching = redisUsers.GetRelatedEntities<Watching>(user.Id);
The way you store relationships between entities is by making use of Redis's Sets, e.g. you can store the Users/Watchers relationship conceptually with:
SET["ids:User>Watcher:{UserId}"] = [{watcherId1},{watcherId2},...]
Redis is schema-less and idempotent
Storing ids in Redis sets is idempotent, i.e. you can add watcherId1 to the same set multiple times and it will only ever have one occurrence. This is nice because it means you never need to check whether the relationship already exists and can freely keep adding related ids as if they had never been added before.
Related: writing to or reading from a Redis collection (e.g. a List) that does not exist is the same as operating on an empty collection: a List gets created on the fly when you add an item, whilst accessing a non-existent List simply returns 0 results. This is friction-free and a productivity win, since you don't have to define your schemas up front in order to use them. Although, should you need to, Redis provides the EXISTS operation to determine whether a key exists, and a TYPE operation to determine its type.
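Both behaviours can be seen in a toy model (Python dicts/sets standing in for Redis; SADD/SMEMBERS names mirror the Redis commands, the key names follow the ids: convention shown above):

```python
data = {}   # key -> set, standing in for the Redis keyspace

def sadd(key, member):
    """Idempotent add; the set is created on the fly on first write."""
    data.setdefault(key, set()).add(member)

def smembers(key):
    """Reading a non-existent set returns an empty result, not an error."""
    return data.get(key, set())

sadd("ids:User>Watcher:1", "watcherId1")
sadd("ids:User>Watcher:1", "watcherId1")   # repeat add: still one copy
sadd("ids:User>Watcher:1", "watcherId2")
```

No schema declaration, no existence check: the first SADD materializes the set, and repeated adds of the same member are harmless.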
Create your relationships/indexes on your writes
One thing to remember is that because there are no implicit indexes in Redis, you will generally need to set up the indexes/relationships needed for reading yourself, during your writes. Basically, you need to think about all your query requirements up front and ensure you set up the necessary relationships at write time. The RedisStackOverflow source code above is a good example that shows this.
Note: the ServiceStack.Redis C# provider assumes you have a unique field called Id that is its primary key. You can configure it to use a different field with the ModelConfig.Id() config mapping.
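To make write-time indexing concrete, here is a small sketch (the urn:/ids: key naming follows the convention shown earlier; the dicts standing in for Redis and the function names are my assumptions):

```python
entities = {}    # "urn:watching:{id}" -> serialized entity blob
relations = {}   # "ids:User>Watching:{userId}" -> set of Watching ids

def store_watching(watching_id, user_id, blob):
    """Persist the entity AND the relationship needed to query it later."""
    entities[f"urn:watching:{watching_id}"] = blob
    relations.setdefault(f"ids:User>Watching:{user_id}", set()).add(watching_id)

def watching_for_user(user_id):
    """Read path: one set lookup, then fetch each entity by key."""
    ids = relations.get(f"ids:User>Watching:{user_id}", set())
    return [entities[f"urn:watching:{i}"] for i in sorted(ids)]

store_watching(10, 1, {"show": "a"})
store_watching(11, 1, {"show": "b"})
```

Because the relationship set is populated on every write, the read side never has to scan the keyspace to answer "what is user 1 watching?".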
Redis Persistance
2) Redis supports two persistence modes out of the box: RDB and Append Only File (AOF). RDB writes routine snapshots, whilst the Append Only File acts like a transaction journal recording all the changes between snapshots. I recommend enabling both until you're comfortable with what each does and what your application needs. You can read all about Redis persistence at http://redis.io/topics/persistence.
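Both modes are switched on in redis.conf; a minimal sketch (the directive names are the standard ones, the thresholds are just illustrative):

```
# RDB: snapshot after 900s if >=1 key changed, 300s/10 keys, 60s/10000 keys
save 900 1
save 300 10
save 60 10000

# AOF: journal every write; fsync once per second as a middle ground
appendonly yes
appendfsync everysec
```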
Note: Redis also supports trivial-to-set-up replication, which you can read more about at: http://redis.io/topics/replication
Redis loves RAM
3) Since Redis operates predominantly in memory, the most important resource is having enough RAM to hold your entire dataset in memory, plus a buffer for when it snapshots to disk. Redis is very efficient, so even a small AWS instance will be able to handle a lot of load; what you want to watch is whether you have enough RAM.
Visualizing your data with the Redis Admin UI
Finally if you're using the ServiceStack C# Redis Client I recommend installing the Redis Admin UI which provides a nice visual view of your entities. You can see a live demo of it at:
http://servicestack.net/RedisAdminUI/AjaxClient/
What are your recommendations for solutions that allow backing up [either by streaming or snapshot] a single riak bucket to a file?
Backing up just a single bucket is going to be a difficult operation in Riak.
All of the solutions will boil down to the following two steps:
List all of the objects in the bucket. This is the tricky part, since there is no "manifest" or a list of contents of any bucket, anywhere in the Riak cluster.
Issue a GET to each one of those objects from the list above, and write it to a backup file. This part is generally easy, though for maximum performance you want to make sure you're issuing those GETs in parallel, in a multithreaded fashion, and using some sort of connection pooling.
As far as listing all of the objects, you have one of three choices.
One is to do a Streaming List Keys operation on the bucket via HTTP (e.g. /buckets/bucket/keys?keys=stream) or Protocol Buffers; see http://docs.basho.com/riak/latest/dev/references/http/list-keys/ and http://docs.basho.com/riak/latest/dev/references/protocol-buffers/list-keys/ for details. Under no circumstances should you do a non-streaming regular List Keys operation (it will hang your whole cluster, and will eventually either time out or crash once the number of keys grows large enough).
Two is to issue a Secondary Index (2i) query to get that object list. See http://docs.basho.com/riak/latest/dev/using/2i/ for discussion and caveats.
And three would be, if you're using Riak Search, to retrieve all of the objects via a single paginated search query. (However, Riak Search has a query result limit of 10,000 results, so this approach is far from ideal.)
For an example of a standalone app that can backup a single bucket, take a look at Riak Data Migrator, an experimental Java app that uses the Streaming List Keys approach combined with efficient parallel GETs.
The Basho contrib repository has an Erlang solution for backing up a single bucket. It is a custom function, but it should do the trick.
http://contrib.basho.com/bucket_exporter.html
As far as I know, there's no automated solution to back up a single bucket in Riak. You'd have to use the riak-admin command-line tool to back up a single physical node. You could write something yourself to retrieve all keys in a single bucket, using a low R value (r = 1) if you want it to be fast at the cost of consistency.
Buckets are a logical namespace; all of the keys are stored in the same bitcask structure. That's why the only way to get just a single bucket is to write a tool to stream the keys yourself.