How to list all Memcachier's keys on Heroku? - ruby-on-rails-3

Is it possible to list all Memcachier keys of a Rails app? My app uses just 3 keys, but there are more than 30 on the Memcachier app's page.
Thanks

Use this script: https://gist.github.com/bkimble/1365005
You can use it in your apps.

You can't list all keys in memcached. memcached is a cache, not a database; if you need to reliably retrieve all keys, then memcached is probably not the tool you want to use.
With that in mind, two things:
It's actually possible to retrieve the first meg or so of keys: http://www.darkcoding.net/software/memcached-list-all-keys/ . Your production server should not depend on this.
You could set up a system where you keep a key in memcached (named, for example, index) whose value is a list of all the keys stored. Every time you add or delete a key, you also update index's list of keys. You can then just retrieve index to get a list of all the keys. However, keep in mind that memcached can evict keys before they expire, so your app shouldn't rely on this technique for anything critical.
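The index-key idea can be sketched as follows. This is a minimal illustration, not a production implementation: the names tracked_set, tracked_delete, and list_keys are made up for this sketch, and a dict-backed stand-in replaces a real memcached client (which would expose the same get/set/delete calls).

```python
# Sketch of the "index key" technique: every write/delete also updates a
# JSON-encoded list of known keys stored under a single index key.
# FakeCache is an in-memory stand-in for a real memcached client.
import json

class FakeCache:
    """Dict-backed stand-in exposing the get/set/delete subset we need."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)

INDEX_KEY = "index"

def tracked_set(cache, key, value):
    cache.set(key, value)
    keys = set(json.loads(cache.get(INDEX_KEY) or "[]"))
    keys.add(key)
    cache.set(INDEX_KEY, json.dumps(sorted(keys)))

def tracked_delete(cache, key):
    cache.delete(key)
    keys = set(json.loads(cache.get(INDEX_KEY) or "[]"))
    keys.discard(key)
    cache.set(INDEX_KEY, json.dumps(sorted(keys)))

def list_keys(cache):
    return json.loads(cache.get(INDEX_KEY) or "[]")
```

Note that the read-modify-write on the index key is racy under concurrent writers; against a real memcached you would want the gets/cas (check-and-set) commands to update it atomically. And, as said above, eviction means the index is advisory at best.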

Related

Best way to analyze ~500K keys in Redis?

I have a database system which processes read operations against logical objects identified by OIDs (object identifiers), e.g. 1.2.4.3.544. There are users identified by GUIDs. The API gets results from the DB and puts them into Redis. A key example is
SomePrefix_<oid>1.2.4.3.544</oid>_somedetails_<user>1f0c6cfe-ee9d-472c-b320-190df55f9527</user>
There are a couple of hundred unique OIDs in the system and about a hundred registered users. Keys may also lack the user part for anonymous requests. The number of keys can vary from 80K to.. I suppose 500K.
I need to provide per-OID and per-user statistics to the UI and implement delete-per-OID (and the same per user), so the task splits in two. The first version I implemented was unsuccessful: I used the .Keys("*") method in the C# app, which turns into a Redis SCAN, to pull all keys into the app, run through them, collect distinct OIDs/users, and show them in the UI. Extracting all the keys to the app took too much time, so I switched to another approach: on every key save/delete I incremented/decremented counters stored as plain integer keys in another Redis DB. This was almost okay, but then I got a requirement to set a TTL on every cached request, so I'm faced with a dilemma: how to store/track my statistics so they stay up to date with the keys actually stored. The options I see:
A) Run a Lua script to scan all keys and return all unique OIDs and users with counts to the app. For deletion, run SCAN inside the Lua script and DEL all keys matching the pattern. Pros: no need for separate stats tracking. Cons: a full key scan on every call; I have a Zabbix dashboard that requests statistics through my app, and scanning all the keys every N seconds could be painful.
B) In a separate Redis DB, store sets keyed by OID (or, for users, by GUID) whose members are the keys of the cached requests. But how do I delete those members when the cached-request keys expire? Can I somehow link a value stored in a set to a key in another DB, so that the value disappears from the set when the key expires?
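For option B, one mechanism worth knowing about is Redis keyspace notifications: with notify-keyspace-events configured to include expired events ("Ex"), Redis publishes the name of each expired key on a __keyevent@<db>__:expired channel, and a subscriber can prune the index sets. Below is a sketch of just the bookkeeping side under that assumption; the class and method names are hypothetical, plain dicts of sets stand in for the Redis sets, and the pub/sub subscription plumbing is omitted.

```python
# Hypothetical bookkeeping for option B: keep one set of cache keys per OID
# and per user; prune when the expired-event subscriber reports a key gone.
# The key format parsed here is the one from the question.
import re
from collections import defaultdict

KEY_RE = re.compile(r"<oid>(?P<oid>[^<]+)</oid>(?:.*<user>(?P<user>[^<]+)</user>)?")

def parse_key(key):
    """Extract (oid, user) from a cache key; user is None for anonymous keys."""
    m = KEY_RE.search(key)
    if not m:
        return None, None
    return m.group("oid"), m.group("user")

class StatsIndex:
    def __init__(self):
        self.by_oid = defaultdict(set)
        self.by_user = defaultdict(set)

    def on_key_written(self, key):
        oid, user = parse_key(key)
        if oid:
            self.by_oid[oid].add(key)
        if user:
            self.by_user[user].add(key)

    def on_key_expired(self, key):
        # Would be called from a __keyevent@<db>__:expired subscriber.
        oid, user = parse_key(key)
        if oid:
            self.by_oid[oid].discard(key)
        if user:
            self.by_user[user].discard(key)
```

Per-OID or per-user counts then fall out as len(index.by_oid[oid]), and delete-per-OID becomes "DEL every member of that set". Keep in mind notification delivery is fire-and-forget, so a periodic reconciliation scan is still a sensible backstop.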

Subscribe to data store changes in Workers KV

Cloudflare Workers KV is an eventually consistent data store. You can write values assigned to keys, and you can read values back by key. But is there any way to listen for changes to a key's value?
In a regular relational DB you can subscribe to changes of an individual row; are there any similar options for KV?
There is currently no built-in way to listen for changes.
You could poll keys for changes, which would probably only make sense if you had a small number of keys. (You can list keys to iterate over them.)
But, since you're the owner of your KV namespace, the best option is probably to wrap your write operations so that you notify some other service/queue that a change has been made.
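The wrap-your-writes pattern itself is language-agnostic (a Worker would express it in JavaScript against the KV binding); here is a Python sketch of the shape, where the store and the notifier are hypothetical stand-ins, not the Workers KV API.

```python
# Sketch of "wrap your writes" change notification: every write goes through
# one function that also notifies a listener (a queue, webhook, pub/sub
# topic, etc.). The dict store and the notify callable are stand-ins.
class NotifyingKV:
    def __init__(self, store, notify):
        self._store = store      # anything with item get/set, e.g. a KV binding
        self._notify = notify    # callable invoked after each write

    def put(self, key, value):
        self._store[key] = value
        self._notify(key, value)  # tell interested parties the key changed

    def get(self, key):
        return self._store.get(key)
```

The design point: because all writers share the wrapper, listeners learn about changes at write time rather than by polling, which sidesteps both KV's eventual consistency lag and the cost of listing keys.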

Query multiple keys in Redis in Cluster mode

I'm using Redis in Cluster mode (6 nodes: 3 masters and 3 slaves) with SE.Redis. As documented, commands with multiple keys in different hash slots are not supported,
so I'm using hash tags ({}) to make sure a given key belongs to a particular hash slot. For example, I have 2 keys like cacheItem:{1} and cacheItem:{94770}.
I set those keys using (each key in a separate request):
SEclient.Database.StringSet(key,value)
This works fine,
but now I want to query key1 and key2, which belong to different hash slots:
SEclient.Database.StringGet(redisKeys);
The above fails and throws an exception because those keys belong to multiple hash slots.
When querying, I can't make sure that my keys will belong to the same hash slot,
and this example is just 2 keys; I have hundreds of keys that I want to query.
So I have the following questions:
How can I query multiple keys when they belong to different hash slots?
What's the best practice for doing that?
Should I calculate hash slots on my side and then send individual requests per hash slot?
Can I use TwemProxy for my scenario?
Any help is highly appreciated.
I can’t speak to SE.Redis, but you are on the right track. You either need to:
Make individual requests per key to ensure they go to the right cluster node, or...
Precalculate the shard + server each key belongs to, grouping by host, then send MGET requests with those keys to each host that owns them.
Precalculating will require you (or your client) to know the cluster topology (hash slot owners) and the Redis key hashing method (don’t worry, it is simple and well documented) up front.
You can query cluster info from Redis to get owned slots.
The basic hashing algorithm is HASH_SLOT = CRC16(key) mod 16384. Search around and you can find code for that for about any language 🙂 Remember that the use of hash tags makes this more complicated! See also: https://redis.io/commands/cluster-keyslot
Some Redis cluster clients will do this for you with internal magic (e.g. Lettuce in Java), but they are not all created equal 🙂
Also be aware that cluster topology can change at basically any time, and the above work is complicated. To be durable, you'll want to retry if you get cross-slot errors. Or you can just make many single-key requests, as that is much, much simpler to maintain.
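The slot calculation plus per-slot grouping described above can be sketched in Python. Per the Redis Cluster specification, the CRC16 variant is CRC-16/XMODEM (polynomial 0x1021), whose documented check value for "123456789" is 0x31C3, and the hash-tag rule hashes only the substring between the first "{" and the first "}" after it when that substring is non-empty.

```python
# Client-side slot calculation per the Redis Cluster spec:
# HASH_SLOT = CRC16(key) mod 16384, with CRC16 = CRC-16/XMODEM.
from collections import defaultdict

def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Hash-tag rule: first '{', first '}' after it, non-empty body.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

def group_by_slot(keys):
    """Bucket keys so each bucket can go in one MGET to its slot's owner."""
    groups = defaultdict(list)
    for key in keys:
        groups[hash_slot(key)].append(key)
    return groups
```

Each bucket then maps to whichever node currently owns that slot (learned from CLUSTER SLOTS or CLUSTER SHARDS); as noted above, you still need to handle MOVED/ASK redirections when the topology shifts under you.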

Redis Persistence Partial

I have multiple keys in Redis, most of which are insignificant and can be lost if my Redis server goes down.
However, I have one or two keys which I cannot afford to lose.
Hence I would like that, whenever the server restarts, Redis reads only these few keys from its persistent storage, and keeps persisting them as and when they change.
Does Redis have this feature? If yes, which command persists a key to the file, and how do I tell persisted and unpersisted keys apart?
If no, what approach can I use so that I need not maintain my own persistence file before writing to Redis?
Limitations (if the answer is no):
I do not want to change the client code that writes to Redis.
I cannot add more Redis servers (if any such solution exists, I would like to know about it though).
EDIT
Another reason I would not want to persist most keys is that it is huge data: hundreds of records per second, most of which expire in 10 minutes.

How to find Keys with specific member value?

I'm new to Redis and use Redis 2.8 with the StackExchange.Redis library.
How can I write a KEYS pattern to get all keys with a specific hashed member value?
Using StackExchange.Redis, I want to get keys with a pattern like this (where username is a member of a key): KEYS "username:*AAA*".
database.HashKeys("suggest me a pattern :) ")
I will call this method many times per HTTP user request to find the user's session data stored in the Redis database. Do you suggest a better alternative to this approach?
This simply isn't a direct fit for any Redis features. You certainly shouldn't use KEYS for this; in addition to being expensive (you should prefer SCAN, btw), it scans the key names, not the values.
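To make that last point concrete: KEYS/SCAN MATCH patterns are glob matches against key names only, so "username:*AAA*" can only ever work if the username is embedded in the key name itself, never if it is stored as a hash field value. A small illustration, using Python's fnmatch as a stand-in for Redis's glob rules (which it approximates for simple patterns like this one):

```python
# KEYS/SCAN patterns apply to key *names*, never to the values or hash
# fields stored under them. fnmatchcase approximates Redis globbing for
# simple '*' patterns.
from fnmatch import fnmatchcase

keys = ["username:xAAAy", "username:bob", "session:123"]
matched = [k for k in keys if fnmatchcase(k, "username:*AAA*")]
# Only "username:xAAAy" matches; what those keys hold is irrelevant.
```

For per-request session lookup, a common alternative is to make the lookup key deterministic (e.g. a key per username) or to maintain a secondary index, such as a set of session keys per username, so each lookup is a single O(1) GET/SMEMBERS instead of a scan.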