I'm trying to implement a kind of wildcard operation on keys in Redis.
Say I have to store:
key value
12:CA foo
12:US bar
42:CA tag
And I want to be able to query all keys with CA, or all keys with 12.
Is this even possible?
Thanks
I may be misinterpreting the question, but if you have key => value pairs like
12:CA => foo
12:US => bar
42:CA => tag
and you want to pull all keys matching CA or 12, then you can just use the KEYS command along with a wildcard:
keys *:CA
keys 12:*
NOTE: this only returns the matching keys. To get the values, you need a second call (e.g. MGET) using the returned keys.
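For example, with Predis (a sketch; assumes a reachable Redis instance and the keys above):

$client = new Predis\Client();
$caKeys = $client->keys('*:CA');                   // e.g. ['12:CA', '42:CA']
$caValues = $caKeys ? $client->mget($caKeys) : []; // e.g. ['foo', 'tag']

Bear in mind that KEYS scans the entire keyspace and blocks the server while it runs, so on a large production dataset SCAN is the preferred alternative.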
Today we save data like this:
$redisClient->set($uniquePageID, $data);
and output the data like this:
$redisClient->get($uniquePageID);
But now we need to remove the data based on a userID, so we need something like this:
$redisClient->set($uniquePageID, $data)->tag($userID);
so that we can remove all the keys related to this userID only, for example:
$redisClient->tagDel($userID);
Can Redis do something like that?
Thanks
There's no built-in way to do that. Instead, you need to tag these pages yourself:
When setting a page-data pair, also add the page ID to a SET belonging to the corresponding user.
When you want to remove all pages of a given user, scan the user's SET to get their page IDs, and delete those pages.
When scanning the SET, you can use either the SMEMBERS or the SSCAN command, depending on the size of the SET. If it's a big SET, prefer SSCAN to avoid blocking Redis for a long time.
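A minimal Predis sketch of this approach (the user:<id>:pages key name is just an assumed convention):

$client = new Predis\Client();

// On write: store the page and record its ID in the owner's SET
$client->set($uniquePageID, $data);
$client->sadd("user:{$userID}:pages", $uniquePageID);

// On removal: delete every page recorded for this user, then the SET itself
// (for a big SET, iterate with SSCAN instead of SMEMBERS)
foreach ($client->smembers("user:{$userID}:pages") as $pageID) {
    $client->del($pageID);
}
$client->del("user:{$userID}:pages");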
I used HSET and HDEL to store and delete items like this:
$this->client = new Predis\Client(array...);
// store: one hash per $key, one field per $tag
$this->client->hset($key, $tag, $value);
// delete: $tags is one or more field names to remove
$this->client->hdel($key, $tags);
And if you want to delete the whole key, no matter the tags or values, you can use DEL; it works with any data type, including hashes:
$this->client->del($key);
I have the following information that I need to store in Redis:
url => {title, author, email}
Each URL has a title, an author, and an email.
I want to ensure that this information is not duplicated in the store.
I'm thinking of using sorted sets, like this:
ZADD links_urls url "title"
ZADD links_author url "author"
ZADD links_email url "email"
What do you think about this? Am I wrong?
This is not the correct way to use a sorted set. You are using url as a score. However, scores must be numeric (they define the sort order).
If I understand your constraint correctly, each url is unique. If that is the case, I would use a hash to store everything.
I would use the url as a key, and then concatenate or JSON-encode the fields together, like this:
HSET links <url> '<title>::<author>::<email>'
This ensures constant time lookup and amortized constant time insertion.
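As a sketch with Predis, using JSON rather than '::'-delimited concatenation (variable names are illustrative):

$client = new Predis\Client();

// One hash field per url; fields are unique, so a url can never be duplicated
$client->hset('links', $url, json_encode([
    'title'  => $title,
    'author' => $author,
    'email'  => $email,
]));

// Constant-time lookup by url
$record = json_decode($client->hget('links', $url), true);

If you want an insert to fail instead of overwriting an existing url, HSETNX does exactly that.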
I have a series of metric snapshot data that I upload into my database on a daily basis. I take the input and check whether it is already in the database, and if it's not, I add it. Each record uses a composite key made up of three columns, and also has a primary key.
I have since tried to add logic so that I can optionally force an update on records that already exist in the database, in addition to adding those that don't yet exist. However, I run into an error that stops me, saying that there is already an object with the specified key being tracked:
The instance of entity type 'MembershipSnapshot' cannot be tracked
because another instance of this type with the same key is already
being tracked. When adding new entities, for most key types a unique
temporary key value will be created if no key is set (i.e. if the key
property is assigned the default value for its type). If you are
explicitly setting key values for new entities, ensure they do not
collide with existing entities or temporary values generated for other
new entities. When attaching existing entities, ensure that only one
entity instance with a given key value is attached to the context.
Here's a snippet of my code.
// Get the composite keys from the supplied list
var snapshotKeys = snapshots.Select(s => new { s.MembershipYear, s.DataDate, s.Aggregate }).ToArray();
// Find which records already exist in the database, pulling their composite keys
var snapshotsInDb = platformContext.MembershipSnapshots.Where(s => snapshotKeys.Contains(new { s.MembershipYear, s.DataDate, s.Aggregate }))
.Select(s => new { s.MembershipYear, s.DataDate, s.Aggregate }).ToArray();
// And filter them out, so we remain with the ones that don't yet exist
var addSnapshots = snapshots.Where(s => !snapshotsInDb.Contains(new { s.MembershipYear, s.DataDate, s.Aggregate }))
.ToList();
// Update the ones that already exist
var updateSnapshots = snapshots.Where(s => snapshotsInDb.Contains(new { s.MembershipYear, s.DataDate, s.Aggregate }))
.ToList();
platformContext.MembershipSnapshots.AddRange(addSnapshots);
platformContext.MembershipSnapshots.UpdateRange(updateSnapshots);
platformContext.SaveChanges();
How do I go about accomplishing this task?
I don't have a compelling reason for having an auto-increment primary key, other than perhaps whatever performance benefit it gives SQL internally.
EDIT: The way I've currently solved this issue is by removing my surrogate key, which I'm not using for anything. Still, it would be nice to know a workaround that doesn't require removing it, as a surrogate key could come in handy in the future.
Say I have namespaced keys for user + id:
lastMessages (a list)
isNice (an attribute)
So it goes like this:
>lpush user:111:lastMessages a
>lpush user:111:lastMessages b
>lpush user:111:lastMessages c
OK.
Let's add the isNice prop:
>set user:111:isNice 1
So, let's see all keys for 111:
> keys user:111*
result :
1) "user:111:isNice"
2) "user:111:lastMessages"
OK, but:
I want the namespaced entry to expire as a whole! (So on timeout, all the keys should go at once. I don't want to start managing each namespaced key and its remaining time, because not all props are added at the same time, but I do want all props to die at the same time.)
Question:
Does this mean I have to set an expire on each namespaced key?
If not, what is the correct way of doing it?
Yes, the way you have it set up, these are all just separate keys. You can think of the namespace as an understanding you have with all the people who will access the Redis store:
Okay guys, here's the deal. We're all going to use keys that look like this:
user:{user_id}:lastMessages
That way, we all understand where to look to get user number 325's last messages.
But really, there's nothing shared between user:111:lastMessages and user:111:isNice.
The fix
A way you can do what you're describing is using a hash. You will create a hash whose key is user:111 and then add fields lastMessages and isNice.
> hset user:111 lastMessages "you are my friend!"
> hset user:111 isNice true
> expire user:111 1000
Or, all at once,
> hmset user:111 lastMessages "you are my friend!" isNice true
> expire user:111 1000
Here is a page describing Redis' data types. Scroll down to where it says 'Hashes' for more information.
Edit
Ah, I hadn't noticed you were using a list.
If you don't have too many messages (say, under 20), you could serialize them into JSON and store them as one string, since hash fields can only hold strings. That's not a very good solution, though.
The cleanest way might just be to set two expires.
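For example, giving both keys the same TTL:

> expire user:111:lastMessages 1000
> expire user:111:isNice 1000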
I wish to create an index with, let's say, the following fields:
UID
title
owner
content
Out of these, I don't want UID to be searchable [like metadata].
I want the UID to behave like a docID, so that when I want to delete or update a document, I can use it.
Is this possible? How do I do it?
You could mark it as non-searchable by adding it with Store.YES and Index.NO, but that won't allow you to easily update or remove documents by it. To replace a document (using IndexWriter.UpdateDocument(Term, Document) where term = new Term("UID", "...")), the field needs to be indexed, so you must use either Index.ANALYZED with a KeywordAnalyzer, or Index.NOT_ANALYZED. You can also use the FieldCache if you have a single-valued field, which a primary key usually is. However, this makes it searchable.
Summary:
Store.NO (the value can still be retrieved using the FieldCache or a TermsEnum)
Index.NOT_ANALYZED (the complete value will be indexed as a single term, including any whitespace)