Assume I have multiple hashes as below:
HMSET myhash1 type Car company BMW
HMSET myhash2 type Car company Benz
HMSET myhash3 type Car company BMW
HMSET myhash4 type Car company Honda
HMSET myhash5 type Car company BMW
HMSET myhash6 type Car company Toyota
HMSET myhash7 type Car company Benz
I want to count how many hashes I have with company = BMW, which in this case is 3.
You have to build some form of secondary index to accomplish this efficiently.
Use a Set
You can use a set: create one set per company and add each matching hash key to it.
SADD myhash:Company:BMW myhash1 myhash3 myhash5
SADD myhash:Company:Benz myhash2 myhash7
SADD myhash:Company:Honda myhash4
SADD myhash:Company:Toyota myhash6
Then when you want to query it, you just use SCARD. So if you wanted to know how many BMWs there were, you'd just run:
SCARD myhash:Company:BMW
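For completeness, here is a minimal redis-py sketch of keeping the hash and its index entry in sync (the add_car helper and connection defaults are my own, not part of the question). Writing both in one MULTI/EXEC transaction means the index can't drift out of step with the data:

import redis

r = redis.Redis(decode_responses=True)

def add_car(key, company):
    # Write the hash and its index entry atomically (MULTI/EXEC).
    pipe = r.pipeline()
    pipe.hset(key, mapping={"type": "Car", "company": company})
    pipe.sadd(f"myhash:Company:{company}", key)
    pipe.execute()

add_car("myhash1", "BMW")
add_car("myhash2", "Benz")
add_car("myhash3", "BMW")
add_car("myhash5", "BMW")
print(r.scard("myhash:Company:BMW"))  # 3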
With Redis Stack
Redis Stack has native secondary indexing capabilities which you can leverage here. This is much easier to maintain (and can actually work across shards in a scaled-out environment if need be). You'd just need to create the secondary index:
FT.CREATE hashes ON HASH PREFIX 1 myhash SCHEMA company TAG
Then you'd just need to query them (if you don't care to get the actual cars matching your query back, just pass in LIMIT 0 0):
FT.SEARCH hashes "@company:{BMW}" LIMIT 0 0
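From client code, the first element of the FT.SEARCH reply is the total number of matches, so with LIMIT 0 0 you get just the count. A small redis-py sketch (using the generic execute_command, and assuming the index created above):

import redis

r = redis.Redis(decode_responses=True)
# With LIMIT 0 0 the reply is just [total_match_count].
reply = r.execute_command("FT.SEARCH", "hashes", "@company:{BMW}", "LIMIT", "0", "0")
print(reply[0])  # 3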
The thing with non-relational databases is that you have to build relations by hand when you need them. So here you are "obliged" to have another key holding this information.
If you want all the information, you can have a set key holding it, like:
SADD BMW myhash1
//use SCARD to know how many BMWs there are
SCARD BMW
If you only want to know the number of BMWs, you can have a simple key holding a number, like:
INCR BMW //a missing key is treated as 0, so the key holds 1 after this command
//use GET to read the number of BMWs
GET BMW //returns 1
//if you delete a hash, you can use INCRBY to update the number
INCRBY BMW -1
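Here is a rough redis-py sketch of that counter pattern (the helper names are mine); pairing the hash write with the counter update in one transaction keeps the count honest:

import redis

r = redis.Redis(decode_responses=True)

def add_car(key, company):
    pipe = r.pipeline()  # MULTI/EXEC
    pipe.hset(key, mapping={"type": "Car", "company": company})
    pipe.incr(company)  # a missing counter key is treated as 0
    pipe.execute()

def delete_car(key, company):
    pipe = r.pipeline()
    pipe.delete(key)
    pipe.incrby(company, -1)  # keep the count in step with the data
    pipe.execute()

add_car("myhash1", "BMW")
print(r.get("BMW"))  # "1"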
I want to use redis to store data that is sourced from a sql db. In the db, each row has an ID, date, and value, where the ID and date make up a composite key (there can only be one value for a particular ID and date). An example is below:
ID Date Value
1 01/01/2001 1.2
1 02/01/2001 1.5
1 04/23/2002 1.5
2 05/05/2009 0.4
Users should be able to query this data in redis given a particular ID and date range. For example, they might want all values for 2019 with ID 45. If the user does not specify a start or end time, we use the system's Date.Min or Date.Max respectively. We also want to support refreshing redis data from the database using the same parameters (ID and date range).
Initially, I used a zset:
zset key zset member score
1 01/01/2001_1.2 20010101
1 02/01/2001_1.5 20010201
1 04/23/2002_1.5 20020423
2 05/05/2009_0.4 20090505
But what happens if the value field changes in the db? For instance, the value for ID 1 and date 01/01/2001 might change to 1.3 later on. I would want the original member to be updated, but instead a new member would be inserted. So I would need to first check whether a member with that particular score exists, and delete it if it does, before inserting the new member. I imagine this could get expensive when refreshing, for example, 10 years' worth of data.
I thought of two possible fixes to this:
1.) Use a zset and string key-value:
zset key zset member score
1 1_01/01/2001 20010101
1 1_02/01/2001 20010201
1 1_04/23/2002 20020423
2 2_05/05/2009 20090505
string key string value
1_01/01/2001 1.2
1_02/01/2001 1.5
1_04/23/2002 1.5
2_05/05/2009 0.4
This allows me to easily update the value and query for a date range, but it adds some complexity, as I now need to use two Redis data structures instead of one.
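For concreteness, here is roughly what solution 1 looks like from calling code (Python with redis-py; a sketch only, with key names taken from the tables above):

import redis

r = redis.Redis(decode_responses=True)

def upsert(id_, date_key, score, value):
    pipe = r.pipeline()
    pipe.zadd(id_, {date_key: score})  # no-op if the member already exists
    pipe.set(date_key, value)          # overwrites the old value in place
    pipe.execute()

def query(id_, start_score, end_score):
    keys = r.zrangebyscore(id_, start_score, end_score)
    return dict(zip(keys, r.mget(keys))) if keys else {}

upsert("1", "1_01/01/2001", 20010101, 1.2)
upsert("1", "1_01/01/2001", 20010101, 1.3)  # a changed value is a clean overwrite
print(query("1", 20010101, 20011231))       # {'1_01/01/2001': '1.3'}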
2.) Use a hash table:
hash key sub-key value
1 01/01/2001 1.2
1 02/01/2001 1.5
1 04/23/2002 1.5
2 05/05/2009 0.4
This is nice because I only have to use 1 data structure and although it would be O(N) to get all values for a particular hash key, solution 1 would have the same drawback when getting values for all string keys returned from the zset.
However, with this solution I now need to generate all sub-keys between a given start and end date in my calling code, and not every date may have a value. There are also some edge cases I need to handle (what if the user wants all values up until today? Do I use HGETALL and just drop the future dates I don't care about? At what date-range size should I switch from HMGET to HGETALL?).
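To illustrate the sub-key generation burden, here is a sketch of solution 2's read path (Python with redis-py; dates are assumed to be MM/DD/YYYY fields as above):

import datetime
import redis

r = redis.Redis(decode_responses=True)

def query(id_, start, end):
    # Generate every candidate sub-key in the range; dates with no
    # value simply come back as None from HMGET and are dropped.
    days = (end - start).days + 1
    fields = [(start + datetime.timedelta(days=d)).strftime("%m/%d/%Y")
              for d in range(days)]
    values = r.hmget(id_, fields)
    return {f: v for f, v in zip(fields, values) if v is not None}

r.hset("1", mapping={"01/01/2001": 1.2, "02/01/2001": 1.5})
print(query("1", datetime.date(2001, 1, 1), datetime.date(2001, 12, 31)))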
In my view, there are pros and cons to each solution, and I'm not sure which one will be easier to maintain in the long term. Does anyone have thoughts as to which structure they would choose in this situation?
I am looking to use redis to house a set of unique IDs (call them campaign_ids) and each campaign_id will have an array of promotional_ids to go along with them.
{ 123: [345, 543] } is an example of what I am trying to achieve, but I am just not sure how to set this up in Redis.
Well, there isn't a simple answer to this, because Redis data structures should be designed around the way you use them.
For example you could use Sets:
SADD 123 345
SADD 123 543
SMEMBERS 123
1) "345"
2) "543"
SREM 123 345
SMEMBERS 123
1) "543"
I'm building the data model of my app. Basically, for a given user, I'd like to keep a list of his/her friends and the status of each of them (whether or not they have yet accepted the request to be friends).
I end up with several keys (one for each friend of tom):
friends:tom:status:jessica => joined
friends:tom:status:stephan => joined
friends:tom:status:hubert => pending
friends:tom:status:peter => declined
Is that the best way to handle it, or should a list be used in some other way?
You can try using, for example, a hash structure, where the hash key would be friends:tom:status, each field would represent a friend's name/ID, and the value would represent his or her status. A hash structure is generally more memory efficient than dedicated keys.
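For example, a minimal redis-py sketch of that layout:

import redis

r = redis.Redis(decode_responses=True)
# One hash per user: field = friend name/ID, value = status.
r.hset("friends:tom:status", mapping={
    "jessica": "joined",
    "stephan": "joined",
    "hubert": "pending",
    "peter": "declined",
})
print(r.hgetall("friends:tom:status"))         # all friends with statuses
print(r.hget("friends:tom:status", "hubert"))  # "pending"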
You could use a sorted set for this.
Have each status have a score associated with it: joined = 1, pending = 2, declined = 3.
zadd user1_friends 1 userid1 1 userid2 2 userid3
Then you can easily retrieve all users in a given category, e.g. everyone with the "joined" score:
zrangebyscore user1_friends 1 1
or you could split into 3 separate sets
sadd user1_joined userid1
sadd user1_pending userid3
Depending on what you want to do, either will work.
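A short redis-py sketch of both designs (the member names are illustrative only):

import redis

r = redis.Redis(decode_responses=True)

# Sorted-set design: joined = 1, pending = 2, declined = 3.
r.zadd("user1_friends", {"userid1": 1, "userid2": 1, "userid3": 2})
print(r.zrangebyscore("user1_friends", 1, 1))  # all joined friends
print(r.zcount("user1_friends", 2, 2))         # how many are pending

# Separate-sets design.
r.sadd("user1_joined", "userid1", "userid2")
print(r.smembers("user1_joined"))
print(r.scard("user1_joined"))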
I went through the command list on Redis hashes. Is it possible to assign multiple values to a hash key in Redis? For instance, I am trying to represent the following table in the form of a hash.
Prod_Color | Prod_Count | Prod_Price | Prod_Info
------------------------------------------------------------
Red | 12 | 300 | In Stock
Blue | 8 | 310 | In Stock
I tried the following hash commands in sequence:
HMSET Records Prod_Color "Red" Prod_Count 12 Prod_Price 300 Prod_Info "In Stock"
HMSET Records Prod_Color "Blue" Prod_Count 8 Prod_Price 310 Prod_Info "In Stock"
However, when I try to retrieve the hash using the command HGETALL Records, I see only the second row of inserted values (i.e. Blue, 8, 310, In Stock)! I understand that I could create a separate hash and insert the second row of values there; however, I intend to insert all the values under a single hash.
What you could do, and I have seen this in other places besides my own code, is to key the hash using a suffix. You probably have a suffix which identifies each record; I will use the colors here:
AT INSERT TIME:
HMSET Records:red Prod_Color "Red" Prod_Count 12 Prod_Price 300 Prod_Info "In Stock"
HMSET Records:blue Prod_Color "Blue" Prod_Count 8 Prod_Price 310 Prod_Info "In Stock"
/* For each HMSET above, you issue SADD */
SADD Records:Ids red
SADD Records:Ids blue
AT QUERY TIME:
/* If you want to get all products, you first get all members */
SMEMBERS Records:Ids
/* ... and then for each member, suppose its suffix is ID_OF_MEMBER */
HGETALL Records:ID_OF_MEMBER
/* ... and then for red and blue (example) */
HGETALL Records:red
HGETALL Records:blue
You probably want to use the primary key as the suffix, since it should be available to you from the relational database records. Also, you must maintain the set of members (e.g. SREM Records:Ids red) when deleting hash keys (e.g. DEL Records:red). And remember that Redis shines as an improved cache; you must set it up carefully if you need it to persist values (while keeping performance acceptable).
You can't have multiple items with the same key in a hash. However, if you want to either retrieve all items or a single row by key, you can store each row as JSON:
Records red = {color: "Red", count: 12, price: 300, info: "In Stock"}
Records blue = {color: "Blue", count: 8, price: 310, info: "In Stock"}
If you don't want to use JSON, you'll have to use multiple hashes, with each hash being a row in the table. You can support the get-all function by also storing a set that contains the keys of all the hashes.
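For example, with redis-py and the standard json module (a sketch; the Records:red / Records:blue key names are my own):

import json
import redis

r = redis.Redis(decode_responses=True)

# One JSON string per row, keyed by the row identifier.
r.set("Records:red", json.dumps(
    {"color": "Red", "count": 12, "price": 300, "info": "In Stock"}))
r.set("Records:blue", json.dumps(
    {"color": "Blue", "count": 8, "price": 310, "info": "In Stock"}))

row = json.loads(r.get("Records:red"))
print(row["price"])  # 300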
I think you're misunderstanding how hashes work. You can't have two identical fields with different values. Your second HMSET command is overwriting the values from the first command. You will either need to use unique fields or a different key.
I have a table in my database, newvehicles, which has the following fields:
id (AUTO_INCREMENT)
make
model
doors
bodystyle
price
and this is an example:
1 Volkswagen Golf 2.0 GTI 3 Hatchback $39,490
2 Ford Mondeo 2.3 Zetec 4 Sedan $54,450
3 BMW 3-Series 318i 4 Sedan $62,667
4 Renault Clio 1.2 Base 3 Hatchback $22,686
5 Volvo S60 3.2T SE 4 Sedan $49,460
6 BMW 5-Series 540i 4 Sedan $89,990
If I deleted, say, row 4, the table would have rows 1, 2, 3, 5 and 6, and in order to reset the increment I use this code:
UPDATE newvehicles SET id = id - 1 WHERE id > 3
However, is there any way I can get phpMyAdmin to automatically reset the increment values for the id field (since it is an auto-increment column)? I don't really want to keep running the above code every time.
By the way, not sure if this is relevant, but the table uses the InnoDB engine, just in case I have to set up foreign keys from other tables that may link to it.
I'm fairly new to this, so would appreciate the help!
The auto-increment value is used as a unique identifier, so if you later save that key elsewhere, you will always pull back that particular record. It should not be renumbered.
If you want a number that is always sequential, I would suggest generating a counter when you display the rows. I think that would save work in the long run.
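For example, in Python (a sketch; the rows stand in for whatever your query returns, here with row 4 already deleted), you number the rows as you display them instead of rewriting the stored ids:

# Rows as fetched from the database, ordered by id.
rows = [
    (1, "Volkswagen Golf 2.0 GTI"),
    (2, "Ford Mondeo 2.3 Zetec"),
    (3, "BMW 3-Series 318i"),
    (5, "Volvo S60 3.2T SE"),
    (6, "BMW 5-Series 540i"),
]

# A gapless display sequence, without ever touching the ids.
for display_no, (vehicle_id, name) in enumerate(rows, start=1):
    print(display_no, vehicle_id, name)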