I am storing objects as hashes, for example: key -> customer:123, with fields email -> dk#gmail.com, mobile -> 828212, name -> darshan, etc.
Now, is it possible in Redis to query customers by email without storing the cross-relationship as a set, which feels more like a workaround?
For example, at customer insertion time, storing a set with key -> email:dk#gmail.com and member -> customer:123, and so on.
Let's say I have 100 fields in a hash and I need to query on 20 of them (like email);
creating a set entry for each of those fields would significantly increase the number of keys in the Redis instance.
Is there any other alternative or better approach?
Redis doesn't have built-in indexing/searching by field because it is not a relational database but more of a data structures server (each key holds a data structure such as a set, list, hash, sorted set, or HyperLogLog). However, if you are using Redis 4.0 you can use the search module (RediSearch) to accomplish this.
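For completeness, without a module the usual approach is exactly the secondary-index set described in the question: at insertion time you add the customer key to a set named after the indexed value, and only for the handful of fields you actually query on. A minimal redis-py sketch, reusing the example values from the question:

import redis

r = redis.Redis(decode_responses=True)

# store the customer as a hash
r.hset("customer:123", mapping={"email": "dk#gmail.com", "mobile": "828212", "name": "darshan"})

# maintain a secondary index only for the fields you query on (e.g. email)
r.sadd("email:dk#gmail.com", "customer:123")

# query: find customers by email, then fetch their hashes
for key in r.smembers("email:dk#gmail.com"):
    print(r.hgetall(key))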
I recently got to know Redis, integrated it into my project and now I am facing the following use case.
My question in short:
Which data type can I use to get all entries sorted AND to be able to overwrite single entries?
My question in long:
I have a huge number of point cloud models that I want to store and work with via Redis.
My point cloud model consists of three things:
Unique id (stays the same)
Point Cloud as a string (changes over time)
Priority as an integer (changes over time)
Basically I would like to be able to do only two things with Redis. However, if I understand the documentation correctly, these are seen as benefits of two different data types, so I can't find a data type that exactly fits my use case. I hope, however, that I am wrong about this and that someone here can help me.
Use case:
Quickly get all models, already sorted
Overwrite/update a specific model
Sorted Sets:
Advantage
Get all entries in sorted order
my model property Priority can be used here as a score, which determines the order.
Disadvantage
No possibility to access a specific value via a key and overwrite it.
Hashes:
Advantage
Overwrite specific entry via Key > Field
Get all entries via Key
Disadvantage
No sorting
I would suggest just using two distinct data types:
a hash with all the properties of your model, with the exception of the priority;
a sorted set which lets you easily sort your collection and deal with the scores / priorities.
You could then link the two by storing each hash key (or a distinctive value which allows to reconstruct the final hash key) as the related sorted set member.
For example:
> HSET point-cloud:123 foo bar baz suppiej
> ZADD point-clouds-by-priority 42 point-cloud:123
You will keep all the advantages you mentioned, with no disadvantages at all.
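To illustrate how the two structures cooperate, here is a hedged redis-py sketch (key names follow the example above; the field names are made up):

import redis

r = redis.Redis(decode_responses=True)

# write or overwrite a single model and (re)score its priority
r.hset("point-cloud:123", mapping={"cloud": "...serialized points...", "foo": "bar"})
r.zadd("point-clouds-by-priority", {"point-cloud:123": 42})

# read all models, already sorted by priority
for key in r.zrange("point-clouds-by-priority", 0, -1):
    model = r.hgetall(key)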
I need to understand how one can search attributes in DynamoDB when the attribute is part of an array.
So, in denormalising a table, say a person has many email addresses; I would create an array in the person table to store the email addresses.
Now, as the email address is not part of the sort key, if I need to search on an email address to find the person record, I need to index the email attribute.
Can I create an index on the email address, which is a one-to-many relationship with a person record and, as I understand it, is stored as an array in DynamoDB?
Would this secondary index be global or local? Assuming I have billions of person records?
If I could create it as either LSI or GSI, please explain the pros/cons of each.
thank you very much!
It's worth getting the terminology right to start with. DynamoDB's supported data types are:
Scalar - String, number, binary, boolean
Document - List, Map
Sets - String Set, Number Set, Binary Set
I think you are suggesting you have an attribute that contains a list of emails. The attribute might look like this
Emails: ["one#email.com", "two#email.com", "three#email.com"]
There are a couple of relevant points about key attributes described in the DynamoDB documentation. Firstly, keys must be top-level attributes (they can't be nested in JSON documents). Secondly, they must be of scalar types (i.e. String, Number or Binary).
As your list of emails is not a scalar type, you cannot use it in a key or index.
Given this schema you would have to perform a scan, in which you would set the FilterExpression on your Emails attribute using the CONTAINS operator.
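A hedged boto3 sketch of that scan (the table name is hypothetical; the Emails attribute matches the example above):

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("Person")  # hypothetical table name

# A scan reads every item and only then applies the filter,
# so this works but does not scale the way a Query would.
response = table.scan(FilterExpression=Attr("Emails").contains("two#email.com"))
people = response["Items"]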
Stu's answer has some great information in it and he is right: you can't use an array itself as a key.
What you CAN sometimes do is concatenate several variables (or an array) into a single string with a known separator (maybe '_' for example), and then use that string as a Sort Key.
I used this concept to create a composite Sort Key that consisted of multiple ISO 8601 date objects (DynamoDB stores dates as ISO 8601 strings in String type attributes). I also used several attributes that were not dates but were integers with a fixed character length.
By using the BETWEEN comparison I am able to individually query each of the variables that are concatenated into the Sort Key, or construct a complex query that matches against all of them as a group.
In other words a data object could use a Sort Key like this:
email#gmail.com_email#msn.com_email#someotherplace.com
Then you could query that (assuming you knew what the partition key is) with something like this:
SELECT * FROM Users
WHERE User='Bob' AND Emails LIKE '%email#msn.com%'
YOU MUST know the partition key in order to perform a Query no matter what you choose as your Sort Key and no matter how that Sort Key is constructed.
I think the real question you are asking is what should my sort keys and partition keys be? That will depend on exactly which queries you want to make and how frequently each type of query is used.
I have found that I have way more success with DynamoDB if I think about the queries I want to make first, and then go from there.
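As an illustration of the BETWEEN approach on a concatenated Sort Key, a hedged boto3 sketch (table, key, and attribute names are hypothetical, and the sort key is assumed to begin with an ISO 8601 date as described above):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Users")  # hypothetical

# The composite sort key starts with an ISO 8601 date, so plain string
# comparison brackets the leading component lexicographically.
response = table.query(
    KeyConditionExpression=Key("User").eq("Bob")
    & Key("CompositeSortKey").between("2021-01-01", "2021-06-30")
)
items = response["Items"]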
A word on Secondary Indexes (GSI / LSI)
The issue here is that you still need to 'know' the Partition Key for your secondary data structure. GSI / LSI help you avoid needing to create additional DynamoDB tables for the sole purpose of improving data access.
From Amazon:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
To me it sounds more like the issue is selecting the Keys.
LSI (Local Secondary Index)
If (for your Query case) you don't know the Partition Key to begin with (as it seems you don't) then a Local Secondary Index won't help — since it has the SAME Partition Key as the base table.
GSI (Global Secondary Index)
A Global Secondary Index could help in that you can have a DIFFERENT Partition Key and Sort Key (presumably a partition key that you could 'know' for this query).
So you could use the Email attribute (perhaps composite) as the Sort Key on your GSI and then something like a service name, or sign-up stage, as your Partition Key. This would let you 'know' what partition that user would be in based on their progress or the service they signed up from (for example).
GSI / LSI still need to generate unique values using their keys so keep that in mind!
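To make the GSI idea concrete, a hedged boto3 sketch that queries a hypothetical index whose Partition Key is a service name and whose Sort Key is the email (the index and attribute names are assumptions, not something from the question):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Users")  # hypothetical

# Query the GSI instead of the base table: partition by service, sort by email.
response = table.query(
    IndexName="ServiceEmailIndex",  # hypothetical GSI
    KeyConditionExpression=Key("Service").eq("newsletter")
    & Key("Email").eq("two#email.com"),
)
items = response["Items"]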
I'm new to NoSQL databases, so forgive my SQL mentality, but I'm looking to store data that can be 'queried' by one of two keys. Here's the structure:
{user_id, business_id, last_seen_ts, first_seen_ts}
where, if this were a SQL DB, I'd use user_id and business_id as a composite primary key. The sort of querying I'm looking for is:
1.'get all where business_id = x'
2.'get all where user_id = x'
Any tips? I don't think I can make a simple secondary index based on the 2 retrieval types above. I looked into commands like 'zadd' and 'zrange' but there isn't really any sorting involved here.
The use case for Redis for me is to alleviate writes and reads on my SQL database while this program computes (doing its storage in redis) what eventually will be written to the SQL DB.
Note: given the OP's self-proclaimed experience, this answer is intentionally simplified for educational purposes.
(one of) The first thing(s) you need to understand about Redis is that you design the data so every query will be what you're used to thinking of as access by primary key. It is convenient, in that sense, to imagine Redis' keyspace (the global dictionary) as something like this relational table:
CREATE TABLE redis (
key VARCHAR(512MB) NOT NULL,
value VARCHAR(512MB),
PRIMARY KEY (key)
);
Note: in Redis, value can be more than just a String of course.
Keeping that in mind, and unlike other database models where normalizing data is the practice, you want to have your Redis ready to handle both of your queries efficiently. That means you'll be saving the data twice: once under a primary key that allows searching for businesses by id, and another time that allows querying by user id.
To answer the first query ("get all where business_id = x"), you want to have a key for each x that holds the relevant data (in Redis we use the colon, ':', as a separator by convention) - so for x=1 you'd probably call your key business:1, for x=a1b2c3 business:a1b2c3 and so forth.
Each such business:x key could be a Redis Set, where each member represents the rest of the tuple. So, if the data is something like:
{user_id: foo, business_id: bar, last_seen_ts: 987, first_seen_ts: 123}
You'd be storing it with Redis with something like:
SADD business:bar foo
Note: you can use any serialization you want, Set members are just Strings.
With this in place, answering the first query is just a matter of SMEMBERS business:bar (or SSCANing it for larger Sets).
If you've followed through, you already know how to serve the second query. First, use a Set for each user (e.g. user:foo) to which you SADD user:foo bar. Then SMEMBERS/SSCAN and you're almost home.
The last thing you'll need is another set of keys, but this time you can use Hashes. Each such Hash will store the additional information of the tuple, namely the timestamps. We can use a "Primary Key" made up of the business and the user ids (or vice versa) like so:
HMSET foo:bar first 123 last 987
After you've gotten the results from the 1st or 2nd query, you can fetch the contents of the relevant Hashes to complete the query (assuming that the queries return the timestamps as well).
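Putting the three pieces together, a minimal redis-py sketch using the foo/bar/123/987 values from the example:

import redis

r = redis.Redis(decode_responses=True)

# index the relation both ways, plus a hash for the timestamps
r.sadd("business:bar", "foo")                         # users seen at business "bar"
r.sadd("user:foo", "bar")                             # businesses seen by user "foo"
r.hset("foo:bar", mapping={"first": 123, "last": 987})

# "get all where business_id = bar", timestamps included
for user_id in r.smembers("business:bar"):
    print(user_id, r.hgetall(f"{user_id}:bar"))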
The idiomatic way of doing this in Redis is to use a SET for each type of query you want to do.
In your case you would create:
a hash for each tuple (user_id, business_id, last_seen_ts, first_seen_ts)
a set with a name like user:<user_id>:business:<business_id>, to store the keys of the hashes for this user and this business (you have to add the ID of the hashes with SADD)
Then to get all data for a given user and business, you first get the set's content with SMEMBERS, and then fetch every hash whose ID is in the set (e.g. with HGETALL).
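A sketch of that layout with redis-py (the hash ID visit:1 is hypothetical):

import redis

r = redis.Redis(decode_responses=True)

# one hash per tuple, one set per (user, business) pair pointing at those hashes
r.hset("visit:1", mapping={"user_id": "foo", "business_id": "bar",
                           "first_seen_ts": 123, "last_seen_ts": 987})
r.sadd("user:foo:business:bar", "visit:1")

# fetch all data for user "foo" at business "bar"
rows = [r.hgetall(hash_id) for hash_id in r.smembers("user:foo:business:bar")]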
Suppose I have many SQL tables with at least 10 columns each.
Let's take for example:
HR Table: ID, FirstName, LastName, PhoneNumber, Gender, City, Street, Height, Weight, IQ
I need to build a cache layer for all of my SQL tables.
What would be the best way to store the data in Couchbase ?
Should I store the whole document for each row ?
Here is a potential key. For example, a key that returns a JSON document containing all the columns of the row where ID=4:
HR_4
Or should I implement it like key-value store ?
For instance, a key that returns one specific value (not all the columns):
HR_4_FirstName
Please keep in mind that I DO need to get an entire row per key in my application, but sometimes I need just one specific column.
The question is: should I go for the second way, and if I need a few values, just send a few requests from my application and aggregate them?
On the other hand, the second way means many more keys to handle (it effectively means having a key for each DB field).
I would look at how your application uses and accesses the data. It may be worthwhile to have several objects for the data you are trying to store depending on access patterns and what you want to optimize for. May I recommend this article on data modeling for a user profile store in Couchbase. Let me know if this does not help.
The thing I'm trying to implement is an ID table. Basically it has the structure (user_id, lecturer_id), where user_id refers to the primary key in my User table and lecturer_id refers to the primary key of my Lecturer table.
I'm trying to implement this in Redis, but if I set the key to the user's primary ID, then for a query like 'get all the records with lecturer id = 5' I won't be able to reach the records in O(1) time, since the lecturer ID is a value, not the key.
How can I form a structure like the ID table I mentioned above, or does Redis not support that?
One of the things you learn fast while working with Redis is that you get to design your data structures around your access needs, especially when it comes to relations (it's not a relational database, after all).
There is no way to search by "value" with a O(1) time complexity as you already noticed, but there are ways to approach what you describe using redis. Here's what I would recommend:
Store your user data by user id (in e.g. a hash) as you are already doing.
Have an additional set for each lecturer id containing all user ids that correspond to the lecturer id in question.
This might seem like duplicating the data of the relation, since your user data would have to store the lecturer id and your lecturer data would store user ids, but that's the (tiny) price to pay if one is to build relations in a non-relational data store like Redis. In practical terms this works well; memory is rarely a bottleneck for small-ish data sets (think thousands of ids).
To get a better picture at how are people using redis to model applications with relations, I recommend reading Design and implementation of a simple Twitter clone and the source code of Lamernews, both of which are written by redis author Salvatore Sanfilippo.
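A minimal redis-py sketch of that layout (key and field names are illustrative):

import redis

r = redis.Redis(decode_responses=True)

# user data keyed by user id
r.hset("user:42", mapping={"name": "Alice", "lecturer_id": 5})

# reverse index: the set of user ids for lecturer 5
r.sadd("lecturer:5:users", 42)

# "get all records with lecturer_id = 5"
users = [r.hgetall(f"user:{uid}") for uid in r.smembers("lecturer:5:users")]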
As already answered, in vanilla Redis there is no way to store the data only once and have Redis query it for you.
You have to maintain secondary indexes yourself.
However, with Redis modules this is not necessarily true. Modules like zeeSQL or RediSearch allow you to store data directly in Redis and retrieve it with a SQL query (for zeeSQL) or a SQL-like query (for RediSearch).
In your case, a small example with zeeSQL:
> ZEESQL.CREATE_DB DB
OK
> ZEESQL.EXEC DB COMMAND "CREATE TABLE user(user_id INT, lecture_id INT);"
OK
> ZEESQL.EXEC DB COMMAND "SELECT * FROM user WHERE lecture_id = 3;"
... your result ...