DynamoDB: When to use what PK type?

I am trying to read up on best practices on DynamoDB. I saw that DynamoDB has two PK types:
Hash Key
Hash and Range Key
From what I read, it appears the latter is like the former but supports sorting and indexing of a finite set of columns.
So my question is: why would you ever use only a hash key without a range key? Is it a viable choice only when the table is never searched?
It'd also be great to have some general guidelines on when to use which key type. I've read several guides (including Amazon's own documentation on DynamoDB), but none of them appear to address this question directly.
Thanks

The choice of which key to use comes down to your Use Cases and Data Requirements for a particular scenario. For example, if you are storing User Session Data, it might not make much sense to use the Range Key, since each record can be referenced by a GUID and accessed directly, with no grouping requirement. In general terms, once you know the Session Id, you just get the specific item by querying on the key. Another example could be storing User Account or Profile data: each user has their own record, and you will most likely access it directly (by User Id or something else).
However, if you are storing Order Items then the Range Key makes much more sense since you probably want to retrieve the items grouped by their Order.
In terms of the Data Model, the Hash Key allows you to uniquely identify a record in your table, and the Range Key can optionally be used to group and sort several records that are usually retrieved together. Example: if you are defining an Aggregate to store Order Items, the Order Id could be your Hash Key and the OrderItemId the Range Key. Whenever you want to retrieve the Order Items from a particular Order, you just query by the Hash Key (Order Id) and you get all of them.
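For illustration, here is a minimal boto3 (Python) sketch of both access patterns; the Sessions and OrderItems table names and their attributes are assumptions for the example, not part of the question:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Hash Key only: fetch one session record directly by its GUID.
sessions = dynamodb.Table("Sessions")
session = sessions.get_item(Key={"SessionId": "a1b2c3d4"})["Item"]

# Hash and Range Key: fetch all items belonging to one Order,
# returned sorted by the Range Key (OrderItemId).
order_items = dynamodb.Table("OrderItems")
items = order_items.query(
    KeyConditionExpression=Key("OrderId").eq("order-123")
)["Items"]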
You can find below a formal definition for the use of these two keys:
"Composite Hash Key with Range Key allows the developer to create a
primary key that is the composite of two attributes, a 'hash
attribute' and a 'range attribute.' When querying against a composite
key, the hash attribute needs to be uniquely matched but a range
operation can be specified for the range attribute: e.g. all orders
from Werner in the past 24 hours, or all games played by an individual
player in the past 24 hours." [VOGELS]
So the Range Key adds a grouping capability to the Data Model; however, the use of these two keys also has implications for the Storage Model:
"Dynamo uses consistent hashing to partition its key space across its
replicas and to ensure uniform load distribution. A uniform key
distribution can help us achieve uniform load distribution assuming
the access distribution of keys is not highly skewed."
[DDB-SOSP2007]
Not only does the Hash Key allow you to uniquely identify a record, it is also the mechanism that ensures load distribution. The Range Key (when used) indicates the records that will mostly be retrieved together, so storage can also be optimized for that need.
Choosing the correct keys to represent your data is one of the most critical aspects of your design process, and it directly impacts how well your application will perform and scale, and how much it will cost.
Footnotes:
The Data Model is the model through which we perceive and manipulate our data. It describes how we interact with the data in the database [FOWLER]. In other words, it is how you abstract your data: the way you group your entities, the attributes you choose as primary keys, etc.
The Storage Model describes how the database stores and manipulates the data internally [FOWLER]. Although you cannot control this directly, you can certainly optimize how the data is retrieved or written by knowing how the database works internally.

Related

Redis + .NET 6 - Best data type for querying all entries and updating individual entries

I recently got to know Redis, integrated it into my project and now I am facing the following use case.
My question in short:
Which data type can I use to get all entries sorted AND to be able to overwrite single entries?
My question in detail:
I have a huge number of point cloud models that I want to store and work with via Redis.
My point cloud model consists of three things:
Unique id (stays the same)
Point Cloud as a string (changes over time)
Priority as an integer (changes over time)
Basically I would like to be able to do only two things with Redis. However, if I understand the documentation correctly, these are seen as benefits of two different data types, so I can't find a data type that exactly fits my use case. I hope, however, that I am wrong about this and that someone here can help me.
Use case:
Quickly get all models, already sorted
Overwrite/update a specific model
Sorted Sets:
Advantage
Get all entries in sorted order
My model property Priority can be used as the score, which determines the order.
Disadvantage
No way to access a specific value via a key and overwrite it.
Hashes:
Advantage
Overwrite a specific entry via Key > Field
Get all entries via Key
Disadvantage
No sorting
I would suggest just using two distinct data types:
a hash with all the properties of your model, with the exception of the priority;
a sorted set which makes it easy to sort your collection and deal with the scores / priorities.
You could then link the two by storing each hash key (or a distinctive value which allows you to reconstruct the final hash key) as the related sorted set member.
For example:
> HSET point-cloud:123 foo bar baz suppiej
> ZADD point-clouds-by-priority 42 point-cloud:123
You will keep all the advantages you mentioned, with no disadvantages at all.
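In application code the same pattern is a handful of calls. Below is a rough redis-py (Python) sketch of the idea, reusing the hypothetical key names from the example above:

import redis

r = redis.Redis(decode_responses=True)

def save_model(model_id, point_cloud, priority):
    # Overwrite/update a specific model: rewrite its hash fields
    # and (re)score its member in the sorted set.
    r.hset(f"point-cloud:{model_id}", mapping={"pointCloud": point_cloud})
    r.zadd("point-clouds-by-priority", {f"point-cloud:{model_id}": priority})

def all_models_sorted():
    # Get all models, already sorted by priority (ascending).
    keys = r.zrange("point-clouds-by-priority", 0, -1)
    return [r.hgetall(key) for key in keys]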

searching on array items on a DynamoDB table

I need to understand how one can search attributes of a DynamoDB table that are part of an array.
Say I am denormalising a table: a person has many email addresses, so I would create an array in the person table to store them.
Now, as the email address is not part of the sort key, if I need to search on an email address to find the person record, I need to index the email attribute.
Can I create an index on the email address, which has a 1-many relationship with a person record and, as I understand it, is stored as an array in DynamoDB?
Would this secondary index be global or local, assuming I have billions of person records?
If I could create it as either LSI or GSI, please explain the pros/cons of each.
thank you very much!
It's worth getting the terminology right to start with. DynamoDB's supported data types are:
Scalar - String, Number, Binary, Boolean
Document - List, Map
Sets - String Set, Number Set, Binary Set
I think you are suggesting you have an attribute that contains a list of emails. The attribute might look like this:
Emails: ["one#email.com", "two#email.com", "three#email.com"]
There are a couple of relevant points about key attributes described here. Firstly, keys must be top-level attributes (they can't be nested in JSON documents). Secondly, they must be of a scalar type (i.e. String, Number, or Binary).
As your list of emails is not a scalar type, you cannot use it in a key or index.
Given this schema you would have to perform a scan, in which you would set the FilterExpression on your Emails attribute using the CONTAINS operator.
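That scan might look like the following boto3 (Python) sketch; the People table name is an assumption for illustration:

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
people = dynamodb.Table("People")

# A Scan reads the whole table and applies the filter afterwards,
# so you are charged for every item read, not just the matches.
response = people.scan(
    FilterExpression=Attr("Emails").contains("two#email.com")
)
print(response["Items"])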
Stu's answer has some great information in it, and he is right: you can't use an Array itself as a key.
What you CAN sometimes do is concatenate several variables (or an Array) into a single string with a known separator (maybe '_', for example), and then use that string as a Sort Key.
I used this concept to create a composite Sort Key that consisted of multiple ISO 8601 date strings (DynamoDB stores dates as ISO 8601 in String type attributes). I also used several attributes that were not dates but were integers with a fixed character length.
By using the BETWEEN comparison I am able to individually query each of the variables that are concatenated into the Sort Key, or construct a complex query that matches against all of them as a group.
In other words a data object could use a Sort Key like this:
email#gmail.com_email#msn.com_email#someotherplace.com
Then you could query that (assuming you knew what the partition key is) with pseudo-SQL along these lines (DynamoDB itself has no LIKE operator; you would express the sort-key match with BETWEEN or begins_with, as described above):
SELECT * FROM Users
WHERE User='Bob' AND Emails LIKE '%email#msn.com%'
YOU MUST know the partition key in order to perform a Query no matter what you choose as your Sort Key and no matter how that Sort Key is constructed.
I think the real question you are asking is: what should my sort keys and partition keys be? That will depend on exactly which queries you want to make and how frequently each type of query is used.
I have found that I have way more success with DynamoDB if I think about the queries I want to make first, and then go from there.
A word on Secondary Indexes (GSI / LSI)
The issue here is that you still need to 'know' the Partition Key for your secondary data structure. GSI / LSI help you avoid needing to create additional DynamoDB tables for the sole purpose of improving data access.
From Amazon:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
To me it sounds more like the issue is selecting the Keys.
LSI (Local Secondary Index)
If (for your Query case) you don't know the Partition Key to begin with (as it seems you don't), then a Local Secondary Index won't help, since it has the SAME Partition Key as the base table.
GSI (Global Secondary Index)
A Global Secondary Index could help in that you can have a DIFFERENT Partition Key and Sort Key (presumably a partition key that you could 'know' for this query).
So you could use the Email attribute (perhaps composite) as the Sort Key on your GSI and then something like a service name, or sign-up stage, as your Partition Key. This would let you 'know' what partition that user would be in based on their progress or the service they signed up from (for example).
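A query against such a GSI could look like the following boto3 (Python) sketch; the index name, the ServiceName partition key and the Emails sort key are hypothetical, following the example above:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
users = dynamodb.Table("Users")

response = users.query(
    IndexName="EmailsByService",  # hypothetical GSI name
    KeyConditionExpression=(
        # Partition key you 'know' (the service signed up from), plus a
        # sort key condition; begins_with only matches an email that sits
        # at the start of the concatenated string.
        Key("ServiceName").eq("newsletter")
        & Key("Emails").begins_with("email#gmail.com")
    ),
)
print(response["Items"])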
Note that, unlike the base table's primary key, GSI / LSI keys do not have to be unique, so keep that in mind!

Composite Primary Key equivalent in Redis

I'm new to nosql databases so forgive my sql mentality but I'm looking to store data that can be 'queried' by one of 2 keys. Here's the structure:
{user_id, business_id, last_seen_ts, first_seen_ts}
where, if this were a SQL DB, I'd use the user_id and business_id as a composite primary key. The sort of querying I'm looking for is:
1. 'get all where business_id = x'
2. 'get all where user_id = x'
Any tips? I don't think I can make a simple secondary index based on the 2 retrieval types above. I looked into commands like 'zadd' and 'zrange' but there isn't really any sorting involved here.
The use case for Redis for me is to alleviate writes and reads on my SQL database while this program computes (doing its storage in redis) what eventually will be written to the SQL DB.
Note: given the OP's self-proclaimed experience, this answer is intentionally simplified for educational purposes.
(one of) The first thing(s) you need to understand about Redis is that you design the data so every query is what you're used to thinking of as access by primary key. It is convenient, in that sense, to imagine Redis' keyspace (the global dictionary) as something like this relational table:
CREATE TABLE redis (
  key VARCHAR(512MB) NOT NULL,
  value VARCHAR(512MB),
  PRIMARY KEY (key)
);
Note: in Redis, value can be more than just a String of course.
Keeping that in mind, and unlike other database models where normalizing data is the practice, you want your Redis data ready to handle both of your queries efficiently. That means you'll be saving the data twice: once under a key that allows searching for businesses by id, and again under a key that allows querying by user id.
To answer the first query ("'get all where business_id = x'"), you want to have a key for each x that holds the relevant data (in Redis we use the colon, ':', as a separator by convention) - so for x=1 you'd probably call your key business:1, for x=a1b2c3 business:a1b2c3, and so forth.
Each such business:x key could be a Redis Set, where each member represents the rest of the tuple. So, if the data is something like:
{user_id: foo, business_id: bar, last_seen_ts: 987, first_seen_ts: 123}
You'd be storing it with Redis with something like:
SADD business:bar foo
Note: you can use any serialization you want, Set members are just Strings.
With this in place, answering the first query is just a matter of SMEMBERS business:bar (or SSCANing it for larger Sets).
If you've followed through, you already know how to serve the second query. First, use a Set for each user (e.g. user:foo) to which you SADD user:foo bar. Then SMEMBERS/SSCAN and you're almost home.
The last thing you'll need is another set of keys, but this time you can use Hashes. Each such Hash will store the additional information of the tuple, namely the timestamps. We can use a "Primary Key" made up of the business and the user ids (or vice versa), like so:
HMSET foo:bar first 123 last 987
After you've gotten the results from the 1st or 2nd query, you can fetch the contents of the relevant Hashes to complete the query (assuming that the queries return the timestamps as well).
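Put together, the whole pattern is only a few commands per write and per read. A rough redis-py (Python) sketch, reusing the key names above:

import redis

r = redis.Redis(decode_responses=True)

def record(user_id, business_id, first_seen_ts, last_seen_ts):
    # Index the tuple both ways, plus a Hash for the timestamps.
    r.sadd(f"business:{business_id}", user_id)
    r.sadd(f"user:{user_id}", business_id)
    r.hset(f"{user_id}:{business_id}",
           mapping={"first": first_seen_ts, "last": last_seen_ts})

def get_all_for_business(business_id):
    # 'get all where business_id = x', timestamps included.
    return {
        user_id: r.hgetall(f"{user_id}:{business_id}")
        for user_id in r.smembers(f"business:{business_id}")
    }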
The idiomatic way of doing this in Redis is to use a SET for each type of query you want to do.
In your case you would create:
a hash for each tuple (user_id, business_id, last_seen_ts, first_seen_ts)
a set with a name like user:<user_id>:business:<business_id>, to store the keys of the hashes for this user and this business (you have to add the ID of the hashes with SADD)
Then, to get all data for a given user and business, you first get the SET's content with SMEMBERS, and then read every HASH whose ID is in the SET (e.g. with HGETALL).

Relational database design - surrogate keys vs natural keys in context of query speed, ORM and app development

Suppose I have a set of 'model' entities, and a set of difficulty levels. Each model has a certain percentage success rate for a given day on a given difficulty level.
A model entity has a name, which is both unique and immutable under any circumstances, so it makes a natural primary key. A difficulty level is described by its name only (easy, normal, etc). Difficulty levels are very, very unlikely to change, though it's possible a new one could be added. A success rate record is uniquely identified by the model it pertains to, the difficulty level, and the date.
Here's the most trivial db design for this scenario:
In this design, 'name' is the primary key for the table 'models' and is represented by a VARCHAR(20) field. Likewise, a VARCHAR(20) field 'name' is the primary key for the table 'difficulty_levels' (a lookup table). In the 'success_rates' table, 'model_name' is a foreign key referencing the 'name' field in the 'model' table, and 'difficulty_level' is a foreign key referencing the 'name' field in the 'difficulty_levels' table. The fields 'model_name', 'difficulty_level' and 'date' make up a composite primary key for the 'success_rates' table.
The most used queries would be:
getting all success rates for a certain model, difficulty level, and date period
getting the most/least successful models for a certain period and difficulty level.
Now, my question is - do I need to add surrogate primary keys to the 'models' and 'difficulty_levels' tables? I guess storing int values as opposed to varchar values in the foreign key fields of 'success_rates' takes up less space, and maybe the queries will be faster (just my wild guess, not sure about that)?
The problem I see with surrogate keys is that they have zero relevance to the business logic itself. I'm planning on using a mini-ORM (most likely Dapper), and without the surrogate keys I can operate on classes that very cleanly represent the entities I'm working with. Now, if I add surrogate keys, I'll have to add 'Id' properties to my classes, and I'm really against adding a database storage implementation like that to a class that can be used anywhere in the app, not even in connection with a database storage. I could add proxy storage classes with an Id property, but that adds another level of complexity. Plus the fact that the 'Id' property won't be readonly (so the ORM can set the ids after saving the entity to the database) means that it would be possible to accidentally set it to a random/invalid value.
I'm not very familiar with ORM's and I have zero knowledge of Dapper, so correct me if I was wrong in any of these points.
What would be the best approach here?
The problem I see with surrogate keys is that they have zero relevance to the business logic itself.
That is actually the benefit, not the problem. The key's value is the immutable identifier of the entity, regardless of whatever else changes. I really doubt that there is any widely used ORM that cannot easily make use of it, as the alternative (cascading changes of a key value to every child record) is so much more difficult to implement.
I'd also suggest that you add a value to your difficulty levels that allows the hierarchy of increasing difficulty to be represented in the data model, otherwise "harder than" or "easier than" cannot be robustly represented.

Database Design Question: GUID + Natural Numbers

For a database I'm building, I've decided to use natural numbers as the primary key. I'm aware of the advantages GUIDs offer, but looking at the data, the bulk of each row's data was the GUID key.
I want to generate XML records from the database data, and one problem with natural numbers is that I don't want to expose my database keys to the outside world and allow users to guess keys. I believe GUIDs solve this problem.
So, I think the solution is to generate a sparse, unique ID derived from the natural ID (hopefully it would be 2-way), or just add an extra column in the database and store a GUID (or some other multi-byte ID).
The derived value is nicer because there is no storage penalty, but it would be easier to reverse and guess compared to a GUID.
I'm curious what others on SO have done, and what insights they have.
What you can do to compute a "GUID" is to calculate an MD5 hash of the ID with some salt (the table name, for instance), load this into a GUID, and set a few bits so that it is a valid version 3 (MD5) GUID.
This is almost 2-way, since you can have a SQL computed column (which can also be indexed in certain cases) holding the GUID without persisting it in the table, and you can always re-compute the GUID from the correct ID and salt. Guessing it should be hard for users, since they know neither the salt nor the actual ID.
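Python's uuid module implements exactly this scheme (version 3, MD5-based), so a sketch can be very short; the namespace value standing in for the salt is an assumption:

import uuid

# Hypothetical salt: a fixed namespace derived from the table name.
# Users who know neither this namespace nor the numeric ID cannot
# easily guess or reverse the resulting GUID.
TABLE_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "models-table")

def public_id(row_id):
    # uuid3() computes MD5(namespace + name) and sets the version and
    # variant bits, yielding a valid version 3 GUID.
    return uuid.uuid3(TABLE_NS, str(row_id))

print(public_id(42))  # deterministic: the same ID always maps to the same GUID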