Since the below got a bit long, here's the tl;dr version: is there an existing key/value best practice for fast key and value lookup, something like a hash-based set with persistent indices?
I'm interested in the world of key-value databases and have so far failed to figure out how one would efficiently implement the following use-case:
Assume we want to serialize some data and reference them somewhere else by a persistent, unique integer index. Thus e.g.: Key = unsigned int, Value = MyData.
The database should have fast key lookup and ensure that MyData is unique.
Now, when I insert a new value into the database, I could assign it a new index key, e.g. the current size of the database, or, to prevent clashes after removing items, I could keep an external counter.
But how would I ensure that I do not insert the same MyData value into my database? So far, it looks to me as if this is not efficiently possible with key-value databases - is this correct? I.e. I do not want to iterate over the whole database just to ensure MyData value is not in there already...
What is the best practice to implement this, then?
For background: I work on KDevelop where we use the above for our code analysis cache. We actually have a custom implementation of the above use case 1. Search for Bucket and ItemRepository if you are interested in the internals, and see 2 for an exemplary usage of the ItemRepository.
But you will probably agree that this code is quite hard to understand and thus hard to maintain. I want to compare its performance to alternative solutions which might result in simpler code, but only if that does not incur a severe performance penalty. Considering the hype around the performance of key-value stores such as OpenLDAP MDB, Kyoto Cabinet and LevelDB, this is where I wanted to start.
What we have in KDevelop - as far as I figured out - is basically a sort of hybrid on-disk/in-memory hash map which gets saved to disk periodically (which of course can result in major data corruption in case of crashes etc.). Items are stored in a location based on their hash value which then of course also allows relatively fast value lookups as long as the hash function is fast. The added twist is that you also get some sort of persistent database index which can be used to lookup the items quite efficiently.
So - long story short - how would one do that with a key/value database such as LevelDB, Kyoto Cabinet, OpenLDAP MDB - you name it?
Sounds like you want to do what OpenLDAP does with its equality index. Perhaps this is the same as the OrientDB example; I didn't read it.
The main table is indexed by a monotonically increasing integer key (called the entryID), and stores the data value. The equality index is indexed by a hash of the value, and stores a list of entryIDs that match the hash. Since the hash might have collisions, just the existence of an entry in the equality index doesn't prove uniqueness or duplication. You still need to check the actual values.
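A minimal Python sketch of that two-table pattern, with plain dicts standing in for the real database tables (the names main_table, equality_index and next_id are just illustrative):

```python
import hashlib

# Illustrative in-memory stand-ins for the two database tables.
main_table = {}       # entryID -> value bytes
equality_index = {}   # hash of value -> list of entryIDs
next_id = 1

def value_hash(value: bytes) -> bytes:
    return hashlib.sha1(value).digest()

def get_or_insert(value: bytes) -> int:
    """Return the entryID for value, inserting it only if it is not present."""
    global next_id
    h = value_hash(value)
    # The hash may collide, so verify candidates against the actual value.
    for entry_id in equality_index.get(h, []):
        if main_table[entry_id] == value:
            return entry_id
    entry_id = next_id
    next_id += 1
    main_table[entry_id] = value
    equality_index.setdefault(h, []).append(entry_id)
    return entry_id
```

The point is that the equality index only narrows the search down to a handful of candidates; the actual value comparison still decides uniqueness.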
A faster/simpler approach, if you're using MDB, BDB, or some other database that supports duplicate keys, is to keep just one table, using the hash as the key. In both MDB and BDB there is a GET_BOTH request which matches both the key and the data to perform a fetch. If it succeeds, you know for certain that the value already exists. Otherwise, you can simply store the data value without worrying about hash collisions.
One caveat: in MDB, when using duplicate keys, the size of each value is limited to less than half a disk page.
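If you go the MDB route from Python, a rough (untested) sketch using the py-lmdb binding might look like the following; dupsort=True enables duplicate keys, and the cursor's set_key_dup is the GET_BOTH equivalent (exact key + value match). The path and database name are made up.

```python
import hashlib
import lmdb  # py-lmdb binding; treat this as a sketch, not tested code

env = lmdb.open("/tmp/mydb", max_dbs=1, map_size=2**30)
# dupsort=True allows multiple values per key (here: hash -> several MyData blobs)
values_db = env.open_db(b"values", dupsort=True)

def insert_if_absent(value: bytes) -> bool:
    """Insert value keyed by its hash; return False if it was already stored."""
    key = hashlib.sha1(value).digest()
    with env.begin(write=True, db=values_db) as txn:
        cur = txn.cursor()
        # set_key_dup seeks to the exact (key, value) pair, like GET_BOTH.
        if cur.set_key_dup(key, value):
            return False
        txn.put(key, value, db=values_db)
        return True
```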
Unless I'm missing something here - typically your hash algorithm is consistent and will provide the same key for the same data. Thus you should only need to look up the key to see if it already exists, or handle the (likely duplicate key) error the DB gives back to you.
AFAIK key/value DBs can and will enforce a unique value constraint for you, i.e. you will get an error if you try to save a value that already exists.
How big are your value strings?
I would just store them in a key and let the database do all the work.
Typical LevelDB style, which applies to most KV stores, would be to use a pair of keys, prefixed to indicate type; a sketch of this scheme follows the key layout below.
e.g.:
Key = 'i' + ID
Value = valueString
Key = 'v' + valueString
Value = ID
In a system that needs to allow for multiple identical valueStrings, you would move the ID into the tail of the second key:
Key = 'v' + valueString + ID
Value = empty
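Here is a rough Python sketch of that two-key scheme using the plyvel LevelDB binding; the database path, the 8-byte big-endian ID encoding and the externally tracked next_id counter are assumptions for illustration:

```python
import struct
import plyvel  # LevelDB binding; sketch only, names and encoding are illustrative

db = plyvel.DB("/tmp/values-db", create_if_missing=True)

def encode_id(i):
    # Big-endian so IDs sort in numeric order under LevelDB's byte comparator.
    return struct.pack(">Q", i)

def get_or_insert(value, next_id):
    """Return (id_for_value, next_id); insert both keys only if value is new."""
    existing = db.get(b"v" + value)
    if existing is not None:
        return struct.unpack(">Q", existing)[0], next_id
    new_id = next_id
    with db.write_batch() as wb:          # both keys written atomically
        wb.put(b"i" + encode_id(new_id), value)
        wb.put(b"v" + value, encode_id(new_id))
    return new_id, next_id + 1
```

Writing both keys in one write batch keeps the forward and reverse mappings consistent even if the process dies mid-insert.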
Related
I know that UUIDs and incrementing integers are often used for primary keys.
I'm thinking of nanoids instead because those are URL friendly without being guessable / brute-force scrapeable (like incrementing integers).
Would there be any reason not to use nanoids as primary keys in a database like Postgres? (For example: Maybe they drastically increase query time since they aren't ... aligned or something?)
https://github.com/ai/nanoid
Most databases use incrementing IDs because it's more efficient to insert a new value at the end of a B-tree based index.
If you insert a new value into a random place in the middle of a B-tree, it may have to split the B-tree nonterminal node, and that could cause the node at the next higher level to split, and so on up to the top of the B-tree.
This also has a greater risk of causing fragmentation, which means the index takes more space for the same number of values.
Read https://www.percona.com/blog/2015/04/03/illustrating-primary-key-models-in-innodb-and-their-impact-on-disk-usage/ for a great visualization about the tradeoff between using an auto-increment versus UUID in a primary key.
That blog is about MySQL, but the same issue applies to any B-tree based data structure.
I'm not sure if there is a disadvantage to using nanoids, but they are often unnecessary. While UUIDs are long, they can be translated to a shorter format without losing entropy.
See the NPM package (https://www.npmjs.com/package/short-uuid).
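To illustrate the idea rather than that particular package, here is a small Python sketch that re-encodes a UUID's 128-bit value in a 62-symbol alphabet, giving roughly 22 characters instead of 36 without losing any entropy:

```python
import string
import uuid

ALPHABET = string.digits + string.ascii_letters  # 62 symbols, URL-friendly

def shorten(u: uuid.UUID) -> str:
    n = u.int  # the UUID's full 128-bit value
    out = []
    while n:
        n, r = divmod(n, len(ALPHABET))
        out.append(ALPHABET[r])
    return "".join(reversed(out)) or ALPHABET[0]

def expand(s: str) -> uuid.UUID:
    n = 0
    for ch in s:
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    return uuid.UUID(int=n)

u = uuid.uuid4()
assert expand(shorten(u)) == u   # round-trips; ~22 chars instead of 36
```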
UUIDs are standardized by the Open Software Foundation (OSF) and described in RFC 4122. That means there are far more chances for other tools to give you some perks around them.
Some examples:
* MongoDB has a special type to optimize the storage of UUIDs. Not only does a Nano ID string take more space, but even its binary form takes more bits (126 in Nano ID vs. 122 in UUID).
* I once saw a logging tool extract the timestamp from the UUIDs; I can't remember which one, but it is available.
* Also, the long, non-reduced form of a UUID is very easy to identify visually. When the end user is a developer, it might help them understand the nature/source of the ID (e.g. that it is clearly not a database auto-increment key).
Every day, we receive huge files from various vendors in different formats (CSV, XML, custom) which we need to upload into a database for further processing.
The problem is that these vendors will send the full dump of their data and not just the updates. We have some applications where we need only send the updates (that is, the changed records only). What we do currently is to load the data into a staging table and then compare it against previous data. This is painfully slow as the data set is huge and we are occasionally missing SLAs.
Is there a quicker way to resolve this issue? Any suggestions or help greatly appreciated. Our programmers are running out of ideas.
There are a number of patterns for detecting deltas, i.e. changed records, new records, and deleted records, in full dump data sets.
One of the more efficient ways I've seen is to create hash values of the rows of data you already have, create hashes of the import once it's in the database, then compare the existing hashes to the incoming hashes.
Primary key match + hash match = Unchanged row
Primary key match + hash mismatch = Updated row
Primary key in incoming data but missing from existing data set = New row
Primary key not in incoming data but in existing data set = Deleted row
How to hash varies by database product, but all of the major providers have some sort of hashing available in them.
The advantage comes from only having to compare a small number of fields (the primary key column(s) and the hash) rather than doing a field by field analysis. Even pretty long hashes can be analyzed pretty fast.
It'll require a little rework of your import processing, but the time spent will pay off over and over again in increased processing speed.
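As a database-agnostic illustration of the pattern, here is a Python sketch that classifies incoming rows against stored (primary key, hash) pairs; the column name id and the choice of SHA-256 are assumptions:

```python
import hashlib

def row_hash(row: dict, key_cols=("id",)) -> str:
    # Hash the non-key columns in a stable order.
    payload = "|".join(f"{c}={row[c]}" for c in sorted(row) if c not in key_cols)
    return hashlib.sha256(payload.encode()).hexdigest()

def classify(existing: dict, incoming_rows: list) -> dict:
    """existing maps primary key -> stored hash; returns the delta sets."""
    delta = {"unchanged": [], "updated": [], "new": [], "deleted": []}
    seen = set()
    for row in incoming_rows:
        pk, h = row["id"], row_hash(row)
        seen.add(pk)
        if pk not in existing:
            delta["new"].append(pk)
        elif existing[pk] == h:
            delta["unchanged"].append(pk)
        else:
            delta["updated"].append(pk)
    delta["deleted"] = [pk for pk in existing if pk not in seen]
    return delta
```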
The standard solution to this is hash functions. What you do is have the ability to take each row, and calculate an identifier + a hash of its contents. Now you compare hashes, and if the hashes are the same then you assume that the row is the same. This is imperfect - it is theoretically possible that different values will give the same hash value. But in practice you have more to worry about from cosmic rays causing random bit flips in your computer than you do about hash functions failing to work as promised.
Both rsync and git are examples of widely used software that use hashes in this way.
In general calculating a hash before you put it in the database is faster than performing a series of comparisons inside of the database. Furthermore it allows processing to be spread out across multiple machines, rather than bottlenecked in the database. And comparing hashes is less work than comparing many fields, whether you do it in the database or out.
There are many hash functions that you can use. Depending on your application, you might want to use a cryptographic hash though you probably don't have to. More bits is better than fewer, but a 64 bit hash should be fine for the application that you describe. After processing a trillion deltas you would still have less than 1 chance in 10 million of having made an accidental mistake.
I have a large (2B + records) DynamoDB table.
I want to implement a distributed locking process by adding a new field, 'index_due_at' when an item is created or updated. After the create/update, I will do some further processing on the item and then remove the 'index_due_at' field.
I'd like to create a sweeper job which will periodically extract any records with an outstanding 'index_due_at' field (on the assumption that something about the above process failed) to give those records further treatment. I would anticipate at most 100s of records in this state at any one time, more likely 10s.
To optimise the performance of the sweeper, I want to create a GSI including the new field (and project the key data into it).
It seems that using a timestamp (in millis) as the GSI HASH key ought to give a good distribution. And I don't need to query on this field's value, just on its presence. Can anyone identify any drawbacks in this approach and if so, suggest an alternative?
Issues I can anticipate include:
* Non-uniqueness in timestamps at milli level.
* Possible hash key problems with numeric values?
* Possible hash key problems with numeric values that don't vary much in the most significant digits.
This is less of a problem than you might be thinking. GSI hash keys don't actually have to be unique, so you're fine on that front.
You probably already know this, but your GSI will only contain items with GSI keys, so your GSI should be pretty small (100s of items).
One thought I have is that the index_due_at might actually be better as a GSI sort key rather than hash key. Data is sorted within a partition by sort key. So you could have a GSI hash key of index_due_at_flag which would be Y if present, then a sort key of index_due_at. This would mean all your data would be sorted naturally, so you could process it in date order.
That said, you are probably never going to Query this GSI, so I suspect your choice of keys hardly matters at all. Presumably you will just do a Scan, get all the items and try to process them all. In which case you would never even use the keys. Just having the key attribute present puts the item in the GSI.
Another thought is that you need to handle the fact that GSIs are not perfectly synchronous with the base table. It's possible (admittedly unlikely) that an item in your GSI has actually just been processed. Therefore, if your sweeper script picks up an item from the GSI, you should handle the possibility that it has already been updated in the base table (e.g. by checking the base table item before attempting to process it).
Good luck with it. I answered because I liked your bio! Hope staying on the right side of barrel shaped is working out :)
This should be a perfect scenario for using a DynamoDB sparse index.
Use 'index_due_at' as the sort key in the GSI; only the items you are interested in will be in the index, greatly reducing the space needed and improving performance.
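A hedged boto3 sketch of that sparse-index pattern; the table name items, the GSI name pending-index and the attribute names are assumptions, not anything prescribed by DynamoDB:

```python
import time

import boto3
from boto3.dynamodb.conditions import Key

# Assumed table and index names; adjust to your schema.
table = boto3.resource("dynamodb").Table("items")

def mark_pending(item_id):
    # Adding the sparse attributes puts the item into the GSI.
    table.update_item(
        Key={"id": item_id},
        UpdateExpression="SET index_due_at_flag = :f, index_due_at = :t",
        ExpressionAttributeValues={":f": "Y", ":t": int(time.time() * 1000)},
    )

def mark_done(item_id):
    # Removing them drops the item from the GSI again.
    table.update_item(
        Key={"id": item_id},
        UpdateExpression="REMOVE index_due_at_flag, index_due_at",
    )

def sweep():
    # Query the sparse GSI: only pending items live in it, oldest first.
    resp = table.query(
        IndexName="pending-index",
        KeyConditionExpression=Key("index_due_at_flag").eq("Y"),
        ScanIndexForward=True,
    )
    return resp["Items"]
```

Because only items carrying index_due_at_flag appear in the GSI, the sweeper's query touches just the handful of pending items.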
I have a SQL table that is accessed continually but changes very rarely.
The Table is partitioned by UserID and each user has many records in the table.
I want to save database resources and move this table closer to the application in some kind of memory cache.
In-process caching is too memory intensive, so it needs to be external to the application.
Key Value stores like Redis are proving inefficient due to the overhead of serializing and deserializing the table to and from Redis.
I am looking for something that can store this table (or partitions of data) in memory, but let me query only the information I need without serializing and deserializing large blocks of data for each read.
Is there anything that would provide Out of Process in memory database table that supports queries for high speed caching?
Searching has shown that Apache Ignite might be a possible option, but I am looking for more informed suggestions.
Since it's out-of-process, it has to do serialization and deserialization. The problem you're concerned with is how to reduce that serialization/deserialization work. If you use Redis' STRING type, you cannot reduce it.
However, you can use HASH to solve the problem: map your SQL table to a HASH.
Suppose you have the following table: person: id(varchar), name(varchar), age(int). You can take the person id as the key, and name and age as fields. When you want to look up someone's name, you only need to get the name field (HGET person-id name); the other fields won't be deserialized.
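A quick sketch of that mapping with the redis-py client (key names are illustrative; hset with mapping= needs redis-py 3.5 or newer):

```python
import redis

r = redis.Redis()

# One HASH per row: key = "person:" + id, fields = the remaining columns.
r.hset("person:42", mapping={"name": "Alice", "age": 30})

# Fetch only the field you need; the other fields are never transferred.
name = r.hget("person:42", "name")                # b"Alice"
name, age = r.hmget("person:42", "name", "age")   # several fields at once
```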
Ignite is indeed a possible solution for you since you may optimize serialization/deserialization overhead by using internal binary representation for accessing objects' fields. You may refer to this documentation page for more information: https://apacheignite.readme.io/docs/binary-marshaller
Also access overhead may be optimized by disabling copy-on-read option https://apacheignite.readme.io/docs/performance-tips#section-do-not-copy-value-on-read
Data collocation by user id is also possible with Ignite: https://apacheignite.readme.io/docs/affinity-collocation
As #for_stack said, HASH will be very suitable for your case.
You said that each user has many rows in the DB indexed by user_id and tag_id, so (user_id, tag_id) uniquely specifies one row. Since every row functionally depends on this tuple, you can use the tuple as the HASH key.
For example, if you want to save the row (user_id, tag_id, username, age) with values ("123456", "FDSA", "gsz", 20) into Redis, you could do this:
HMSET 123456:FDSA username "gsz" age 20
When you want to query the username by user_id and tag_id, you could do it like this:
HGET 123456:FDSA username
So every HASH key will be a combination of user_id and tag_id; if you want the key to be more human-readable, you can add a prefix string such as "USERINFO", e.g. USERINFO:123456:FDSA.
But if you want to query with only a user_id and get all rows for that user_id, the method above is not enough.
You can build a secondary index in Redis for your HASHes.
As noted above, we use user_id:tag_id as the HASH key because it uniquely points to one row. But suppose we want to query all the rows for one user_id.
We can use a sorted set as a secondary index that records which HASHes store the info about this user_id.
We add an entry to the sorted set:
ZADD user_index 0 123456:FDSA
As above, we set the member to the HASH key string and the score to 0. The rule is that all scores in this zset must be 0, so that we can use lexicographical ordering for range queries; see ZRANGEBYLEX.
E.g., to get all rows for user_id 123456:
ZRANGEBYLEX user_index [123456 (123457
It will return all the HASH keys whose prefix is 123456; we then use each returned string as a HASH key with HGET or HMGET to retrieve the information we want.
[ means inclusive, and ( means exclusive. Why do we use 123457? When we want all rows for a user_id, we specify as the exclusive upper bound the user_id string with its last character's ASCII value incremented by one.
For more about lexicographical indexes, see the ZRANGEBYLEX reference mentioned above.
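For reference, the same pattern sketched with the redis-py client, reusing the key names from the example above:

```python
import redis

r = redis.Redis()

def save_row(user_id, tag_id, username, age):
    key = f"USERINFO:{user_id}:{tag_id}"
    r.hset(key, mapping={"username": username, "age": age})
    # Secondary index: all scores 0 so ZRANGEBYLEX can do prefix range queries.
    r.zadd("user_index", {f"{user_id}:{tag_id}": 0})

def rows_for_user(user_id):
    # Exclusive upper bound: user_id with its last character incremented,
    # e.g. 123456 -> 123457.
    upper = user_id[:-1] + chr(ord(user_id[-1]) + 1)
    members = r.zrangebylex("user_index", f"[{user_id}", f"({upper}")
    return [r.hgetall(f"USERINFO:{m.decode()}") for m in members]

save_row("123456", "FDSA", "gsz", 20)
print(rows_for_user("123456"))
```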
You can try Apache Mnemonic, started by Intel: http://incubator.apache.org/projects/mnemonic.html. It supports serde-less features.
For a read-dominant workload, the MySQL MEMORY engine should work fine (DML writes lock the whole table). This way you don't need to change your data retrieval logic.
Alternatively, if you're okay with changing data retrieval logic, then Redis is also an option. To add to what #GuangshengZuo has described, there's the ReJSON dynamically loadable Redis module (for Redis 4+), which implements a document store on top of Redis. It can further relax requirements for marshalling big structures back and forth over the network.
With just 6 principles (which I collected here), it is very easy for a SQL-minded person to adapt to the Redis approach. Briefly, they are:
The most important thing is that, don't be afraid to generate lots of key-value pairs. So feel free to store each row of the table in a different key.
Use Redis' hash map data type
Form key name from primary key values of the table by a separator (such as ":")
Store the remaining fields as a hash
When you want to query a single row, directly form the key and retrieve its results
When you want to query a range, use the wildcard "*" in your key pattern. But please be aware that scanning keys can block other Redis operations, so use this method only if you really have to (see the SCAN sketch below).
The link just gives a simple table example and how to model it in Redis. Following those 6 principles you can continue to think like you do for normal tables (of course without some not-so-relevant concepts such as CRUD, constraints, relations, etc.).
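For the range/wildcard case in the last principle, here is a small redis-py sketch that walks matching keys with incremental SCAN instead of the blocking KEYS command (the key pattern is illustrative):

```python
import redis

r = redis.Redis()

# Iterate keys matching a pattern in small batches, without blocking the
# server the way KEYS would. Still an O(total keys) walk, so use sparingly.
for key in r.scan_iter(match="person:*", count=500):
    row = r.hgetall(key)
    # ... process row ...
```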
Using a Memcache and Redis combination on top of MySQL comes to mind.
When creating a key in Redis, I understand using the ":" separator and treating it similar to a URL structure.
But what if that structure itself contains key-value type combinations? Does one put the key in the structure?
Made-up Example:
Option A) "country:usa:manufacturer:ford:vehicle:f150:color" = black
or
Option B) "usa:ford:f150:color" = black
In some ways, I think that there is strength in the structure of Option A, but it also adds a lot of complexity to the key.
Thoughts?
While keeping in mind your made-up example (do try to use an actual example, you'll get better answers) I would have to say neither.
I would go with an ID for the key, likely an int. Then I'd put each key/value pair in your Option A as a hash field and value.
For example:
HSET 1 country USA
HSET 1 manufacturer ford
And so on. Or you could use an HMSET operation to set them all at once.
Why? You get the benefit of keeping fields that describe the data (which you lose in your Option B), the memory advantages of hashes over strings, and reduced complexity in the key structure, not to mention the memory benefit of a short integer as the key name versus a long string.
Further, you have a memory-cheap way to create indexes as integer sets. For example, a key called "country:1" could be a set of entry IDs, which then gives you a way to "pull all entries for country ID 1" (USA in the example). By using integers you get the benefit of being able to store these all in a very memory-efficient way, at the minor cost of a lookup table. This could even be done in Lua to avoid a network hop.
The greater the range of possible combinations and entries, the more valuable the memory savings are. If you've got millions or billions of them, you'll want to follow the integer-ID & lookup route. This would also set you up nicely if you ever need to shard data - either server side or client side.
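A short redis-py sketch of the integer-ID plus set-index idea, reusing the country:1 example (the field names and the country ID assignment are illustrative):

```python
import redis

r = redis.Redis()

def add_entry(entry_id, fields, country_id):
    # Hash keyed by a short integer ID keeps key-name memory low.
    r.hset(str(entry_id), mapping=fields)
    # Index: a set of entry IDs per country, stored as integers.
    r.sadd(f"country:{country_id}", entry_id)

def entries_for_country(country_id):
    ids = r.smembers(f"country:{country_id}")
    return [r.hgetall(i) for i in ids]

add_entry(1, {"country": "USA", "manufacturer": "ford",
              "vehicle": "f150", "color": "black"}, country_id=1)
print(entries_for_country(1))
```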