Best way to store a small key-value list in Redis

I'm trying to use Redis as a primary database for a small game I'm making (mostly to mess around with programming and using Redis).
However I came across a scenario that I couldn't find an answer to:
I wish to store a list of the names of the different maps that people can be on (not many of them) along with their IDs. Note: I never need to get the ID from the name.
The two ways I believe this can be done are storing the information either as strings or as a hash.
i.e.:
1) String-based:
set maps:0 "Main"
set maps:1 "Island"
etc. (and maybe a maps:id key to store an auto-increment value)
2) Hash-based:
hset maps "0" "Main"
hset maps "1" "Island"
etc.
My question is: which way is best? Given that there will never be that many maps, I'm leaning towards the single hash, partly because it provides a nice way to return all the maps in existence (e.g. with HGETALL). But is there any particular reason the string-based queries would be more useful?
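For what it's worth, the hash version would also let me fetch everything in one call:
hgetall maps
1) "0"
2) "Main"
3) "1"
4) "Island"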
Hopefully you can give me some clear information.
Thank you,
Pluckerpluck

String-based values are actually discouraged here, because they consume a lot more memory than a hash.
Redis optimizes small hashes and encodes them in a memory-efficient manner. This encoding is called zipmap (or ziplist as of Redis 2.6). See http://redis.io/topics/memory-optimization, especially the section "Use hashes when possible".
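You can verify the encoding from redis-cli with OBJECT ENCODING; the name it reports depends on the server version (zipmap/ziplist on older releases, listpack on Redis 7+):
hset maps "0" "Main"
hset maps "1" "Island"
object encoding maps
"ziplist"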

Related

Cloudflare Durable Objects update object value

Hello! I'm recently diving into Cloudflare Workers, especially Durable Objects. I could make a simple request which puts a JS object into the assigned key. Let's say the key is key0, and the put object value is {"fieldA": "val0", "fieldB": "val1"}. In this case, how can I update the value of fieldA without removing fieldB? I've tried simply executing put("key0", {"fieldA": "newVal0"}) and it keeps removing {"fieldB": "val1"}.
Of course this is common behaviour in JS operations, but I can't find anything like ["key0"]["fieldA"] = "newVal0" in the docs (maybe I'm missing something).
Hope this question reaches the gurus in the community! Thanks in advance.
EDIT after the answers:
In theory, it would be wonderful if Cloudflare Durable Objects supported working just like a normal JS object. Such a Workers feature feels like a killer app among cloud DB services, since the average CPU time is quite fast and Cloudflare also has very low pricing compared to the other big players. If it happens, I would be eager to migrate everything to the Cloudflare platform.
Durable Objects' KV storage only supports get and put operations -- it doesn't have any sort of "update". So, you have two options (a sketch of both follows below):
1. get() the key, modify the value, and then write the modified version back. This may sound inefficient, but keep in mind that commonly accessed keys will likely be in the in-memory cache. In fact, a get/modify/put implemented in your JavaScript is probably about as fast as any modification operation that Durable Objects itself could implement built-in. That said, you probably don't want to use this approach with large objects, since the whole object has to be written to disk again after every update.
2. Split your object across multiple keys. E.g. instead of having the key foo map to {"fieldA": "val0", "fieldB": "val1"}, you could have separate keys foo:fieldA and foo:fieldB. Note that you can fetch all the keys at once using storage.list({prefix: "foo:"}). This approach is not as convenient, but it allows each field to be written to disk separately.
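Here is a minimal sketch of both patterns inside a Durable Object class (the class, method, key, and field names are illustrative, following the question's example; you'd call these from your fetch() handler):

export class MyDurableObject {
  constructor(state, env) {
    this.state = state;
  }

  // Option 1: read-modify-write. fieldB survives because the whole object is put back.
  async updateFieldA(newVal) {
    const obj = (await this.state.storage.get("key0")) ?? {};
    obj.fieldA = newVal;
    await this.state.storage.put("key0", obj);
  }

  // Option 2: one storage key per field, so each field is written independently.
  async putFieldA(newVal) {
    await this.state.storage.put("foo:fieldA", newVal);
  }

  async getAllFields() {
    // Returns a Map of every key starting with "foo:".
    return await this.state.storage.list({ prefix: "foo:" });
  }
}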
get and put deal with whole JS objects, so if you want to change part of the object you should get it, update it using normal JS, and then put the entire object back.

Dict vs Record in Elm

While implementing a simple app I ran into the problem of trying to update a nested record. I found a solution online but it really seems like a whole lot of bloated code.
As I was looking for alternatives I found dictionaries. These seem like a solution to that problem: if I use a dictionary inside of a record I can avoid all that bloated code and get nested updates.
Seeing dictionaries and records next to each other made me wonder: why would I use a record instead of a dictionary, or vice versa? The two seem really similar to me, so I am not sure I see the advantage of one over the other. Of course I can see that there is a difference in syntax, but is that all?
I learned somewhere that the access time complexity of Dict is O(log(n)) -- does it do a binary search on the keys? -- but I can't find the access time complexity for records. I am wondering if it is O(1) and whether that is one of the advantages.
Either way, they both seem to map to a single data structure in other languages (e.g. Python's dictionaries, JS objects, Java hash tables), so why do we need two in Elm?
Dicts and records might seem very similar when coming from JavaScript, but in a statically typed language they are actually very different. I think just about the only property they have in common is that they are both key-value containers.
The biggest differences, I think, are that Dicts are homogeneous, meaning the values must all be of the same type, and "dynamically" keyed and sized, meaning keys are not statically checked (i.e. at compile time) and key-value pairs can be added at runtime. Records, on the other hand, include the key names and value types in the record type, which means they can hold values of different types, but also can't have keys added or removed at runtime without changing the type itself.
The benefit of easily being able to insert into and update a Dict is something you pay for when you try to get the value back out. Dict.get returns a Maybe, which you'll then have to handle, because the type doesn't give any guarantee that the Dict contains anything at all. You also won't get a compiler error if you mistype the name of a key.
Overall, a Dict forsakes most of the benefits of static typing. I think a good rule of thumb is that if you know the key names, you should most likely go with records. If you don't, go with Dict.
You also seem about right regarding performance, but I think that's a secondary concern. Record access should be equivalent to accessing the elements of an array by index, since so much information is known at compile time that it can essentially be compiled down to a fixed-size array.
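To make the trade-off concrete, here is a small illustrative Elm sketch (the names are made up): record access is checked at compile time and always succeeds, while the Dict lookup hands you a Maybe to deal with:

import Dict exposing (Dict)

type alias User =
    { name : String, age : Int }

user : User
user =
    { name = "Ada", age = 36 }

ages : Dict String Int
ages =
    Dict.fromList [ ( "Ada", 36 ) ]

-- Record access: statically checked, cannot fail.
adaAge : Int
adaAge =
    user.age

-- Dict access: the key might be missing, so you get a Maybe back.
maybeAge : Maybe Int
maybeAge =
    Dict.get "Ada" ages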

Isn't storing all of its data as strings an overhead in terms of memory usage?

Isn't storing only strings as the data type a big overhead in terms of memory consumption?
E.g.: to store "304.2" in any application is more expensive than to store 304.2 as a float/double.
Even if internally the value is indeed stored as a numeric value, isn't delegating the responsibility of "parsing" the string to every client another source of inefficiency?
I was getting super excited to start using Redis, but my use case is to cache a key x value structure like "string" x "double[]". Even if it would probably pay off in comparison with disk, those two points really turn me off from adopting the technology.
I would love to be proven wrong, this is why I'm asking the question.
Thank you,
For point 1: You can't store 304.2 as a float/double; you can only store a close approximation to it, since the nearest IEEE-754 double differs in its last bits (the same reason 0.1 + 0.2 != 0.3 in double arithmetic). To store it exactly, you need e.g. a dedicated decimal type, or a more general rational type. Or a string.
For point 2:
RESP is a compromise between the following things:
Simple to implement.
Fast to parse.
Human readable.
Human readable means that no matter how numbers are stored internally, they are still sent over the wire as strings, and clients have to parse them.
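For example, a GET whose value is the string "304.2" travels as a RESP bulk string (the key name here is illustrative, and \r\n is written out literally):

*2\r\n$3\r\nGET\r\n$5\r\nprice\r\n    request: GET price
$5\r\n304.2\r\n                       reply: a 5-byte string, which the client parses into a number if it needs one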
In the end I chose Infinispan, which gave me the APIs I was looking for. Pro of the chosen solution: the ability to treat the cache as a generic key x value concurrent map. Con: probably less flexible in terms of out-of-the-box client support across programming languages, even though you can always use Google Protobuf.

Redis key design

I'm wondering if the way we "design" keys in Redis can impact performance and scalability.
For example, if I store content related to "users" under keys like "user:<user_id>" and content related to say, groups, under keys like "group:<group_id>", all my keys will start with either "user:" or "group:".
Will this have a negative impact on the way Redis hashes keys internally?
There is no negative impact. Precisely the design you mention is recommended in the official Redis docs, which are quite clear on this:
Very long keys are not a good idea, for instance a key of 1024 bytes is a bad idea ...
but, read on:
Very short keys are often not a good idea. There is little point in writing "u1000flw" as a key if you can instead write "user:1000:followers". The latter is more readable and the added space is minor compared to the space used by the key object itself and the value object. While short keys will obviously consume a bit less memory, your job is to find the right balance.
Try to stick with a schema. For instance "object-type:id" is a good idea, as in "user:1000". Dots or dashes are often used for multi-word fields, as in "comment:1234:reply.to" or "comment:1234:reply-to".
(Emphases mine.)
See also: Redis key naming conventions?
As Redis is basically a hash table under the hood, there is nothing analogous to a SQL-style WHERE clause: you can only fetch keys by their exact names. That's where bad key design could affect performance, because if your naming scheme doesn't let you reconstruct a key's name, you're left scanning the keyspace.
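For instance, the closest thing to a WHERE is a pattern scan, which iterates over the whole keyspace instead of using an index:
scan 0 match user:* count 100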
No, there shouldn't be any issue with prefixing your keys like that. Redis uses a hash table internally, which in turn uses a proper hash function (one of the MurmurHash variants, if I recall correctly) whose distribution isn't skewed by common prefixes.

Why does Redis use integer database numbers?

Why does Redis use integer database numbers instead of strings? It seems like it would be trivial to keep a small internal data structure which maps strings to the “actual” integer.
The reason Redis uses indexes rather than strings as DB names is that the goal of Redis databases is not to provide an outer level of dictionary: Redis databases can't scale to large numbers, just to a few (it is a tradeoff), nor do we want to provide nested data structures by design. So these are just "a few namespaces", and as a result a small numerical index seemed like the best option.
Having named databases doesn't really fit the design goals of Redis. For a start, in a system designed for maximum performance, adding a string lookup to every call isn't a great idea when most users put everything in DB 0 anyway.
Another one of the design goals is keeping the core simple: if a requested new command can be implemented by combining existing commands on the client without a huge performance penalty, it won't get added to the core system. If you really need named databases, it is trivial to have your client code read a string and send a number to Redis.
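A minimal sketch of that client-side mapping, assuming a node-redis-style client with a select() method (the database names are made up for illustration):

// Map human-readable names to Redis database indexes on the client side.
const DATABASES = { users: 0, sessions: 1, analytics: 2 };

async function selectDatabase(client, name) {
  const index = DATABASES[name];
  if (index === undefined) {
    throw new Error("unknown database: " + name);
  }
  await client.select(index); // issues SELECT <index> to Redis
}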