Let's say I have a hash of hashes, e.g.
$data = {
    'harry' : {
        'age' : 25,
        'weight' : 75,
    },
    'sally' : {
        'age' : 25,
        'weight' : 75,
    }
}
What would be the 'usual' way to store such a data structure (or would you not?)
Would you be able to directly get a value (e.g. get harry : age)?
Once stored, could you directly change the value of a sub-key (e.g. sally : weight = 100)?
What would be the 'usual' way to store such a data structure (or would you not?)
For example, harry and sally would each be stored in a separate hash, where fields represent their properties such as age and weight. A set structure would then hold all the members (harry, sally, ...) that you have stored in Redis.
Would you be able to directly get a value (e.g. get harry : age)?
Yes, see HGET or HMGET or HGETALL.
Once stored, could you directly change the value of a sub-key (e.g. sally : weight = 100)?
Yes, see HSET.
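Putting those together, a minimal sketch with the redis-py client (the key names user:harry and user:sally and the set name users are my own convention, not anything Redis prescribes):

import redis

r = redis.Redis(decode_responses=True)

# one hash per person, plus a set holding all members
r.hset("user:harry", mapping={"age": 25, "weight": 75})
r.hset("user:sally", mapping={"age": 25, "weight": 75})
r.sadd("users", "harry", "sally")

print(r.hget("user:harry", "age"))   # direct read of a sub-key -> '25'
r.hset("user:sally", "weight", 100)  # direct update of a sub-key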
Let's take some complex data that we have to store in Redis, for example this:
$data = {
    "user:1" : {
        name : "sally",
        password : "123",
        logs : ["25th october", "30th october", "12 sept"],
        friends : ["34", "24", "10"]
    },
    "user:2" : {
        name : "",
        password : "4567",
        logs : [],
        friends : []
    }
}
The problem we face is that friends and logs are lists. So to represent this data in Redis we can use hashes and lists, something like this:
Option 1: a hash map per user, with keys user:1 and user:2
hmset user:1 name "sally" password "12344"
hmset user:2 name "pally" password "232342"
Create a separate list of logs per user, keyed as logs:1 (here 1 is the user id):
lpush logs:1 "25th october" "30th october" "12 sept"
and likewise logs:2 for user 2, and similarly for friends.
Option 2: a hash map with the dumped JSON data encoded as strings
hmset user:1 name "sally" password "12344" logs "String_dumped_data" friends "string of dumped data"
Option 3: another representation of Option 1, with per-user list keys: something like user:1:friends as a list and user:2:friends as a list.
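In redis-py, Option 3 would look something like this (a minimal sketch; the per-user key names follow the convention just described):

import redis

r = redis.Redis(decode_responses=True)

# a hash for the scalar fields, plus one list per user for logs and friends
r.hset("user:1", mapping={"name": "sally", "password": "123"})
r.rpush("user:1:logs", "25th october", "30th october", "12 sept")
r.rpush("user:1:friends", "34", "24", "10")

print(r.lrange("user:1:friends", 0, -1))  # ['34', '24', '10']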
Please correct me if I'm wrong.
It depends on what you want to do, but if your data structure is not nested any deeper and you need access to each field, I would recommend using hashes: http://redis.io/commands#hash
Here is a good overview of the Redis data types, each with pros and cons: http://redis.io/topics/data-types
I'm working with RedisGraph.
I have a node Person with three properties: name (string), age (number), isAlive (boolean).
If I store the age as a number, without the quotes, it correctly stores it as a number. So, if I query:
MATCH (p:Person) RETURN p
what I have is:
{ name: 'John', age: 30, isAlive: 'true' }
but is there a way to query and get real booleans?
What I want is:
{ name: 'John', age: 30, isAlive: true }
Thank you!
It sounds like you're querying RedisGraph using redis-cli. The RESP protocol that processes module replies only allows strings and integers as primitive data types, so your request can't be accomplished through redis-cli.
All of the client libraries, however, will decode replies to their correct type. I'd recommend using one as an intermediary to interact with RedisGraph - https://oss.redis.com/redisgraph/clients/.
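For example, with the redisgraph-py client (a minimal sketch; it assumes a graph named demo that contains your Person nodes):

import redis
from redisgraph import Graph

r = redis.Redis()
g = Graph('demo', r)

result = g.query("MATCH (p:Person) RETURN p.name, p.age, p.isAlive")
for record in result.result_set:
    print(record)  # values come back typed: str, int, bool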
RedisGraph can return a compact format where the types of the values are included. In order to use it you need to pass the --compact flag (which also works in redis-cli):
GRAPH.QUERY demo "MATCH (a) RETURN a" --compact
Some client libraries take advantage of this compact format in order to return the correct type. The type of each value is returned as an integer:
typedef enum {
    PROPERTY_UNKNOWN = 0,
    PROPERTY_NULL = 1,
    PROPERTY_STRING = 2,
    PROPERTY_INTEGER = 3,
    PROPERTY_BOOLEAN = 4,
    PROPERTY_DOUBLE = 5,
} PropertyTypeUser;
You can read more about the compact format here.
I have a key which has fields and values. All the fields have string values.
I want one of these fields to be a table, set, or list (i.e., holding multiple values). This field is called zonetable.
I only know how to use HSET, but as far as I know it cannot do what I want. I would like to do something like this:
hmset L0001:ad65ed38-66b0-46b4-955c-9ff4304e5c1a field1 blabla field2 blibli zonetable [1,2,3,4]
Key : L0001:ad65ed38-66b0-46b4-955c-9ff4304e5c1a
field1: "string value"
field2: "string value"
zonetable: [1,2,3,4] ---- the table
Maybe you can make use of JSON: serialize your table (a list or whatever) into a JSON string, then use HSET to save it into Redis. When you want to retrieve it, first get it from Redis and then deserialize it from JSON back into a list.
If you use Python (with the redis-py client, after import json, import redis and r = redis.Redis()), you can do it like this:
table = json.dumps(zonetable)
r.hset(key, 'zonetable', table)
When you want to retrieve it:
table = r.hget(key, 'zonetable')
zonetable = json.loads(table)
If, as you say, you use the native commands, you can also do this. First, convert your zonetable to a JSON string using the Python interpreter:
>>> import json
>>> table = [1,2,3,4]
>>> json.dumps(table)
'[1, 2, 3, 4]'
Then use this in redis-cli:
hmset L0001:ad65ed38-66b0-46b4-955c-9ff4304e5c1a field1 blabla field2 blibli zonetable '[1,2,3,4]'
One more thing I want to say: if you know the rules for converting an object to JSON, you could do it yourself.
Here is the problem I am facing.
I have a table called GAMELOG (it could be a SQL table or a NoSQL column family) that looks like this:
ID INT,
REQUESTDATE DATE,
REQUESTMESSAGE VARCHAR,
RESPONSEDATE DATE,
RESPONSEMESSAGE VARCHAR
The REQUESTMESSAGE and RESPONSEMESSAGE columns are JSON-formatted.
Let's say, for example, that a specific value in REQUESTMESSAGE is:
{
    "name" : "John",
    "specialty" : "Wizard",
    "joinDate" : "17-Feb-1988"
}
and for the RESPONSEMESSAGE it is:
{
    "name" : "John Doe",
    "specialty" : "Wizard",
    "joinDate" : "17-Feb-1988",
    "level" : 89,
    "lastSkillLearned" : "Megindo"
}
Now the data in my table has grown incredibly large (around a billion rows, a few terabytes of hard disk space).
What I want to do is query the rows whose REQUESTMESSAGE contains the JSON property "name" with a value of "John".
From what I understand about SQL databases (well, Oracle, which I've used before), I have to make REQUESTMESSAGE and RESPONSEMESSAGE CLOBs and query using LIKE, i.e.
SELECT * FROM GAMELOG WHERE REQUESTMESSAGE LIKE '%"name" : "John"%';
But the result is painfully slow.
Now I have moved to Cassandra, but I don't know how to query it properly; I haven't used Apache Hadoop to get at the data yet, though I intend to use it for that later.
My question is: is there a database product that supports querying a JSON attribute inside a table/column family? As far as I know, MongoDB stores documents as JSON, but that means all of my column family would be stored as JSON, i.e.
{
    "ID" : 1,
    "REQUESTMESSAGE" : "{
        "name" : "John",
        "specialty" : "Wizard",
        "joinDate" : "17-Feb-1988"
    }",
    "REQUESTDATE" : "17-Feb-1967",
    "RESPONSEMESSAGE" : "{
        "name" : "John Doe",
        "specialty" : "Wizard",
        "joinDate" : "17-Feb-1988",
        "level" : 89,
        "lastSkillLearned" : "Megindo"
    }",
    "RESPONSEDATE" : "17-Feb-1967"
}
and I would still have trouble getting the JSON attributes inside the REQUESTMESSAGE column (please correct me if I'm wrong).
Thank you very much.
If you aren't committed to storing your data in Apache Cassandra, MySQL has SQL functions that can extract data from JSON values; in particular, you would want to look at the JSON_EXTRACT function: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html
In your case, the query should look something like the following:
SELECT REQUESTMESSAGE, JSON_EXTRACT(REQUESTMESSAGE, "$.name")
FROM GAMELOG
WHERE JSON_EXTRACT(REQUESTMESSAGE, "$.name") = "John";
I have the following hash:
HMSET rules:1231231234_11:00_17:00 fw 4444 dm test.abc.com days 'thu, tue, wed'
HMSET rules:1231231234_9:00_10:59 fw 2211 dm anothertest.abc.com days 'thu'
Is there any way I can search the rules hashes and find all records whose key has a prefix of 1231231234?
Something like
HGET rules:1231231234*
Or perhaps the way I've created the data is wrong. What's the best way to create a data set like this (JSON notation):
{
    pn: 1231231234,
    rules: [{
        "expiration_date" : "",
        "days_of_week" : "Thu, Tue, Wed",
        "start_time" : "11:00",
        "end_time" : "17:00",
        "fw" : "9999"
    },
    {
        "rule_expiration_date" : "",
        "days_of_week" : "Thu",
        "start_time" : "9:00",
        "end_time" : "10:59",
        "fw" : "2222"
    }]
}
How this data will be used:
I will need to find the rule that applies to me, based on the current time.
So for example, when my application gets a request to "process" pn 1231231234, I need to look up all rules for that pn number, and then find which rule matches my current day of week and timestamp.
I don't mind getting back all the rules for a given pn and then having the client code loop through to find the right rule.
EDIT 1
Using my data the way it currently has been created, I tried HSCAN like this:
127.0.0.1:6379[1]> HSCAN rules 0 MATCH 1231231234*
1) "0"
2) (empty list or set)
127.0.0.1:6379[1]>
EDIT 2
As a test, I tried this type of a structure instead:
HMSET rules:1231231234 tue_11:00_17:00 "fw 9999"
HMSET rules:1231231234 wed_11:00_17:00 "fw 9999"
HMSET rules:1231231234 thu_11:00_17:00 "fw 9999"
HMSET rules:1231231234 thu_9:00_10:59 "fw 2222"
Then I can just get back all the rules for the main pn and use my client app to loop through the results...?
You need to use SCAN instead of HSCAN.
By combining SCAN and HGETALL you can achieve this.
1) Do a SCAN and get all the keys matching your pattern:
127.0.0.1:6379> scan 0 match rules:1231231234*
1) "0"
2) 1) "rules:1231231234_11:00_17:00"
2) "rules:1231231234_9:00_10:59"
2) Then, in your application logic, iterate over those keys and do an HGETALL on each:
127.0.0.1:6379> hgetall rules:1231231234_11:00_17:00
1) "fw"
2) "4444"
3) "dm"
4) "test.abc.com"
5) "days"
6) "thu, tue, wed"
3) If it matches your criteria, process it.
4) Repeat the same throughout the iteration.
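Putting steps 1-4 together, a minimal sketch with redis-py (scan_iter runs the SCAN cursor loop for you; the day/time check is just a stub for your own matching logic):

import redis

r = redis.Redis(decode_responses=True)

# step 1: SCAN for all keys with the pn prefix
for key in r.scan_iter(match="rules:1231231234*"):
    # step 2: fetch the whole rule hash
    rule = r.hgetall(key)
    # steps 3 and 4: check each rule against the current day/time
    if "thu" in rule.get("days", ""):
        print(key, rule)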
Hope this helps
I am using MongoDB as my database.
I am going to generate an _id for each document. For that I use the userId and the folderId for that user; userId is different for each user, and each user has different folderIds.
I generate _id as:
userId="user1"
folderId="Folder1"
_id = userId+folderId
Is there any effect of this _id generation on MongoDB indexing?
Will it work as fast as an _id generated by MongoDB?
A much better solution would be to leave the _id field as it is and have separate userId and folderId fields in your document, or create a separate field with them both combined.
As for whether it will be "as fast": that depends on your query, but for ordering by the document's creation date, for example, you'd lose the ability to simply order by _id; you'd also lose the benefits for sharding and distribution.
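For the first suggestion, a minimal sketch with pymongo (the database and collection names are placeholders of my own):

from pymongo import ASCENDING, MongoClient

client = MongoClient()
coll = client.mydb.folders  # placeholder names

# keep the default ObjectId _id and index the two fields instead
coll.create_index([("userId", ASCENDING), ("folderId", ASCENDING)], unique=True)
coll.insert_one({"userId": "user1", "folderId": "Folder1"})

doc = coll.find_one({"userId": "user1", "folderId": "Folder1"})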
However, if you want to use both of those IDs for your _id, there is one other option: you can actually use both but keep them separate. For example, this is a valid _id:
> var doc = { "_id" : { "userID" : 12345, "folderID" : 5152 },
"field1" : "test", "field2" : "foo" };
> db.crazy.save(doc);
> db.crazy.findOne();
{
    "_id" : {
        "userID" : 12345,
        "folderID" : 5152
    },
    "field1" : "test",
    "field2" : "foo"
}
>
It should be fine - the one foreseeable issue is that you'll lose the ability to reverse out the date/timestamp from the ObjectId. Why not just add another ID object within the document? You're only losing a few bytes, and you're not screwing with the built-in indexing system.