Ways to select values by a list of keys? - redis

For example, I have the key structure entity:product:[id], where id is an integer [0-n].
So I could use KEYS entity:product:*, but I don't know how much load that query puts on the Redis server.
Another solution is
Create one list key that will store Ids of the entity:products.
RPUSH entity:products:ids 1
RPUSH entity:products:ids 2
RPUSH entity:products:ids 3
RPUSH entity:products:ids 4
And then (pseudo-code)
entityProducts = redis.LRANGE('entity:products:ids', 0, -1)
foreach (entityProducts as id)
{
redis.GET('entity:product:' + id)
}
What is the better way? What will be faster and what will do less load to the redis server?
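For reference, the per-id GET loop can be collapsed into one round trip: MGET accepts any number of keys and returns all their values in a single reply. In the same pseudo-code style (assuming ids 1-4 are in the list):

```
entityIds = redis.LRANGE('entity:products:ids', 0, -1)
entityProducts = redis.MGET('entity:product:1', 'entity:product:2', 'entity:product:3', 'entity:product:4')
```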

Related

convert SQL data (with OR relationship) to redis data structure

The data in SQL DB is:
We want to store this data in Redis, but the application will have "keys" and "values" data and wants to fetch the corresponding "Id" from Redis.
The "values" for any given "key" are in an OR relationship, while the keys for any "Id" are in an AND relationship.
So the application's query to Redis will be based on keys and values, and it wants to get the Id back.
e.g. query from app: (K1 == V0 OR K1 == V1) AND (K2 == V2); then return c1.
But the same key can be used by another Id.
e.g. (K1 == V2 OR K1 == V8) AND (K5 == V7) AND (K6 == V9 OR K1 == V8); should return c2.
How do we store such data in Redis, to support these operations (without exponential growth)?
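One common pattern for this (a sketch, not from the question; the set names like idx:K1:V0 are made up here) is an inverted index: keep one Redis Set of Ids per (key, value) pair, so OR within a key becomes SUNION and AND across keys becomes SINTER. The same logic, modeled with plain JavaScript Sets standing in for the Redis Sets:

```javascript
// Inverted index sketch: one Set of Ids per (key, value) pair, mirroring
// hypothetical Redis Sets named like "idx:K1:V0".
const idx = new Map([
  ["idx:K1:V0", new Set(["c1"])],
  ["idx:K1:V1", new Set(["c1"])],
  ["idx:K2:V2", new Set(["c1", "c2"])],
]);

// OR within a key -> set union (SUNION); AND across keys -> intersection (SINTER).
const union = (a, b) => new Set([...a, ...b]);
const inter = (a, b) => new Set([...a].filter(x => b.has(x)));

// Query: (K1 == V0 OR K1 == V1) AND (K2 == V2)
const k1Side = union(idx.get("idx:K1:V0"), idx.get("idx:K1:V1"));
const result = inter(k1Side, idx.get("idx:K2:V2"));
console.log([...result]); // [ 'c1' ]
```

Storage grows with the number of (key, value, Id) triples rather than with the number of possible key combinations, so this stays within the no-exponential-growth constraint.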

How do I setup model for table object with arrays of responses in sequelize

I am having a challenge setting up the model for a table object with arrays of responses in Sequelize ORM. I use a Postgres DB. I have a table, say Foo. Foo has the columns
A
B
C
> C1_SN
C1_Name
C1_Address
C1_Phone
D
E
The column C holds a boolean question; if the user selects true, he will need to provide an array of responses for C1, such that we now have:
C1_SN1
C1_Name1
C1_Address1
C1_Phone1
------
C1_SN2
C1_Name2
C1_Address2
C1_Phone2
-----
C1_SN3
C1_Name3
C1_Address3
C1_Phone3
I expect multiple teams to be filling this table. How do I set up the model in Sequelize? I have two options in mind.
Option 1
The first option I think of is to create an extra 1:1 table between Foo and C1. But going with this option, I don't know how to bulkCreate the array of C1 responses in the C1 table.
Option 2
I think it's also possible to make the C1 column in the Foo table hold a nested array of values, such that if userA submits his data, it will have the nested array of C1. But I don't know how to go about this method either.
You need to create a separate table for C1. If the user selects true, build an array of objects and pass it to bulkCreate, like:
C1_SN AutoIncrement
C1_NAME
C1_Address
C1_Phone
value = [
  { C1_NAME: "HELLo", C1_Address: "HELLo", C1_Phone: "987456321" },
  { C1_NAME: "HELLo1", C1_Address: "HELLo1", C1_Phone: "987456321s" }
]
// bulkCreate on the C1 model, since the rows go into the C1 table
C1.bulkCreate(value).then(result => {
  console.log(result)
}).catch(error => {
  console.log(error)
})
From the official docs, you can check this link:
Sequelize bulkCreate
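Going with Option 1, the main work before calling bulkCreate is shaping the array of responses into rows that carry the parent Foo's id. A minimal sketch in plain JavaScript (the foreign-key name fooId is an assumption here; it depends on how the Foo/C1 association is defined):

```javascript
// Flatten the C1 responses into rows for bulkCreate, attaching the parent
// Foo id via a hypothetical foreign key "fooId".
const fooId = 1;
const responses = [
  { C1_Name: "HELLo",  C1_Address: "HELLo",  C1_Phone: "987456321" },
  { C1_Name: "HELLo1", C1_Address: "HELLo1", C1_Phone: "987456321s" },
];
const rows = responses.map(r => ({ ...r, fooId }));
// C1.bulkCreate(rows) would then insert all responses in one statement.
console.log(rows[0].fooId, rows.length); // 1 2
```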

Getting null Records while reading multiple records in Aerospike

I have a namespace test and a set users in an Aerospike database. I added four records to users through commands like the following on the console:
ascli put test users barberakey '{"username":"Barbera","password":"barbera","gender":"f","region":"west","company":"Audi"}'
Through aql command, I can view these four records.
aql> select * from test.users.
I know the method to get records one by one and it runs fine on my side, but it is a very expensive operation for my task. I want to read multiple records (a batch read) and run several algorithms on them. I took guidance from https://www.aerospike.com/docs/client/java/usage/kvs/batch.html and wrote code as follows:
Key[] keys = new Key[4];
for (int i = 0; i < 4; i++) {
    keys[i] = new Key("test", "users", (i + 1));
    System.out.println("Keys: " + keys[i]);
}
Record[] records = client.get(batchPolicy, keys);
System.out.println("Length of Records : " + records.length);
for (int a = 0; a < records.length; a++) {
    System.out.println("RECORDS IN ARRAY: " + records[a]);
}
But the problem is that the keys are constructed, yet every entry in the records array is null.
Output:
Reading records from Aerospike DB
Keys: test:users:1:3921e84015258aed3b93d7ef5770cd27b9bb4167
Keys: test:users:2:1effb3ce25b23f92c5371dee0ac8e6b34f5703c6
Keys: test:users:3:d17519d72e22beab2c3fa1552910ea3380c262bd
Keys: test:users:4:3f09a505c913db8ad1118e20b78c5bb8495fb0f9
Length of Records : 4
RECORDS IN ARRAY: null
RECORDS IN ARRAY: null
RECORDS IN ARRAY: null
RECORDS IN ARRAY: null
Please guide me for the case.
It appears you wrote the records using the string key "barberakey", then read them back with the integer key (i + 1). Those keys do not match, so the records are not found and the batch returns null for each entry. Read with the same string keys you used for the writes, e.g.
keys[i] = new Key("test", "users", "barberakey");

LinkedHashMap behaviour with Redis Hashes?

I want to use the Hashes data structure in Redis (Jedis client) but also want to maintain insertion order, something like LinkedHashMap in Java. I am totally new to Redis and have gone through all the data structures and commands, but somehow I am not able to think of any straightforward solution. Any help or suggestions will be appreciated.
Hashes in Redis do not maintain insertion order. You can achieve the same effect by using a Sorted Set and a counter to keep track of order. Here is a simple example (in Ruby, sorry):
items = {foo: "bar", yin: "yang", some_key: "some_value"}
items.each do |key, value|
count = redis.incr :my_hash_counter
redis.hset :my_hash, key, value
redis.zadd :my_hash_order, count, key
end
Retrieving the values in order would look something like this:
ordered_keys = redis.zrange :my_hash_order, 0, -1
ordered_hash = Hash[
ordered_keys.map {|key| [key, redis.hget(:my_hash, key)] }
]
# => {"foo"=>"bar", "yin"=>"yang", "some_key"=>"some_value"}
No need to use a Sorted Set or a counter. Just use a List (https://redis.io/commands#list), because it keeps insertion order.
HSET my_hash foo bar
RPUSH my_ordered_keys foo
HSET my_hash yin yang
RPUSH my_ordered_keys yin
HSET my_hash some_key some_value
RPUSH my_ordered_keys some_key
LRANGE my_ordered_keys 0 10
1) "foo"
2) "yin"
3) "some_key"

Difference between storing data as a key and as property of a hash object

Right now, I'm storing user objects as follows:
user1 = { id: 1, name: "bob" }
user2 = { id: 2, name: "steve"}
HMSET "user:1", user1
HMSET "user:2", user2
HGETALL "user:1" would return the user1 object
HGETALL "user:2" would return the user2 object
I'm wondering if there would be any significant difference (performance or other) if I did:
user1 = { id: 1, name: "bob" }
user2 = { id: 2, name: "steve"}
HSET "USER", 1, JSON.stringify(user1)
HSET "USER", 2, JSON.stringify(user2)
HGET "USER", 1 would give me the string representation of user1 object
HGET "USER", 2 would give me the string representation of the user2 object
There's not a huge difference either way. It's mostly going to boil down to a design decision based on what you're doing, although whichever you use you should stay consistent throughout the project to avoid confusion.
Here are some pros to method 2:
using JSON could help maintain type consistency
Redis will use less memory and may be a tiny bit faster, since it doesn't have to store or look up those extra keys
might be easier to think about and work with in code
The main negative for method 2 is summed up in the following example. Say you need to update a user's name. Here's how you would do it with each method.
// Method 1:
HMSET user:1 name newname
// Method 2:
result = JSON.parse(HGET user 1)
result.name = newname
HSET user 1 JSON.stringify(result)
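To make the method 2 downside concrete: the update is a read-modify-write cycle, and (unlike method 1's single HMSET) it is not atomic unless you wrap it in WATCH/MULTI or a Lua script. A sketch with a plain Map standing in for the "USER" hash (field name to JSON string):

```javascript
// Method 2's update path, with a Map simulating the Redis hash "USER".
const USER = new Map([["1", JSON.stringify({ id: 1, name: "bob" })]]);

const user = JSON.parse(USER.get("1")); // HGET USER 1
user.name = "robert";                   // mutate the parsed object
USER.set("1", JSON.stringify(user));    // HSET USER 1 <json>

console.log(JSON.parse(USER.get("1")).name); // robert
```

Between the HGET and the HSET, another client could change the same field, and its write would be silently overwritten; method 1's HMSET avoids that window entirely.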