Redis from Python: display in redis-cli

I am pushing data into Redis from Python like this:
import datetime

ts = datetime.datetime.now().timestamp()
if msg.field == 2:
    seq = [ts, 'ask', msg.price]
    r.rpush(contractTuple[0], *seq)
I expect the inserted data (seq) to be one object in Redis. However, when I look at the data from redis-cli, the fields of the Python list are on separate lines:
127.0.0.1:6379> lrange ES 0 -13
406) "1523994426.496158"
407) "ask"
408) "2699.5"
127.0.0.1:6379>
Is this the way redis-cli displays data (strange if true, IMO), or am I pushing data into Redis incorrectly?

See: http://redis-py.readthedocs.io/en/latest/index.html#redis.StrictRedis.rpush:
rpush(name, *values)
Push values onto the tail of the list name
Redis doesn't have a concept of "objects". If you want these values to be grouped, you'll have to implement your own methods to (de)serialize them into strings.
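For example, a minimal sketch that reuses r, msg, and contractTuple from the question and picks JSON as one possible serialization:
import datetime
import json

ts = datetime.datetime.now().timestamp()
if msg.field == 2:
    # One string per entry: RPUSH now appends a single list element
    # instead of three separate ones.
    r.rpush(contractTuple[0], json.dumps([ts, 'ask', msg.price]))

# Reading the newest entry back:
# json.loads(r.lrange(contractTuple[0], -1, -1)[0])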

Related

Redis Gears events in cluster

I have a Redis cluster with the following configuration:
91d426e9a569b1c1ad84d75580607e3f99658d30 127.0.0.1:7002#17002 myself,master - 0 1596197488000 1 connected 0-5460
9ff311ae9f413b48578ff0519e97fef2ced57b1e 127.0.0.1:7003#17003 master - 0 1596197490000 2 connected 5461-10922
4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 127.0.0.1:7004#17004 master - 0 1596197492253 3 connected 10923-16383
a32088043c31c5d3f20828bfe06306b9f0717635 127.0.0.1:7005#17005 slave 91d426e9a569b1c1ad84d75580607e3f99658d30 0 1596197490251 1 connected
b5e9ec7851dfd8dc5ab0cf35c230a0e716dd934c 127.0.0.1:7006#17006 slave 9ff311ae9f413b48578ff0519e97fef2ced57b1e 0 1596197489000 2 connected
a34cc74321e1c75e4cf203248bc0883833c928c7 127.0.0.1:7007#17007 slave 4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 0 1596197492000 3 connected
I want to create a set with all keys in the cluster by listening to key operations with RedisGears and storing the key names in a Redis set called keys.
To do that, I run this RedisGears command:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
It works, but only if the updated key is stored on the same node as the key keys.
Example:
With my cluster configuration, the key keys is stored on node 91d426e9a569b1c1ad84d75580607e3f99658d30 (the first node).
If I run:
SET foo bar
SET bar foo
SMEMBERS keys
I get the following result:
127.0.0.1:7002> SET foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7004
OK
127.0.0.1:7004> SET bar foo
-> Redirected to slot [5061] located at 127.0.0.1:7002
OK
127.0.0.1:7002> SMEMBERS keys
1) "bar"
2) "keys"
127.0.0.1:7002>
The first key name foo is not saved in the set keys.
Is it possible to have key names on other nodes saved in the keys set with RedisGears?
Redis version: 6.0.6
RedisGears version: 1.0.1
Thanks.
If the key was written to a shard that does not contain the 'keys' key, you need to make sure to move the record to that shard with the repartition operation (https://oss.redislabs.com/redisgears/operations.html#repartition), so this should work:
RG.PYEXECUTE "GearsBuilder('KeysReader').repartition(lambda x: 'keys').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
The repartition operation will move the record to the correct shard and the 'sadd' will succeed.
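If you are submitting this from Python rather than redis-cli, here is a hedged sketch using redis-py's generic execute_command (the node address is taken from the cluster configuration above):
import redis

# Assumes one reachable cluster node; RedisGears handles distribution.
r = redis.Redis(host='127.0.0.1', port=7002)
gear = (
    "GearsBuilder('KeysReader')"
    ".repartition(lambda x: 'keys')"
    ".foreach(lambda x: execute('sadd', 'keys', x['key']))"
    ".register(readValue=False)"
)
print(r.execute_command('RG.PYEXECUTE', gear))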
Another option is to maintain a set per shard and collect them using another Gear function. To do that you need to use the hashtag function (https://oss.redislabs.com/redisgears/runtime.html#hashtag) to make sure the set created belongs to the current shard. So the following registration will maintain a set per shard:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys{%s}' % hashtag(), x['key'])).register(mode='sync', readValue=False)"
Notice that the sync mode tells RedisGears not to start a distributed execution and it should be much faster to follow the keys this way.
Then to collect all the values:
RG.PYEXECUTE "GB('ShardsIDReader').flatmap(lambda x: execute('smembers', 'keys{%s}' % hashtag())).run()"
The first approach is good for read-intensive use cases and the second approach is good for write-intensive use cases. Choose the right approach depending on your use case.

Is MULTI supposed to work on a Redis cluster?

I'm using Redis on a clustered DB (locally). I'm trying the MULTI command, but it seems that it is not working. Individual commands work, and I can see the redirects as the shard changes.
Is there anything else I should be doing to make MULTI work? The documentation is unclear about whether or not it should work. https://redis.io/topics/cluster-spec
In the example below I just set individual keys (note how the port changes with each redirect), then try a MULTI command. The commands execute immediately, before EXEC is called:
127.0.0.1:30001> set a 1
-> Redirected to slot [15495] located at 127.0.0.1:30003
OK
127.0.0.1:30003> set b 2
-> Redirected to slot [3300] located at 127.0.0.1:30001
OK
127.0.0.1:30001> MULTI
OK
127.0.0.1:30001> HSET c f val
-> Redirected to slot [7365] located at 127.0.0.1:30002
(integer) 1
127.0.0.1:30002> HSET c f2 val2
(integer) 1
127.0.0.1:30002> EXEC
(error) ERR EXEC without MULTI
127.0.0.1:30002> HGET c f
"val"
127.0.0.1:30002>
MULTI transactions, as well as any multi-key operations, are supported only within a single hash slot in a clustered Redis deployment. That is what the transcript shows: the redirect triggered by HSET c f val moved redis-cli to a different node, which knows nothing about the open MULTI, so the commands ran immediately and EXEC failed with ERR EXEC without MULTI. If a group of keys must participate in one transaction, hash tags can force them into the same slot.
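A hedged sketch with redis-py (it assumes 127.0.0.1:30001 owns the slot for {user:1}; pointed at a different node you would get a MOVED error instead):
import redis

r = redis.Redis(host='127.0.0.1', port=30001)
pipe = r.pipeline(transaction=True)  # wraps the commands in MULTI/EXEC
# Both keys share the hash tag {user:1}, so they map to the same slot.
pipe.hset('{user:1}:profile', 'name', 'alice')
pipe.hset('{user:1}:counters', 'logins', 1)
print(pipe.execute())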

Aerospike AQL count(*) SQL analogue script

Ok, so the problem is that I need to do aggregation queries on Aerospike's aql console. Specifically, I would like to take an average of a bin across the records in a set and to count all the records in a set. I am not sure how to even begin...
aql> SHOW SETS will give you the number of objects in your sets, in the n_objects column.
Then you can use the n_objects value to calculate your average.
SQL-like aggregation functions are implemented in Aerospike using stream UDFs, which are written in Lua. A stream UDF is a map-reduce operation that is applied on a stream of records returned from a scan or secondary index query.
The stream UDF module (let's assume it's contained in the file aggr_funcs.lua) would implement COUNT(*) by returning 1 for each record it sees, and reducing to an aggregated integer value.
local function one(record)
    return 1
end

local function sum(v1, v2)
    return v1 + v2
end

function count_star(stream)
    return stream : map(one) : reduce(sum)
end
You would register the UDF module with the server, then invoke it. Here's an example of how you'd do that in Python using aerospike.Query.apply:
import aerospike
from aerospike import predicates as p

config = {
    'hosts': [('127.0.0.1', 3000)],
    'lua': {
        'system_path': '/usr/local/aerospike/lua/',
        'user_path': '/usr/local/aerospike/usr-lua/'
    }
}
client = aerospike.client(config).connect()

query = client.query('test', 'demo')
# query.where(p.between('my_val', 1, 9))  # optionally use a WHERE predicate
query.apply('aggr_funcs', 'count_star')
num_records = query.results()  # the aggregation result comes back as a list
client.close()
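Before query.apply can find the module, it has to be registered with the server. A minimal sketch using the client's udf_put (the local file path is an assumption matching the user_path above):
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
# Uploads the Lua module to the cluster so queries can apply it.
client.udf_put('/usr/local/aerospike/usr-lua/aggr_funcs.lua')
client.close()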
However, you should get metrics such as the number of objects using an info command. Aerospike has an info subsystem that is used by the command line tools such as asinfo, the AMC dashboard, and the info methods of the language clients.
To get the number of objects in the cluster:
asinfo -h 33.33.33.91 -v 'objects'
23773
You can also get the number of objects in a specific namespace. I have a two node cluster, and I'll query each one:
asinfo -h 33.33.33.91 -v 'namespace/test'
type=device;objects=23773;sub-objects=0;master-objects=12274;master-sub-objects=0;prole-objects=11499;prole-sub-objects=0;expired-objects=0;evicted-objects=0;set-deleted-objects=0;nsup-cycle-duration=0;nsup-cycle-sleep-pct=0;used-bytes-memory=2139672;data-used-bytes-memory=618200;index-used-bytes-memory=1521472;sindex-used-bytes-memory=0;free-pct-memory=99;max-void-time=202176396;non-expirable-objects=0;current-time=201744558;stop-writes=false;hwm-breached=false;available-bin-names=32765;used-bytes-disk=6085888;free-pct-disk=99;available_pct=99;memory-size=2147483648;high-water-disk-pct=50;high-water-memory-pct=60;evict-tenths-pct=5;evict-hist-buckets=10000;stop-writes-pct=90;cold-start-evict-ttl=4294967295;repl-factor=2;default-ttl=432000;max-ttl=0;conflict-resolution-policy=generation;single-bin=false;ldt-enabled=false;ldt-page-size=8192;enable-xdr=false;sets-enable-xdr=true;ns-forward-xdr-writes=false;allow-nonxdr-writes=true;allow-xdr-writes=true;disallow-null-setname=false;total-bytes-memory=2147483648;read-consistency-level-override=off;write-commit-level-override=off;migrate-order=5;migrate-sleep=1;migrate-tx-partitions-initial=4096;migrate-tx-partitions-remaining=0;migrate-rx-partitions-initial=4096;migrate-rx-partitions-remaining=0;migrate-tx-partitions-imbalance=0;total-bytes-disk=8589934592;defrag-lwm-pct=50;defrag-queue-min=0;defrag-sleep=1000;defrag-startup-minimum=10;flush-max-ms=1000;fsync-max-sec=0;max-write-cache=67108864;min-avail-pct=5;post-write-queue=0;data-in-memory=true;file=/opt/aerospike/data/test.dat;filesize=8589934592;writethreads=1;writecache=67108864;obj-size-hist-max=100
asinfo -h 33.33.33.92 -v 'namespace/test'
type=device;objects=23773;sub-objects=0;master-objects=11499;master-sub-objects=0;prole-objects=12274;prole-sub-objects=0;expired-objects=0;evicted-objects=0;set-deleted-objects=0;nsup-cycle-duration=0;nsup-cycle-sleep-pct=0;used-bytes-memory=2139672;data-used-bytes-memory=618200;index-used-bytes-memory=1521472;sindex-used-bytes-memory=0;free-pct-memory=99;max-void-time=202176396;non-expirable-objects=0;current-time=201744578;stop-writes=false;hwm-breached=false;available-bin-names=32765;used-bytes-disk=6085888;free-pct-disk=99;available_pct=99;memory-size=2147483648;high-water-disk-pct=50;high-water-memory-pct=60;evict-tenths-pct=5;evict-hist-buckets=10000;stop-writes-pct=90;cold-start-evict-ttl=4294967295;repl-factor=2;default-ttl=432000;max-ttl=0;conflict-resolution-policy=generation;single-bin=false;ldt-enabled=false;ldt-page-size=8192;enable-xdr=false;sets-enable-xdr=true;ns-forward-xdr-writes=false;allow-nonxdr-writes=true;allow-xdr-writes=true;disallow-null-setname=false;total-bytes-memory=2147483648;read-consistency-level-override=off;write-commit-level-override=off;migrate-order=5;migrate-sleep=1;migrate-tx-partitions-initial=4096;migrate-tx-partitions-remaining=0;migrate-rx-partitions-initial=4096;migrate-rx-partitions-remaining=0;migrate-tx-partitions-imbalance=0;total-bytes-disk=8589934592;defrag-lwm-pct=50;defrag-queue-min=0;defrag-sleep=1000;defrag-startup-minimum=10;flush-max-ms=1000;fsync-max-sec=0;max-write-cache=67108864;min-avail-pct=5;post-write-queue=0;data-in-memory=true;file=/opt/aerospike/data/test.dat;filesize=8589934592;writethreads=1;writecache=67108864;obj-size-hist-max=100
Notice that the master-objects values of the two nodes add up to the cluster-wide objects value.
To get the number of objects in a set:
asinfo -h 33.33.33.91 -v 'sets/test/demo'
n_objects=23771:n-bytes-memory=618046:stop-writes-count=0:set-enable-xdr=use-default:disable-eviction=false:set-delete=false;
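The same info commands can be issued from Python. A hedged sketch using the client's info_all, which as I understand it returns a per-node dict of (error, response) tuples; treat the exact response shape as an assumption and parse accordingly:
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
# Each node answers separately, mirroring the per-node asinfo calls above.
for node, (error, response) in client.info_all('sets/test/demo').items():
    print(node, response)
client.close()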

Spring Session Token

I explored Spring Session and Redis, and it looks really good.
I have been trying to solve one question for a long time: how to retrieve a list of session tokens from the Redis DB based on the Spring Session token value in the hash.
I know it is not a relational database and there is no straightforward way to achieve this, but is there a way to figure it out? It is really important for us.
I read in blogs that we need to keep a set to track sessions. Are there any ways to achieve this when using Spring Session? I am not even sure how to do this.
Any help is highly appreciated.
Thank you
Useful Commands:
redis-cli: enter the Redis console
Example:
root@root> redis-cli
127.0.0.1:6379> _
keys *: show all keys stored in the Redis DB
Example:
127.0.0.1:6379> keys *
1) "spring:session:expirations:1440354840000"
2) "spring:session:sessions:3b606f6d-3d30-4afb-bea6-ef3a4adcf56b"
monitor: monitor the commands the Redis DB receives
127.0.0.1:6379> monitor
OK
1441273902.701071 [0 127.0.0.1:49137] "PING"
1441273920.000888 [0 127.0.0.1:49137] "SMEMBERS"
hgetall SESSION_ID: show all the fields stored inside a session hash
Example:
127.0.0.1:6379> hgetall spring:session:sessions:3b606f6d-3d30-4afb-bea6-ef3a4adcf56b
flushall: remove all keys from the DB
Example:
127.0.0.1:6379> flushall
OK
Open redis-cli, then run:
127.0.0.1:6379> keys *
1) "spring:session:expirations:1435594380000"
2) "spring:session:sessions:05adb1d7-c7db-4ffb-99f7-47d7bd1867ee"
127.0.0.1:6379> type spring:session:sessions:05adb1d7-c7db-4ffb-99f7-47d7bd1867ee
hash
127.0.0.1:6379> hgetall spring:session:sessions:05adb1d7-c7db-4ffb-99f7-47d7bd1867ee
1) "sessionAttr:SPRING_SECURITY_CONTEXT"
2) ""
3) "sessionAttr:javax.servlet.jsp.jstl.fmt.request.charset"
4) "\xac\xed\x00\x05t\x00\x05UTF-8"
5) "creationTime"
6) "\xac\xed\x00\x05sr\x00\x0ejava.lang.Long;\x8b\xe4\x90\xcc\x8f#\xdf\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\x01N?\xfb\xb6\x83"
7) "maxInactiveInterval"
8) "\xac\xed\x00\x05sr\x00\x11java.lang.Integer\x12\xe2\xa0\xa4\xf7\x81\x878\x02\x00\x01I\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\a\b"
9) "lastAccessedTime"
10) "\xac\xed\x00\x05sr\x00\x0ejava.lang.Long;\x8b\xe4\x90\xcc\x8f#\xdf\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\x01N?\xfb\xb6\xa6"
127.0.0.1:6379>
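The same inspection can be scripted. A hedged sketch with redis-py's SCAN-based iterator, which walks the keyspace incrementally instead of blocking the server the way KEYS * can on a large DB (the prefix is Spring Session's default Redis namespace):
import redis

r = redis.Redis(host='127.0.0.1', port=6379)
# scan_iter issues SCAN under the hood, so it is safe on large keyspaces.
for key in r.scan_iter(match='spring:session:sessions:*'):
    print(key.decode())
    # r.hgetall(key) returns the session hash, as HGETALL does above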

Nested multi-bulk replies in Redis

In the redis protocol specification, under the "Multi-bulk replies section":
A Multi bulk reply is used to return an array of other replies. Every element of a Multi Bulk Reply can be of any kind, including a nested Multi Bulk Reply.
However, I can't figure out a way to get Redis to return such output. Can anyone provide an example?
Only certain commands (especially those returning lists of values) return multi-bulk replies. You can try LRANGE, for example, but check the command reference for more details.
Usually multi-bulk replies are only 1-level deep but some Redis commands can return nested multi-bulk replies (max 2 levels), notably EXEC (depending on the commands executed while inside the transaction context) and both EVAL / EVALSHA (depending on the value returned by the Lua script).
Here is an example using EXEC:
redis 127.0.0.1:6379> MULTI
OK
redis 127.0.0.1:6379> LPUSH metavars foo foobar hoge
QUEUED
redis 127.0.0.1:6379> LRANGE metavars 0 -1
QUEUED
redis 127.0.0.1:6379> EXEC
1) (integer) 4
2) 1) "hoge"
2) "foobar"
3) "foo"
4) "metavars"
The second element of the multi-bulk reply to EXEC is a multi-bulk itself.
PS: I added a clarification in the comments regarding the actual maximum level of nesting of multi-bulk replies when using Lua scripts. tl;dr: there's basically no limit.
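For example, a hedged sketch of deeper nesting via EVAL with redis-py (the Lua table is illustrative):
import redis

r = redis.Redis(host='127.0.0.1', port=6379)
# A Lua table containing tables yields a nested multi-bulk reply;
# redis-py surfaces each level as a nested Python list.
reply = r.eval("return {1, {'a', 'b', {'c'}}}", 0)
print(reply)  # [1, [b'a', b'b', [b'c']]]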