How to get only the IDs of Redis stream entries

Is there a way to get only the IDs that are contained in the stream? Something like an XKEYS command:
XKEYS "test:stream"
=>
1599031407838-0
1599031407839-0

There is no single Redis command that returns just the IDs.
You can get this with a Lua script and the EVAL command.
Using the XRANGE command, you get the IDs and the field-value pairs:
> XRANGE streamkey - +
1) 1) "1599077066502-0"
   2) 1) "fielda"
      2) "valuea"
      3) "fieldb"
      4) "valueb"
2) 1) "1599077076318-0"
   2) 1) "fielda"
...
In a Lua script you can discard the field-value pairs from the response, leaving just the IDs. This way you at least reduce the size of the response, saving on network payload and client output buffers.
This script should get you started:
local resp = redis.call('XRANGE', KEYS[1], ARGV[1], ARGV[2])
for key, value in ipairs(resp) do
    -- each entry is { id, { field1, value1, ... } }; keep only the id
    resp[key] = value[1]
end
return resp
Use it as:
EVAL "local resp = redis.call('XRANGE', KEYS[1], ARGV[1], ARGV[2]) for key, value in ipairs(resp) do resp[key] = value[1] end return resp" 1 streamkey - +
with the key, start, and end of your choice as parameters. You get a response like:
EVAL "local resp ... return resp" 1 streamkey - +
1) "1599077066502-0"
2) "1599077076318-0"
3) "1599077085694-0"
4) ...
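
If you are calling this from application code, a client library can load and run the same script. Here is a minimal redis-py sketch, assuming a local Redis instance and the stream key used above:

import redis

# Assumes a Redis server on localhost:6379 and the example stream key.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

XRANGE_IDS = """
local resp = redis.call('XRANGE', KEYS[1], ARGV[1], ARGV[2])
for key, value in ipairs(resp) do
    resp[key] = value[1]
end
return resp
"""

# register_script() caches the script and runs it via EVALSHA under the hood.
xrange_ids = r.register_script(XRANGE_IDS)
ids = xrange_ids(keys=["streamkey"], args=["-", "+"])
print(ids)  # e.g. ['1599077066502-0', '1599077076318-0', ...]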

Related

How does the RarePackFour smartcontract generate a new "unique" number from a given random number?

I'm trying to understand the RarePackFour smart contract from the game Gods Unchained. I noticed that they use a random number to generate other "random" numbers (in quotes because I don't think the newly generated numbers are truly random).
This is the code I'm trying to understand. Could you help me understand what is happening here?
function extract(uint random, uint length, uint start) internal pure returns (uint) {
    return (((1 << (length * 8)) - 1) & (random >> ((start * 8) - 1)));
}
Bitwise operators are not really a strong point for me, so it would really help if you could explain what is happening in the code.
Let's go through an example:
length = 1
start = 1
random = 3250  # binary 0b110010110010
1. ((1 << (length * 8)) - 1) = 2^8 - 1 = 255 = 0b11111111
2. (random >> ((start * 8) - 1)) = 0b110010110010 >> 7 = 0b11001 (decimal 25)
3. 0b11111111 & 0b11001 = 0b11001 = 25
So the mask (1 << (length * 8)) - 1 keeps only the lowest length * 8 bits, and the shift moves the bits starting at bit position (start * 8) - 1 down to the least-significant position; in other words, extract pulls length bytes out of random at an offset determined by start. When the mask is wider than what remains of random after the shift, the result is simply random >> ((start * 8) - 1), i.e. integer division of random by 2^((start * 8) - 1). Note that Solidity only does integer arithmetic.
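
For readers more comfortable outside Solidity, here is a rough Python translation of the same bit manipulation (the function name simply mirrors the contract's, and the example value matches the walkthrough above):

def extract(random: int, length: int, start: int) -> int:
    # Mask that keeps only the lowest `length * 8` bits.
    mask = (1 << (length * 8)) - 1
    # Shift the bits of interest down to the least-significant position,
    # then mask off everything above `length` bytes.
    return mask & (random >> ((start * 8) - 1))

print(extract(3250, 1, 1))  # 25, as in the worked example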

Lua, how to access an index that uses an array

How can I check an array that is used as a table key, e.g. [{4, 8}], to confirm whether 'vocation' (8 in this case) exists?
local outfits = {
    [7995] = {
        [{1, 5}] = {94210, 1},
        [{2, 6}] = {94210, 1},
        [{3, 7}] = {94210, 1},
        [{4, 8}] = {94210, 1}
    }
}
local item = 7995
local vocation = 8
if outfits[item] then
    local index = outfits[item]
    --for i = 1, #index do
    --    for n = 1, #index[i]
    --        if index[i]
    --        ????
end
You just need to iterate using pairs rather than a basic numeric for loop.
With pairs you get the key-value pairs and can then loop over each key (a table here) to inspect its contents.
local found = nil
if outfits[item] then
    local value = outfits[item]
    for k, v in pairs(value) do
        -- k is the table used as the key, e.g. {4, 8}
        for n = 1, #k do
            if k[n] == vocation then
                found = k
                break
            end
        end
        if found then break end
    end
end
if found then
    print(outfits[item][found][1]) -- 94210
end
That said, this is not a very efficient way to store values for lookup, and it won't scale well to larger sets of records.
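
As a rough sketch of a flatter layout (shown in Python for brevity; the same shape translates directly to a Lua table with one key per item/vocation pair, and the values are just the ones from the question):

# One entry per (item, vocation) pair, so a lookup is a single table access
# instead of a scan over every key table.
outfits = {
    (7995, 1): (94210, 1), (7995, 5): (94210, 1),
    (7995, 2): (94210, 1), (7995, 6): (94210, 1),
    (7995, 3): (94210, 1), (7995, 7): (94210, 1),
    (7995, 4): (94210, 1), (7995, 8): (94210, 1),
}

entry = outfits.get((7995, 8))
if entry:
    print(entry[0])  # 94210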

WEP shared-key authentication response generation

While capturing packets with Wireshark, I tried to connect my phone to my access point, which uses WEP shared-key authentication (only for testing purposes), and I got the authentication packets, which contained the IV, challenge text, etc. Then I tried to reproduce the ciphertext that my phone sent. I already know the password, so I took the IV, concatenated the two, and fed the result into the RC4 algorithm, which gave me a keystream. I XORed the keystream with the challenge text, but this always gives me a different ciphertext than the one my phone sent.
Maybe I am concatenating the IV and the password in the wrong way, or I'm using the wrong algorithm? And why is the response in the provided image 147 bytes long?
Image of wireshark captured packets
The code I'm using:
def KSA(key):
    # key must be a sequence of byte values (e.g. bytes), not a str
    keylength = len(key)
    S = list(range(256))  # list() so the state array is mutable (needed on Python 3)
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % keylength]) % 256
        S[i], S[j] = S[j], S[i]  # swap
    return S

def PRGA(S):
    i = 0
    j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]  # swap
        K = S[(S[i] + S[j]) % 256]
        yield K

def RC4(key):
    S = KSA(key)
    return PRGA(S)
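
For reference, this is roughly how the pieces are combined in WEP: the per-packet RC4 key is the 3-byte IV from the response frame followed by the WEP key, and the keystream is XORed with the plaintext. Note also that the encrypted part of the third authentication frame normally covers the whole frame body plus a 4-byte CRC-32 integrity check value (ICV), and is preceded by a 4-byte IV/key-ID header, so the captured data is longer than the 128-byte challenge alone. A minimal sketch using the functions above; the IV, key, and challenge values below are placeholders, not taken from the capture:

# Hypothetical values for illustration only -- substitute the ones from your capture.
iv = bytes.fromhex("abcdef")            # 3-byte IV taken from the response frame
wep_key = bytes.fromhex("0123456789")   # 5-byte key for 64-bit WEP (13 bytes for 128-bit)
challenge = bytes(128)                  # 128-byte challenge text element from frame 2

keystream = RC4(iv + wep_key)           # per-packet key = IV || WEP key
ciphertext = bytes(c ^ next(keystream) for c in challenge)
print(ciphertext.hex())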

Loop GET vs MGET: is there any performance difference in Redis Lua?

Given a single-instance Redis that supports Lua scripts, is there any performance difference between calling MGET once and calling GET multiple times to retrieve the values of multiple keys?
Time-complexity-wise, both result in the same: O(N) = N*O(1).
But there is overhead associated with processing each command and parsing the result back to Lua. So MGET will give you better performance.
You can measure this. The following scripts receive a list of keys: one calls GET multiple times, the other calls MGET.
Calling GET multiple times:
local t0 = redis.call('TIME')
local res = {}
for i = 1,table.getn(KEYS),1 do
    res[i] = redis.call('GET', KEYS[i])
end
local t1 = redis.call('TIME')
local micros = (t1[1]-t0[1])*1000000 + t1[2]-t0[2]
table.insert(res,'Time taken: '..micros..' microseconds')
table.insert(res,'T0: '..t0[1]..string.format('%06d', t0[2]))
table.insert(res,'T1: '..t1[1]..string.format('%06d', t1[2]))
return res
Calling MGET once:
local t0 = redis.call('TIME')
local res = redis.call('MGET', unpack(KEYS))
local t1 = redis.call('TIME')
local micros = (t1[1]-t0[1])*1000000 + t1[2]-t0[2]
table.insert(res,'Time taken: '..micros..' microseconds')
table.insert(res,'T0: '..t0[1]..string.format('%06d', t0[2]))
table.insert(res,'T1: '..t1[1]..string.format('%06d', t1[2]))
return res
Calling GET multiple times took 51 microseconds, vs MGET once 20 microseconds:
> EVAL "local t0 = redis.call('TIME') \n local res = {} \n for i = 1,table.getn(KEYS),1 do \n res[i] = redis.call('GET', KEYS[i]) \n end \n local t1 = redis.call('TIME') \n local micros = (t1[1]-t0[1])*1000000 + t1[2]-t0[2] \n table.insert(res,'Time taken: '..micros..' microseconds') \n table.insert(res,'T0: '..t0[1]..string.format('%06d', t0[2])) \n table.insert(res,'T1: '..t1[1]..string.format('%06d', t1[2])) \n return res" 10 key:1 key:2 key:3 key:4 key:5 key:6 key:7 key:8 key:9 key:10
1) "value:1"
2) "value:2"
3) "value:3"
4) "value:4"
5) "value:5"
6) "value:6"
7) "value:7"
8) "value:8"
9) "value:9"
10) "value:10"
11) "Time taken: 51 microseconds"
12) "T0: 1581664542637472"
13) "T1: 1581664542637523"
> EVAL "local t0 = redis.call('TIME') \n local res = redis.call('MGET', unpack(KEYS)) \n local t1 = redis.call('TIME') \n local micros = (t1[1]-t0[1])*1000000 + t1[2]-t0[2] \n table.insert(res,'Time taken: '..micros..' microseconds') \n table.insert(res,'T0: '..t0[1]..string.format('%06d', t0[2])) \n table.insert(res,'T1: '..t1[1]..string.format('%06d', t1[2])) \n return res" 10 key:1 key:2 key:3 key:4 key:5 key:6 key:7 key:8 key:9 key:10
1) "value:1"
2) "value:2"
3) "value:3"
4) "value:4"
5) "value:5"
6) "value:6"
7) "value:7"
8) "value:8"
9) "value:9"
10) "value:10"
11) "Time taken: 20 microseconds"
12) "T0: 1581664667232092"
13) "T1: 1581664667232112"
It can differ in some cases. The documented time complexities are:
MGET key [key ...] — O(N), where N is the number of keys to retrieve.
GET key — O(1)
Looking only at those numbers, a single GET looks cheaper, but fetching N keys with GET is still O(N) overall. The real difference is between sending a single command to Redis and sending commands multiple times: each extra command adds round-trip and parsing overhead, and if you don't use a connection pool the difference is even greater. Which approach is best also depends on the type of data you are dealing with; you may find it more efficient to manage related values together, for example in a Hash (map), rather than as separate keys.
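
To see the same effect from the client side, here is a small redis-py sketch (the connection details and key names are assumptions, not from the original answers): a loop of GETs pays one round trip per key, MGET pays one in total, and a pipeline batches individual commands into a single round trip.

import time
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed local instance
keys = [f"key:{i}" for i in range(1, 11)]
for k in keys:
    r.set(k, "value")

t0 = time.perf_counter()
values_get = [r.get(k) for k in keys]           # N commands, N round trips
t1 = time.perf_counter()
values_mget = r.mget(keys)                      # 1 command, 1 round trip
t2 = time.perf_counter()
with r.pipeline(transaction=False) as pipe:     # N commands, 1 round trip
    for k in keys:
        pipe.get(k)
    values_pipe = pipe.execute()
t3 = time.perf_counter()

print(f"GET loop: {(t1 - t0) * 1e6:.0f} us")
print(f"MGET:     {(t2 - t1) * 1e6:.0f} us")
print(f"pipeline: {(t3 - t2) * 1e6:.0f} us")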

Matrix addition in awk

I have a bunch of variables which look like this:
(DURATION 1.57) + (DURATION 2.07)
(T0 10) (T1 0) (TX 0) + (T0 12) (T1 0) (TX 1)
(TC 1) (IG 0) + (TC 2) (IG 3)
Is it possible to have awk process this such that the result is:
(DURATION 3.64)
(T0 22) (T1 0) (TX 1)
(TC 3) (IG 3)
Or can anyone recommend another unix program I can use to do this?
Here is one way to do it:
awk '{
    gsub(/[()+]/, "")
    for (nf = 1; nf <= NF; nf += 2) {
        flds[$nf] += $(nf+1)
    }
    sep = ""
    for (fld in flds) {
        printf "%s(%s %g)", sep, fld, flds[fld]
        sep = FS
    }
    print ""
    delete flds
}' file
(DURATION 3.64)
(T0 22) (T1 0) (TX 1)
(TC 3) (IG 3)
We remove the special characters ()+ using the gsub() function.
We iterate over all fields, using each variable name as an array key and summing its values.
We iterate over the array, printing the entries in the desired format.
We print a newline after we are done printing the entries.
We delete the array so that we can reuse it on the next line.
Note: the order of the output lines will be the same as in the input file, but because the for (fld in flds) loop uses the in operator, the variables on each line may appear in an arbitrary order.
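
If the field order matters, here is a rough Python equivalent of the same aggregation (not part of the original answer); it relies on dict insertion order (Python 3.7+) to keep each line's fields in the order they first appear, and assumes the same input file name:

import re

with open("file") as fh:
    for line in fh:
        totals = {}  # insertion-ordered in Python 3.7+
        # Drop the ( ) + decorations, then walk the (name, value) pairs.
        tokens = re.sub(r"[()+]", "", line).split()
        for name, value in zip(tokens[::2], tokens[1::2]):
            totals[name] = totals.get(name, 0) + float(value)
        print(" ".join(f"({name} {total:g})" for name, total in totals.items()))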