I'm trying to push an entry onto a list in Redis, and I also want to update the TTL of the list every time a new entry comes in. I can do that simply by calling EXPIRE "my-list" ttl. Since my application receives heavy traffic, I want to reduce the number of calls to Redis.
Can I set the TTL during the push operation itself, i.e. RPUSH "mylist" I1 I2...IN EX "TTL"? Does Redis support this type of command? I can see that it supports this for the String data type (e.g. SET key value EX ttl).
Redis does not have a dedicated command to push to a List and set its expiry in one go, although, as you've mentioned, it does have something like that for the String data type.
The way you'd go about this challenge is to compose your own "command" from existing ones. Instead of having your application call these commands one after another, however, you would use a Lua script, as explained on the EVAL documentation page.
Lua scripts are cached and run atomically on the server. One such as the following would probably help in your case; it expects the key name, the pushed element and the expiry value:
local reply = redis.call('RPUSH', KEYS[1], ARGV[1])
redis.call('EXPIRE', KEYS[1], ARGV[2])
return reply
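For example, here is a minimal redis-py sketch of calling it (the key, element and TTL values are illustrative); register_script sends EVALSHA and falls back to EVAL, so the script body travels to the server only once:

import redis

r = redis.Redis()

# Register the push-and-expire script once per process.
push_and_expire = r.register_script(
    "local reply = redis.call('RPUSH', KEYS[1], ARGV[1]) "
    "redis.call('EXPIRE', KEYS[1], ARGV[2]) "
    "return reply"
)

# One round trip: push the element and refresh the list's TTL (60 seconds here).
new_length = push_and_expire(keys=["my-list"], args=["item-1", 60])
print(new_length)  # value returned by RPUSH, i.e. the list's new length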
Related
I'm currently working with Redis expire events.
My goal: get the value and field after the data in Redis has expired, so that I can do something with them in the next step of the process.
I found the Redis Keyspace Notifications feature, which allows clients to subscribe to channels in Redis and receive events affecting data, such as expiry.
So I have some example code: https://github.com/toygame/nodejs-redis-keyspace-notifications
subscriber.subscribe("__keyevent@0__:expired")
subscriber.on('message', async (channel, message) => {
    // do something
    console.log(message);
})
Result: Key0
This works fine, but the result I get is only the name of the key that I set into Redis and that expired.
I have already done some research:
https://medium.com/nerd-for-tech/redis-getting-notified-when-a-key-is-expired-or-changed-ca3e1f1c7f0a
but it only lists the events I can receive, not the value or field that I'm after.
Is there any way to get those values and fields?
FYI, documentation: https://redis.io/topics/notifications
UPDATE
According to this answer, https://stackoverflow.com/a/42556450/11349357:
Keyspace notifications do not report the value; only the key's name and/or the command performed are included in the published message. The main underlying reasoning for this is that Redis values can become quite large.
If you really really really need this kind of behavior, well that's pretty easy actually. Because keyspace notifications use Pub/Sub messages, you can just call PUBLISH yourself after each relevant operation, with the information that you're interested in.
It looks like I can't get the values from keyspace notifications; I have to publish them on my own.
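For example, a minimal redis-py sketch of that self-publishing approach (the channel name and payload shape are made up for illustration): every write publishes the value itself, because the later expired event will only ever carry the key name:

import json
import redis

r = redis.Redis()

def set_with_notification(key, field, value, ttl):
    pipe = r.pipeline()
    pipe.hset(key, field, value)  # write the data
    pipe.expire(key, ttl)         # (re)set its TTL
    # Publish the value/field ourselves; a subscriber can correlate this
    # with the __keyevent@0__:expired message that arrives later.
    pipe.publish("myapp:writes",
                 json.dumps({"key": key, "field": field, "value": value}))
    pipe.execute()

set_with_notification("article:1", "checked", 1, 60)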
You can use RedisGears to process keyspace notifications and get both the key and the value.
You can write your processing code in Python and register it in Redis.
E.g., capture each keyspace event and store it to a Stream:
GearsBuilder() \
.foreach(lambda x: execute('XADD', "notifications-stream", '*', *sum([[k,v] for k,v in x.items()],[]))) \
.register(prefix="person:*", eventTypes=['hset', 'hmset'], mode='sync')
You can read more about this example here: https://oss.redis.com/redisgears/examples.html#reliable-keyspace-notification
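Once the function is registered, each matching write is captured into the stream, which you can then read back reliably, for instance with redis-py (a sketch; the stream name comes from the Gears example above):

import redis

r = redis.Redis(decode_responses=True)

# A matching write triggers the registered Gears function...
r.hset("person:1", mapping={"name": "alice"})

# ...and the captured event can be consumed from the stream.
for stream, entries in r.xread({"notifications-stream": 0}):
    for entry_id, fields in entries:
        print(stream, entry_id, fields)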
We're looking for the best way to ingest data into Warp 10. We are on a microservices architecture that mainly uses Kafka.
Two solutions:
Use Ingress endpoint as defined here: https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/01_Ingress (This is the solution we use for now)
Use the Warp 10 Kafka plugin as defined here: https://blog.senx.io/introducing-the-warp-10-kafka-plugin/
As described above, we use the Ingress solution for now: we aggregate data for x seconds and call the Ingress API to send data per packet, instead of calling the API each time we need to insert something.
For a few days, we have been experimenting with the Kafka plugin. We successfully set up the plugin and created an .mc2 macro responsible for consuming data from a given topic and then inserting it into Warp 10 using UPDATE.
Questions:
Using the Kafka plugin, would it be better to apply the same buffering mechanism as the one we use with the Ingress endpoint? Or is there a specific implementation in the Warp 10 Kafka plugin that allows it to consume the topic message by message and call the UPDATE function for each one?
Today, as both solutions are working, we're trying to pin down the differences between them to get the best ingestion performance, and, if possible, without any buffering mechanism, because we are trying to stay as close to real time as possible.
MC2 file:
{
  'topics' [ 'our_topic_name' ] // List of Kafka topics to subscribe to
  'parallelism' 1 // Number of threads to start for processing the incoming messages. Each thread will handle a certain number of partitions.
  'config' { // Map of Kafka consumer parameters
    'bootstrap.servers' 'kafka-headless:9092'
    'group.id' 'senx-consumer'
    'enable.auto.commit' 'true'
  }
  'macro' <%
    // macro executed each time a kafka record is consumed
    /*
    // received record format:
    {
      'timestamp' 123 // The record timestamp
      'timestampType' 'type' // The type of timestamp, can be one of 'NoTimestampType', 'CreateTime', 'LogAppendTime'
      'topic' 'topic_name' // Name of the topic which received the message
      'offset' 123 // Offset of the message in 'topic'
      'partition' 123 // Id of the partition which received the message
      'key' ... // Byte array of the message key
      'value' ... // Byte array of the message value
      'headers' { } // Map of message headers
    }
    */
    "recordArray" STORE
    "preprod.write" "token" STORE

    // macro can be called on timeout with an empty entry map
    $recordArray SIZE 0 !=
    <%
      $recordArray 'value' GET // kafka record value is retrieved in bytes
      'UTF-8' BYTES-> // convert bytes to string (Warp 10 ingress format)
      JSON->
      "value" STORE

      "Records received through Kafka" LOGMSG
      $value LOGMSG

      $value
      <%
        DROP
        PARSE
        // PARSE outputs a gtsList, including only one gts
        0 GET
        // GTS rename is required to use the UPDATE function
        "gts" STORE
        $gts $gts NAME RENAME
      %>
      LMAP

      // Store the GTS in Warp 10
      $token
      UPDATE
    %>
    IFT
  %> // end macro
  'timeout' 10000 // Polling timeout (in ms), if no message is received within this delay, the macro will be called with an empty map as input
}
If you want to cache data in Warp 10 to avoid lots of UPDATE calls per second, you can use SHM (SHared Memory). This is a built-in extension that you need to activate.
Once activated, use SHMSTORE and SHMLOAD to keep objects in RAM between two WarpScript executions.
In your example, you can push all the incoming GTS into a list, or a list of lists of GTS, using +! to append elements to an existing list.
The MERGE of all the GTS in the cache (by name + labels) and the UPDATE into the database can then be done in a runner (don't forget to use a MUTEX).
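A rough WarpScript sketch of such a runner follows (untested; the symbol and mutex names are made up, and the exact SHMSTORE / SHMLOAD / MUTEX signatures should be checked against the SHM extension documentation before use):

<%
  'kafka.gts.cache' SHMLOAD     // list of GTS appended by the Kafka macro with +!
  [] 'kafka.gts.cache' SHMSTORE // reset the cache for the next batch (assumed signature)
  MERGE                         // merge the cached GTS (by name + labels)
  'preprod.write' UPDATE        // a single UPDATE call for the whole batch
%>
'kafka.cache.mutex' MUTEX       // run under a mutex shared with the Kafka macro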
Don't forget the total operation cost:
The ingress format can be optimized for ingestion speed if you do not repeat the classname and labels, and if you gather lines per GTS (see the sample after this list). See here.
PARSE deserializes data from the Warp 10 ingress format.
UPDATE serializes data to the Warp 10 optimized ingress format (and pushes it to the update endpoint).
The update endpoint deserializes it again.
It makes sense to do these deserialize/serialize/deserialize operations if your input data is far from the optimal ingress format. It also makes sense if you want to RANGECOMPACT your data to save disk space, or to do any preprocessing.
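For reference, here is a small sample of the optimized ingress format mentioned above (series name and values are illustrative): the full classname and labels are sent once, and subsequent lines starting with '=' reuse them, so gathering lines per GTS avoids repeating the identification:

1566893344000000// sensor.temperature{room=42} 21.5
=1566893345000000// 21.6
=1566893346000000// 21.7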
I use Redis to cache my web blog. My article has a field "checked"; if this field changes in the database, I also need to set the new value in Redis. Here is the code:
if redis_conn.exists("article"):
redis_conn.hset("article", "checked",1)
This seems OK, but if the article key expires after the exists call and before the hset, there will be a problem: the article key will then hold only the one field, checked; other fields like title, content, etc. will be gone.
How can I hset only if the key exists, and do nothing if the key has already expired?
You can use a Lua script for that, e.g. (pseudo NodeJS):
redis_conn.eval("if redis.call('EXISTS', KEYS[1])==1 then redis.call('HSET', KEYS[1], ARGV[1], ARGV[2]) end", 1, "article", "checked", 1)
Server-side Lua scripts run atomically, so you are assured that the key cannot expire between the EXISTS check and the HSET call.
Note: Redis does have the HSETNX command, but no "HSETEX" command, which is apparently what you're looking for.
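Since the question's code uses redis-py, here is the same script as a Python sketch (key and field names taken from the question):

import redis

r = redis.Redis()

HSET_IF_EXISTS = (
    "if redis.call('EXISTS', KEYS[1]) == 1 then "
    "return redis.call('HSET', KEYS[1], ARGV[1], ARGV[2]) "
    "end"
)

# Atomic on the server: the field is set only if the key still exists.
r.eval(HSET_IF_EXISTS, 1, "article", "checked", 1)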
I am using the Jedis Java client for Redis. My requirement is that when someone adds an item to a list, say mylist, by doing jedisClient.lpush("mylist", "this is my msg"), I need to get a notification.
Is this possible?
Yes, it is possible to achieve that in one of two ways.
The first approach is to use Redis' keyspace notifications. Configure Redis to generate keyspace events for list commands with the following configuration directive:
CONFIG SET notify-keyspace-events Kl
Then, subscribe to the relevant channel/channels. If you want to subscribe only to mylist's changes, do:
SUBSCRIBE __keyspace@0__:mylist
The messages published to that channel carry the name of the command that modified the key (e.g. lpush).
Or, use PSUBSCRIBE to listen for events on key names that match a pattern.
Note, however, that keyspace notifications will not provide the actual pushed value. As an alternative approach, you can use Lua scripts and implement your own notification mechanism. For example, use the following script to push and publish a custom message to a custom channel:
local l = redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("PUBLISH", "mylistnotif:" .. KEYS[1], "Pushed value " .. ARGV[1])
return l
Make sure that "someone" uses that script to do the actual list-pushing, and subscribe to the relevant channel/channels.
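To tie this back to Jedis, here is a sketch of both sides (it assumes a local Redis; the channel name follows the script above, and in a real application you would make sure the subscriber is fully established before anything is pushed):

import java.util.Collections;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class MyListNotifications {

    // The push-and-publish script from the answer above
    private static final String PUSH_AND_NOTIFY =
            "local l = redis.call('LPUSH', KEYS[1], ARGV[1]) " +
            "redis.call('PUBLISH', 'mylistnotif:' .. KEYS[1], 'Pushed value ' .. ARGV[1]) " +
            "return l";

    public static void main(String[] args) {
        // Subscriber runs in its own thread because subscribe() blocks
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("localhost", 6379)) {
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println(channel + " -> " + message);
                    }
                }, "mylistnotif:mylist");
            }
        }).start();

        // Producer pushes through the script, which also publishes the value
        try (Jedis producer = new Jedis("localhost", 6379)) {
            producer.eval(PUSH_AND_NOTIFY,
                    Collections.singletonList("mylist"),
                    Collections.singletonList("this is my msg"));
        }
    }
}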
I am using the Booksleeve hash API for Redis. I am doing the following:
CurrentConnection.Hashes.Set(0, "item:1", "priority", task.priority.ToString());
var taskResult = CurrentConnection.Hashes.GetString(0, "item:1", "priority");
taskResult.Wait();
var priority = Int32.Parse(taskResult.Result);
However, I am getting an AggregateException:
"ERR Operation against a key holding the wrong kind of value"
I am not sure what I am doing wrong here (except for blocking on the task :)).
Note: CurrentConnection is an instance of BookSleeve.RedisConnection.
Please help!
Thanks
That is not a Booksleeve issue; it is a Redis error. In fact, the full error message you should be seeing is:
Redis server: ERR Operation against a key holding the wrong kind of value
(where I try to make it clear that this error has come from Redis, not Booksleeve)
As for what causes this: each key in Redis has a designated type: string, hash, list, etc. You cannot use hash operations on something that is not a hash.
My guess, then, is that "item:1" already exists, but as something other than a hash. I have unit tests that confirm this from Booksleeve (i.e. with/without a pre-existing non-hash value).
You can investigate this in Redis using redis-cli or any other client (telnet works, at a push), with the command:
type item:1
(thanks @Sripathi)
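For illustration, the error and its diagnosis can be reproduced in redis-cli (the key name comes from the question; the string value is made up):

redis> SET item:1 "some string"
OK
redis> HSET item:1 priority 1
(error) ERR Operation against a key holding the wrong kind of value
redis> TYPE item:1
string
redis> DEL item:1
(integer) 1
redis> HSET item:1 priority 1
(integer) 1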