Setting Predis cluster slots manually - redis

Is there any way in Predis to define the cluster slots manually when creating the client instance? The goal is to avoid the initial random node selection and slot-map retrieval, so that each request is sent directly to the right server at once, making the process faster.

Yes you can do that using connection parameters:
$client = new Predis\Client([
    'tcp://node01?slots=0-5460',
    'tcp://node02?slots=5461-10922',
    'tcp://node03?slots=10923-16383',
], ['cluster' => 'redis']);
The "slots" connection parameter can accept a comma-separated list of single slots and ranges of contiguous slots, e.g.:
tcp://node01?slots=0,20-30,5461-10922

Related

Ingest data into warp10 - Performance tip

We're looking for the best way to ingest data into Warp 10. We are on a microservices architecture that mainly uses Kafka.
Two solutions:
Use Ingress endpoint as defined here: https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/01_Ingress (This is the solution we use for now)
Use the warp10 Kafka plugin as defined here: https://blog.senx.io/introducing-the-warp-10-kafka-plugin/
As described above, we currently use the Ingress solution: we aggregate data for x seconds and call the Ingress API to send the data in packets (instead of calling the API each time we need to insert something).
For a few days we have been experimenting with the Kafka plugin. We successfully set up the plugin and created an .mc2 script responsible for consuming data from a given topic and then inserting it into Warp 10 using UPDATE.
Questions:
Using the Kafka plugin, would it be better to apply the same buffering mechanism as the one we apply when using the Ingress endpoint? Or is there anything specific in the Warp 10 Kafka plugin that allows consuming the topic message by message and calling UPDATE for each one?
Today, as both solutions are working, we're trying to understand the differences in order to get the best ingestion performance, and if possible without any buffering mechanism, because we are trying to stay as close to real time as possible.
MC2 file:
{
  'topics' [ 'our_topic_name' ] // List of Kafka topics to subscribe to
  'parallelism' 1 // Number of threads to start for processing the incoming messages. Each thread will handle a certain number of partitions.
  'config' { // Map of Kafka consumer parameters
    'bootstrap.servers' 'kafka-headless:9092'
    'group.id' 'senx-consumer'
    'enable.auto.commit' 'true'
  }
  'macro' <%
    // macro executed each time a kafka record is consumed
    /*
    // received record format :
    {
      'timestamp' 123 // The record timestamp
      'timestampType' 'type' // The type of timestamp, can be one of 'NoTimestampType', 'CreateTime', 'LogAppendTime'
      'topic' 'topic_name' // Name of the topic which received the message
      'offset' 123 // Offset of the message in 'topic'
      'partition' 123 // Id of the partition which received the message
      'key' ... // Byte array of the message key
      'value' ... // Byte array of the message value
      'headers' { } // Map of message headers
    }
    */
    "recordArray" STORE
    "preprod.write" "token" STORE
    // macro can be called on timeout with an empty entry map
    $recordArray SIZE 0 !=
    <%
      $recordArray 'value' GET // kafka record value is retrieved in bytes
      'UTF-8' BYTES-> // convert bytes to string (WARP10 INGRESS format)
      JSON->
      "value" STORE
      "Records received through Kafka" LOGMSG
      $value LOGMSG
      $value
      <%
        DROP
        PARSE
        // PARSE outputs a gtsList, including only one gts
        0 GET
        // GTS rename is required to use UPDATE function
        "gts" STORE
        $gts $gts NAME RENAME
      %>
      LMAP
      // Store GTS in Warp10
      $token
      UPDATE
    %>
    IFT
  %> // end macro
  'timeout' 10000 // Polling timeout (in ms), if no message is received within this delay, the macro will be called with an empty map as input
}
If you want to cache something in Warp 10 to avoid lots of UPDATE per second, you can use SHM (SHared Memory). This is a built-in extension you need to activate.
Once activated, use it with SHMSTORE and SHMLOAD to keep objects in RAM between two WarpScript executions.
In your example, you can push all the incoming GTS into a list, or a list of lists of GTS, using +! to append elements to an existing list.
The MERGE of all the GTS in the cache (by name + labels) and the UPDATE into the database can then be done in a runner (don't forget to use a MUTEX).
Don't forget the total operation cost:
The ingress format can be optimized for ingestion speed if you do not repeat the classname and labels, and if you gather lines per GTS. See here.
PARSE deserializes data from the Warp 10 ingress format.
UPDATE serializes data to the Warp 10 optimized ingress format (and pushes it to the update endpoint).
The update endpoint deserializes it again.
It makes sense to do these deserialize/serialize/deserialize operations if your input data is far from the optimal ingress format. It also makes sense if you want to RANGECOMPACT your data to save disk space, or do any preprocessing.

Using RPUSH with TTL in a single command in Redis

I'm trying to push an entry onto a list in Redis and I also want to update the TTL of the list every time a new entry comes in. I can do that by simply calling EXPIRE "my-list" ttl after each push. Since my application is receiving heavy traffic, I want to reduce the number of calls to Redis.
Can I set the TTL during the push operation itself, i.e. RPUSH "mylist" I1 I2...IN EX "TTL"? Does Redis support this type of command? I can see that it does support it for the String data type.
Redis does not have dedicated commands to push and expire the List, although as you've mentioned it does have something like that for the String data type.
The way you'd go about this challenge is to compose your own "command" from existing ones. Instead of having your application call these commands, however, you would use a Lua script as explained in the EVAL documentation page.
Lua scripts are cached and run atomically on the server. One such as the following would probably help in your case - it expects to get the key name, the pushed element and the expiry value:
local reply = redis.call('RPUSH', KEYS[1], ARGV[1])
redis.call('EXPIRE', KEYS[1], ARGV[2])
return reply
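A minimal sketch of invoking this script from Java with Jedis (the client choice, connection details, key name, and TTL value are my own illustrative assumptions, not from the question):

import redis.clients.jedis.Jedis;
import java.util.Arrays;

public class PushWithTtl {
    // Lua script from above: push the element, then refresh the list's TTL atomically
    private static final String SCRIPT =
        "local reply = redis.call('RPUSH', KEYS[1], ARGV[1]) " +
        "redis.call('EXPIRE', KEYS[1], ARGV[2]) " +
        "return reply";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // KEYS[1] = key name, ARGV[1] = pushed element, ARGV[2] = TTL in seconds
            Object newLength = jedis.eval(SCRIPT,
                    Arrays.asList("my-list"),
                    Arrays.asList("some-entry", "60"));
            System.out.println("List length after push: " + newLength);
        }
    }
}

In practice you would typically load the script once with SCRIPT LOAD and call it via EVALSHA to avoid resending the script body on every push, but EVAL is enough to show the idea.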

Can I listen for an event on an lpush operation in Redis?

I am using the Jedis Java client for Redis. My requirement is that when someone adds an item to a list, say mylist, by doing jedisClient.lpush("mylist", "this is my msg"), I need to get a notification.
Is this possible ?
Yes, it is possible to achieve that in one of two ways.
The first approach is to use Redis' keyspace notifications. Configure Redis to generate list events with the following configuration directive:
CONFIG SET notify-keyspace-events Kl
Then, subscribe to the relevant channel/channels. If you want to subscribe only to mylist's changes, do:
SUBSCRIBE __keyspace@0__:mylist
Or use PSUBSCRIBE to listen for events on key names that match a pattern.
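Since the question mentions Jedis, here is a minimal sketch of that first approach in Java (host, port, and database 0 are assumptions for illustration):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class ListWatcher {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Enable keyspace notifications for list commands (same as the CONFIG SET above)
            jedis.configSet("notify-keyspace-events", "Kl");

            // subscribe() blocks the current thread and calls onMessage for every notification
            jedis.subscribe(new JedisPubSub() {
                @Override
                public void onMessage(String channel, String message) {
                    // message is only the event name, e.g. "lpush"; the pushed value is not included
                    System.out.println(channel + " -> " + message);
                }
            }, "__keyspace@0__:mylist");
        }
    }
}

Note that CONFIG SET is not persisted across restarts; for a permanent setup, configure notify-keyspace-events in redis.conf.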
Note, however, that keyspace notifications will not provide the actual pushed value. As an alternative, you can use Lua scripts and implement your own notification mechanism. For example, use the following script to push and publish a custom message to a custom channel:
local l = redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("PUBLISH", "mylistnotif:" .. KEYS[1], "Pushed value " .. ARGV[1])
return l
Make sure that "someone" uses that script to do the actual list-pushing and subscribe to the relevant channel/channels.
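If you go the Lua route, the producer side could look roughly like this with Jedis (connection details and values are illustrative assumptions); a consumer would then simply SUBSCRIBE to mylistnotif:mylist to receive the pushed values:

import redis.clients.jedis.Jedis;
import java.util.Arrays;

public class NotifyingPush {
    // Script from above: push the value and publish it on a per-key channel
    private static final String SCRIPT =
        "local l = redis.call('LPUSH', KEYS[1], ARGV[1]) " +
        "redis.call('PUBLISH', 'mylistnotif:' .. KEYS[1], 'Pushed value ' .. ARGV[1]) " +
        "return l";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Object newLength = jedis.eval(SCRIPT,
                    Arrays.asList("mylist"),
                    Arrays.asList("this is my msg"));
            System.out.println("List length: " + newLength);
        }
    }
}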

Current MongoDB server time in VB.Net

How do I get MongoDB's server time, or use it in a query, from VB.NET?
For example, in the Mongo shell I would do:
db.Cookies.find({ expireOn: { $lt: new Date() } });
In PHP I can easily do something like this:
$model->expireOn = new MongoDate();
How do I approach this in VB.Net? I don't want to use the local machine's time. This obviously doesn't work...
MongoDB.Driver.Builders.Query.LT("expireOn", "new Date()")
If you merely want to remove expired cookies from your collection, you could use the TTL collection feature which will automatically remove expired entries using a background worker on the server, hence using the server's time:
db.Cookies.ensureIndex( { "expireOn": 1 }, { expireAfterSeconds: 0 } )
If you really need to query, use a service program that runs on the server, or ensure your clocks are reasonably synchronized: clocks that are considerably off can cause a plethora of problems, especially for web servers and email servers (consider HTTP headers like Date, Last-Modified and If-Modified-Since, email timestamps, HMAC/timestamp validation against replay attacks, etc.).
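The TTL index can also be created from a driver instead of the shell. Since I can't vouch for the exact VB.Net syntax, here is a sketch using the MongoDB Java driver for illustration only (the connection string and database name are assumptions; the collection and field names follow the question):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import org.bson.Document;
import java.util.concurrent.TimeUnit;

public class CreateTtlIndex {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> cookies =
                    client.getDatabase("test").getCollection("Cookies");
            // expireAfterSeconds = 0: the server removes a document as soon as its own clock
            // passes the date stored in "expireOn", so the client's clock is irrelevant
            cookies.createIndex(Indexes.ascending("expireOn"),
                    new IndexOptions().expireAfter(0L, TimeUnit.SECONDS));
        }
    }
}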

Options for creating dynamic filters (xpath) in a Camel route

I have the following static route that is loaded at server startup. It listens for UDP messages on a port and pushes these messages to the seda queue defined in the route below.
from("mina:udp://hostipaddress:9998?sync=false").wireTap(
"seda:sometag?size=100&blockWhenFull=true&multipleConsumers=true");
Now I can have multiple clients that want to receive/subscribe to these messages. They also want to dynamically select which feeds they need.
Each client sends a subscription request (REST) to the server (implemented using Spring-MVC, Jetty, Camel).
As soon as the server receives a request I create a new Camel route that looks like:
from("seda:sometag?multipleConsumers=true")
.routeId(RouteIdCreator.createRouteId(toIP, toPort, "sometag"))
.filter()
.xpath(this.xpathFilter).unmarshal().jaxb("sometag").marshal()
.json().wireTap("mina:udp://client_ip_address:20001?sync=false");
Once this route is deployed, it will start sending UDP messages to client_ip_address:20001 (as specified in the dynamic route above).
The client can send different filters to the server.
When the server receives a new filter it does the following:
1. It checks whether there is a route running (based on the client IP and port).
2. If there is a route running, it stops that route and deletes it along with the older filter.
3. It then recreates a new route which differs from the previous one only in the XPath filter.
My issue is that step 2 takes a lot of time (to stop and restart).
Is there a way to resolve this issue?
Basically I want to change the XPath expression in the route without stopping/recreating the route.
PS: I've also posted this on the official Camel mailing list.
You can try to store the XPath filter in a database (basically a simple table associating the client IP with its filter) when you receive a new subscription. Then you can read this filter from the database in the route, and use it as the filter.
from("seda:sometag?multipleConsumers=true")
.routeId(RouteIdCreator.createRouteId(toIP, toPort, "sometag"))
.setHeader("ip").constant(client_ip_adresse)
.filter().xpath(simple("${bean:xpathFilterComponent?methode=find}"))
.unmarshal().jaxb("sometag").marshal()
.json().wireTap("mina:udp://client_ip_address:20001?sync=false");
And your bean should look like:
public class XpathFilterComponent {
    public void save(String ip, String filter) {
        // store the filter for an IP in the database, when a subscription is received
    }
    public String find(@Header("ip") String ip) {
        String filter = ... // retrieve the filter from the database
        return filter;
    }
}
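For the ${bean:xpathFilterComponent?method=find} lookup to resolve, the bean must be registered in the Camel registry under that name. A rough sketch of the wiring, assuming Camel 3.x (the bean id simply matches the expression above):

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class FilterBeanSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Bind the filter bean under the name used in the simple expression
        context.getRegistry().bind("xpathFilterComponent", new XpathFilterComponent());
        // ... add the RouteBuilder instances that create the static and dynamic routes, then:
        context.start();
    }
}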