Redis - fetch all hashes by pattern/prefix

I have a hash pattern websocket:socket:*
$redis->hMSet('websocket:socket:1', ['block' => 9866]);
$redis->hMSet('websocket:socket:2', ['block' => 854]);
$redis->hMSet('websocket:socket:3', ['block' => 854]);
How can I fetch all hashes that match the pattern websocket:socket:*?
Or what is the best way (performance-wise) to keep track of a list of items?

Redis does not provide search-by-value out of the box. You'll have to implement some kind of indexing yourself.
Read more about indexing in Redis at Secondary indexing with Redis (or use RediSearch).
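As an illustration of rolling your own index, here is a minimal phpredis sketch that registers each socket id in a set and reads the hashes back through it; the index key name websocket:sockets is an assumption for illustration, not something from the question.
<?php
// Minimal sketch of a hand-rolled secondary index with phpredis.
// The index key "websocket:sockets" is a made-up name for illustration.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Write each hash and register its id in the index set.
$redis->hMSet('websocket:socket:1', ['block' => 9866]);
$redis->sAdd('websocket:sockets', 1);
$redis->hMSet('websocket:socket:2', ['block' => 854]);
$redis->sAdd('websocket:sockets', 2);

// "Fetch all hashes matching websocket:socket:*" by reading the index
// instead of walking the whole keyspace with KEYS.
$sockets = [];
foreach ($redis->sMembers('websocket:sockets') as $id) {
    $sockets[$id] = $redis->hGetAll("websocket:socket:$id");
}
print_r($sockets);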

Related

Is there an option provided by Redis to do partial updates to cached objects?

I am storing data in Redis using JCA (the Java caching API), where the key is a String and the value is an Object holding a JSON string.
I have a requirement to perform a partial update of a cached value, instead of retrieving the value by key, modifying an attribute, and putting the new value back:
{
  "attribute1": "value1",
  "attribute2": [
    {
      "attribute3": "value3"
    }
  ]
}
Above is a sample JSON document. As explained above, is it possible to update the value of attribute1 from value1 to value2 without first getting the cached value by key from Redis?
Assuming you are using the JCache API (i.e. JSR-107), you can use Cache#invoke(K key, EntryProcessor<K,V,T> entryProcessor, Object... arguments) to perform an in-place update instead of get-then-put. According to the EntryProcessor javadoc, Cache#invoke is executed atomically on the key, so you don't have to worry about concurrent modifications to the same cache entry.
You can use a Lua script that updates the item via the cjson Lua library. I have shared a similar example in "How to nest a list into a structure in Redis to reduce top level?".
I'm not familiar with JCA, so I'm not sure whether your client makes it simple to send an EVAL command.
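For illustration, here is a minimal sketch of that Lua approach sent through phpredis; the client, the key name mykey, and the attribute names are assumptions rather than part of the question.
<?php
// Sketch: partial JSON update through a Lua script using the cjson library.
// The key "mykey" and the phpredis client are assumed for illustration only.
$script = <<<'LUA'
local doc = cjson.decode(redis.call('GET', KEYS[1]))
doc.attribute1 = ARGV[1]                      -- change just this attribute
redis.call('SET', KEYS[1], cjson.encode(doc))
return redis.status_reply('OK')
LUA;

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('mykey', json_encode(['attribute1' => 'value1']));

// EVAL runs atomically on the server, so there is no get-then-put race.
$redis->eval($script, ['mykey', 'value2'], 1);
echo $redis->get('mykey'); // {"attribute1":"value2"}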

Elastalert: Cluster health notification

I don't know Elastalert well.
I just wanted to know whether it is possible to get a notification when the cluster status is RED using Elastalert.
Thank you.
It's possible, but you need to access it via the cluster health API.
Sample:
input {
  exec {
    # poll the cluster health API periodically; exec requires an interval (or schedule)
    command => "curl -s -u <username>:<password> <elasticsearch-ip:port>/_cluster/health"
    interval => 60
    codec => rubydebug
    type => "cluster-health"
  }
}
output {
  if "health" in [type] {
    elasticsearch {
      index => "cluster-health-%{+YYYY.MM.dd}"
      hosts => ["elasticsearch-host:port"]
    }
  }
}
If your existing filters are not parsing it, add a filter section guarded by if [type] == "cluster-health" and parse the message as JSON (see the sketch below).
This gives you the basic details, though you may need to adjust the field mapping of status; at least, I ran into an issue there. Once the data is indexed, you can query it as you normally would from Elastalert.
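A minimal sketch of such a filter section, matching the type set in the input above:
filter {
  if [type] == "cluster-health" {
    # parse the raw curl output into fields such as "status"
    json {
      source => "message"
    }
  }
}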
It looks like they don't have this functionality out of the box yet, but there are some hacky ways to achieve it: https://github.com/Yelp/elastalert/issues/903
They recommend using Marvel instead.
Not as far as I know.
The current Elastalert documentation states that the index field is mandatory in the rule .yaml file.
Elasticsearch exposes cluster health via the health API, so there is no index available for querying.

Add a Redis expire to a whole bunch of namespaced keys?

Say I have namespaced keys for a user id, holding:
lastMessages
an isNice attribute
So it goes like this:
>lpush user:111:lastMessages a
>lpush user:111:lastMessages b
>lpush user:111:lastMessages c
ok
let's add the isNice prop:
>set user:111:isNice 1
So, let's see all keys for 111:
> keys user:111*
result :
1) "user:111:isNice"
2) "user:111:lastMessages"
OK, BUT:
I want to expire the namespaced entry as a whole! (So when the timeout hits, all the keys should go at once. I don't want to start managing each namespaced key and the time it has left, because not all props are added at the same time, but I want all props to die at the same time...)
Question:
Does that mean I have to set an expire for each namespaced key?
If not, what is the correct way of doing it?
Yes, the way you have it set up, these are all just separate keys. You can think of the namespace as an understanding you have with all the people who will access the Redis store:
Okay guys, here's the deal. We're all going to use keys that look like this:
user:{user_id}:lastMessages
That way, we all understand where to look to get user number 325's last messages.
But really, there's nothing shared between user:111:lastMessages and user:111:isNice.
The fix
A way to do what you're describing is to use a hash. You create a hash whose key is user:111 and then add the fields lastMessages and isNice.
> hset user:111 lastMessages "you are my friend!"
> hset user:111 isNice true
> expire user:111 1000
Or, all at once,
> hmset user:111 lastMessages "you are my friend!" isNice true
> expire user:111 1000
Here is a page describing Redis' data types. Scroll down to where it says "Hashes" for more information.
Edit
Ah, I hadn't noticed you were using a list.
If you don't have too many messages (under 20, say), you could serialize them into JSON and store them as one string. That's not a very good solution though.
The cleanest way might just be to set two expires.
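As a sketch with phpredis (the 1000-second TTL is just an example value):
<?php
// Sketch of the "two expires" approach with phpredis; the TTL is illustrative.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->lPush('user:111:lastMessages', 'a');
$redis->lPush('user:111:lastMessages', 'b');
$redis->lPush('user:111:lastMessages', 'c');
$redis->set('user:111:isNice', 1);

// Give both keys the same TTL so they disappear together.
$redis->expire('user:111:lastMessages', 1000);
$redis->expire('user:111:isNice', 1000);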

Redis bitwise key

I'm trying to implement some bitwise-style operations on keys in Redis.
I have to store, let's say:
key value
12:CA foo
12:US bar
42:CA tag
And I want to be able to query everything with CA, or everything with 12.
Is this even possible?
Thanks.
I may be mis-interpreting the question, but if you have key => values like
12:CA => foo
12:US => bar
42:CA => tag
and you want to pull all keys matching CA or 12, then you can just use the KEYS command along with a wildcard:
keys *:CA
keys 12:*
Note: this only returns the matching keys. To get the values, you need to fetch them using the returned keys.
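A quick phpredis sketch of that two-step fetch (note that KEYS walks the whole keyspace, so SCAN is usually preferred on large production datasets):
<?php
// Sketch: find keys by pattern, then read their values, with phpredis.
// KEYS is O(N) over the keyspace; prefer SCAN on large production data.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$keys = $redis->keys('*:CA');              // e.g. ["12:CA", "42:CA"]
$values = $keys ? $redis->mGet($keys) : [];

print_r(array_combine($keys, $values));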

Re-indexing an index in ElasticSearch to change the number of shards

I need to change the number of shards in my index. The index is quite big, and I may have to change the configuration 10-15 times for testing purposes before I'm satisfied with the result. Is there a tool offering this kind of functionality out of the box? Or what's the easiest way of accomplishing this?
Both the Perl and Ruby clients directly support reindexing.
In Perl, you'd do:
my $source = $es->scrolled_search(
    index       => 'old_index',
    search_type => 'scan',
    scroll      => '5m',
    version     => 1
);

$es->reindex(
    source     => $source,
    dest_index => 'new_index'
);
Find more information in the post by Clinton Gormley.
In Ruby, you'd do:
Tire.index('old').reindex 'new', settings: { number_of_shards: 3 }
Find more information in the relevant Tire commit.