Redis Booksleeve - How to use Hash API properly

I am using the Booksleeve hash API for Redis. I am doing the following:
CurrentConnection.Hashes.Set(0, "item:1", "priority", task.priority.ToString());
var taskResult = CurrentConnection.Hashes.GetString(0, "item:1", "priority");
taskResult.Wait();
var priority = Int32.Parse(taskResult.Result);
However I am getting an AggregateException:
"ERR Operation against a key holding the wrong kind of value"
I am not sure what I am doing wrong here (apart from blocking on the task :)).
Note: CurrentConnection is an instance of BookSleeve.RedisConnection
Please help!
Thanks

That is not a Booksleeve issue - it is a redis error; in fact, the full error message you should be seeing is:
Redis server: ERR Operation against a key holding the wrong kind of value
(where I try to make it clear that this error has come from redis, not Booksleeve)
As for what causes this: each key in redis has a designated type; string, hash, list, etc. You cannot use hash operations on something that is not a hash.
My guess, then, is that "item:1" already exists, but as something other than a hash. I have unit tests that confirm this from Booksleeve (i.e. with/without a pre-existing non-hash value).
You can investigate this in redis using redis-cli or any other client (telnet works, at a push), with the command:
type item:1
(thanks @Sripathi)
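A pure-Python sketch of the typing rule behind this error may help; it mimics Redis's one-type-per-key bookkeeping with a plain dict (no Redis involved; `set_string`/`hset` are illustrative stand-ins, not a client API):

```python
store = {}  # key -> (type_name, value), mimicking Redis's per-key type tag

def set_string(key, value):
    store[key] = ("string", value)

def hset(key, field, value):
    kind, data = store.get(key, ("hash", {}))
    if kind != "hash":
        # the situation Redis reports with the ERR/WRONGTYPE message above
        raise TypeError("WRONGTYPE Operation against a key holding the wrong kind of value")
    data[field] = value
    store[key] = ("hash", data)

set_string("item:1", "some plain string")  # key now holds a string
try:
    hset("item:1", "priority", "5")        # hash op on a string key -> error
except TypeError as e:
    print(e)

hset("item:2", "priority", "5")            # fresh key -> created as a hash
print(store["item:2"])                     # ('hash', {'priority': '5'})
```

If `TYPE item:1` reports anything other than `hash` (or the key does not exist), that is the state the first branch models.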

Related

Redis Keyspace Notifications subscription for field&value

I'm currently working with Redis expire events.
My goal: after data in Redis expires, get its value and field so I can use them in the next processing step.
So I found the Redis Keyspace Notifications feature,
which allows clients to subscribe to channels in Redis and receive events affecting data, such as expiry.
I have some example code: https://github.com/toygame/nodejs-redis-keyspace-notifications
subscriber.subscribe("__keyevent@0__:expired")
subscriber.on('message', async (channel, message) => {
  // do something
  console.log(message);
})
Result: Key0
This works fine, but all I get is the key that I set into Redis and that expired.
I have already done some research:
https://medium.com/nerd-for-tech/redis-getting-notified-when-a-key-is-expired-or-changed-ca3e1f1c7f0a
but it only covers the events I can receive, not the value and field that I'm after.
Is there any way to get that value and field?
FYI, documentation: https://redis.io/topics/notifications
UPDATE
According to this answer https://stackoverflow.com/a/42556450/11349357:
Keyspace notifications do not report the value; only the key's name and/or the command performed are included in the published message. The main underlying reasoning for this is that Redis values can become quite large.
If you really really really need this kind of behavior, well that's pretty easy actually. Because keyspace notifications are using Pub/Sub messages, you can just call PUBLISH yourself after each relevant operation, and with the information that you're interested in.
Looks like I can't use keyspace notifications for this; I have to publish the value myself.
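A minimal pure-Python sketch of that publish-it-yourself pattern (the pub/sub plumbing and the channel/payload names here are illustrative stand-ins, not the Redis client API): at the point where you write the key with a TTL, also publish a message carrying the fields you will need once the key expires.

```python
import json

subscribers = {}  # channel -> list of callbacks (stand-in for Redis Pub/Sub)

def subscribe(channel, cb):
    subscribers.setdefault(channel, []).append(cb)

def publish(channel, message):
    for cb in subscribers.get(channel, []):
        cb(channel, message)

received = []
subscribe("myapp:expiring", lambda ch, msg: received.append(json.loads(msg)))

# Alongside SET/EXPIRE, publish the key *and* the fields yourself,
# since the built-in expired event will only ever carry the key name:
publish("myapp:expiring",
        json.dumps({"key": "Key0", "fields": {"priority": "5"}}))

print(received[0]["fields"])  # {'priority': '5'}
```

With real Redis you would replace `publish` with a PUBLISH call issued right after the relevant write, as the quoted answer suggests.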
You can use RedisGears to process keyspace notification and get both key and value.
You can write your processing code in Python and register it in Redis.
For example, capture each keyspace event and store it to a Stream:
GearsBuilder() \
.foreach(lambda x: execute('XADD', "notifications-stream", '*', *sum([[k,v] for k,v in x.items()],[]))) \
.register(prefix="person:*", eventTypes=['hset', 'hmset'], mode='sync')
You can read more about this example here: https://oss.redis.com/redisgears/examples.html#reliable-keyspace-notification

How Spring store cache and key to Redis

I followed a tutorial on the web to set up Spring Cache with Redis;
my function looks like this:
@Cacheable(value = "post-single", key = "#id", unless = "#result.shares < 500")
@GetMapping("/{id}")
public Post getPostByID(@PathVariable String id) throws PostNotFoundException {
log.info("get post with id {}", id);
return postService.getPostByID(id);
}
As I understand it, the value inside @Cacheable is the cache name and key is the cache key inside that cache name. I also know Redis is an in-memory key/value store. But now I'm confused about how Spring will store the cache name in Redis, because it looks like Redis only manages keys and values, not cache names.
Looking for anyone who can explain to me.
Thanks in advance
Spring uses the cache name as the key prefix when storing your data. For example, when you call your endpoint with id=1 you will see this key in Redis:
post-single::1
You can customize the prefix format through the CacheKeyPrefix class.
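A tiny sketch of how that default key is assembled (pure Python illustrating the format, not Spring code; the default prefix is the cache name followed by a double colon):

```python
def simple_prefix(cache_name):
    # mirrors the default "cacheName::" prefix Spring Cache uses for Redis
    return cache_name + "::"

def redis_key(cache_name, key):
    return simple_prefix(cache_name) + str(key)

print(redis_key("post-single", 1))  # post-single::1
```

So the "cache name" never exists as a separate structure in Redis; it survives only as the prefix of each flat key.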

Using RPUSH with TTL in a single command in Redis

I'm trying to push an entry onto a list in Redis, and I also want to update the TTL of the list every time a new entry comes in. I can do that by calling EXPIRE "my-list" ttl after each push. Since my application receives heavy traffic, I want to reduce the number of calls to Redis.
Can I set the TTL during the push operation itself, i.e. RPUSH "mylist" I1 I2...IN EX "TTL"? Does Redis support this type of command? I can see that it does support this for the String data type.
Redis does not have a dedicated command to push to and expire a List in one go, although as you've mentioned it does have something like that for the String data type.
The way you'd go about this challenge is to compose your own "command" from existing ones. Instead of having your application call these commands, however, you would use a Lua script as explained in the EVAL documentation page.
Lua scripts are cached and run atomically on the server. One such as the following would probably help in your case - it expects to get the key name, the pushed element and the expiry value:
local reply = redis.call('RPUSH', KEYS[1], ARGV[1])
redis.call('EXPIRE', KEYS[1], ARGV[2])
return reply
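A pure-Python model of what that script does atomically on the server (illustrative only; Redis runs the real thing as a single EVAL, e.g. `EVAL <script> 1 my-list I1 60`):

```python
import time

lists = {}    # key -> list of elements
expires = {}  # key -> absolute expiry timestamp

def rpush_with_ttl(key, element, ttl_seconds):
    lists.setdefault(key, []).append(element)  # RPUSH
    expires[key] = time.time() + ttl_seconds   # EXPIRE, refreshed on every call
    return len(lists[key])                     # RPUSH's reply: new list length

assert rpush_with_ttl("my-list", "I1", 60) == 1
assert rpush_with_ttl("my-list", "I2", 60) == 2  # TTL refreshed by the second push
```

The point of the Lua wrapper is exactly this pairing: one round trip, and no window in which the list exists without a TTL.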

Ignite and Kafka Integration

I am trying the Ignite and Kafka integration to bring Kafka messages into an Ignite cache.
My message key is a random string (to work with Ignite, the Kafka message key can't be null), and the value is a JSON string representation of a Person (a Java class).
When Ignite receives such a message, it looks like Ignite will use the message's key (the random string in my case) as the cache key.
Is it possible to change the cache key to the person's id, so that I can put the person into the cache under it?
It looks like streamer.receiver(new StreamReceiver) would work:
streamer.receiver(new StreamReceiver<String, String>() {
    public void receive(IgniteCache<String, String> cache, Collection<Map.Entry<String, String>> entries) throws IgniteException {
        for (Map.Entry<String, String> entry : entries) {
            Person p = fromJson(entry.getValue());
            // ignore the message key and use the person id as the cache key
            // (the cache holds JSON strings, so store the original value)
            cache.put(p.getId(), entry.getValue());
        }
    }
});
Is this the recommended way? Also, I am not sure whether calling cache.put in a StreamReceiver is correct, since the receiver is only meant as a pre-processing step before writing to the cache.
The data streamer will map all your keys to cache affinity nodes, create batches of entries and send the batches to the affinity nodes. After that, StreamReceiver will receive your entries, get the Person's ID and invoke cache.put(K, V). Putting the entry leads to mapping the key to the corresponding cache affinity node and sending an update request to that node.
Everything looks good, but the result of mapping your random key from Kafka and the result of mapping the Person's ID will be different (most likely different nodes). As a result you will get poor performance due to redundant network hops.
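A toy sketch of why the extra hop occurs (a trivial modulo hash stands in for Ignite's real affinity function; the keys and node count are made up for illustration):

```python
NODES = 4

def affinity_node(key):
    # stand-in affinity function: deterministic hash modulo node count
    return sum(ord(c) for c in key) % NODES

kafka_key = "k1"   # random Kafka message key, used to route the batch
person_id = "p7"   # the key the receiver actually puts under

# The batch lands on affinity_node(kafka_key), but cache.put(person_id, ...)
# must then be forwarded to affinity_node(person_id) -- a second network hop
# whenever the two differ:
print(affinity_node(kafka_key), affinity_node(person_id))  # 0 3
```

Extracting the Person's ID before streaming (a tuple extractor) would make the batch routing and the final put agree, removing the hop.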
Unfortunately, the current KafkaStreamer implementation doesn't support stream tuple extractors (see e.g. the StreamSingleTupleExtractor class). But you can easily create your own Kafka streamer implementation using the existing one as an example.
Also you can try to use KafkaStreamer's keyDecoder and valDecoder to extract the Person's ID from the Kafka message. I'm not sure, but it may help.

How to use .withoutSizeLimit in Akka-http (client) HttpRequest?

I'm using Akka 2.4.7 to read a web resource that is essentially a stream of JSON objects, delimited with newlines. The stream is practically unlimited in size.
When around 8MB has been consumed, I get an exception:
[error] (run-main-0) EntityStreamSizeException: actual entity size (None) exceeded content length limit (8388608 bytes)! You can configure this by setting `akka.http.[server|client].parsing.max-content-length` or calling `HttpEntity.withSizeLimit` before materializing the dataBytes stream.
The "actual entity size (None)" seems a bit funny, but my real question is: how do I use HttpEntity.withSizeLimit (or, in my case, the .withoutSizeLimit that should be there as well)?
My request code is like this:
val chunks_src: Source[ByteString,_] = Source.single(req)
.via(connection)
.flatMapConcat( _.entity.dataBytes )
I tried adding a .map( (x: HttpResponse) => x.withoutSizeLimit ), but it does not compile. What's the role of HttpEntity when doing client-side programming, anyway?
I can change the global config, but that's kind of missing the point. I'd like to flag "no limits" only for a particular request.
As a further question, I understand the need for a max-content-length on the server side, but why affect the client?
References:
Akka 2.4.7: Limiting message entity length
Akka 2.4.7: HttpEntity
I'm far from an expert on this topic, but it would seem you need to add .withoutSizeLimit() to the entity, like:
Source.single(req)
.via(connection)
.flatMapConcat( _.entity.withoutSizeLimit().dataBytes )