I followed a tutorial on the web to set up Spring Cache with Redis;
my function looks like this:
@Cacheable(value = "post-single", key = "#id", unless = "#result.shares < 500")
@GetMapping("/{id}")
public Post getPostByID(@PathVariable String id) throws PostNotFoundException {
    log.info("get post with id {}", id);
    return postService.getPostByID(id);
}
As I understand it, the value inside @Cacheable is the cache name and key is the cache key within that cache. I also know that Redis is an in-memory key/value store. But now I'm confused about how Spring stores the cache name in Redis, because it looks like Redis only manages keys and values, not cache names.
Can anyone explain this to me?
Thanks in advance
Spring uses the cache name as a key prefix when storing your data. For example, when you call your endpoint with id=1, you will see this key in Redis:
post-single::1
You can customize the prefix format through the CacheKeyPrefix class.
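For instance, here is a minimal sketch of how the prefix could be customized (assuming Spring Data Redis 2.x, where CacheKeyPrefix is a functional interface, and assuming connectionFactory is your RedisConnectionFactory bean):
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    // CacheKeyPrefix takes the cache name and returns the prefix; this
    // stores entries as "post-single:1" instead of the default "post-single::1"
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
            .computePrefixWith(cacheName -> cacheName + ":");
    return RedisCacheManager.builder(connectionFactory).cacheDefaults(config).build();
}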
I am trying the Ignite and Kafka integration to bring Kafka messages into an Ignite cache.
My message key is a random string (to work with Ignite, the Kafka message key can't be null), and the value is a JSON string representation of Person (a Java class).
When Ignite receives such a message, it looks like Ignite will use the message's key (the random string in my case) as the cache key.
Is it possible to change the cache key to the person's id, so that I can put the Person into the cache keyed by id?
It looks like streamer.receiver(new StreamReceiver()) could work:
streamer.receiver(new StreamReceiver<String, String>() {
    @Override
    public void receive(IgniteCache<String, String> cache,
                        Collection<Map.Entry<String, String>> entries) throws IgniteException {
        for (Map.Entry<String, String> entry : entries) {
            Person p = fromJson(entry.getValue());
            // ignore the message key and use the person id as the cache key
            // (the cache is typed <String, String>, so store the JSON value;
            // use an IgniteCache<String, Person> to store the object itself)
            cache.put(p.getId(), entry.getValue());
        }
    }
});
Is this the recommended way? And I am not sure whether calling cache.put in a StreamReceiver is correct, since it is only supposed to be a pre-processing step before writing to the cache.
The data streamer will map all your keys to cache affinity nodes, create batches of entries, and send the batches to the affinity nodes. After that, the StreamReceiver will receive your entries, get the Person's ID and invoke cache.put(K, V). Putting the entry leads to mapping your key to its corresponding cache affinity node and sending an update request to that node.
Everything looks good, but the result of mapping your random key from Kafka and the result of mapping the Person's ID will be different (most likely different nodes). As a result, you will get poor performance due to redundant network hops.
Unfortunately, the current KafkaStreamer implementation doesn't support stream tuple extractors (see e.g. the StreamSingleTupleExtractor class). But you can easily create your own Kafka streamer implementation, using the existing one as an example.
Also, you can try to use KafkaStreamer's keyDecoder and valDecoder to extract the Person's ID from the Kafka message. I'm not sure, but it may help.
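For illustration, a single-tuple extractor for this case could look roughly like this (a sketch: KafkaMessage and its getValue() method are stand-ins for whatever message type your own streamer implementation consumes, and fromJson is the helper from the question):
StreamSingleTupleExtractor<KafkaMessage, String, Person> extractor =
    new StreamSingleTupleExtractor<KafkaMessage, String, Person>() {
        @Override
        public Map.Entry<String, Person> extract(KafkaMessage msg) {
            // key the cache entry by person id instead of the random Kafka key
            Person p = fromJson(msg.getValue()); // getValue() is an assumption
            return new java.util.AbstractMap.SimpleEntry<>(p.getId(), p);
        }
    };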
I have a Camel route which processes a message from a process queue and sends it to an upload queue.
from("activemq:queue:process" ).routeId("activemq_processqueue")
.process(exchange -> {
SomeImpl impl = new SomeImpl();
impl.process(exchange);
})
.to(ExchangePattern.InOnly, "activemq:queue:upload");
In impl.process I populate an id and a destination server path. Now I need to define a new route which consumes messages from the upload queue, reads a local folder (based on the id generated in the previous route) and uploads it to the destination folder on an FTP server (also populated in the previous route).
So how do I design a new route where both the from and to endpoints are dynamic, which would look something like this?
from("activemq:queue:upload" )
.from("file:basePath/"+{idFromExchangeObject})
.to("ftp:"+{serverIpFromExchangeObject}+"/"+{pathFromExchangeObject});
I think there is a better alternative for your case, taking for granted that you are using a Camel version newer than 2.16 (alternatives for previous versions exist, but they are more complicated and don't look as elegant, e.g. consumerTemplate & recipientList).
You can replace the first "dynamic from" with pollEnrich, which enriches the message using a polling consumer and a simple expression to build the dynamic file endpoint. For the second part, as already mentioned, a dynamic URI with .toD will do the job. So your route would look like this:
from("activemq:queue:upload" )
.pollEnrich().simple("file:basePath/${header.idFromExchangeObject})
.aggregationStrategy(new ExampleAggregationStrategy()) // * see explanation
.timeout(2000) // the timeout is optional but recommended
.toD("ftp:${header.serverIpFromExchangeObject}/${header.pathFromExchangeObject}")
See the "Using dynamic uris" section of the content enricher documentation:
http://camel.apache.org/content-enricher.html
You will need an aggregation strategy to combine the original exchange with the resource exchange, in order to make sure that the headers serverIpFromExchangeObject and pathFromExchangeObject are still present in the aggregated exchange after the enrichment. If you don't provide a custom strategy, Camel will by default use the body obtained from the resource. Have a look at the ExampleAggregationStrategy example in content-enricher.html to see how this works.
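A minimal strategy along those lines could look like this (a sketch, assuming Camel 2.x where AggregationStrategy lives in org.apache.camel.processor.aggregate; it takes the polled file as the new body while keeping the original exchange's headers):
public class ExampleAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            return original; // the poll timed out, keep the original exchange
        }
        // use the polled file as the body; the original headers
        // (serverIpFromExchangeObject, pathFromExchangeObject) are preserved
        original.getIn().setBody(resource.getIn().getBody());
        return original;
    }
}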
For .toD(), have a look at http://camel.apache.org/how-to-use-a-dynamic-uri-in-to.html
Adding a dynamic to endpoint in Camel (as noted in the comment) can be done with .toD(), which is described on this page on the Camel site.
I don't know of any fromD() equivalent. However, you could add a dynamic route by calling the addRoutes method on the CamelContext. This is also described on the Camel site.
Expanding slightly on the example from the Camel site, here is something that should get you heading in the right direction.
public void process(Exchange exchange) throws Exception {
    String idFromExchangeObject = ...
    String serverIpFromExchangeObject = ...
    String pathFromExchangeObject = ...
    exchange.getContext().addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            from("file:basePath/" + idFromExchangeObject)
                .to("ftp:" + serverIpFromExchangeObject + "/" + pathFromExchangeObject);
        }
    });
}
There may be other options in Camel as well, since this framework has an amazing number of EIPs and capabilities.
I am using MemoryCache in my WCF REST service. The first time a request comes in, I hit the database and cache the data in the MemoryCache.
I have implemented this successfully. Now I have a further requirement: I need to check a cache key's insertion time.
I want to add a condition: if a cache key is older than 15 minutes, I will update it again.
I know that I can use
policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(15);
If I use this code, it will expire after 15 minutes.
Is there any way to know when a cache key was inserted into the cache?
You can encapsulate your data in an object that carries a ModificationTime property and cache this wrapper object.
class CacheData<T>
{
public DateTime ModificationTime { get; set; }
public T Data { get; set; }
}
Create an instance of this object, set the time and data properties, and cache it under your cache key; when reading, compare ModificationTime with the current time to decide whether the entry is older than 15 minutes.
I am using the Booksleeve hash API for Redis. I am doing the following:
CurrentConnection.Hashes.Set(0, "item:1", "priority", task.priority.ToString());
var taskResult = CurrentConnection.Hashes.GetString(0, "item:1", "priority");
taskResult.Wait();
var priority = Int32.Parse(taskResult.Result);
However I am getting an AggregateException:
"ERR Operation against a key holding the wrong kind of value"
I am not sure what I am doing wrong here (apart from blocking on the task :)).
Note: CurrentConnection is an instance of BookSleeve.RedisConnection
Please help!
Thanks
That is not a Booksleeve issue - it is a Redis error; in fact, the full error message you should be seeing is:
Redis server: ERR Operation against a key holding the wrong kind of value
(where I try to make it clear that this error has come from Redis, not Booksleeve)
As for what causes this: each key in Redis has a designated type: string, hash, list, etc. You cannot use hash operations on something that is not a hash.
My guess, then, is that "item:1" already exists, but as something other than a hash. I have unit tests in Booksleeve that confirm this (i.e. with/without a pre-existing non-hash value).
You can investigate this in redis using redis-cli or any other client (telnet works, at a push), with the command:
type item:1
(thanks @Sripathi)
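For example, if the key was created earlier as a plain string, any hash operation against it fails with exactly this error (newer Redis versions report the prefix as WRONGTYPE instead of ERR):
redis> set item:1 "not a hash"
OK
redis> hget item:1 priority
(error) ERR Operation against a key holding the wrong kind of value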
I looked on Google and couldn't find anything.
Are there any good resources for designing the backend of a RESTful web app that is going to rely heavily on API keys?
I know how to write RESTful web services etc., I've just never used API keys. Do people generally just generate GUIDs for users?
Here's how I'm creating API keys for a web service:
string CreateApiKey(int length)
{
    // generate twice as many random bytes as characters needed, since the
    // non-alphanumeric base64 characters ('+', '/', '=') are filtered out below
    var bytes = new byte[length * 2];
    using (var rng = new RNGCryptoServiceProvider())
        rng.GetBytes(bytes);
    // base64-encode, keep only letters and digits, trim to the requested length
    var chars = Convert.ToBase64String(bytes)
        .Where(char.IsLetterOrDigit)
        .Take(length)
        .ToArray();
    var key = new String(chars);
    return key;
}
GUIDs are typically not "random" enough and can be guessed by the bad guys.
Take some "random" data like the user's password hash plus some random numbers, and run the result through SHA-1 or a similar hash function.
If you want one API key per account, simply add it to the account metadata table. Otherwise, use a separate table linked to the account IDs to store the API keys.
Server-side, use a cache keyed by the API key to temporarily store the account metadata, so you only need to go to the database once per session.
And of course, everything must go over HTTPS so that the API key cannot be stolen in transit.
If your service is "session" oriented, you can also consider using a temporary session key so that you never expose the API key itself. Look into public-key encryption to investigate this further.