I'm in this situation: I have an Infinispan cluster (12.1) with two nodes and a replicated cache configured via XML.
I also have a Hot Rod client, and when I call the removeCache method, the cache is not removed on the first call; if I call removeCache a second time, the cache is deleted correctly.
I need the cache to be removed correctly on the first attempt.
Can anyone help me?
If you know beforehand you may need to remove caches, it's best to create them with CacheContainerAdmin.createCache() (or via the REST API/CLI/console) instead of the server XML configuration.
CacheContainerAdmin.removeCache() is under-specified: the javadoc doesn't say what it does when the cache was not created with CacheContainerAdmin.createCache(). As you've discovered, the current implementation only removes the cache on the server that processed the client request.
I have created ISPN-13048 to improve the documentation and maybe change the behaviour.
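For illustration, here is a minimal Hot Rod sketch of that approach; the cache name "my-replicated-cache" is a placeholder and org.infinispan.REPL_SYNC is one of the server's built-in templates, so adapt both to your own configuration:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;

public class AdminCacheExample {
    public static void main(String[] args) throws Exception {
        // Assumes a hotrod-client.properties on the classpath pointing at the cluster.
        try (RemoteCacheManager cacheManager = new RemoteCacheManager()) {
            // Create (or look up) the cache through the admin API instead of the server XML.
            cacheManager.administration()
                        .getOrCreateCache("my-replicated-cache", "org.infinispan.REPL_SYNC");

            // ... use the cache ...

            // Later, remove it through the same admin API.
            cacheManager.administration().removeCache("my-replicated-cache");
        }
    }
}
```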
I am facing a problem with the hit rate of Ignite, as shown by the metrics below that Ignite exposes:
org_apache_dmp_user_mask_CacheHitPercentage{name="\"org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl\"",} 98.48369598388672
org_apache_dmp_user_mask_CacheHitPercentage{name="\"org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl\"",} 99.72936248779297
Although I tried to preload all of the data into the cache used for serving requests, the hit rate somehow drops significantly during the night (from 2 am to 5 am).
hit-rate chart
So I'd like to record those cache misses for troubleshooting and understand what is happening during that window.
Can anyone help, please?
Write your own implementation of CacheStore (or rather extend the one that you're using currently). The load or loadAll method will be called whenever the system reads a key from the 3rd-party storage, so you can log the key there before reading from the external store.
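A minimal sketch of that idea (the class name LoggingCacheStore, the Long/Object types, and loadFromExternalStorage() are assumptions standing in for your actual store; in practice you would extend the concrete store class you already configure):

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class LoggingCacheStore extends CacheStoreAdapter<Long, Object> {

    @Override
    public Object load(Long key) {
        // Every call here is a cache miss that went through to the 3rd-party storage.
        System.out.println("Cache miss at " + java.time.Instant.now() + " for key " + key);
        return loadFromExternalStorage(key);
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends Object> entry) {
        // Write-through to the external storage (unchanged from your current store).
    }

    @Override
    public void delete(Object key) {
        // Delete from the external storage (unchanged from your current store).
    }

    private Object loadFromExternalStorage(Long key) {
        return null; // placeholder for the actual database read
    }
}
```

If the store you extend implements loadAll() directly instead of delegating to load(), log the keys there as well.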
I have two legacy servers in GCE, both of which have been flagged as using the deprecated metadata server endpoints. Between them they currently hold hundreds of GBs of MySQL and MongoDB data, and risking an upgrade on these boxes that has an adverse effect is not an option.
We are currently in the process of migrating away from the data stored here, but for now, we need to keep them running.
Is anyone aware of any implications of either
a) doing nothing, or
b) just setting the disable-legacy-endpoints metadata flag to true?
i.e. will these instances stop working altogether if we leave them as they currently are?
After some more digging into what was actually using the Metadata API to start with, we found that the calls were being made by stackdriver_agent, which was installed an extremely long time ago while it was free and was simply never removed.
Stopping this agent removes all of the legacy-endpoint calls made by these two servers.
If you are considering disabling the legacy endpoints with the disable-legacy-endpoints metadata flag, make sure to test it in a contained environment first, i.e. a new VM created from a snapshot of the affected instance, before applying it to production services.
For help identifying the instances making the calls, refer to this article
For help identifying the processes within the instances, refer to this article
Specifically, is there any way in .Net Core (3.0 or earlier) to use the local file system as a Response Cache instead of just in-memory?
After a fair amount of researching, the closest thing seems to be the Response Caching middleware [1], but this does not:
allow pages to be cached indefinitely,
preserve caches between application and server restarts,
allow invalidating the cache on a per-page basis (e.g. blog entry updated),
allow invalidating the entire cache when global changes are made (e.g. theme update, menu changes, etc.).
I'm guessing these features will require custom implementation of ResponseCaching that hits the local file system, but I don't want to reinvent it if it already exists.
Some background:
This will replace our use of a static site-generator, which is problematic for site-wide changes because of the sheer quantity of data (nearly 24 hours to generate and copy to all of the servers).
The scenario is very similar to an encyclopedia or news site -- the vast majority of the content changes infrequently, a few things are added per day, and there is no user-specific content (and if or when there is, it would be dynamically loaded via JS/Ajax). Additionally, the page loads happen to be processor/memory/database intensive.
We will be using a reverse proxy such as Cloudflare or AWS CloudFront, but CloudFront automatically expires its edge caches daily, so edge-node cache misses are still frequent.
This is different than IDistributedCache [2] in that it should be response caching, not just caching data used by the MVC Model.
We will also use in-memory cache [3], but again, that solves a different caching scenario.
References
[1] https://learn.microsoft.com/en-us/aspnet/core/performance/caching/middleware
[2] https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed
[3] https://learn.microsoft.com/en-us/aspnet/core/performance/caching/memory
I implemented this.
https://www.nuget.org/packages/AspNetCore.ResponseCaching.Extensions/
https://github.com/speige/AspNetCore.ResponseCaching.Extensions
It doesn't support .Net Core 3 yet, though.
Currently (April 2019) the answer appears to be: no, there is nothing out-of-the-box for this.
There are three viable approaches to accomplish this using .Net Core:
Fork the built-in ResponseCaching middleware and create a flag for cache-to-disk:
https://github.com/aspnet/AspNetCore/tree/master/src/Middleware/ResponseCaching
This might be annoying to maintain because the namespaces and class names will collide with the core framework.
Implement this missing feature in EasyCaching, which apparently already has caching to disk on its radar:
https://github.com/dotnetcore/EasyCaching/blob/master/ToDoList.md
A pull request may be more likely accepted, since it's a planned feature.
There is apparently a port of Strathweb.CacheOutput to .Net Core, which would allow one to implement IApiOutputCache to save to disk:
https://github.com/Iamcerba/AspNetCore.CacheOutput#server-side-caching
Although this question is about caching within .Net Core using the local file system, this could also be accomplished by running a local SQLite instance on each server node and configuring EasyCaching's response caching to point at the SQLite database on localhost.
I hope this helps someone else who finds themselves in this scenario!
I'm new to Ignite and I have a use case: I need to run a cleanup job. Ignite is embedded in our Spring Boot application, and since we run multiple instances, I'm thinking of having the job run on each instance and just query the local data and clean up those entries. Do you see any issue with this? I'm not sure how often Ignite reshuffles data.
Thanks
Shannon
You can surely do that.
With regards to data reshuffling, it only happens when a node is added to or removed from the cluster. However, the ignite.compute().affinityRun() family of calls guarantees that the code runs near the data.
Otherwise, you could use ignite.compute().broadcast() and only iterate over each affected cache's local entries; you don't have the aforementioned guarantee then, though.
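For what it's worth, a broadcast-based cleanup could look roughly like this (the cache name "myCache", the Long/String types, and shouldRemove() are assumptions for illustration):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class LocalCleanupJob {

    public static void runCleanup(Ignite ignite) {
        // Run the closure once on every node; each node only touches the entries
        // it is primary for, so the cleanup is split across the cluster without
        // moving any data.
        ignite.compute().broadcast(() -> {
            Ignite local = Ignition.localIgnite();
            IgniteCache<Long, String> cache = local.cache("myCache");

            for (Cache.Entry<Long, String> e : cache.localEntries(CachePeekMode.PRIMARY)) {
                if (shouldRemove(e.getValue())) {
                    cache.remove(e.getKey());
                }
            }
        });
    }

    private static boolean shouldRemove(String value) {
        return false; // placeholder for your cleanup criterion
    }
}
```

Since the job is embedded in the same Spring Boot application on every instance, the closure's classes are already present on each node.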
I'm writing a CMIS interface (server) for my application. The server needs to load data from a database to process requests, and at the moment I'm loading the same data for every request.
Is there a common way to cache this data? Are cookies supported by every CMIS client? Is there another way to cache this data?
Thank you
You should not rely on cookies. Several clients and client libraries support them but not all. Cookies can help and you should make use of them, but be prepared for simple clients without cookie support.
Since your data is usually bound to a user, you can build a cache keyed by username. What you can and should cache, though, depends on your repository and your use case: repository info and type definitions are good candidates, but you should be careful with everything else.
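As a plain-Java illustration only (PerUserCache, the 10-minute TTL, and the loader function are assumptions, not part of OpenCMIS or any CMIS library), a username-keyed cache could be as simple as:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class PerUserCache<V> {

    private static final Duration TTL = Duration.ofMinutes(10); // assumption: 10 minutes of staleness is acceptable

    private static final class Entry<V> {
        final V value;
        final Instant loadedAt;

        Entry(V value, Instant loadedAt) {
            this.value = value;
            this.loadedAt = loadedAt;
        }
    }

    private final Map<String, Entry<V>> byUser = new ConcurrentHashMap<>();

    /** Returns the cached value for the user, loading it from the database if missing or stale. */
    public V get(String username, Function<String, V> loader) {
        Entry<V> entry = byUser.compute(username, (user, existing) -> {
            if (existing != null && existing.loadedAt.plus(TTL).isAfter(Instant.now())) {
                return existing; // still fresh, reuse it
            }
            return new Entry<>(loader.apply(user), Instant.now()); // reload from the database
        });
        return entry.value;
    }

    /** Call this when the user's data changes so the next request reloads it. */
    public void invalidate(String username) {
        byUser.remove(username);
    }
}
```

Lookups would be keyed by the authenticated username from the call context, and invalidate() gives you a hook for data that must not be served stale.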