Moving from Caffeine to Redisson/Redis with AWS ElastiCache: CPU increase - redis

We are moving a DB-results cache from an in-memory implementation to Redis (AWS ElastiCache). In performance tests, the JVM running the Redisson-based Redis implementation shows more CPU usage (about 30 to 50% higher).
The blue line is the Redisson-to-Redis implementation of the distributed cache and the yellow line is the in-memory Caffeine implementation. Is this a legitimate, expected increase due to the additional I/O, or is some Redisson configuration tuning needed?
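Some CPU increase relative to Caffeine is expected, since every Redis hit now pays for serialization and a network round trip that an in-process Caffeine lookup avoids. If tuning is still warranted, the usual suspects are the serialization codec, connection pool sizing, and Netty thread count. Below is a minimal sketch of where those knobs live; the endpoint, pool sizes, and thread count are illustrative assumptions, not recommendations:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonCacheClient {

    public static RedissonClient create() {
        Config config = new Config();
        // Netty threads handle the client's network I/O; sizing them to the
        // workload can shift CPU usage noticeably under load.
        config.setNettyThreads(16);
        config.useSingleServer()
              .setAddress("redis://my-elasticache-endpoint:6379") // hypothetical endpoint
              .setConnectionPoolSize(64)          // connections shared across app threads
              .setConnectionMinimumIdleSize(16)
              .setTimeout(3000);                  // command timeout in ms
        // The codec drives serialization cost; if profiling shows
        // (de)serialization dominating CPU, a lighter codec may help.
        return Redisson.create(config);
    }
}
```

Profiling the application under load (to see whether time goes to serialization, Netty I/O, or waiting on connections) is the quickest way to tell whether any of these settings is worth changing.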

Related

Apache Ignite takes a long time to create a new cache

My application creates new caches on demand, but Apache Ignite seems to take several seconds to create a new cache once there are already hundreds of caches. I find two stages consuming most of the time when creating a new cache:
stage1: Waiting in exchange queue
stage2: Waiting for full message
Is there any way I can optimize this process?
Apache Ignite: 2.10.0, cluster mode, two nodes, JDBC thin client
JVM: Java HotSpot(TM) 64-Bit Server VM, 1.8.0_60
Cache creation is not a cheap operation, as you correctly highlighted: it is a cluster-wide operation that requires a partition map exchange (PME) and other internal routines. For that reason, consider reusing existing caches if you need the best performance.
You can accelerate cache processing and reduce resource usage by grouping caches into a single cache group (see the sketch below), but network communication will still be required.
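As an illustration, a cache is assigned to a group through its CacheConfiguration; the cache and group names here are hypothetical:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnDemandCaches {

    // Caches in the same group share internal structures (partitions, files),
    // which reduces the per-cache overhead when many caches exist.
    public static IgniteCache<Long, String> createCache(Ignite ignite, String name) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>(name);
        cfg.setGroupName("onDemandGroup"); // hypothetical group name
        return ignite.getOrCreateCache(cfg);
    }
}
```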

Used memory on GCP Memorystore instance despite no data in Redis

We just created this GCP Memorystore instance for Redis. It shows 0.22 GB already used, but we are 100% certain that there is no data in the Redis cache. We connect to the Memorystore instance via a Compute Engine VM and run flushall to ensure that the cache is empty. What could account for the 0.22 GB being used here?
Based on this documentation, when you use the Standard Tier for your Redis instance, an extra 10% of instance capacity is reserved as a replication buffer, and that reserve shows up as used memory even with no keys stored. For example, if the instance is around 2 GB, that reserve would be roughly 0.2 GB, which would match what you are seeing.

On-heap eviction in Apache Ignite

In Apache Ignite, when an entry is evicted from an on-heap cache, is it placed in the off-heap region?
From the docs, it doesn't seem so but looking for a clarification.
The on-heap eviction policies remove the cache entries from Java heap only. The entries stored in the off-heap region of the memory are not affected.
Starting with Ignite 2.x, entries are always stored in off-heap memory, and the on-heap option enables a lookup cache on the heap for those off-heap entries. When an entry is evicted from the on-heap cache, there is always a backing off-heap copy.
Ignite versions before 2.x use different memory modes, and eviction behavior differs depending on the mode.
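For reference, in Ignite 2.x the on-heap lookup cache is enabled per cache configuration; a minimal sketch, where the cache name, eviction policy, and size are illustrative:

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapCacheConfig {

    public static CacheConfiguration<Integer, byte[]> build() {
        CacheConfiguration<Integer, byte[]> cfg = new CacheConfiguration<>("myCache");
        // Entries always live off-heap; this adds an on-heap lookup cache on top.
        cfg.setOnheapCacheEnabled(true);
        // Eviction here removes entries from the Java heap only;
        // the off-heap copy remains untouched.
        cfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));
        return cfg;
    }
}
```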

How to monitor JVM memory in Apache NiFi

I am creating a memory-monitoring reporting task in Apache NiFi to monitor JVM usage, but I don't know which memory pool is appropriate for monitoring the JVM. Any suggestion will be appreciated.
Memory pools available:
Code Cache
Metaspace
Compressed Class Space
G1 Eden Space
G1 Survivor Space
G1 Old Gen
As far as I know, G1 Eden Space and G1 Survivor Space are young-generation pools and G1 Old Gen is the old generation; together these three make up the Java heap, so they are the pools to monitor for heap usage. Correct me if I am wrong.
You can use MonitorMemory to monitor the Java heap.
Details are here:
NIFI : Monitoring processor and nifi Service
Monitor Apache NiFi with Apache NiFi
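To see which pools count toward the heap, one option is to list the JVM's memory pool MXBeans and check their type; a small standalone sketch using the standard java.lang.management API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class ListHeapPools {

    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // HEAP pools (e.g. G1 Eden Space, G1 Survivor Space, G1 Old Gen)
            // make up Java heap usage; NON_HEAP pools (Metaspace, Code Cache,
            // Compressed Class Space) do not.
            if (pool.getType() == MemoryType.HEAP) {
                System.out.printf("%s: used=%d bytes%n",
                        pool.getName(), pool.getUsage().getUsed());
            }
        }
    }
}
```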

spring-data-redis cluster recovery issue

We're running a 7-node Redis cluster, with all nodes as masters (no slave replication). We're using this as an in-memory cache, so we've commented out all saves in redis.conf, and we've got the following other non-defaults in redis.conf:
maxmemory 30gb
maxmemory-policy allkeys-lru
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-require-full-coverage no
The client for this cluster is a Spring Boot REST API application using spring-data-redis with Jedis as the driver. We mainly use the Spring caching annotations.
We had an issue the other day where one of the masters went down for a while. With a single master down in a 7-node cluster, we noted a marked increase in the average response time for API calls involving Redis, which I would expect.
When the down master was brought back online and re-joined the cluster, we had a massive spike in response time. Via New Relic I can see that the app started making a ton of Redis cluster calls (New Relic doesn't tell me which cluster subcommand was being used). Our normal average response time is around 5 ms; during this time it went up to 800 ms, and we had a few slow sample transactions that took more than 70 seconds. On all app JVMs I see the number of active threads jump from a normal 8-9 up to around 300 during this time. We have configured the Tomcat HTTP thread pool to allow 400 threads max. After about 3 minutes the problem cleared itself up, but I now have people questioning the stability of the caching solution we chose. New Relic doesn't give any insight into where the additional time on the long requests is being spent (it's apparently in an area that New Relic doesn't instrument).
I've made some attempts to reproduce this by running JMeter load tests against a development environment, and while I see some moderate response-time spikes when re-attaching a Redis cluster master, I don't see anything near what we saw in production. I've also run across https://github.com/xetorthio/jedis/issues/1108, but I'm not gaining any useful insight from that. I tried reducing spring.redis.cluster.max-redirects from the default 5 to 0, which didn't seem to have much effect on my load-test results. I'm also not sure how appropriate a change that is for my use case.
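For reference, max-redirects and the Jedis connection pool can also be set programmatically rather than via properties; a minimal sketch, assuming Spring Data Redis 2.x with Jedis (the node addresses and pool sizes are illustrative assumptions, not tuned values):

```java
import java.util.Arrays;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.jedis.JedisClientConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisClusterConfig {

    @Bean
    public JedisConnectionFactory redisConnectionFactory() {
        RedisClusterConfiguration cluster = new RedisClusterConfiguration(
                Arrays.asList("redis-node-1:6379", "redis-node-2:6379")); // illustrative nodes
        // Fewer redirects means the client gives up sooner while slots are being
        // reassigned, instead of repeatedly chasing MOVED/ASK responses.
        cluster.setMaxRedirects(2);

        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(100); // cap connections so thread spikes don't multiply sockets
        poolConfig.setMaxIdle(20);

        JedisClientConfiguration clientConfig = JedisClientConfiguration.builder()
                .usePooling().poolConfig(poolConfig).and()
                .build();
        return new JedisConnectionFactory(cluster, clientConfig);
    }
}
```

Capping the pool and the redirect count doesn't remove the topology-refresh cost when a master rejoins, but it does bound how much extra work each request thread can pile on during that window, which is worth verifying under a load test like the JMeter setup described above.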