I am using Azure Redis Cache with 250 MB of storage, and I am storing lists of objects with an expire time. When I save more lists of objects under different keys, the expire time does not work properly. With little data it works fine, refreshing every 10 minutes, but under load it does not.
How can I fix this?
Thank you.
A 250 MB Redis cache is hosted on an Extra Small (A0) virtual machine, which uses shared cores, has limited bandwidth, and as such is not recommended for production workloads. You could check your cache's performance counters for CPU and bandwidth to see whether you are hitting those limits.
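If you would rather check from code than from the portal, Redis's own INFO command exposes memory and eviction counters that tell a similar story. Here is a minimal sketch using the Jedis client; the host name and access key are placeholders, and I'm assuming the non-SSL port is enabled on your cache:

import redis.clients.jedis.Jedis;

public class CacheStats {
    public static void main(String[] args) {
        // Placeholder endpoint and key; substitute your Azure Redis values.
        try (Jedis jedis = new Jedis("mycache.redis.cache.windows.net", 6379)) {
            jedis.auth("<access-key>");
            // INFO returns server statistics as text; used_memory, evicted_keys
            // and expired_keys indicate whether the cache is under memory pressure.
            String info = jedis.info();
            for (String line : info.split("\r\n")) {
                if (line.startsWith("used_memory:")
                        || line.startsWith("evicted_keys:")
                        || line.startsWith("expired_keys:")) {
                    System.out.println(line);
                }
            }
        }
    }
}

A steadily growing evicted_keys counter under load would point at memory pressure rather than a problem with your expire times.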
The Bluemix documentation leads a reader to believe that the only persistent storage for a virtual server is Bluemix Block Storage. It also leads you to believe that a virtual server's own storage will not persist across restarts or failures. In practice, however, this doesn't seem to be the case, at least as far as restarts are concerned. We haven't suffered any virtual server outages yet.
So we want a clearer understanding of the rationale for separating the virtual server's own storage from its attached Block Storage.
Use case: I am moving our Git server and a couple of small LAMP-based assets to a Bluemix Virtual Server as we simultaneously develop new mobile apps using Cloud Foundry. In our case, we don't anticipate scaling up the work that the virtual server does any time soon. We just want a reliable new home for an existing website.
Even if you separate application files and databases out into block storage, re-provisioning the virtual server in the event of its loss is not trivial, even when the provisioning is automated with Ansible or the like. So we are not expecting to have to re-provision the non-persistent storage of a Bluemix Virtual Server regularly.
The Bluemix doc you reference is a bit misleading and is being corrected. The virtual server's storage on local disk does persist across restart, reboot, suspend/resume, and VM failure. If that were not the case, the OS image would be lost during any such event.
One of the key advantages of storing application data in a block storage volume is that the data will persist beyond the VM's lifecycle. That is, even if the VM is deleted, the block storage volume can be left intact to preserve the data. As you mentioned, block storage volumes are often used to back DB servers so that user data is isolated, which lends itself well to providing a higher class of storage specifically for application data, backup, recovery, etc.
In use cases where VM migration is desired, the VM can be set up to boot from a block storage volume, which makes it easier to move the VM to a different hypervisor and simply point it at the same block storage boot volume.
Based on your use case description, you should be fine using the VM's local storage.
We're running a 7-node redis cluster, with all nodes as masters (no slave replication). We're using this as an in-memory cache, so we've commented out all saves in redis.conf, and we've got the following other non-defaults in redis.conf:
maxmemory 30gb
maxmemory-policy allkeys-lru
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-require-full-coverage no
The client for this cluster is a Spring Boot REST API application, using spring-data-redis with Jedis as the driver. We mainly use the Spring caching annotations.
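For context, our cache usage looks roughly like this; the service and cache names are simplified stand-ins for our real ones:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    public static class Product {
        public final String id;
        public Product(String id) { this.id = id; }
    }

    // Spring consults the "products" cache (backed by Redis through
    // spring-data-redis) before invoking the method; on a miss the
    // result is computed and stored under the given key.
    @Cacheable(value = "products", key = "#id")
    public Product findProduct(String id) {
        // Stand-in for an expensive lookup.
        return new Product(id);
    }
}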
We had an issue the other day where one of the masters went down for a while. With a single master down in a 7-node cluster we noted a marked increase in the average response time for API calls involving Redis, which I would expect.
When the downed master was brought back online and re-joined the cluster, we had a massive spike in response time. Via New Relic I can see that the app started making a ton of Redis cluster calls (New Relic doesn't tell me which cluster subcommand was being used). Our normal average response time is around 5 ms; during this window it went up to 800 ms, and we had a few slow sample transactions that took more than 70 seconds. On all app JVMs I saw the number of active threads jump from the normal 8-9 up to around 300 during this time. We have configured the Tomcat HTTP thread pool to allow 400 threads max. After about 3 minutes the problem cleared itself up, but I now have people questioning the stability of the caching solution we chose. New Relic doesn't give any insight into where the additional time on the long requests was being spent (it's apparently in an area that New Relic doesn't instrument).
I've made some attempts to reproduce this by running jmeter load tests against a development environment, and while I see some moderate response time spikes when re-attaching a redis-cluster master, I don't see anything near what we saw in production. I've also run across https://github.com/xetorthio/jedis/issues/1108, but I'm not gaining any useful insight from that. I tried reducing spring.redis.cluster.max-redirects from the default 5 to 0, which didn't seem to have much effect on my load test results. I'm also not sure how appropriate a change that is for my use case.
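For reference, here is roughly how those settings map onto raw Jedis, which spring-data-redis drives under the hood; the node address and numbers below are placeholders, not our production values:

import java.util.HashSet;
import java.util.Set;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterClient {
    public static void main(String[] args) {
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("10.0.0.1", 6379)); // placeholder seed node

        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(64); // cap on pooled connections per node

        int connectionTimeout = 2000; // ms to establish a connection
        int soTimeout = 2000;         // ms to wait for a response
        int maxAttempts = 3;          // retries/redirects before failing the call

        JedisCluster cluster = new JedisCluster(nodes, connectionTimeout, soTimeout,
                maxAttempts, poolConfig);
        System.out.println(cluster.get("some-key"));
    }
}

My understanding is that maxAttempts here is the Jedis-level counterpart of the max-redirects property, bounding how long a single call can chase MOVED/ASK redirections while the cluster topology is in flux.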
I am working on converting an existing single-tenant application to a multitenant one. Distributed caching is new to me. We have an existing primitive local cache system using the .NET cache that generates cloned objects from existing cached ones. I have been looking at utilizing Redis.
Can Redis cache and invalidate locally, in addition to over the wire, thus replacing all the benefits of the primitive local cache? Or would a tiered approach be ideal, falling back to the Redis distributed cache when the local cache doesn't have the objects we need? I believe the latter would require expiration notifications to be sent to the local caches when data is updated; otherwise servers may end up with out-of-date, inconsistent data.
It seems like a set of local caches with expiration notifications would itself qualify as a distributed cache, so I am a bit confused about how Redis might be configured, and whether it would be distributed across the servers serving the requests or live in its own cluster.
When I say local, I mean not having to go over the wire for data.
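To illustrate the tiered pattern I have in mind: writes publish an invalidation message over Redis pub/sub, and every server evicts its local copy of the key. Our application is .NET, so treat this Jedis-based Java sketch as a language-agnostic illustration; the host, channel, and class names are made up:

import java.util.concurrent.ConcurrentHashMap;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Two-tier cache: an in-process map in front of Redis, with pub/sub
// used to evict stale local entries on every server.
public class TieredCache {

    private final ConcurrentHashMap<String, String> local = new ConcurrentHashMap<>();
    private static final String CHANNEL = "cache-invalidation";

    public String get(String key) {
        String value = local.get(key);
        if (value != null) {
            return value; // local hit: no network round trip
        }
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            value = jedis.get(key); // fall back to the shared Redis tier
            if (value != null) {
                local.put(key, value);
            }
            return value;
        }
    }

    public void put(String key, String value) {
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            jedis.set(key, value);
            jedis.publish(CHANNEL, key); // tell other servers to drop their copies
        }
        local.put(key, value);
    }

    // Each server runs the subscriber on a dedicated thread, since
    // subscribe() blocks; received messages name the key to evict.
    public void startInvalidationListener() {
        new Thread(() -> {
            try (Jedis jedis = new Jedis("redis-host", 6379)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        local.remove(message);
                    }
                }, CHANNEL);
            }
        }).start();
    }
}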
We have a problem with WebLogic 10.3.2. We installed a standard domain with default parameters. The domain has only one managed server, and only one web application runs on that managed server.
After installation we faced performance problems. Sometimes a user waits 1-2 minutes for an application response. (For example, a user clicks a button and it takes 1-2 minutes for the GUI to refresh. It is not a complicated task.)
To overcome these performance problems we defined parameters under Configuration -> Server Start -> Arguments:
-Xms4g -Xmx6g -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=500
We also changed the application's datasource connection pool parameters on the WebLogic side as follows:
Initial Capacity: 50
Maximum Capacity: 250
Capacity Increment: 10
Statement Cache Type: LRU
Statement Cache Size: 50
We run WebLogic on servers with 32 GB of RAM and 16 CPUs. 25% of the server machine's resources are dedicated to WebLogic, but we still have performance problems.
Our target is to serve 300-400 concurrent users without 1-2 minute waits on each application request.
Could defining a work manager solve the performance issue?
Is my datasource or managed bean definition incorrect?
Can anyone help me?
Thanks for your replies
At our peak hour we need to serve around 250 requests per second. What we're doing is accepting a URL for an image, pulling the image out of memcached, and returning it via Apache.
Our current system is a dual-core machine with 4 GB of memory: 2 GB for the images in memcached and 2 GB for Apache. But we're seeing a very high load (20-30) during our peak time. The average response time, as reported by Apache, is 30-80 ms per request, which seems kind of slow for a simple Apache request served from memory.
Are there better tools for this? Serving from disk is not an option, since IO wait was holding it back, so we moved everything to memory. How do CDNs do it?
EDIT: Well, the system works like this. A request comes in, and we check a "queue" to see if we've seen this request before; if we have, we serve the image (from disk or memory). If not, we increment the counter for that request in a memcached queue, and worker machines actually generate the image and store it back on the main server. So, currently, when a request comes in we check memcached to see if the entry exists, and then we connect to another DB for the actual image data. When the images were on disk we found that just the file_exists check would take 30+ ms to complete, so we moved them to memory. If we moved the images to a ramdisk, would this speed up file_exists, or would we still want a first check to see if we should even seek the image out?
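To make the lookup path concrete, here is roughly the logic in Java using the spymemcached client; our actual stack is PHP and the key names are placeholders, so this is only an illustration:

import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class ImageLookup {

    public static byte[] fetchImage(String url) throws Exception {
        // In a real app the client would be created once and reused.
        MemcachedClient cache = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        try {
            // Have we seen (and generated) this image before?
            Object cached = cache.get(url);
            if (cached != null) {
                return (byte[]) cached; // serve straight from memory
            }
            // Not yet generated: bump the request counter so a worker picks it up.
            // incr is atomic; the third argument seeds the counter if it is missing.
            cache.incr("pending:" + url, 1, 1);
            return null; // caller serves a placeholder or retries later
        } finally {
            cache.shutdown();
        }
    }
}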
Have you looked at nginx?
According to Netcraft, in May 2009 nginx served or proxied 3.25% of the busiest sites. It can serve from memcached too.
Depending on the size of your images, Apache should handle this with no problem at all. We have an Apache instance serving 2000 requests/second, with an average response size of 12 KB. The machine has 32 GB of memory, so all our content is cached.
Here are some tuning tips:
Use a threaded MPM like worker, with lots of threads (we have 256).
Use mod_cache so all the images stay in memory.
Allocate as much memory as possible to the Apache process.
When you say memcache, do you mean the memcached server? Running memcached will be slower, because the latency of a TCP connection (even on loopback) is much higher than direct memory access.
If you can fit all your images in memory, a RAM disk will also help a lot.