Hitting redis server with redis hash using JMeter (using redis-dataset plugin) - redis

I have a Redis server running and I want to use JMeter to benchmark it and find out how long it takes to reach 20K transactions per second. I have a hash set up. How should I go about querying it? I have put one of the keys as the Redis key and one of the fields of the hash as the variable name.
If I use a Constant Throughput Timer, what should I enter in the name field?
Thanks in advance.

If you're planning to use the Constant Throughput Timer and your target is a load of 20k requests per second, you need to configure it as follows:
Target Throughput: 1200000 (20k per second * 60 seconds in a minute)
Calculate Throughput based on: all active threads
See the How to use JMeter's Constant Throughput Timer article for more details.
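As a sanity check, the Target Throughput value is just the per-second goal multiplied by 60, since the timer works in requests per minute (a minimal sketch; the 20,000 figure comes from the question):

```java
public class ThroughputCalc {
    public static void main(String[] args) {
        int requestsPerSecond = 20_000;                 // target load from the question
        int targetThroughput = requestsPerSecond * 60;  // the timer is configured per minute
        System.out.println(targetThroughput);           // value for the Target Throughput field
    }
}
```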
A few more recommendations:
The Constant Throughput Timer can only pause threads, so make sure you have enough virtual users at the Thread Group level.
The Constant Throughput Timer is only accurate at the "minute" level, so make sure your test lasts long enough for the timer to be applied correctly. Also consider a reasonable ramp-up period.
Some people find the Throughput Shaping Timer easier to use.
20k+ concurrent threads is normally more than you can achieve on a single machine, so you will likely need to consider Distributed Testing, where multiple JMeter instances act as a cluster.
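For reference, a distributed run is driven from one controller machine with the `-R` flag listing the remote JMeter server instances (hostnames and file names below are placeholders):

```shell
# Non-GUI distributed run: the controller sends the test plan to each
# remote jmeter-server instance and collects results into one .jtl file
jmeter -n -t redis-load-test.jmx -R 192.168.0.101,192.168.0.102 -l results.jtl
```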

Related

JMeter test with infinite loop in 'Loop Controller', constant runtime and 'Constant Timer'. What are the advantages and how to tune with this approach

I have set up a JMeter script with a constant runtime set in a Runtime Controller, an infinite loop in a Loop Controller and a constant delay between threads in a 'Constant Timer'. How can I perform tuning using this setup? Is there a correlation between 'number of threads', 'ramp-up time' and 'delay' that should be kept in mind while trying different combinations of these values for performance testing?
Number of threads is basically the number of users you will be simulating. Each JMeter thread (or virtual user) must represent a real user of your application, so treat it that way. If you have a requirement that the application must support 1000 concurrent users - stick to this number as the baseline for your testing. With regards to "how much load will my N JMeter users generate" - it depends on several factors such as the nature of your test, server response time, timers, etc.
Ramp-up is the time JMeter takes to kick off the virtual users from point 1. Unless you're doing spike testing you should increase the load gradually: if you release all the users right away you will get much less information, whereas with a gradually increasing load you will be able to correlate it with increasing response time, decreasing throughput, the number of errors, etc. Moreover, it will "warm up" the application under test, leaving it better prepared for the stress.
Delay is the time the virtual user "thinks" between operations. Real users don't hammer an application non-stop; they need some time to "think" before making the next step. Depending on what the user is "doing", the think time may differ, so I would recommend the Uniform Random Timer instead of the "Constant" one.
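To illustrate, the Uniform Random Timer pauses for a constant offset plus a uniformly distributed random amount. This is a sketch of that behaviour, not JMeter's actual implementation:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ThinkTime {
    // Mirrors the two fields of JMeter's Uniform Random Timer:
    // a constant delay offset plus a random delay in [0, randomRangeMs]
    static long nextDelayMillis(long constantOffsetMs, long randomRangeMs) {
        return constantOffsetMs + ThreadLocalRandom.current().nextLong(randomRangeMs + 1);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            // With a 1000 ms offset and a 500 ms range,
            // every "think time" falls in [1000, 1500] ms
            System.out.println(nextDelayMillis(1000, 500));
        }
    }
}
```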

Clear cache in Apache Ignite every n seconds

How do we empty the cache every n seconds (so that we can run queries on the data that has come in during the n-second window - batch window querying)? I could only find FIFO- and LRU-based eviction policies in the Ignite code, where eviction is triggered by a cache entry being added or modified.
I understand we can have a sliding window using CreatedExpiryPolicy
cfg.setExpiryPolicyFactory(FactoryBuilder.factoryOf(new CreatedExpiryPolicy(new Duration(SECONDS, 5))));
But I don't think this will help me maintain batch windows. Neither will the FIFO or LRU eviction policies.
I need some eviction policy which is based on some static time window (every 5 seconds for example).
Will I have to write my own implementation?
Well, it's possible to change the ExpiryPolicy for each added entry with
IgniteCache.withExpiryPolicy
and calculate the remaining time every time, but that would be too big an overhead - each entry would have its own ExpiryPolicy.
I would recommend scheduling a job that clears the cache using cron-based scheduling.
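A minimal sketch of that approach, using a ScheduledExecutorService and a ConcurrentHashMap as a stand-in for the IgniteCache (with Ignite you would call cache.clear() the same way; the window here is shortened from 5 seconds to 100 ms for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BatchWindowClear {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Clear the whole "cache" at a fixed rate, so every entry
        // lives for at most one batch window
        scheduler.scheduleAtFixedRate(cache::clear, 100, 100, TimeUnit.MILLISECONDS);

        cache.put("k1", "v1");
        cache.put("k2", "v2");
        Thread.sleep(300); // wait past at least one window boundary
        System.out.println(cache.size()); // entries were flushed with the window

        scheduler.shutdownNow();
    }
}
```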

Configuring Redis expire algorithm

I would like to use Redis as a remote timer server. What I need is a way to schedule a timer from one server and get a notification on all other servers when the timer fires.
I have already implemented this mechanism using expired keys and keyspace notifications and it works.
The problem is that, the way the EXPIRE mechanism is configured, when I have many timers they might not all fire... (http://redis.io/commands/expire)
I was wondering if there is a way to change the 25% rule of expiring into something else, in order to make sure all timers will be triggered? I can live with a 1-2 second delay but I need ALL of the timers to fire.
I remember seeing somewhere that this parameter is configurable, but I can't find the docs for it..
You can't set an option to force Redis to expire all keys at once, but you can get as close as possible to it. Please keep in mind how Redis expiry works.
In two words:
Specifically this is what Redis does 10 times per second:
Test 20 random keys from the set of keys with an associated expire.
Delete all the keys found expired.
If more than 25% of keys were expired, start again from step 1.
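The loop above can be simulated in a few lines. This is a toy model: timestamps stand in for TTLs, and keys are sampled at random exactly as the quoted steps describe:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ExpireCycle {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        // key -> expiry time; negative times are already expired (~half the keys)
        Map<Integer, Long> keysWithTtl = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            keysWithTtl.put(i, (long) (rnd.nextInt(200) - 100));
        }
        long now = 0;
        int rounds = 0;
        while (true) {
            rounds++;
            List<Integer> keys = new ArrayList<>(keysWithTtl.keySet());
            Collections.shuffle(keys, rnd);
            // Step 1: test 20 random keys from the set of keys with an expire
            List<Integer> sample = keys.subList(0, Math.min(20, keys.size()));
            int expired = 0;
            for (Integer k : new ArrayList<>(sample)) {
                if (keysWithTtl.get(k) < now) { // Step 2: delete the expired keys found
                    keysWithTtl.remove(k);
                    expired++;
                }
            }
            // Step 3: start again only if more than 25% of the sample was expired
            if (expired <= sample.size() / 4) break;
        }
        // The cycle ran at least once and removed some expired keys
        System.out.println(rounds >= 1 && keysWithTtl.size() < 1000);
    }
}
```

The key consequence for the question: expired keys are removed probabilistically in batches, so under a large backlog of timers, individual expirations can lag behind their nominal deadline.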
You may change this 10-times-per-second behaviour with the hz option in the Redis config file. From the original documentation:
Redis calls an internal function to perform many background tasks,
like closing connections of clients in timeout, purging expired keys
that are never requested, and so forth.
Not all tasks are performed with the same frequency, but Redis checks
for tasks to perform according to the specified "hz" value.
By default "hz" is set to 10. Raising the value will use more CPU when
Redis is idle, but at the same time will make Redis more responsive when
there are many keys expiring at the same time, and timeouts may be
handled with more precision.
So you may change it to 100 or even 1000. With hz set to 1000, around 20,000 keys per second can be sampled for expiry (1000 cycles * 20 keys). It is important to understand that your Redis instance will consume a lot of CPU while idle if you have many timers.
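For example, in redis.conf (the exact value depends on how many timers you expect per second):

```
# redis.conf - raise the background task frequency from the default of 10.
# 100 cycles/sec * 20 sampled keys = up to ~2,000 expired keys removed per second
hz 100
```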

Redis performance testing

So I want to test Redis by shooting 1000 SET commands per second and observing whether RAM usage shoots above a certain limit.
I tried using redis-benchmark but it does not provide a facility to limit the rate to 1000 SET commands per second, nor can I set an expiry for keys.
Also, it simply returns the number of requests per second.
I am also going to use the Jedis client. I thought of using JMeter to accomplish the above. Would that be a feasible option, or is there any other tool or facility that Redis provides to accomplish the same?
There is the Redis Data Set extension, which adds Redis load testing functionality to JMeter.
In order to set an exact request rate you can use the Constant Throughput Timer.
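As in the first question, the Constant Throughput Timer is configured in requests per minute, so a target of 1000 SETs per second translates as follows (a quick check):

```java
public class SetRate {
    public static void main(String[] args) {
        int setsPerSecond = 1000;
        // Target Throughput field = per-second rate * 60
        System.out.println(setsPerSecond * 60);
    }
}
```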

Finding an application's scalability point using JMeter

I am trying to find an application's scalability point using JMeter. I define the scalability point as "the minimum number of concurrent users beyond which any increase no longer increases the throughput per second".
I am using the following technique: schedule my load test to run for an hour, starting a new thread sending SOAP/XML-RPC requests every 30 seconds. I do this by setting my number of threads to 120 and my ramp-up period to 3600 seconds.
Then I look at the TOTAL row's Throughput in my Summary Report listener. A new row (thread) is added every 30 seconds; the total throughput number rises until it plateaus, at about 123 requests per second after 80 of the threads are active in my case. It then slowly drops to 120 requests per second as the last 20 threads are added. I conclude that my application's scalability point is 123 requests per second with 80 active users.
My question: is this a valid way to find an application's scalability point, or is there a different technique that I should be trying?
From a technical perspective what you're doing does answer your question regarding one specific user scenario, though I think you might be missing the big picture.
First of all, keep in mind that the actual HTTP requests you're sending and the ramp-up times can often affect what you call a scalability point. Are your requests hitting a cache? Are they not random enough? Are they too random? Do they represent real-world requests? Is 30 seconds going to give you the same results as 20 seconds or 10 seconds?
From my personal experience it's MUCH easier and more intuitive to look at graphs when trying to analyze app performance. It's not just a question of raw numbers but also of looking at trends and rates of change.
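One way to make the "plateau" judgement less eyeball-driven is to detect the step at which adding threads stops adding throughput. A sketch with made-up numbers in the spirit of the question's 123 req/s example (the tolerance parameter is an assumption you would tune):

```java
public class ScalabilityPoint {
    // Returns the index of the last step at which throughput still grew
    // by more than `tolerance` requests/sec over the previous step
    static int plateauIndex(double[] throughput, double tolerance) {
        for (int i = 1; i < throughput.length; i++) {
            if (throughput[i] - throughput[i - 1] <= tolerance) return i - 1;
        }
        return throughput.length - 1;
    }

    public static void main(String[] args) {
        // Hypothetical total throughput measured as threads ramp up in steps
        double[] tput = {40, 75, 100, 118, 123, 123.2, 122, 120};
        int idx = plateauIndex(tput, 1.0);
        System.out.println(idx);       // step where growth stalls
        System.out.println(tput[idx]); // throughput at the scalability point
    }
}
```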
For example, here is a benchmark of the ghost.org blogging platform using JMeter with an interactive JMeter results graph:
http://blazemeter.com/blog/ghost-performance-benchmark