How to use JMeter with a Timer - testing

I am having a problem with JMeter: using it with a Timer causes JMeter to crash.
The case is: I want to create a load of requests to be executed every half hour.
Is that something you can do with JMeter?
Every time I try it, JMeter keeps loading, hangs, and requires a shutdown.

If you want to leave JMeter up and running forever, make sure to follow JMeter Best Practices, as certain test elements might cause memory leaks.
If you need to create "spikes" of load every 30 minutes, it might be a better idea to use your operating system's scheduling mechanisms to execute "short" tests every half hour (see the sketch after this answer), for example:
Windows Task Scheduler
Unix cron
macOS launchd
Or, even better, go for a Continuous Integration server like Jenkins: it has a powerful trigger mechanism that lets you define flexible criteria for when to start the job, and you can also benefit from the Performance Plugin, which can automatically mark a build as unstable or failed depending on test metrics and can build performance trend charts.
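If you go the "short test every half hour" route but don't have cron, Task Scheduler or Jenkins to hand, the same idea can be sketched in plain Java; the JMeter path and test plan below are assumptions, while -n, -t and -l are the standard non-GUI JMeter command-line options.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HalfHourlyLoadTest {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Launch a short JMeter test in non-GUI mode every 30 minutes.
        // The paths are placeholders - adjust them to your installation.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                Process jmeter = new ProcessBuilder(
                        "/opt/jmeter/bin/jmeter",                 // hypothetical JMeter location
                        "-n",                                     // non-GUI mode
                        "-t", "/tests/spike.jmx",                 // hypothetical test plan
                        "-l", "/tests/results-" + System.currentTimeMillis() + ".jtl")
                        .inheritIO()
                        .start();
                jmeter.waitFor();                                 // wait for the short run to finish
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 30, TimeUnit.MINUTES);
    }
}
```

Each run is a fresh, short-lived JMeter process, which also sidesteps the memory build-up you can hit when a single JMeter instance is left running for days.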

Related

JMeter load test server down issues

I used a load of 100 users with the Ultimate Thread Group, executing in non-GUI mode.
The execution only ran for around 5 minutes. After that my test environment shut down. I am not able to drill down into the issue. What could be the reason for the server going down? My environment supports 500 users.
How do you know your environment supports 500 users?
100 threads don't necessarily map to 100 real users; you need to consider a lot of things while designing your test, in particular:
Real users don't hammer the server non-stop; they need some time to "think" between operations. So make sure you add Timers between requests and configure them to represent reasonable think times (see the sketch after this list).
Real users use real browsers. Real browsers download embedded resources (images, scripts, styles, fonts, etc.), but they do it only once; on subsequent requests the resources are returned from the cache and no actual request is made. Make sure to add an HTTP Cache Manager to your Test Plan.
You need to add the load gradually; this way you will be able to state at which number of threads (virtual users) response times start exceeding acceptable values or errors start occurring. Generate an HTML Reporting Dashboard, look into the metrics, and correlate them with the increasing load.
Make sure that your application under test has enough headroom to operate in terms of CPU, RAM, disk space, etc. You can monitor these counters using the JMeter PerfMon Plugin.
Check your application logs; most probably they will contain a clue to the root cause of the failure. If you're familiar with the programming language your application is written in, using a profiler tool during the load test can tell you the full story about what's going on, which functions and objects consume the most resources, etc.
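For the think-time point above (the first item in the list), this is roughly what a Uniform Random Timer does between samplers; the sketch below shows the same idea in plain Java, and the 3-7 second range is only an assumed example, not a recommendation.

```java
import java.util.concurrent.ThreadLocalRandom;

public class ThinkTime {
    // Pause for a random "think time" between operations, similar in spirit
    // to JMeter's Uniform Random Timer (constant offset plus random component).
    static void pause(long minMillis, long maxMillis) throws InterruptedException {
        long delay = ThreadLocalRandom.current().nextLong(minMillis, maxMillis + 1);
        Thread.sleep(delay);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("first operation sent");
        pause(3_000, 7_000);   // assumed 3-7 s of "reading the page"
        System.out.println("second operation sent");
    }
}
```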

How can JMeter identify what to optimize in a website?

I'm new to the website performance testing field and will be using JMeter. After playing with it, I am still having trouble identifying what to optimize in a website's load time.
I'm currently still learning about load testing - who should I give the performance report to? Developers/programmers, or the network department? An example of an error I usually get is a 502 error or timeouts.
Thanks in advance.
JMeter cannot identify anything; all it does is execute HTTP requests and measure response times. Ideally it should be you who takes the raw JMeter results, performs the analysis, and creates the final report highlighting the current problems and bottlenecks (and ideally what needs to be done to fix them).
Consider the following checklist:
Your load test needs to be realistic; a test which doesn't represent real-life application usage does not make sense. So make sure your JMeter test carefully represents real users in terms of cookies, headers, cache, downloading of images, styles and scripts, virtual user group distribution, etc.
Increase and decrease the load gradually; this way you will be able to correlate metrics such as transactions per second and response time with the increasing/decreasing number of users, so make sure you apply reasonable ramp-up and ramp-down settings.
Monitor the health of the application under test. The reason for an error may be as simple as a lack of hardware resources (CPU, RAM, disk, etc.). This can be done using, for example, the PerfMon JMeter Plugin.
Do the same for the JMeter instance(s). JMeter measures response time from "just before sending the request" until "the last response byte arrives", so if JMeter is not able to send requests fast enough, you will see high response times without any other visible reason (see the sketch after this list).
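As a quick sanity check for the last point, it helps to watch the load generator itself while the test runs. The sketch below is a hypothetical standalone monitor (not part of JMeter) that polls the OS load average from a JVM on the injector machine.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class InjectorMonitor {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        int cores = os.getAvailableProcessors();

        // Print the 1-minute system load average every 10 seconds.
        // If it stays well above the core count, the load generator itself is
        // probably the bottleneck and the measured response times can't be trusted.
        while (true) {
            double load = os.getSystemLoadAverage(); // -1.0 if the platform can't report it
            System.out.printf("load average: %.2f (cores: %d)%n", load, cores);
            Thread.sleep(10_000);
        }
    }
}
```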
Website load time is a combination of many factors, including browser rendering time, script execution time, resource download time, etc. You can't use JMeter to validate the front-end time; you can measure it using Chrome Developer Tools and similar tools available for each browser. Refer to https://developers.google.com/web/fundamentals/performance/
JMeter is primarily used for measuring protocol-level performance, to ensure that your server can process heavy workloads when it is subjected to real-time stress from many customers. It won't compute JavaScript execution time or HTML parsing time. Your JMeter script should be written in such a way that it emulates the logic of your JavaScript and other presentation logic to form the request inputs and the subsequent requests (a sketch of this idea follows below).
Your question is way too open-ended and you might have to start with a mentor who can help you with the whole process and train you.
Also, the mindsets for functional testing and performance testing are totally different. A lot of key players in the performance area have suggested measuring load time as part of the functional testing effort, while the majority of server-side performance is validated by the performance team.
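To illustrate the point about emulating your JavaScript logic: if the browser's script assembles a JSON payload before posting it, the load test has to build that same payload itself. The sketch below is a hypothetical plain-Java example - the endpoint, the field names and the idea that the client adds a timestamp to the body are all assumptions made up for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EmulateClientLogic {
    public static void main(String[] args) throws Exception {
        // The browser's JavaScript would normally build this body before POSTing it,
        // so a protocol-level test has to reproduce that presentation logic itself.
        String payload = "{\"search\":\"laptops\",\"page\":1,\"requestedAt\":"
                + System.currentTimeMillis() + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/search"))   // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("status: " + response.statusCode());
    }
}
```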

Spark - Automated Deployment & Performance Testing

We are developing an application which uses Spark & Hive to do static and ad-hoc reporting. These static reports take a number of parameters and then run over a data set. We would like to make it easier to test the performance of these reports on a cluster.
Assume we have a test cluster running with a sufficient sample data set which developers can share. To speed up development time, what is the best way to deploy a Spark application to a Spark cluster (in standalone mode) from an IDE?
I'm thinking we would create an SBT task which would run the spark-submit script. Is there a better way?
Eventually this will feed into some automated performance testing which we plan to run as a twice-daily Jenkins job. If it's an SBT deploy task, it is easy to call from Jenkins. Is there a better way to do this?
I've found a project on GitHub; maybe you can get some inspiration from it.
Maybe just add a for loop for submitting jobs and increase the number of iterations to find the performance limit - not sure if I'm right or not.
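If you would rather stay inside the JVM than shell out to the spark-submit script, Spark's spark-launcher module provides a programmatic SparkLauncher that an SBT task or a Jenkins step could call. A minimal sketch follows; the jar path, main class, master URL and report arguments are all placeholder assumptions.

```java
import org.apache.spark.launcher.SparkLauncher;

public class SubmitReportJob {
    public static void main(String[] args) throws Exception {
        // Submit the report application to a standalone cluster.
        // Every path, class name and URL below is a placeholder.
        Process spark = new SparkLauncher()
                .setAppResource("/path/to/reports-assembly.jar")   // hypothetical fat jar
                .setMainClass("com.example.reports.StaticReport")  // hypothetical main class
                .setMaster("spark://test-cluster:7077")            // standalone master URL
                .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
                .addAppArgs("--report", "daily-sales")             // hypothetical report parameters
                .launch();

        int exitCode = spark.waitFor();                             // block until the job finishes
        System.out.println("spark-submit exited with " + exitCode);
    }
}
```

Wrapping the launch in the for loop suggested above gives a crude way of submitting more and more concurrent jobs until the cluster stops keeping up.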

Automatic hibernation of application instance on CloudBees

I have a CloudBees enterprise instance that I use for performance and automated UI testing.
The free instance (which is limited in memory) cannot support the memory or requests per second that we need for testing.
I would like to have the instance automatically hibernated when I am not using it, but have it wake up when requests come in. I would configure a Jenkins job to wake the app up (by issuing a request) before kicking off my Sauce Labs-based Selenium jobs.
My question is: how do I configure automatic hibernation? The control panel has a minimum of one instance, which I guess means that the one instance stays up.
You are right - currently automatic hibernation is only for free applications. When an application is hibernated (as opposed to stopped) it will be automatically woken whenever someone needs to access it.
What you could do is have a job set your application to hibernated, say once a day (or at a certain time of day when you know it won't be needed). When it is needed again you won't need to do anything: simply accessing it will cause it to be activated (woken) again, so your test script can just ensure that is the case (and ideally set it to hibernated again after a test run). A sketch of the wake-up step follows below.
It really depends on how often the app is needed - if you can work out at what points it isn't needed and trigger the hibernation off that (e.g. after a test run), then that is ideal (you minimise cost).
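For the wake-up step of the test script, a single HTTP request to the application's URL is enough. Here is a minimal sketch, assuming a hypothetical application URL; the generous timeouts account for the first request being slow while the app wakes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class WakeUpApp {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(60))
                .build();

        // Hitting the hibernated application once causes it to be woken;
        // the URL below is a placeholder for your own application's address.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://myapp.example.com/"))
                .timeout(Duration.ofMinutes(2))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Wake-up request returned " + response.statusCode());
    }
}
```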

Can TeamCity tests be run asynchronously

In our environment we have quite a few long-running functional tests which currently tie up build agents and force other builds to queue. Since these agents are only waiting on test results, they could theoretically just hand the tests off to other machines (test agents) and then run queued builds until the test results are available.
For CI builds (including unit tests) this should remain inline as we want instant feedback on failures, but it would be great to get a better balance between the time taken to run functional tests, the lead time of their results, and the throughput of our collective builds.
As far as I can tell, TeamCity does not natively support this scenario so I'm thinking there are a few options:
Spin up more agents and assign them to a 'Test' pool, and trigger functional build configs to run on these agents (triggered by successful CI builds). While this seems the cleanest, it doesn't scale very well, as we then have the lead time of purchasing licenses and will often need to run tests in alternate environments, which would temporarily double (or more) the required number of test agents.
Add builds or build steps to launch tests on external machines, then immediately mark the build as successful so queued builds can be processed; then, when the tests are complete, mark the build as succeeded/failed. This relies on being able to update the results of a previous build (REST API perhaps?). It also feels ugly to mark something as successful and then update it as failed later, but we could always be selective in what we monitor so we only see the final result.
Just keep spinning up agents until we no longer have builds queueing. The problem with this is that it's a moving target; if we knew where the plateau was (or whether it existed) this would be the way to go, but our usage pattern means this isn't viable.
Has anyone had success with a similar scenario, or knows pros/cons of any of the above I haven't thought of?
Your description of the available options seems to be pretty accurate.
If you want live updates of build progress, you will need one TeamCity agent "busy" for each running build.
The only downside here seems to be the cost of agent licenses.
If the testing builds just launch processes on other machines, the TeamCity agent processes themselves can run on a low-end machine, and you can even run many agents on a single computer.
An extension of your second scenario could be two build configurations instead of a single one: one would start the external process, and the other would be triggered on the external process's completion and then publish all the external process's results as its own. It can also have a snapshot dependency on the starting build to maintain the relationship (a sketch of triggering such a build through the REST API follows below).
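For the "triggered on external process completion" part, the external machine can queue the publishing build configuration through TeamCity's REST API (a POST to /app/rest/buildQueue). A minimal sketch follows; the server URL, credentials and build configuration id are placeholder assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueuePublishBuild {
    public static void main(String[] args) throws Exception {
        // Queue the "publish functional test results" build configuration once the
        // external test run has finished. Id, URL and credentials are placeholders.
        String body = "<build><buildType id=\"Project_PublishFunctionalTestResults\"/></build>";
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://teamcity.example.com/app/rest/buildQueue"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("TeamCity responded with " + response.statusCode());
    }
}
```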
For anyone curious, we ended up buying more agents and assigning them to a test pool. Investigation showed that it isn't possible to update build results (I can definitely understand why this ugliness wouldn't be supported out of the box).