Why is karate-gatling slow compared to JMeter?

I have followed the example at karate-gatling-demo for creating a load test. For my use case I converted a JMeter test to karate. After making sure everything works, I compared the two. In the time it took karate-gatling to get to even 300 requests, JMeter had already made a few thousand. I thought it might have been the pause in the demo, but even after I removed it, the speed of the tests makes them unusable. I would really like to implement this, as we are already making strides towards using normal karate tests as part of our CI process. Is there a reason they are so slow?
(I am using karate-gatling version 0.8.0.RC4)
To provide some info related to the two testing situations...
JMeter: 50 threads/users with 30 second ramp up and 50 loops
Karate-Gatling: repeat scenario 50 times, ramp to 50 users over 30 seconds

This is still in the early stages of development, so this feedback helps. If possible, can you try 0.8.0.RC3 and see if that makes a difference? The test syntax needs a slight change, which you should be able to figure out from the version history. There was a fundamental change in the async model which probably has some issues.
Ideally I would love someone who knows Gatling internals to help, but this will take a little time to evolve with me looking at it.
EDIT: Gatling support was released in 0.8.0 (final) and multiple teams have reported that it is working well for them.

Selenium GRID vs TestNG parallel

This topic is the beginning of the answer I am looking for. I need to know more.
Short story:
Why use GRID if pure TestNG parallel execution seems to work just fine?
Long story:
Background:
We are running about 40 tests now, growing.
We only use one browser (chrome).
To make tests faster we do parallel testing (makes sense).
We face issues configuring the GRID solution;
in many cases we just drop it and run pure TestNG parallel.
Question:
I need to know if it even makes sense to be so stubborn about the
whole GRID. For now it only seems to consume time without giving any
additional value.
My own thoughts:
The only thing I can think of to justify GRID is running the tests
on different machines, if we actually needed to balance the
load across several servers. But at this point even my own laptop is
doing the job just fine. This situation will not change
dramatically in the near future, so why bother?
The link mentioned above claims the results of no-grid parallel
tests may become unpredictable. We do not face that. So the question
may be: unpredictable in what sense? What should we watch out for?
Thanks in advance for your help.
cheers,
Greg
The Grid acts as a load balancer and distributes tests to nodes according to their desired capabilities, while the parallel attribute in the TestNG XML just instructs the TestNG runner to trigger n tests in one go.
CAVEAT: If you do not use Grid for parallel test execution, your single host will get overloaded as you scale up the thread count. The results of no-grid parallel tests may become unpredictable because multiple sessions will fill up the heap memory quickly. A general-purpose computer has limited heap memory. You may not be facing this issue simply because you have not hit that limit yet.
Let's consider some examples:
Your target is to check functionality on Windows as well as on Mac. Without Grid you will have to run the cases twice.
You have a test case where a piece of functionality breaks in an older version of a browser and it is now time for a regression test. Without Grid you will be running the test cases multiple times, once for each older browser version.
A case that depends on different screen resolutions.
Grid can simplify the configuration effort.
It is all about minimizing the time needed to run a large number of test cases.
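To make the contrast concrete, here is a rough Java sketch of what running through a Grid hub looks like from the test's point of view. The hub URL, the application URL and the testng.xml parameters are hypothetical placeholders; the point is that the hub, not the test, decides which node and capability combination runs each session, while parallel="methods" in testng.xml only controls how many test threads are spawned locally.

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Optional;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class GridSmokeTest {

    // One driver per TestNG thread, so parallel="methods" with a thread-count is safe.
    private final ThreadLocal<WebDriver> driver = new ThreadLocal<>();

    @Parameters({"browser", "platform"})
    @BeforeMethod
    public void setUp(@Optional("chrome") String browser,
                      @Optional("WINDOWS") String platform) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browser);             // e.g. "chrome", "firefox"
        caps.setCapability("platform", platform); // e.g. "WINDOWS", "MAC"

        // The hub picks a matching node; the test never knows which machine it ran on.
        // Hub and application URLs below are placeholders.
        driver.set(new RemoteWebDriver(new URL("http://grid-hub.example.com:4444/wd/hub"), caps));
    }

    @Test
    public void homePageLoads() {
        driver.get().get("http://app-under-test.example.com/");
        // assertions against the page go here
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        if (driver.get() != null) {
            driver.get().quit();
            driver.remove();
        }
    }
}
```

With plain TestNG parallelism you would instead create a local browser per thread, and every session would compete for the same machine's CPU and heap - which is exactly where the unpredictability comes from.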

when testing production environment continuously makes sense

Let's say I have a bunch of unit tests, integration tests, and e2e tests that cover my app. Does it make sense to have these continuously running against prod, e.g. every 10 mins?
I'm thinking no, here's why:
My tests are already run after every prod deploy. If they passed and no code has changed since then, they should continue to pass, so re-running them thereafter doesn't make sense.
What I really want to test continuously is my infrastructure -- is it still running? In this case, running an API integration test every 10 minutes to check that my API is still working makes sense. So I'm dealing with a subset of my test suites -- the ones that test my infrastructure availability (integration + e2e) rather than only single bits of code (unit tests). So in practice, would I have separate test suites that test prod uptime from the suites used to test pre/post deploy?
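Something like the following Java sketch is what I have in mind for the availability subset -- a lightweight check hitting a health endpoint every 10 minutes. The endpoint URL and the alert hook are hypothetical placeholders, and in practice a monitoring tool would own the scheduling rather than a hand-rolled scheduler.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ProdApiHeartbeat {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the availability check every 10 minutes.
        scheduler.scheduleAtFixedRate(ProdApiHeartbeat::checkApi, 0, 10, TimeUnit.MINUTES);
    }

    private static void checkApi() {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/health")) // hypothetical endpoint
                    .timeout(Duration.ofSeconds(10))
                    .GET()
                    .build();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                alert("API returned HTTP " + response.statusCode());
            }
        } catch (Exception e) {
            alert("API check failed: " + e.getMessage());
        }
    }

    private static void alert(String message) {
        // In practice this would page someone or feed a monitoring system.
        System.err.println("[ALERT] " + message);
    }
}
```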
Such "redundant" verifications (they can include building as well, BTW, not only testing) offer additional datapoints increasing the monitoring precision for your actual production process.
Depending on the complexity of your production environment even the simple "is it up/running?" question might not have a simple answer and subset/shortcut versions of the verifications might not cut it - you'd only cover those versions, not the actual production ones.
For example just because a build server is up doesn't mean it's also capable of building the product successfully, you'd need to check every aspect of the build itself: availability of every tool, storage, dependencies, OS resources, etc. For complex builds it's probably simpler to just perform the build itself than to manage the code reliably checking if the build would be feasible ;)
There are 2 production process attributes that would benefit from a more precise monitoring (and for which subset/shortcut verifications won't be suitable either):
reliability/stability - the types, occurrence rates and root causes of intermittent failures (yes, those nasty surprises which could make the difference between meeting the release date or not)
performance - the avg/min/max durations of various verifications; especially important if verifications are expensive in terms of duration/resources involved; trending could be desired for planning, budgeting, production ETAs, etc
Dunno if any of these are applicable to or have acceptable cost/benefit ratios for your context, but they are definitely important for most very large/complex software projects.

Coded UI Test PC\Server Spec Requirement

I've been developing Coded UI Tests for a few months now and have optimized them as much as my knowledge allows. I have found some performance issues with regard to the running time of the tests.
Currently I have 91 tests; each one of them is quite small and uses multiple UI Maps. The time taken to run each test varies from 1 to 5 minutes, and some tests run over 20 minutes. I've watched a few of them run and have noticed that it takes a while for a test to find UI controls (sometimes it doesn't find them at all).
I suppose there are two questions here:
1) Are there optimal requirements (RAM, HD space, CPU, etc.) for a PC\Server to get the best running times?
2) Is there a way to optimize the Coded UI Tests to improve running time?
Answer to my question by Jack Zhai - MSFT can be found on MSDN Forum - Optimizing Coded UI Tests And PC\Server Spec Requirement

How often should applications be stress or load tested?

Is there a rule for how often an application should be stress or load tested? I normally do it before putting a new version into production, when the hardware changes, or when the expected number of users is known to change.
But today I was asked if this should be a standard practice for an application that is in production even if no changes are introduced. If so, how often?
It really depends on how you want to address it for your company's needs. Personally, we load test our integration (test) builds daily - just like the builds go out. After the build runs at approximately 1 AM, we have it scripted to be load tested as well. Our goal is specifically to look for build-over-build changes in performance. Even if we do not introduce changes into the code, the servers that the code is load tested on still receive updates/patches/hot fixes/service packs/etc. At worst, once automated, it provides additional historical data.
We are going this route (build relativity) because it is cost prohibitive to try and replicate our hardware environment in production. In the event that we see a sudden change (or gradual changes) to key performance monitors, we can look into what changesets were introduced at that time and isolate potential code changes that adversely impacted performance.
From the sound of it, you are testing against a lab that replicates production? That is a different approach than ours, because we are going under the assumption that most of our bottlenecks would be code-induced and not directly dependent on hardware. We use VMs to approximate, but not duplicate, our production environment.
One thing that affects system performance, even though the code is unchanged, is data.
An example might be performance of a database query. As data is added to a table the cost of maintaining indexes goes up. Page splits in the index can degrade performance. As indexes grow, the number of 'levels' in the index will every so often have to be increased. When that happens you see a sudden, apparently inexplicable change in performance.
Running stress tests in a production environment is not always possible - it affects your day-to-day business. More often systems are instrumented to provide on-going feedback about performance. Maybe using something like ganglia. The data are used to detect issues and for capacity planning.
I think that whenever you change something in the application - code, data files, use-cases - and that includes but is not limited to expected amount of users, you should test it.
My two cents: sometimes you won't have changed anything and your site's performance could still suffer. It could be from the app handling too much data (i.e. caches overflowing). It could be from third-party advertisements on the site slowing things down. Heck, it could be because of faulty RAM!
The main thing is that while it's generally advised you do testing after any known change, it's also not a bad idea to do occasional performance testing to check for possible unknown changes that could affect performance.
My company, BrowserMob, provides a free website monitoring service that runs real Selenium scripts every X minutes against your site. While it's not a load test, it definitely can help you identify trends and bottlenecks in your production site.
This depends on how mission-critical your system is... If it is just a small tool that one can do without, test once when you put it into production. If your life depends on it, test after every single build.
As far as I'm concerned, that is.
Depends on how much effort the testing requires. If it is easy enough, more testing never hurts. However, I see no reason to test if there are no changes expected. If this would lead to the software not tested for a long time, then it might be appropriate to run the tests from time to time in case there are unexpected changes.
Needless to say, it also depends on whether your software is running a nuclear reactor or a bulletin board.
If no changes are introduced in your product and the load testing simply repeats the same process over and over for some interval, I don't see the benefit of re-running.
I used to have stress tests as part of my ant file, so I could run those tests every night, if I wanted, and I would run them when I was making any changes, or testing a possible new change, that would in one way or another impact what I was testing, to see if there was any improvement.
I think how often would depend on your environment.
If you can't really stress test in development then you may have to wait until you get to QA, where you are testing just before you go to production.
I think the sooner you do it the better, as you can find problems and fix them faster: if you know it worked two nights ago and failed the test last night, then some change made during the day caused it.
I used junitperf for many of my stress tests.
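For example, a JUnitPerf stress decorator looks roughly like the sketch below. The SearchServiceTest case, the method name and the numbers are made up for illustration; the decorator classes are JUnitPerf's usual JUnit 3 style wrappers.

```java
import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class SearchStressTest {

    // Hypothetical JUnit 3 test case standing in for a real functional test.
    public static class SearchServiceTest extends TestCase {
        public SearchServiceTest(String name) {
            super(name);
        }

        public void testSearchReturnsResults() {
            // call the search service and assert on the results
        }
    }

    public static Test suite() {
        Test searchTest = new SearchServiceTest("testSearchReturnsResults");

        // Simulate 25 concurrent users, each running the test 10 times...
        Test loadTest = new LoadTest(searchTest, 25, 10);

        // ...and fail the whole thing if it takes longer than 60 seconds.
        Test timedTest = new TimedTest(loadTest, 60_000);

        TestSuite suite = new TestSuite("Nightly stress tests");
        suite.addTest(timedTest);
        return suite;
    }
}
```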
I think we don't want to stress test Notepad. It's a joke. Never mind =).
Stress testing is not for all applications. :-)
If your software can kill someone, I think you should.
Imagine someone dying because the blood didn't arrive because of some timeout in the system.
"Timeout.exception: your heart are not coming anymore. Try again later"

Setting up a lab for developers performance testing

Our product has earned a bad reputation in terms of performance. Well, it's a big enterprise application, 13 years old, that needs a refresh, and specifically a boost in its performance.
We decided to address the performance problem strategically in this version. We are evaluating a few options on how to do that.
We do have experienced load test engineers equipped with the best tools on the market, but usually they get a stable release late in the version's development life cycle, so in the last versions developers didn't have enough time to fix all their findings. (Yes, I know we need to deliver stable versions earlier; we are working on that process as well, but it's not in my area.)
One of the directions I am pushing is to set up a lab environment installed with the nightly build so developers can test the performance impact of their code.
I'd like this environment to be constantly loaded by scripts simulating real users' experience. On this loaded environment each developer will have to write a specific script that tests his code (i.e. a single user's experience in a real-world environment). I'd like to generate a report that shows each iteration's impact on existing features, as well as the performance of new features.
I am a bit worried that I'm aiming too high and that it will turn out to be too complicated.
What do you think of such an idea?
Does anyone have an experience with setting up such an environment?
Can you share your experience?
It sounds like a good idea, but in all honesty, if your organisation can't get a build to the expensive load test team it has employed just for this purpose, then it will never get your idea working.
Go for the low hanging fruit first. Get a nightly build available to the performance testing team earlier in the process.
In fact, if this version is all about performance, why not have the team just take this version and address all the performance issues that came in late in the iteration for the last version?
EDIT: "Don't developers have a responsibility to performance test code" was a comment. Yes, true. I personally would have every developer have a copy of YourKit java profiler (it's cheap and effective) and know how to use it. However, unfortunately performance tuning is a really, really fun technical activity and it is possible to spend a lot of time doing this when you would be better developing features.
If your developer team are repeatedly developing noticeably slow code then education on performance or better programmers is the only answer, not more expensive process.
One of the biggest boost in productivity is an automated build system which runs overnight (this is called Continuous Integration). Errors made yesterday are caught today early in the morning, when I'm still fresh and when I might still remember what I did yesterday (instead of several weeks/months later).
So I suggest to make this happen first because it's the very foundation for anything else. If you can't reliably build your product, you will find it very hard to stabilize the development process.
After you have done this, you will have all the knowledge necessary to create performance tests.
One piece of advice though: Don't try to achieve everything at once. Work one step at a time, fix one issue after the other. If someone comes up with "we must do this, too", you must do the same triage as you do with any other feature request: How important is this? How dangerous? How long will it take to implement? How much will we gain?
Postpone hard but important tasks until you have sorted out the basics.
Nightly builds are the right approach to performance testing. I suggest you require scripts that run automatically each night. Then record the results in a database and provide regular reports. You really need two sorts of reports:
A graph of each metric over time. This will help you see your trends
A comparison of each metric against a baseline. You need to know when something drops dramatically in a day or when it crosses a performance threshold.
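For the baseline comparison, a minimal Java sketch could look like the following. The metric names, numbers and the 10% tolerance are made up; in a real setup they would come from the results database mentioned above.

```java
import java.util.Map;

public class PerformanceBaselineCheck {

    /**
     * Compares tonight's measurements against a stored baseline and flags
     * anything that regressed by more than the allowed tolerance.
     */
    public static boolean withinBaseline(Map<String, Double> baselineMillis,
                                         Map<String, Double> nightlyMillis,
                                         double tolerance) {
        boolean ok = true;
        for (Map.Entry<String, Double> entry : baselineMillis.entrySet()) {
            String metric = entry.getKey();
            double baseline = entry.getValue();
            double current = nightlyMillis.getOrDefault(metric, Double.MAX_VALUE);
            if (current > baseline * (1.0 + tolerance)) {
                System.err.printf("REGRESSION: %s took %.0f ms (baseline %.0f ms)%n",
                        metric, current, baseline);
                ok = false;
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        // Hypothetical numbers; in practice these would come from the results database.
        Map<String, Double> baseline = Map.of("login", 850.0, "searchReport", 2300.0);
        Map<String, Double> tonight  = Map.of("login", 900.0, "searchReport", 3100.0);

        // Allow 10% drift before failing the nightly run.
        if (!withinBaseline(baseline, tonight, 0.10)) {
            System.exit(1);
        }
    }
}
```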
A few other suggestions:
Make sure your machines vary similarly to your intended environment. Have low and high end machines in the pool.
Once you start measuring, never change the machines. You need to compare like to like. You can add new machines, but you can't modify any existing ones.
We built a small test bed to do sanity testing - i.e. did the app fire up and work as expected when the buttons were pushed, did the validation work, etc. Ours was a web app and we used Watir, a Ruby-based toolkit, to drive the browser. The output from those runs is created as XML documents, and our CI tool (CruiseControl) could output the results, errors and performance as part of each build log. The whole thing worked well and could have been scaled onto multiple PCs for proper load testing.
However, we did all that because we had more bodies than tools. There are some high-end stress test harnesses that will do everything you need. They cost, but that will be less than the time spent hand-rolling. Another issue we had was getting our devs to write Ruby/Watir tests; in the end that fell to one person, and the testing effort was pretty much a bottleneck because of that.
Nightly builds are excellent, lab environments are excellent, but you're in danger of muddling performance testing with straight up bug testing I think.
Ensure your lab conditions are isolated and stable (i.e. you vary only one factor at a time, whether that's your application or a windows update) and the hardware is reflective of your target. Remember that your benchmark comparisons will only be bulletproof internally to the lab.
Test scripts written by the developers who wrote the code tend to be a toxic thing. They don't help you drive out misunderstandings at implementation time (since the same misunderstanding will be in the test script), and there is limited motivation to actually find problems. Far better is to take a TDD approach and write the tests first as a group (or have a separate group write them), but failing that you can still improve the process by writing the scripts collaboratively. Hopefully you have some user stories from your design stage, and it may be possible to replay logs for real-world experience (with the app varying).