I'm in the middle of picking tools to load test my Ruby on Rails app. So far I'm trying out -
apachebench
autobench
httperf
selenium
trample
Is there anything else worth looking at? I don't have a ton of hardware, so efficiency is a concern.
The famous ones (at least for me):
JMeter
The Grinder
OpenSTA
All of them support simulating concurrent users, can generate decent load, and support distributed testing if required (with distributed agents). JMeter and OpenSTA have a recorder, and the recorded scripts are relatively easy to parameterize. For The Grinder, I'm not sure.
OpenSTA is the most polished of the three and has the most features (but it is not portable).
JMeter is my preferred one mostly because I know it well and because testing can be easily automated (e.g. to be included in a build). Have a look at the user manual to get started. If you need to record over SSL, check BadBoy.
More interesting reading at Shootout: Load Runner vs The Grinder vs Apache JMeter.
Check out JMeter.
We are currently using JMeter for API performance testing in distributed mode (1 master + 3 slaves), as we need to generate 10k requests.
We are now using Karate for API functional testing and have successfully integrated it with Gatling using Maven dependencies. As the documentation says, I can inject users and a duration into these scripts, then run them and generate a report (tested with 10 users).
Kindly guide me on the queries below:
Is it possible to run these Karate-Gatling scripts in distributed mode, as we do with JMeter?
How many users can be injected using Karate-Gatling scripts on a single machine (AWS/GCP mini instance/VM)?
I guess this might vary depending on how fast the application responds and on the volume.
I have gone through JMeter vs Gatling comparisons, and it looks like clustering/distributed mode is supported only in the paid version of Gatling.
As per Gatling Performance Testing Pros and Cons article:
If you don’t want to pay for Gatling FrontLine, but you need to take your load test a little bit further, it may not be so easy to distribute the load as it is with JMeter. Despite that, not all is lost, as Gatling actually provides a way to distribute the load with the free version of the tool.
The way of distributing load in Gatling can be found here, but the main idea of Gatling's distribution is based on a Bash script that takes care of executing the Gatling scripts located on the slave machines, which then send the logs generated by the simulation to the master machine, where the consolidated report is built.
So you can kick off several Gatling instances on several hosts and use the Bash script provided in order to run your test simultaneously on different machines. You might also want to use the ssh-copy-id command to avoid entering the password for each machine.
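To make the idea concrete, here is a rough sketch of that distribution flow in Node/TypeScript rather than the Bash script the article describes: run the simulation on each worker over SSH, pull the simulation.log files back, then build one consolidated report on the master. The host names, paths and simulation class are placeholders, and the -nr (no reports) / -ro (reports only) flags and folder handling should be checked against your Gatling version.

```typescript
// distribute-gatling.ts -- a rough sketch; host names, paths and the simulation
// class are placeholders for your own environment.
import { execSync } from 'child_process';

const WORKERS = ['worker1.example.com', 'worker2.example.com'];
const GATLING_HOME = '/opt/gatling';           // assumed to be the same path on every host
const SIMULATION = 'simulations.MySimulation'; // placeholder simulation class
const COLLECT_DIR = './collected-logs';

execSync(`mkdir -p ${COLLECT_DIR}`);

for (const host of WORKERS) {
  // 1. Run the simulation on the worker without generating a per-node report (-nr).
  execSync(`ssh ${host} "cd ${GATLING_HOME} && ./bin/gatling.sh -nr -s ${SIMULATION}"`, {
    stdio: 'inherit',
  });
  // 2. Copy the worker's simulation.log back to the master, renamed per host.
  execSync(
    `scp ${host}:${GATLING_HOME}/results/*/simulation.log ${COLLECT_DIR}/simulation-${host}.log`,
    { stdio: 'inherit' }
  );
}

// 3. Build the consolidated report from all collected logs (-ro = reports only).
execSync(`${GATLING_HOME}/bin/gatling.sh -ro ${COLLECT_DIR}`, { stdio: 'inherit' });
```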
This is my first time playing with Cucumber and also creating a suite which tests an API. My question is: when testing the API, does it need to be running?
For example, I've got this in my head:
Start the Express server as a background task.
Then, once it has booted up (how would I know that has happened?), run the Cucumber tests.
I don't really know the best practices for this, which I think is the main problem here, sorry.
It would be helpful to see a .travis.yml file or a bash script.
I can't offer you a working example. But I can outline how I would approach the problem.
Your goal is to automate the verification of a rest api or similar. That is, making sure that a web application responds in the expected way given a specific question.
For some reason you want to use Cucumber.
The first thing I would like to mention is that Behaviour-Driven Development, BDD, and Cucumber are not testing tools. The purpose of BDD and Cucumber is to act as a communication tool between those who know what the system should do, those who write code to make it happen, and those who verify the behaviour. That's why the examples are written in almost natural language.
How would I approach the problem then?
I would verify the vast majority of the behaviour by calling the methods that make up the API from a unit test or a Cucumber scenario. That is, verify that they work properly without a running server. And without a database. This is fast and speed is important. I would probably verify more than 90% of the logic this way.
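For example, a minimal sketch of that first step, assuming your route handlers delegate to plain functions; calculatePrice and its module path are hypothetical stand-ins for your own code, and the Cucumber package name depends on your version:

```typescript
// steps/price.steps.ts -- a minimal sketch; calculatePrice and ../src/pricing
// are hypothetical stand-ins for whatever plain functions back your API.
import assert from 'assert';
import { Given, When, Then } from '@cucumber/cucumber';
import { calculatePrice } from '../src/pricing';

let basket: { sku: string; quantity: number }[] = [];
let total: number;

Given('a basket with {int} units of {string}', function (quantity: number, sku: string) {
  basket = [{ sku, quantity }];
});

When('the price is calculated', function () {
  // Call the function directly -- no HTTP server, no database, so this runs fast.
  total = calculatePrice(basket);
});

Then('the total should be {int}', function (expected: number) {
  assert.strictEqual(total, expected);
});
```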
I would verify the wiring by firing up a server and verify that it is possible to reach the methods verified in the previous step. This is slow so I would do as little as possible here. I would, if possible, fire up the server from the code used to implement the verification. I would start the server as a part of the test setup.
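A minimal sketch of that setup using cucumber-js hooks, assuming ../src/app exports the Express app without calling listen() itself; the promise resolving inside the listen callback is also the answer to "how would I know it has booted up?":

```typescript
// support/server.hooks.ts -- a minimal sketch; it assumes ../src/app exports the
// Express app without calling listen() on its own.
import { BeforeAll, AfterAll } from '@cucumber/cucumber';
import { Server } from 'http';
import { app } from '../src/app'; // hypothetical module exporting the Express app

let server: Server;

BeforeAll(async function () {
  // The promise resolves only once Express is actually accepting connections.
  await new Promise<void>((resolve) => {
    server = app.listen(3000, () => resolve());
  });
});

AfterAll(async function () {
  // Shut the server down so the test process can exit cleanly.
  await new Promise<void>((resolve, reject) => {
    server.close((err) => (err ? reject(err) : resolve()));
  });
});
```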
This didn't involve any external tools. It only involved your programming language and some libraries. The reason for doing it this way is that I want it to be as portable as possible. The fewer tools you use, the easier it gets to work with something.
It has happened that I have done some of the setup in my build tool and had it start a server before running the integration tests. This is usually more heavyweight and something I avoid if possible.
So, verify the behaviour without a server. Verify the wiring with a server. It is important to only verify the wiring in this step. The logic has been verified earlier, there is no need to repeat it.
Speed, as in a fast feedback loop, is very important. Building and testing the entire system should, in a good world, take seconds rather than minutes.
I have a working example if you're interested (running on travis).
I use docker-compose to launch the API & required components such as database, then I run cucumber-js tests against the running stack.
docker-compose is also used for local development & testing.
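For illustration, a small sketch of a step definition that talks to the already running stack over plain HTTP; the API_BASE_URL variable and the requested paths are placeholders, and the global fetch assumes Node 18+:

```typescript
// features/step_definitions/api.steps.ts -- a small sketch; API_BASE_URL and the
// requested paths are placeholders, and global fetch assumes Node 18+.
import assert from 'assert';
import { When, Then } from '@cucumber/cucumber';

const baseUrl = process.env.API_BASE_URL ?? 'http://localhost:3000';

let response: Response;

When('I GET {string}', async function (path: string) {
  // The stack is already up (e.g. via `docker-compose up -d`), so the test
  // simply calls it over HTTP like any other client would.
  response = await fetch(`${baseUrl}${path}`);
});

Then('the response status should be {int}', function (status: number) {
  assert.strictEqual(response.status, status);
});
```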
I've also released a library to help writing cucumber for APIs, https://github.com/ekino/veggies.
I have a system to test, which is a video ads distribution technology. I need to load every video for about 1-2 minutes to serve the ads. The videos are played in a Flash client and streamed as FLV streams, like on YouTube.
The reason I need to test it via browsers only -- and every other method won't work -- is to stress test both the video streaming servers and the ad servers simultaneously while displaying ads in real time.
I have used Selenium, WatiN, Automation Anywhere and many other automation tools. However, when I try to start something like 10,000 browsers on my machine (32 GB RAM, 16-core CPU), none of them is able to do the job.
With Selenium, I have been able to start the most Firefox instances so far, but that's still too few: half of the instances don't run the test.
Any suggestions to do with Selenium?
You aren't going to run 10,000 browsers on your machine. That would give 3.2 MB of physical memory per browser instance, and I'm pretty sure Firefox just won't like that.
You could create a JMeter script that hits your server with many threads. It won't interact with the UI but would simulate the load of many clients hitting whatever URLs you tell it. I believe it also includes the ability to record a session and play it back for easy setup of your sessions.
Selenium isn't really optimized for load/stress testing, especially if you're running your browsers locally. Running 1000+ browsers is going to choke even the beefiest server. Though RAM is an obvious bottleneck, you also have limited CPU resources and bandwidth. The latter being a primary concern if you are loading videos.
Not to mention you'd be testing from a single IP with 10k browsers, so load balancing may not kick in properly, as well as the actual distribution of video ads to specific virtual users.
If you want to stick with existing Selenium tests, I've had good experiences with BrowserMob. They basically have a huge grid to do real browser load-testing, distributed across AWS.
Another recommendation would be an actual performance testing tool. I'd recommend Soasta CloudTest. They have a free version that runs 100 users so you can see if it will be a good fit for you. I have found that scripting for CloudTest is relatively simple.
Disclaimer: My experiences with both companies have been as a paying customer and I have never worked for either.
If you are using a Windows machine, then in my experience there is a limit on the number of browser window instances that can be opened. In my last test it was restricted to somewhere between 100 and 150 browser windows.
I would recommend using a headless robot, which doesn't require opening a browser window. I think the latest version of Selenium has that capability. But since this seems to be more of a load test, as you are trying to simulate 10,000+ user instances, I would recommend using a load testing tool like JMeter or LoadRunner.
It looks to me like you are trying to verify what the client will see under high traffic, no?
In that case, Joel is quite correct. If you absolutely have to see what the client sees, you could use threaded hits and just dump the results in a database. That'll show you anything the client would see anyway, and it's a lot easier to sort through than thousands of browser instances.
Either way, your client will not see errors if there are no errors present on the server side. If you're testing functionality in bandwidth-restricted, CPU-intensive, or memory-intensive environments, those conditions are much more easily achieved than by running thousands of browser instances.
Your post smells of some form of ad-based fraud to me, but either way: have you considered using different web browsers besides Firefox? PhantomJS is a headless webkit-based browser that is compatible with Selenium. It supports all the core browser features like DOM handling, CSS selectors, Javascript and Canvas. I do not know if it supports Flash.
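If you want to try that route, here is a minimal sketch using the selenium-webdriver Node bindings. It assumes the phantomjs binary is on the PATH; note that newer selenium-webdriver releases have dropped PhantomJS support, so you may need to swap in headless Chrome or Firefox instead (and, as said above, Flash support is uncertain):

```typescript
// headless-check.ts -- a minimal sketch with the selenium-webdriver Node bindings.
// Assumes the phantomjs binary is on the PATH; the URL below is a placeholder.
import { Builder } from 'selenium-webdriver';

async function main(): Promise<void> {
  const driver = await new Builder().forBrowser('phantomjs').build();
  try {
    await driver.get('http://your-ad-player.example.com/'); // placeholder URL
    const title = await driver.getTitle();
    console.log(`Loaded page with title: ${title}`);
  } finally {
    await driver.quit();
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```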
This post has a decent list of other headless and automatable web-browsers that you might consider.
Also, if each browser instance is instantiating a Flash plugin, don't neglect the possibility that the issue could be with Flash and not Firefox. Alternatively, why instantiate several different Firefox processes? Can you accomplish what you want through the use of tabs instead?
The in-house way to do this with Selenium is to use BrowserMob Proxy and multiple browser agents to recreate the experience of different users. Changing the IP is more difficult because it requires changing your home network.
Here is a good example
I have been looking around Stack Overflow for automated GUI tools for testing our web app's GUI from a business analyst's point of view, which means strictly requirements-driven record-and-playback testing, since we are not really programmers.
We have used Selenium in the past, but unfortunately it is no longer compatible with Firefox 4.
Is there a tool similar to Selenium that allows recording and playback of GUI tests and does not require much (or any) scripting on a Windows platform? Thanks.
You can use the Firefox Add-on Compatibility Reporter to get Selenium working on FF4:
https://addons.mozilla.org/en-US/firefox/addon/add-on-compatibility-reporter/
Or alternatively drop down to FireFox 3.x and use that just for your testing!
For the server component, Selenium RC (necessary to execute tests), you must run Selenium RC 2.0b3 (or higher, when available) to be compatible with Firefox 4. I have used it successfully with FF4.
Selenium IDE, the recording tool for Firefox, is indeed not yet available as a plugin for FF4 (but I speculate it will be coming soon).
I think you can benefit from AutoIt (http://www.autoitscript.com/site/autoit/). I had been using it to test Windows-based GUIs, but to the best of my knowledge there are also lots of scripts for testing/playing online games, so it is applicable to websites as well.
It does not require deep technical knowledge, but of course it is much better, and frequently necessary, to optimize the generated code. I started my experience with this tool and was able to do my work flawlessly.
At one company I developed automated tests for a web app by means of TestPartner (from Compuware); it was one of the best tools I've ever worked with. It generates VB code quite 'intelligently' and supports the user with administration features. But I'm not sure whether it is possible to use it without paying.
Good luck!
In my current project we are testing our ASP.NET GUI using WatiN and MbUnit.
When I was writing the tests I realized that it would be great if we could also use all of these for stress testing. Currently we are using Grinder for stress testing, but then we have to script our cases all over again, which for many reasons isn't that good.
I have been trying to find a tool that can use my existing tests to create load on the site and record stats, but so far I have found nothing. Is there such a tool, or is there an easy way to create one?
We have issues on our build server when running WatiN tests, as it often throws timeouts trying to access the Internet Explorer COM component. It seems to hang randomly while waiting for the whole page to load.
Given this, I would not recommend it for stress testing, as the results will be inaccurate and the tests are likely to be slow.
I would recommend JMeter for making threaded calls to the HTTP requests that your GUI is making
For load testing there is a tool which looks promising: LoadStorm. It is free for 25 users and has zero deployment needs, as it is a cloud-based service.
You could build a load controller for your stress testing. It could take your watin tests and run them in a multithreaded/multiprocessed way.
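The idea is language-agnostic. As a rough sketch (here in Node/TypeScript, assuming your WatiN/MbUnit suite can be launched from the command line; the runner command and arguments below are placeholders), such a controller just spawns N copies of the suite in parallel and records how long each run takes:

```typescript
// load-controller.ts -- a rough sketch of the "load controller" idea: launch N
// copies of an existing UI test suite in parallel and record elapsed times.
// The runner command and arguments are placeholders for your own test runner.
import { spawn } from 'child_process';

const CONCURRENT_USERS = 10;      // how many parallel test runs to start
const RUNNER = 'gallio.echo.exe'; // placeholder: your MbUnit/WatiN test runner
const ARGS = ['MyGuiTests.dll'];  // placeholder arguments

function runOnce(id: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const child = spawn(RUNNER, ARGS, { stdio: 'ignore' });
    child.on('error', reject);
    child.on('exit', (code) => {
      if (code === 0) resolve(Date.now() - started);
      else reject(new Error(`run ${id} exited with code ${code}`));
    });
  });
}

async function main(): Promise<void> {
  // Start all runs at once so they load the site concurrently.
  const runs = Array.from({ length: CONCURRENT_USERS }, (_, i) => runOnce(i));
  const durations = await Promise.all(runs);
  const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
  console.log(`Average run time under load: ${avg.toFixed(0)} ms`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```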
If you are comfortable using Selenium instead of WatiN, check out BrowserMob for browser-based load testing. I'm one of the Selenium RC authors and started BrowserMob to provide a new way to load test. By using real browsers, rather than simulated traffic, tests end up being much easier to script and maintain.