My company monitors the performance and availability of websites and mobile applications via functional testing in the cloud; we're looking to expand our technology to include Selenium tests. We use Ruby on Rails (RoR) systems to remotely run our functional tests intermittently from a number of locations and save the data in MySQL for reporting/alerting purposes.
We anticipate including Selenium RC on each of our monitoring servers to execute remote tests. We may evolve to running tests from multiple machines in each location (e.g. different flavors of OS, or for scalability purposes).
Since we already have a controller for managing the runs of our tests in various locations, would Selenium Grid be overkill or a necessity?
Any other suggestions?
Well, the Grid does not actually manage (as in schedule) the different runs of tests; it is just a collection of Remote Controls (RCs) connected to a Hub, which distributes the tests amongst the machines running the different RCs when their execution is requested.
As for scalability, if you mean stress/load testing, then I suggest a different tool (something like JMeter). It could be done with Selenium, but it would require a great number of RCs connecting to the same server, which would probably mean several machines running many RCs each. RCs are rather resource-heavy if you need many of them (and you will for stress testing).
Running different OSs and browsers from various locations should be no problem though, as long as you specify relevant profiles for each; I'd say this is the best/main use for Selenium (other than regression testing during development).
Overall, I'd say it's worth it to put up a Grid and RCs on each of your servers, though you can probably manage with a single hub (and thus a single Grid) and have all RCs connect to it.
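To give a rough idea of what the test side looks like, here is a minimal sketch using the Python Remote client (hub address, capabilities and URL are placeholders; the same idea applies with the older RC client API): the test only names the hub and the environment it wants, and the hub forwards the session to whichever connected node matches.

    # Minimal sketch: the test knows only the hub's address and the environment
    # it wants; the hub routes the session to a matching node/RC.
    from selenium import webdriver

    driver = webdriver.Remote(
        command_executor="http://hub.example.internal:4444/wd/hub",  # placeholder hub
        desired_capabilities={"browserName": "firefox", "platform": "LINUX"},
    )
    try:
        driver.get("http://www.example.com/")
        print(driver.title)
    finally:
        driver.quit()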
I am new to Appium/Selenium parallel testing and I was wondering if one could run different tests concurrently across multiple devices? My team needs to reduce the total runtime of our UI tests, and we are not concerned with different OS versions affecting the behaviour of the application for these specific tests. I have been reading through many posts and trying to search for answers, but all I can seem to find on the internet are articles, tutorials and forums on how to run the same test in parallel on multiple devices.
Can I run different tests concurrently on multiple devices without kicking off different tests manually, or is that a limitation of Appium? Ideally this would be implemented using an open source solution.
(Right now we are trying to use a JUnit approach for testing due to specific limitations of other tools. All tests are being written in Java.)
Thanks for your time.
Depending on your setup, you can accomplish this. However, a lot of your build automation and device management will need to be custom-built by you or your team, so you will not be able to use an out-of-the-box solution for this.
I've accomplished the same with both Selenium and Appium -- you will need a test framework that allows for test execution with parameters, and your devices will need to be connected to separate USB hubs that each have their own virtual server attached.
Using NUnit, here's my approach:
Generate .txt files for each different set of tests I want to run -- test_list_1.txt, test_list_2.txt, etc. Each list contains a different group of test cases to run.
Write a build script to clean & build your project from scratch -- for C#, I use Cake.
Set up a job in Jenkins that executes your build script and calls NUnit's console runner, which takes a test_list as a parameter. This initiates a test execution against a list of test cases (the test-list mechanism is sketched below).
You should be able to build your Jenkins job against any test list you want, so you now have the ability to run your automation against different tests, as mentioned in your problem description.
Connect your virtual machines (which connect to your Appium devices) to Jenkins and add them as executors on your job. Now you have multiple machines to run your job against.
With this set up, you can run as many jobs as you have machines -- 4 VMs means 4 jobs, which means you can run 4 different sets of test cases concurrently.
Setting this up on my end was completely custom -- I used certain tools to accomplish individual steps, but it worked for our needs and we did accomplish concurrent execution against different sets of test cases.
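The setup above is NUnit/C#-specific; purely to illustrate the test-list mechanism, here is a rough Python-unittest equivalent (file names and test names are hypothetical) that runs whichever subset of tests a list file names:

    # Hypothetical sketch: run only the tests named in a list file, so each
    # Jenkins job (or machine) can be handed a different subset.
    import sys
    import unittest

    def run_from_list(list_path):
        with open(list_path) as f:
            # one dotted test name per line, e.g. "tests.login.LoginTests.test_valid_user"
            names = [line.strip() for line in f if line.strip()]
        suite = unittest.TestLoader().loadTestsFromNames(names)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)

    if __name__ == "__main__":
        run_from_list(sys.argv[1])  # e.g. python run_list.py test_list_1.txt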
What you are asking for is basically not possible.
You can't run different test cases on different devices.
Though you can run the same test cases on a wide range of devices using Hive, BrowserStack or AWS Device Farm.
Hope this helps.
You can run your tests locally on multiple devices by creating multiple instances of the Appium server. Each Appium server instance should run on its own address/port, so you should set your capabilities for each instance accordingly.
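As a rough sketch with the Appium Python client (ports, device serials and the app path are placeholders, and exact capability names depend on your Appium/driver version), each device gets its own server instance and its own capability set:

    # Hypothetical sketch: two Appium server instances (started separately,
    # e.g. on ports 4723 and 4725), each driving a different physical device.
    from appium import webdriver

    caps_device_1 = {
        "platformName": "Android",
        "deviceName": "device_1",
        "udid": "SERIAL_1",        # adb serial of the first device
        "app": "/path/to/app.apk",
    }
    caps_device_2 = {
        "platformName": "Android",
        "deviceName": "device_2",
        "udid": "SERIAL_2",
        "app": "/path/to/app.apk",
    }

    driver_1 = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps_device_1)
    driver_2 = webdriver.Remote("http://127.0.0.1:4725/wd/hub", caps_device_2)
    # each driver can now execute a *different* test script concurrently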
But there is another solution as well, though it's a bit costly: AWS Device Farm. AWS provides multiple real devices hosted on their servers which you can use to execute your customised test suites. They initially give you 1000 free test minutes. You have to create a Maven project for your test scripts. I prefer using TestNG rather than JUnit.
I can't find any question/answer about this (probably I just don't know how to search for it...)
Could somebody give me a general idea of how to execute 200+ Selenium WebDriver tests (Python) from cloud servers/tools?
Thanks!!
rgzl
Another way is Sauce Labs; using this service you'll be able to just send your Selenium Java/Python tests to their cloud infrastructure for execution. The benefits of such testing are obvious – no need to waste time and resources setting up and maintaining your own VM farm, and additionally you can run your test suite in various browsers in parallel. Also, there's no need to share any sensitive data, source code or databases.
As said in this article:
Of course inserting this roundtrip across the Internet is not without cost. The penalty of running Selenium tests this way is that they run quite slowly, typically about 3 times slower in my experience. This means that this is not something that individual developers are going to do from their workstations.
To ease the integration of this service into your projects, you may have to write some kind of saucelabs-adapter that does the necessary SSH tunnel setup/teardown and Selenium configuration automatically as part of a test.
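From the test's point of view the change is small; here is a hedged sketch with the Python bindings (credentials are placeholders, and the exact endpoint and capability names come from Sauce Labs' documentation and your account) -- only the executor URL changes, the test itself stays the same:

    # Placeholder credentials; the test drives a remote browser in the vendor's
    # cloud instead of a local one.
    from selenium import webdriver

    SAUCE_USER = "your-username"
    SAUCE_KEY = "your-access-key"

    driver = webdriver.Remote(
        command_executor=f"https://{SAUCE_USER}:{SAUCE_KEY}@ondemand.saucelabs.com/wd/hub",
        desired_capabilities={"browserName": "chrome", "platform": "Windows 10"},
    )
    try:
        driver.get("https://www.example.com/")
    finally:
        driver.quit()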
Here's a global idea:
Use Amazon Web Services.
Using AWS, you can have a setup like this:
1 Selenium Grid. IP: X.X.X.X
100 Selenium nodes connecting to X.X.X.X:4444/wd/register
Each Selenium node has a node config allowing, say, 2 maxSessions at once (depending on instance size, of course).
Also have a continuous integration server like Jenkins run your Python tests against the X.X.X.X grid.
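To give a rough idea of the Python side (the hub address is the X.X.X.X placeholder from above, everything else is illustrative), something as simple as a thread pool fanning sessions out to the hub works; the hub queues whatever exceeds the nodes' combined maxSessions:

    # Hypothetical sketch: fan a batch of checks out against the grid hub.
    from concurrent.futures import ThreadPoolExecutor
    from selenium import webdriver

    HUB = "http://X.X.X.X:4444/wd/hub"

    def run_one(url):
        driver = webdriver.Remote(
            command_executor=HUB,
            desired_capabilities={"browserName": "chrome"},
        )
        try:
            driver.get(url)
            return driver.title
        finally:
            driver.quit()

    urls = [f"https://www.example.com/page/{i}" for i in range(10)]  # placeholder URLs
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(run_one, urls)))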
We have several parallel development groups working on different things in separate environments. Each group has a setup of one Jenkins server and two Windows slaves that executes Selenium NUnit tests.
Is it possible to have all the slave instances in a pool that each of the Jenkins servers can pick from? We are using JNLP because there are issues with some of the browser tests that require running on an interactive desktop. I thought perhaps I could start a JNLP slave for each server instance on each machine, but that seemed the wrong way, as each server would have no knowledge of the other servers' use of it. Is there any way to make a slave available to multiple servers?
I don't think you can do what you are looking for.
You can run multiple slaves on one computer, but as you said, there is no way to keep multiple servers from trying to access the same desktop.
A better solution is probably to combine your Jenkins servers. You can use the security settings and views to set it up so that regular users are not even aware of the other projects being run in parallel, while allowing one Jenkins server to coordinate all of the builds (which is what you want).
You may want to check with CloudBees Ops Center (http://www.cloudbees.com/joc), in particular, the Share Executors (Slaves) Between Masters feature. That would do exactly what you want, but for a bit of a price.
Practical uses of virtualization in software development are about as diverse as the techniques to achieve it.
Whether running your favorite editor in a virtual machine, or using a system of containers to host various services, which use cases have proven worth the effort and boosted your productivity, and which ones were a waste of time?
I'll edit my question to provide a summary of the answers given here.
It would also be interesting to read about the virtualization paradigms employed, as they have become quite numerous over the years.
Edit : I'd be particularly interested in hearing about how people virtualize "services" required during development, over the more obvious system virtualization scenarios mentioned so far, hence the title edit.
Summary of answers :
Development Environment
Allows encapsulation of a particular technology stack, particularly useful for build systems
Testing
Easy switching of OS-specific contexts
Easy mocking of networked workstations in an n-tier application context
We deploy our application into virtual instances at our host (Amazon EC2). It's amazing how easy that makes it to manage our test, QA and production environments.
Version upgrade? Just fire up a few new virtual servers, install the software to be tested/QA'd/used in production, verify the deployment went well, and throw away the old instances.
Need more capacity? Fire up new virtual servers and deploy the software.
Peak usage over? Just dispose of no-longer-needed virtual servers.
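For instance, with the current AWS SDK for Python (a minimal sketch; AMI ID, instance type, region and tags are placeholders), bringing environments up and tearing them down again is only a few calls:

    # Hypothetical sketch with boto3: launch throwaway instances for a test/QA
    # round, then dispose of them once the deployment has been verified.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI with your base image
        InstanceType="t3.small",
        MinCount=1,
        MaxCount=2,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": "qa"}],
        }],
    )

    # ... deploy and verify the release here ...

    for inst in instances:
        inst.terminate()   # peak over / test done: throw the instances away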
Virtualization is used mainly for various server uses where I work:
Web servers - If we create a new non-production environment, the servers for it tend to be virtual ones so there is a virtual dev server, virtual test server, etc.
Version control and QA applications - Quality Center and SVN are run on virtual servers. The SVN box also runs CC.Net for our CI here.
There may be other uses but those seem to be the big ones at the moment.
We're testing the way our application behaves on a new machine after every development iteration, by installing it onto multiple Windows virtual machines and testing the functionality. This way, we can avoid re-installing the operating system and we're able to test more often.
We needed to test the setup of a collaborative network application in which data produced on some of the nodes was shared amongst cooperating nodes on the network in a setup with ~30 machines, which was logistically (and otherwise) prohibitive to deploy and set up. The test runs could be long, up to 48 hours in some cases. It was also tedious to deploy changes based on the results of our tests because we'd have to go around to each workstation and make the appropriate changes, which was a manual and error-prone process involving several tired developers.
One approach we used with some success was to deploy stripped-down virtual machines containing the software to be tested to various people's PCs and run the software in a simulated data-production/sharing mode on those PCs as a background task in the virtual machine. They could continue working on their day-to-day tasks (which largely consisted of producing documentation, writing email, and/or surfing the web, as near as I could tell) while we could make more productive use of the spare CPU cycles without "harming" their PC configuration. Deployment (and re-deployment) of the software was simplified, since we could essentially just update one image and re-use it on all the PCs. This wasn't the entirety of our testing, but it did make that particular aspect a lot easier.
We put the development environments for older versions of the software in virtual machines. This is particularly useful for Delphi development, as not only do we use different units, but different versions of components. Using the VMs makes managing this much easier, and we can be sure that any updated exes or dlls we issue for older versions of our system are built against the right stuff. We don't waste time changing our compiler setups to point at the right shares, or de-installing and re-installing components. That's good for productivity.
It also means we don't have to keep an old dev machine set up and hanging around just-in-case. Dev machines can be re-purposed as test machines, and it's no longer a disaster if a critical old dev machine expires in a cloud of bits.
We used to have a dedicated server (1&1) and very infrequently ran into problems with it.
Recently, we migrated to a VPS (Wiredtree.com) with similar specs to our old dedicated server, but we notice frequent problems with running out of memory, MySQL having to restart, etc... both when knowingly running intensive scripts and also just randomly during normal use.
Because of this, we're considering migrating to another VPS - this time at Slicehost - to see if it performs better.
My question is twofold...
Are there straightforward ways we could stress test a VPS at Slicehost to see if the same issues occur, without having to actually migrate everything over?
Also, is it possible that the issues we're facing aren't just because of the provider (Wiredtree) but just the difference between a dedicated box and VPS (despite having similar specs)?
The best way to stress test an environment is to put it under load. If this VPS is hosting a web application, use one of the many available web server benchmark tools: ab, httperf, Siege or http_load. You don't necessarily care that much about the statistics from the tool itself, but more that it puts a predictable load on the server so that you can tune Apache to handle it, or at least not crash and burn.
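If you would rather script the load yourself than use one of those tools, a crude Python sketch along these lines (URL and concurrency numbers are placeholders) is enough to put a steady, repeatable load on the box:

    # Crude load generator: N workers each issue M sequential requests.
    # Not a replacement for ab/httperf/Siege, just a repeatable poke at the server.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://your-vps.example.com/"   # placeholder
    WORKERS = 20
    REQUESTS_PER_WORKER = 50

    def worker(_):
        timings = []
        for _ in range(REQUESTS_PER_WORKER):
            start = time.time()
            with urllib.request.urlopen(URL) as resp:
                resp.read()
            timings.append(time.time() - start)
        return sum(timings) / len(timings)

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        averages = list(pool.map(worker, range(WORKERS)))
    print(f"mean response time: {sum(averages) / len(averages):.3f}s")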
The one problem you have with testing against Slicehost is that you are at the mercy of the Internet and your bandwidth to Slicehost. You may not be able to put enough load on the server to reach a meaningful conclusion.
Instead, you might find it just as valuable to run one of the many virtualization products on the market and set up a VM with comparable specs to the VPS plan you're considering. Local testing over your LAN will allow you to put a higher and more predictable load on the server.
In either case, you don't need to migrate everything, but you will need to set up an environment for your application to run in, with representative data in your database.
A VPS with similar specs to a dedicated server should perform approximately the same, but in order to get good performance, you still need to tune Apache, MySQL and any other long-lived server processes. In my experience, the out-of-the-box configuration of Apache in many Linux distributions is not ideal and will allow far too many child processes, overcommitting memory and sending the server into a swap-death spiral.
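As a back-of-the-envelope illustration of that last point (all numbers here are hypothetical), the usual sizing rule is to cap Apache's child processes at what fits in RAM after MySQL and the OS have taken their share:

    # Hypothetical numbers for a small VPS: cap MaxClients (prefork) so that
    # worst-case Apache memory use never pushes the box into swap.
    ram_mb = 1024             # total VPS memory
    mysql_and_os_mb = 400     # rough headroom for MySQL + OS + everything else
    apache_child_rss_mb = 35  # typical resident size of one Apache child (check with ps/top)

    max_clients = (ram_mb - mysql_and_os_mb) // apache_child_rss_mb
    print(max_clients)        # ~17 here -- far below many distros' defaults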