When using NUnit, you can pass parameters into your tests using the TestCaseSourceAttribute:
[Test, TestCaseSource(typeof(WebDriverFactory), "Drivers")]
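For reference, such a source is usually a static member that yields one test case per driver; a rough sketch of what the Drivers property referenced above might contain (illustrative only, not necessarily the factory used here):

using System.Collections.Generic;
using NUnit.Framework;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;

public static class WebDriverFactory
{
    // NUnit enumerates this property and runs the decorated test once per case.
    public static IEnumerable<TestCaseData> Drivers
    {
        get
        {
            yield return new TestCaseData(new ChromeDriver());
            yield return new TestCaseData(new FirefoxDriver());
        }
    }
}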
What would be the best approach to doing the same for tests generated by SpecFlow? Those tests do not use the 'Test' attribute; they use 'Given', 'And', 'Then', etc.
I'm trying to pass in different Selenium web drivers so I don't have to change them manually to test across different browsers.
SpecFlow generates the test fixtures automatically, so you cannot use [TestCaseSource]. You can try a custom test class generator to drive automated web UI tests with Selenium and SpecFlow.
However, you should ask yourself whether executing SpecFlow scenarios in different browsers really brings much benefit to your project, as the execution time of your acceptance tests will double or triple. In my experience, cross-browser testing identifies UI differences and only very rarely functional ones (to be honest, I've never encountered any). In our team, testers perform it manually.
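If you do decide you need it, one common workaround (a rough sketch only; the environment variable name and hook class below are illustrative, not part of SpecFlow itself) is to create the driver in a SpecFlow hook based on an external setting, and then run the whole suite once per browser from your build script:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    public static IWebDriver Driver { get; private set; }

    [BeforeScenario]
    public static void CreateDriver()
    {
        // Illustrative: the build script sets TARGET_BROWSER and runs the suite once per browser.
        var browser = Environment.GetEnvironmentVariable("TARGET_BROWSER") ?? "chrome";
        Driver = browser.Equals("firefox", StringComparison.OrdinalIgnoreCase)
            ? (IWebDriver)new FirefoxDriver()
            : new ChromeDriver();
    }

    [AfterScenario]
    public static void DisposeDriver()
    {
        Driver?.Quit();
    }
}

Step definitions then use WebDriverHooks.Driver instead of constructing a driver themselves.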
When creating automated tests with Selenium, I thought one would simply use Cucumber with Selenium, TestNG with Selenium, or just JUnit with Selenium (although using only JUnit is not very popular). I have recently found out that you can use Cucumber with TestNG, but I don't see what the gain of doing this is. If someone is using both of them together, can you tell me why?
EDIT:
Using TestNG over JUnit has many advantages. My question is: if I use Cucumber, does it still make a difference?
P.S. I am not trying to start a tool-versus-tool war.
The answer you seem to be looking for is what Cucumber, as a tool, adds to existing test frameworks.
The answer:
Cucumber adds an extra level of communication between you (the development team) and the management team. You are able to link test cases to scenarios that are now understandable by the business, which means that everybody is on the same page. You can even use the BDD tool to start talking about behaviours of the feature:
What things should be included?
Do we need more information?
Let's add that to the file, so that we can test that use case later.
Any new functionality added to the feature later?
Need to understand quickly which section has gone wrong, without having to decipher code written by the intern who was in for two months over the summer?
Cucumber helps with all of this, and that's just scraping the surface.
TestNG, JUnit, Selenium? If you can imagine it, you can do it. With Cucumber as your helpful neighbourhood BDD tool, you can pull together your test suite and bolt an abstraction layer on top. The business will now be able to look at the test results. Where tests have failed, they will be able to describe to other members of management exactly which section has gone wrong, without having to go too far into technical details.
If you're wondering whether to use JUnit or TestNG for this, the choice is most likely yours. If you have an existing suite, bolting Cucumber on top of whatever test tool you currently use is the best option.
Also, make sure you are using the right language for your team. For instance:
Are you introducing a team of manual testers to developing test automation?
Maybe you should use Ruby or JavaScript, as they are easier to pick up as a first language.
Are you a development team, using cucumber to add an abstraction layer to your unit tests?
Use the language that you are using for development, with the unit test tool that you are using.
Are you developers in test, using cucumber for automating tests for your website?
Use the language that you and your team are most comfortable with, preferring the language used for development over any others that tie with it (based on a team vote).
I think it depends on what your other tests are (unit tests, for example) and how you run them.
If your current tests already use TestNG, then it will be easier to run your Cucumber tests with the TestNG engine.
Conversely, if you already have JUnit tests, it could be easier to use JUnit to run Cucumber (but TestNG is able to run JUnit tests, so you can use TestNG in that case too).
And if you have no other tests, the choice of test runner simply comes down to your own taste.
Yes, I understand your question. I had the same doubt myself:
We use Selenium for automation testing. Since it doesn't provide proper reports, we add TestNG to it (and also for its other features). But now we have Cucumber, which gives proper reports, so why do we need TestNG?
I realized that, although we get proper reports with Cucumber, TestNG provides many other features that Cucumber does not: setting priorities, method dependencies, timeouts, grouping, etc.
Though Cucumber provides a tag feature, it does not cover everything TestNG offers. Maybe when Cucumber incorporates all those features, we can drop TestNG.
I am using Karma for unit tests and it works great.
The tests I would like to run would look similar to this:
var $input = $("#foo");
var $button = $("#bar");
$input.val("HELLO WARLD");
$button.trigger("click");
expect(window.appData.foo).toBe("HELLO WARLD");
Selenium looks like it might be the right choice but I don't know if there is a better option.
Yes. You can use Selenium for end-to-end automated tests on existing web applications.
Selenium is designed to automate web browsers. It's certainly a good choice for what you want. I'm sure there are other options, but as Selenium is used by several big companies, including Google for their automated web testing technologies, you will be safe with Selenium assuming you implement it optimally.
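As a rough illustration (shown in C#, though any of the Selenium language bindings would do; the URL is a placeholder and window.appData.foo is taken from the question), the same steps driven through a real browser look like this:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class EndToEndSketch
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("http://localhost:8080/");  // placeholder URL

            // Same steps as the snippet in the question, against a real browser.
            driver.FindElement(By.Id("foo")).SendKeys("HELLO WARLD");
            driver.FindElement(By.Id("bar")).Click();

            // Read the JavaScript global that the original expectation checks.
            var value = ((IJavaScriptExecutor)driver)
                .ExecuteScript("return window.appData.foo;");
            if (!"HELLO WARLD".Equals(value))
                throw new Exception("Expected window.appData.foo to be HELLO WARLD");
        }
        finally
        {
            driver.Quit();
        }
    }
}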
The only thing to keep in mind is that GUI testing is NOT supposed to replace unit tests.
GUI testing should be your least-used testing methodology. You can, however, use Selenium to run your acceptance tests, which is what many companies do.
Is anyone aware of an ongoing open-source project that integrates Robot Framework with a load-testing tool such as Grinder, JMeter, Funkload, etc.?
Thanks
Yes. There is a Python library for integrating Robot Framework and JMeter: Robot Framework JMeter Library. It can be used for running JMeter and for parsing and converting results. I am the author of this library, so I might not be objective.
No, and that's not likely to happen. Robot Framework is for functional testing, not load testing. How would you deem a load test pass/fail, and how long does it run?
Robot Framework and functional tests have a finite execution time (a test takes as long as it needs to cover the particular feature, or times out if it hangs), and there are strict pass/fail criteria when a test runs.
With load testing, at least during exploratory runs and test design, you don't run for a fixed time, or even if you do, it's usually not short (except for trial runs and scaled burst increases). And the pass/fail criteria are usually expressed as ranges rather than yes/no.
So it's harder to design and integrate a test library that can offer pass/fail results and run within a set time for load testing, unless someone can come up with a good architectural design for such a test and test library in Robot Framework.
I think the idea would be that a test case is created only once and can be used in functional tests, in load tests, and even in end-user monitoring. In this (utopian) way, a test case can be used during the whole lifecycle of an application. With a tag, for instance, a test case could be promoted to also be a load-testing test case with another type of response validation. It would be nice to run Robot Framework and have it create a LoadRunner TrueClient (or another browser-driven load-testing tool) script. The main purpose of the integration would be to automate the scripting.
We have recently automated some "Coded UI Tests" (running on the Selenium framework) which are run from within Microsoft Test Manager (MTM). However, I am struggling to find out how MTM can pass parameters (such as the URL of the application under test) through to the Coded UI tests. It seems to me that this would be a fairly typical usage pattern, but I cannot see how it can be achieved.
Any suggestions would be appreciated.
Thanks,
David
You're after Data Driven Coded UI Tests:
http://blogs.msdn.com/b/mathew_aniyan/archive/2009/03/17/data-driving-coded-ui-tests.aspx
http://msdn.microsoft.com/en-us/library/ee624082.aspx
If you're linking your Coded UI Test to a Test Case, you can use the Test Case's parameters to feed data into the Coded UI test and drive it.
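In code that looks roughly like the following (a sketch only; the collection URL, team project name, test case ID, and the "URL" parameter name are placeholders for your own values):

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class MyCodedUITests
{
    public TestContext TestContext { get; set; }

    // Binds the test method to the parameters of the linked TFS test case (ID 1234 is a placeholder).
    [TestMethod]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
        "http://tfsserver:8080/tfs/DefaultCollection;MyTeamProject",
        "1234", DataAccessMethod.Sequential)]
    public void LaunchApplicationUnderTest()
    {
        // "URL" must match a parameter name defined on the linked test case in MTM.
        string url = TestContext.DataRow["URL"].ToString();
        // ... drive the application under test against that URL ...
    }
}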
I'm looking for something equivalent to JUnit setUp() and tearDown() methods. In other words: I have a test suite; I would like to write a setup test case and a teardown test case. The setup test case would be executed before each test in the suite. The teardown test case would be executed after each test in the suite.
How?
It sounds to me like you're at the point where you need to export your tests from Selenium IDE into another format/language. Selenium IDE is great for quick prototyping of tests or for showing off what Selenium can do, but when you actually begin to build a library of tests, you need to use a real programming language. Setup and Teardown are a part of every major testing suite (you mentioned JUnit but also TestNG, NUnit and MSTest for C#, etc) so use one! Using a real programming language also allows you to refactor your tests, extracting common functionality into classes and methods so that when your Application Under Test changes, you only need to change one method and not 100 tests. Most testing frameworks also support some sort of data driven testing which many Selenium users find useful.
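As a rough illustration of what that buys you (shown here with NUnit and the C# Selenium bindings; the URL is a placeholder), setup and teardown simply become attributed methods that run around every test:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginTests
{
    private IWebDriver driver;

    [SetUp]
    public void StartBrowser()
    {
        // Runs before every test in this fixture.
        driver = new ChromeDriver();
        driver.Navigate().GoToUrl("http://localhost/app");  // placeholder URL
    }

    [Test]
    public void PageHasATitle()
    {
        Assert.That(driver.Title, Is.Not.Empty);
    }

    [TearDown]
    public void StopBrowser()
    {
        // Runs after every test, even one that failed.
        driver.Quit();
    }
}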
Are you generating Java code to drive your test cases?
I ended up writing a custom C# export format to integrate Selenium test cases with MbUnit; the tests are then just pulled onto a TeamCity server and run after our nightly builds.
I suggest you check out Robot Framework. There is a Selenium library available for Robot Framework, so you get almost all of Selenium's functionality plus a great framework for building your test suite.
In Robot Framework you can simply define a Test Setup in the suite settings and it will be executed before every test case. Similarly, a Test Teardown will be executed after every test case in your test suite.