We have recently automated some "Coded UI Tests" (running in the Selenium framework) which are run from within Microsoft Test Manager (MTM). However, I am struggling to find out how MTM can pass parameters (such as the URL of the application under test) through to the Coded UI tests. It seems to me that this would be a fairly typical usage pattern, but I cannot see how it can be achieved.
Any suggestions would be appreciated.
Thanks,
David
You're after data-driven Coded UI Tests:
http://blogs.msdn.com/b/mathew_aniyan/archive/2009/03/17/data-driving-coded-ui-tests.aspx
http://msdn.microsoft.com/en-us/library/ee624082.aspx
If you're linking your Coded UI Test to a Test Case, you can use the Test Case's parameters to feed data into the Coded UI test and drive it.
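For the original question about passing the URL from MTM: once the Coded UI test method is associated with the test case, the test case's parameters come through TestContext.DataRow. A minimal sketch, where the TFS collection URL, project name and test case id are placeholders only:

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class LoginUITest
{
    public TestContext TestContext { get; set; }

    // The connection string points at your TFS collection and team project,
    // and "1234" is the id of the MTM test case -- all placeholders here.
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
        "http://tfsserver:8080/tfs/DefaultCollection;MyProject",
        "1234",
        DataAccessMethod.Sequential)]
    [TestMethod]
    public void LaunchApplicationUnderTest()
    {
        // "URL" must match a parameter name defined on the MTM test case
        string url = TestContext.DataRow["URL"].ToString();
        // ... drive the application at 'url' with your Selenium / Coded UI code ...
    }
}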
For web application automation, Selenium can be used with Robot Framework. But both are frameworks. What is the relation between the two?
You have Selenium to automate all the web-related work, e.g. logging in, clicking buttons and many more things. But then you have to use it with some language, e.g. Java, Ruby or Python.
Suppose you get a project to automate a web browser using one of these languages, where your tasks would be:
1) Log in to the application
2) Fill in the user details
3) Click on Submit
Now, to have a good framework, you need to break these tasks down into smaller components:
1) You need to define test cases.
2) You need a separate file to store variables.
3) You need a good reporting tool which shows how many test cases passed or failed and lets you drill down further.
I am a Python user, so let's talk through the problem with Python and Selenium.
1) You can write the test cases with the unittest module, but then generating good test reports is a headache; you have to spend a lot of time creating them.
This is one of the major disadvantages of plain Python.
Now coming to Robot Framework:
If you integrate the Selenium library into Robot Framework, you can do almost everything that can be done in any other language, with much more ease and control.
Taking the example of the assignment in hand:
1) You can define test cases.
2) You can create a separate variable file and pass it in along with the main file at run time (check pybot -V).
3) You don't need to worry about the reporting part; all the reports are generated for you, with better drill-down options.
Additional advantages:
1) There are lots of built-in libraries which will help you do your tasks easily.
2) You can create your own custom library and import it into Robot Framework.
3) Robot Framework's reports let you drill down to the exact variable where the problem lies, which saves a lot of time.
In a nutshell, Robot Framework provides the building blocks for your framework, so you only need to worry about the functional aspects of your program.
Your original assertion is incorrect. They are not both frameworks. Or at least not the same type of framework.
Robot Framework is a set of programs and libraries for creating test cases. With it you can create test suites built upon reusable keywords, written either from other keywords or in other programming languages. The framework provides a test runner and generates test reports.
Selenium is a library interface to a driver that controls a browser. You cannot write tests using only Selenium -- you need something else, such as a programming language (Python, Ruby, etc.) or a testing framework (Robot, Cucumber, etc.). Selenium itself provides no way to run tests and no way to generate reports.
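To illustrate the distinction, here is a minimal sketch of Selenium driven as a plain library from a host language (C# here; the URL and element ids are assumptions for illustration):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginFlow
{
    static void Main()
    {
        // Selenium only drives the browser; nothing below decides pass/fail
        // or produces a report -- that is the job of the host language/framework.
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://example.com/login");          // assumed URL
            driver.FindElement(By.Id("username")).SendKeys("demo-user");     // assumed element ids
            driver.FindElement(By.Id("password")).SendKeys("demo-pass");
            driver.FindElement(By.Id("submit")).Click();
        }
        finally
        {
            driver.Quit();
        }
    }
}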
When using NUnit, you can pass in parameters to your tests using TestCaseSourceAttribute.
[Test, TestCaseSource(typeof(WebDriverFactory), "Drivers")]
What would be the best approach to doing the same for tests generated using specflow? Those tests do not use the 'Test' attribute. They use 'Given', 'And', 'Then' etc.
I'm trying to pass in different web drivers (selenium) so I don't have to manually change them to test across different browsers.
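For reference, one way the Drivers source named in the attribute above might be defined (only the WebDriverFactory and Drivers names come from the question; the body is a sketch):

using System;
using System.Collections.Generic;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;

public static class WebDriverFactory
{
    // Yield factory delegates rather than live drivers so a browser is only
    // started when the test that uses it actually runs.
    public static IEnumerable<TestCaseData> Drivers
    {
        get
        {
            yield return new TestCaseData((Func<IWebDriver>)(() => new ChromeDriver())).SetName("Chrome");
            yield return new TestCaseData((Func<IWebDriver>)(() => new FirefoxDriver())).SetName("Firefox");
        }
    }
}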
SpecFlow generates the test fixtures automatically, so you cannot use [TestCaseSource]. You can try a custom test class generator to drive automated web UI tests with Selenium and SpecFlow.
However, you should ask yourself whether executing SpecFlow scenarios in different browsers really brings much benefit to your project, as the execution time of your acceptance tests will double or triple. In my experience, cross-browser testing identifies UI differences and only very rarely functional ones (to be honest, I've never encountered any). In our team, testers perform it manually.
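If you do go ahead, one possible approach (a sketch, not the only SpecFlow mechanism) is to create the driver in a BeforeScenario hook and pick the browser from an environment variable, so the generated Given/When/Then fixtures never need [TestCaseSource]. The BROWSER variable name is an assumption:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    private readonly ScenarioContext _scenarioContext;

    public WebDriverHooks(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [BeforeScenario]
    public void CreateDriver()
    {
        // e.g. set BROWSER=firefox on the build agent to rerun the whole suite in Firefox
        string browser = Environment.GetEnvironmentVariable("BROWSER") ?? "chrome";
        IWebDriver driver = browser.Equals("firefox", StringComparison.OrdinalIgnoreCase)
            ? new FirefoxDriver()
            : (IWebDriver)new ChromeDriver();
        _scenarioContext.Set(driver);   // step definitions fetch it with Get<IWebDriver>()
    }

    [AfterScenario]
    public void DisposeDriver()
    {
        _scenarioContext.Get<IWebDriver>().Quit();
    }
}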
Could someone explain what an automated test is and why I would use one? I read on the wiki page that a tester would create an automation script. What kind of scripting language can be used to do this?
Automated tests are carried out to check the behavior of an application against its expected behavior. They are normally used in regression testing, where you validate that a newer version of the application doesn't break any of the previous version's features. They might also be carried out alongside manual testing.
As for the scripting language part, this might help you: https://softwareengineering.stackexchange.com/questions/19292/best-language-or-tool-for-automating-tedious-manual-tasks
In simple words, if you are doing regression testing or testing the same piece of code over and over, you can automate that manual process. That's called automation testing.
You can use several different scripting languages to achieve this; it depends on which tool you are using. Some popular automation tools are Selenium, QTP, LoadRunner, JMeter, SoapUI, etc.
Suppose you want to check your login with more than 1,000 users; how much time will you spend running that test case manually?
In the same way, if you want to test your mobile APIs before they are used by developers, how will you test them?
There are plenty of reasons to go for automation. In small applications and sites you can work as a manual tester, but as those apps and sites grow and handle large amounts of data, the product owner will move to automated test cases.
Is there any tool out there that I can use to set tests up to run automatically? I was googling and found the Selenium test runner. There are so many tools out there that it's hard to figure out which is best.
I'm using C# with MSTest as a test framework, and I'm looking to see if I can get away from testing in MSTest.
Any help?
This is a very subjective question; every requirement will have its own correct answer. Anyhow, I will try to address a few requirements and will update as I learn more.
If you are automating web app browser tests (sans Flash Player and Silverlight), I would say that Selenium is the way to go. There are ways to automate Flash and Silverlight too, but that is an answer for another question.
Selenium is in any case an automation tool, and your choice is rather which test framework to pair it with. So here are a few options:
1. Integrating with CI tools:
If you want to organize your tests as segregated atomic units and have them integrated with a CI server (e.g. TeamCity), I recommend using NUnit to run your Selenium tests (a minimal sketch follows at the end of this answer).
2. Behavioral Tests
This is a newer trend in software development and in how we test our products: writing tests in a behavioral (i.e. business specification) style of language. In my experience it is also a very good format for writing acceptance tests. You can use Selenium with something like NBehave or SpecFlow.
3. Centralized Test Management and Execution
Now, this might not fit everyone, but I have found FitNesse (and its C# binding) to be very useful for maintaining and executing Selenium test cases.
Please note this answer may not be right, and it is certainly not complete given the scope of the question. I have nevertheless tried to provide a few pointers.
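As a rough sketch of option 1 (the URL and locator are assumptions, not from your project): a Selenium check wrapped in NUnit so a CI server such as TeamCity can run it and pick up the results.

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginPageTests
{
    private IWebDriver _driver;

    [SetUp]
    public void StartBrowser()
    {
        _driver = new ChromeDriver();
    }

    [Test]
    public void Login_page_shows_a_username_field()
    {
        _driver.Navigate().GoToUrl("https://example.com/login");                     // assumed URL
        Assert.That(_driver.FindElement(By.Id("username")).Displayed, Is.True);      // assumed element id
    }

    [TearDown]
    public void StopBrowser()
    {
        _driver.Quit();
    }
}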
Is anyone aware of any ongoing open-source project that integrates Robot Framework with a load testing tool such as Grinder, JMeter, FunkLoad, etc.?
Thanks
Yes. There is a Python library for integrating Robot Framework and JMeter: Robot Framework JMeter Library. It can be used for running JMeter and for parsing and converting results. I am the author of this library, so I might not be objective.
No, and that's not likely to happen. Robot Framework is for functional testing, not load testing. How would you deem a load test pass/fail, and how long should it run?
Robot Framework and functional tests have a finite, set execution time (a test takes as long as it needs to finish testing the particular feature, or times out if it hangs), and there are strict criteria for what counts as pass/fail when the test runs.
With load testing, at least during exploratory runs and test design, you don't run for a fixed time; even when the time is fixed, it's usually not short (except for trial runs and scaled burst increases). And the criteria for pass/fail are usually ranges rather than yes/no.
So it's harder to design and integrate a test library that offers pass/fail results and runs within a set time for load testing, unless someone can come up with a good architectural design for such a test and test library with Robot Framework.
I think the idea would be that a test case is created only once and can be used in functional tests, in load tests, and even in end-user monitoring. In this (utopian) way a test case can be used during the whole lifecycle of an application. With a tag, for instance, a test case could be promoted to also be a load-testing test case with a different type of response validation. It would be nice to run Robot Framework and generate a LoadRunner TruClient (or another browser-driven load testing tool) script. The main purpose of the integration would be to automate the scripting.