How can I create a general library for Coded UI tests?
Let's assume you have common operations such as launching the browser, logging in, navigating to a page, clicking an HTML link, and closing the browser. These activities are used across many different test cases, so you should not code (record) them again and again for every single test. If we record all these common actions for each test, maintenance will become a nightmare when some link/icon/title changes.
How can we create a common library (something similar to a DLL) that all tests can reference and use for these common activities?
I finally managed to solve this.
There are two possibilities for addressing this issue. With either approach, we first need to create one or more Coded UI tests that contain the generic, basic functionality. For example, the common tasks may be opening the browser, logging in, and closing the browser. All of this common functionality can go into a Coded UI test (or several tests, depending on the requirements). The compiled Coded UI test project is basically a DLL. Now we have two possibilities:
Adding the created UI test as reference into the new coded UI test or
Creating a new test and inheriting from the generic one.
In this way we can minimize our maintenance in the long term and modularize the whole automation approach.
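To illustrate the inheritance option, here is a minimal sketch. The class names, method bodies, and URL are hypothetical; only `BrowserWindow` and the test attributes are real Coded UI API:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Lives in the shared "common" project, which compiles to a DLL
// that every other test project references.
[CodedUITest]
public class CommonWebTest
{
    protected BrowserWindow Browser { get; private set; }

    protected void LaunchBrowser(string url)
    {
        Browser = BrowserWindow.Launch(new Uri(url));
    }

    protected void Login(string user, string password)
    {
        // Recorded login steps (UIMap actions) would go here.
    }

    protected void CloseBrowser()
    {
        Browser.Close();
    }
}

// A concrete test in another project: it references the shared DLL
// and inherits the common actions instead of re-recording them.
[CodedUITest]
public class OrderPageTest : CommonWebTest
{
    [TestMethod]
    public void CanOpenOrderPage()
    {
        LaunchBrowser("http://intranet.example/orders"); // hypothetical URL
        Login("testuser", "secret");
        // ...test-specific recorded actions go here...
        CloseBrowser();
    }
}
```

If a link, icon, or title changes, only the recorded steps inside `CommonWebTest` need updating, and every inheriting test picks up the fix.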
Here's the problem:
We have a healthy infrastructure of projects that contain Selenium page objects and methods for each part of our site. However, each Selenium project is written in the language its parent component is developed in. So, for example, team A creates component A using Node.js and thus their Selenium objects are written in JavaScript, while team B creates component B in .NET, with their Selenium page objects in C#.
How can we write complete end-to-end Selenium tests of our site when different parts use different languages for their Selenium bindings? We of course want to maintain a separation of concerns, so that team A doesn't need to concern themselves with the Selenium details of team B. Should we be approaching this differently?
Why not agree on one common language for test automation? That would solve the consistency problem: all utilities and libraries could be shared across the different teams, preventing everyone from reinventing the same wheel.
I have a group that has some non-technical people creating tests and test suites using the Selenium IDE. I'd like that group to be able to work independently, yet after the fact be able to run a series of those suites with minimal button clicks. There are a lot of reasons why I'm not just writing tests in some 'native' language (Groovy or Java), and making this easy for the team to use will help adoption of testing.
So, I would like to be able to just instruct the members of the team to open a single 'suite' (or equivalent) and run it and it would then run each of the suites that I have designated as part of the 'master suite' (if you follow me).
I know that I could just maintain a list of the suites that are part of our automated tests, but it would be easier for me to sell if it was possible to just open up a single file and click 'go' and then walk away and see the results after coming back from getting a cup of coffee or something.
If your reason for not going with a native language is because of your non-technical people, then your automation strategy will fail.
Sorry for being blunt, but there is a reason why there is the IDE and there is the native language support. They serve very different purposes, and if you don't approach automation with the respect that it's a programming exercise, then your automation strategy will fail.
Selenium IDE is extremely limited. You are unable to string multiple test suites together. Your only options are creating one huge a** list of test cases in one suite, or loading your suites one at a time.
Go with WebDriver. Everything you want to do is extremely limited, if even possible, using the IDE.
Yes, I wrote a framework that does that. You can record as many "Selenium Builder" scripts as you like and they will be run by my framework, multi-threaded, as a group. Just fork your own copy of my framework and then modify it for your uses.
I need to automate testing of my Windows Mobile application. My application does not have any UI, so normal testing tools that work with random keystrokes and mouse clicks will not work here. Are there any tools available for Windows Mobile to test only background processing?
You have a couple of options depending on what level of testing you want.
Integrated Test
An integrated test aims to test the application in the real world. You would therefore create all of the "real" things and write code specifically to check whether your conditions are met. However, if you're trying to test GPS, I believe this would not be practical, as someone would actually have to move the device around.
Unit Test by mocking
I've done this before for GPS testing. The idea is that you SANDBOX the object being tested. You ensure that all external references (i.e. anything that isn't the object itself) are interfaces. You then MOCK these interfaces with "test-only" implementations.
For example, I worked on a GPS test where we used an interface called INmeaInterpreter to fire certain events, which were picked up by a class named PositioningService.
The default implementation was a 3rd party component.
However, as INmeaInterpreter was an interface, we could create an implementation that, instead of using the REAL data, reads from (for example) an NMEA file. This enabled us to test how the PositioningService behaved in certain (and sometimes strange) scenarios.
I would then suggest mocking the other external references. The database call can go to a dummy object that simply increments a counter whenever it is called. You could then write a test with an NMEA file that should result in a database call, and at the end of the unit test check the dummy object to see whether that call occurred.
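A minimal sketch of that setup, assuming NUnit; only the names INmeaInterpreter and PositioningService come from the project described, and every member signature here is an invention for illustration:

```csharp
using System;
using NUnit.Framework;

public class Position { public double Lat, Lon; }
public interface IPositionStore { void Save(Position p); }

// Interface from the answer; the member signatures are assumptions.
public interface INmeaInterpreter
{
    event Action<string> SentenceReceived;
    void Start();
}

// Test-only implementation: replays sentences from an NMEA file
// instead of reading from the real GPS hardware.
public class FileNmeaInterpreter : INmeaInterpreter
{
    private readonly string _path;
    public event Action<string> SentenceReceived = delegate { };
    public FileNmeaInterpreter(string path) { _path = path; }

    public void Start()
    {
        foreach (var line in System.IO.File.ReadLines(_path))
            SentenceReceived(line);
    }
}

// Simplified stand-in for the PositioningService described above:
// saves a position for every $GPGGA fix sentence (real parsing omitted).
public class PositioningService
{
    public PositioningService(INmeaInterpreter nmea, IPositionStore store)
    {
        nmea.SentenceReceived += s =>
        {
            if (s.StartsWith("$GPGGA")) store.Save(new Position());
        };
    }
}

// Dummy database object with a call counter, as described in the text.
public class CountingPositionStore : IPositionStore
{
    public int SaveCount { get; private set; }
    public void Save(Position p) { SaveCount++; }
}

[TestFixture]
public class PositioningServiceTests
{
    [Test]
    public void GpsFix_TriggersOneDatabaseCall()
    {
        var store = new CountingPositionStore();
        // Assumed to contain exactly one $GPGGA sentence.
        var nmea = new FileNmeaInterpreter("single-fix.nmea");
        var service = new PositioningService(nmea, store);

        nmea.Start(); // replay the recorded scenario

        Assert.AreEqual(1, store.SaveCount);
    }
}
```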
We did all of the above with horrible MSTest, but you could use any testing framework (I recommend NUnit). I'm not sure if there are options to specifically test on the device. We ran all of our tests on the desktop, as we'd split the code nicely so that device-specific code was isolated and could easily be replaced with desktop equivalents.
Obviously the only problem with unit tests is that they don't test the hardware.
I would recommend (depending on the size of the project and the team) to do BOTH types of testing but to place the larger emphasis on the unit tests (as they are easier to run/manage).
My goal is:
Our customers could generate new web-tests.
Our continuous integration server deploys to a test environment and should execute the tests against it.
The tests could also be run against some other environment.
(Final acceptance testing should be done by the customer, to check fonts etc., but this would be a great pre-acceptance check in our test environment, letting customers focus on other things than they can now.)
Usually some property (like a text field id) has changed and the tests break within a few weeks. Recorded tests seem to break so often that it's easier to record a new one than to try to maintain and modify an old test.
Now, I found a whole new approach. Maybe recording is not the right way.
What if our customers could write use cases in their own human-readable language, which the machine would understand and compile into a web test (using a Domain-Specific Language, DSL)?
This is not sci-fi, it has been already made, so read on. :-)
I have tried to use these automatic web testing frameworks:
Visual Studio web test (Customers can't execute)
Selenium (Works only with Firefox, our customers have IE)
WatiN (.NET version of Watir, recorder seems to be a bit buggy)
HP Quick Test Pro (Not easy enough to make new tests)
None of these have provided actually what I need... but Selenium is the closest one.
Our customers speak Finnish, so at the beginning of a software project, in the specification phase, a user writes a use case like:
Avaa "OmaLomake"
Syötä "Tuomas" kohtaan "nimi"
Paina "Seuraava"
Translation:
Open "MyForm"
Insert "Tuomas" into field "name"
Press "Next"
Now... this is a human-readable use case, but it can also be compiled into an automatic web acceptance test. Open, Insert, into field, and Press are keywords; the others are values.
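To make the idea concrete, here is a minimal sketch of what the compiled side could look like, interpreting the three keywords with Selenium's C# WebDriver bindings. The locator strategies, base URL, and class name are all assumptions for illustration; a real DSL would need a locator convention agreed on with the developers:

```csharp
using System;
using System.Text.RegularExpressions;
using OpenQA.Selenium;

// Naive line-by-line interpreter for the three keywords above.
public class FinnishDslRunner
{
    private readonly IWebDriver _driver;
    private readonly string _baseUrl;

    public FinnishDslRunner(IWebDriver driver, string baseUrl)
    {
        _driver = driver;
        _baseUrl = baseUrl;
    }

    public void Run(string script)
    {
        foreach (var raw in script.Split('\n'))
        {
            var line = raw.Trim();
            if (line.Length == 0) continue;

            Match m;
            if ((m = Regex.Match(line, "^Avaa \"(.+)\"$")).Success)
                // Open "MyForm": navigate to the named page.
                _driver.Navigate().GoToUrl(_baseUrl + "/" + m.Groups[1].Value);
            else if ((m = Regex.Match(line, "^Syötä \"(.+)\" kohtaan \"(.+)\"$")).Success)
                // Insert "Tuomas" into field "name": type into the named input.
                _driver.FindElement(By.Name(m.Groups[2].Value)).SendKeys(m.Groups[1].Value);
            else if ((m = Regex.Match(line, "^Paina \"(.+)\"$")).Success)
                // Press "Next": click the button with that label.
                _driver.FindElement(By.XPath($"//input[@value='{m.Groups[1].Value}']")).Click();
            else
                throw new FormatException("Unknown step: " + line);
        }
    }
}
```

The CI server could then feed each customer-written use case file to `Run()` against the freshly deployed test environment, e.g. with an `InternetExplorerDriver` since the customers use IE.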
What kind of DSL tool would be good for this?
Microsoft is making a new tool for building DSLs, called MGrammar, in their Oslo project. It means that you can make a custom language so that it's easy for non-technical people to work with machines. (The same basic idea that was tried, and failed, with COBOL and Visual Basic.)
I found that someone has already made this kind of DSL with MGrammar, but it is for Watin, not Selenium:
http://www.codinginstinct.com/2008/11/creating-watin-dsl-using-mgrammar.html
So the continuous integration server process will be:
Fetch a new version from source control (as usual).
Build, run unit tests and analyze the code (as usual).
Make an installation package and tag version in version control (as usual).
Compile use cases to web tests.
Run web tests.
Accept/Reject the software :-)
Running web tests on a continuous integration server usually means a lot of configuration work. So, before I try this, I'm curious: what do you think?
Have you used the same kind of setup, and what are your experiences? (In what exact environment?)
How about the DSL: will it have enough power for the use cases, or will it become another endless development task? Will the customers ever generate the tests?
First of all, Selenium does work with IE and other browsers as well as Firefox; cross-browser support is one of its strengths. Here's the list of supported browsers.
However, if you want a human language-based DSL for writing your tests, take a look at Cucumber - the syntax is almost exactly like your example above. Cucumber already has Finnish language support - see the examples at this link.
FitNesse-and-Selenium integration tools such as Selenesse (http://github.com/marisaseal/selenesse) or Fitnium (http://www.magneticreason.com/tools/fitnium/fitnium.html) can also serve your purpose. However, you need to decide who will put the element locators into the test cases written by customers. If customers insert the locators using recorders, the tests may not be maintainable. If customers write the steps and an automation tester/developer then fills in the locators using regexes or a custom location strategy, this approach may work.
The TestPlan software uses a specialized language for writing tests. It is highly domain-specific and works very well in web environments. It supports the Selenium backend, so you gain that compatibility, and it can also run without a browser for even faster tests. I have used it on some fairly large web projects in the type of setup you are looking for.
Your example script might look like this:
```
GotoURL /SomePage
Click MyForm
SubmitForm with
    %Params:name% Tuomas
    %Submit% value:Next
end
```
That's it. It nicely describes what the user wants to do and is a functioning test. You can combine scripts into units and have custom functions as well. So if you really wanted to, you could write the Finnish equivalents of the names.
Are there any books or articles that show you how to use NUnit to test entire features of a program? Is there a name for this type of testing?
This is different from the typical use of NUnit for unit testing, where you test individual classes. It is similar to acceptance testing, except that it is written by the developer to verify that the program does what the developer understood the customer to want. I don't need it to be readable by non-programmers or to produce a readable specification for non-programmers.
The problem I am having is keeping this feature-testing code maintainable. I need help organizing my feature-testing code. I also need help organizing the program code to be drivable in this way. I am having a hard time issuing commands to the program while still keeping a good code design.
Currently I have a class called Program with a single public method called Run. With every test I start at the beginning of the program, like the user would, and then get to the point in the program where a particular feature is available. I then use that feature in some way and verify it did what I wanted.

I have a class called Commands that exposes the different features of the program as methods. An instance of the Commands object is passed to the program and eventually gets passed to every Form class. The forms subscribe to events on the Commands class that are raised by its methods (one matching event per method). Each event is subscribed to by pointing it at the method that is called when the corresponding part of the user interface is used, thus allowing the entire program to be driven by my tests. If you call a method on the Commands object for an event that is currently not subscribed to, a FeatureMissingException is thrown.
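A minimal sketch of the pattern described, with illustrative feature names (nothing here is from the actual code base):

```csharp
using System;

public class FeatureMissingException : Exception
{
    public FeatureMissingException(string feature)
        : base("No UI currently offers the '" + feature + "' feature.") { }
}

public class Commands
{
    // One event per feature; the matching method raises it.
    public event Action<string> SaveDocumentRequested;

    public void SaveDocument(string path)
    {
        var handler = SaveDocumentRequested;
        if (handler == null)
            throw new FeatureMissingException(nameof(SaveDocument));
        handler(path);
    }
}

public class DocumentForm
{
    public DocumentForm(Commands commands)
    {
        // Subscribe only to the features this form's UI exposes,
        // pointing the event at the same method the Save button calls.
        commands.SaveDocumentRequested += OnSaveClicked;
    }

    private void OnSaveClicked(string path)
    {
        // ...the same code path the user's click would take...
    }
}
```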
All of this works, but I don't like the Commands class. It is getting too large, with too many responsibilities (every feature of the program). The Commands class is also a dependency magnet: all the Form classes have an instance of it but only subscribe to the events that represent features that can be activated through their UI.
It's called integration testing. Integration tests are much more difficult to automate and are very often done by hand. Many simpler tests can still be done using NUnit, though. You don't have to do anything special; just don't use mocks (like you should for unit tests), so you can test how the modules actually fit together.
Context/specification is a good way of organizing these tests.
What you want to do is integration testing, as the other answer suggests. This allows you to do functional/feature testing. The most common frameworks for this are StoryQ and SpecFlow. They let you develop your tests in a BDD style, and the tests can mostly be automated against the spec that you want.
Tools like Selenium allow you to do functional testing in a browser, doing what the end user would do. All of these can be driven with NUnit, since NUnit is purely a framework for running tests, be they unit tests or large functional tests.
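For illustration, a feature test of the kind the question describes might look like this under NUnit. It reuses the hypothetical Commands sketch above; Program is the class from the question, but its constructor, the behavior of Run(), and the assertion are all assumptions:

```csharp
using NUnit.Framework;

[TestFixture]
public class SaveFeatureTests
{
    [Test]
    public void SaveDocument_CreatesTheFile()
    {
        // No mocks: wire the real program together, as a user run would.
        var commands = new Commands();
        var program = new Program(commands);
        program.Run(); // assumed to return once all forms have subscribed

        // Drive the feature through the same event the Save button uses.
        commands.SaveDocument("out.doc");

        Assert.IsTrue(System.IO.File.Exists("out.doc"));
    }
}
```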