Windows Mobile - Automated Testing Tool for Non-UI Application

I need to automate testing of my Windows Mobile application. My application does not have any UI, so normal testing tools, which work with random keystrokes and mouse clicks, will not work here. Are there any tools available for Windows Mobile that test only background processing?

You have a couple of options depending on what level of testing you want.
Integration test
An integration test aims to exercise the application in the real world. You create all of the "real" dependencies and write code specifically to check that your conditions are met. If you are trying to test GPS, however, this is not very practical, since someone would actually have to move the device around.
Unit Test by mocking
I've done this before for GPS testing. The idea is that you SANDBOX the object being tested. You ensure that all external references (i.e. anything that isn't the object itself) are interfaces. You then MOCK these interfaces with "test-only" implementations.
For example, I worked on a GPS test where we used an interface called INmeaInterpreter to fire certain events, which were picked up by a class named PositioningService.
The default implementation was a 3rd-party component.
However, because INmeaInterpreter is an interface, we could create an implementation that, instead of using REAL data, reads from (for example) a recorded NMEA file. This enabled us to test how the PositioningService behaved in specific (and sometimes strange) scenarios.
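A rough sketch of what such a file-backed fake might look like. Only the names INmeaInterpreter and PositioningService come from the project described above; the event, member and file names are assumptions made for illustration:

using System;

// Hypothetical shape of the interface; the real INmeaInterpreter will differ.
public interface INmeaInterpreter
{
    event EventHandler<NmeaSentenceEventArgs> SentenceReceived;
    void Start();
}

public class NmeaSentenceEventArgs : EventArgs
{
    public NmeaSentenceEventArgs(string sentence) { Sentence = sentence; }
    public string Sentence { get; private set; }
}

// Test-only implementation: replays sentences from a recorded NMEA file
// instead of talking to real GPS hardware.
public class FileNmeaInterpreter : INmeaInterpreter
{
    private readonly string _path;
    public event EventHandler<NmeaSentenceEventArgs> SentenceReceived;

    public FileNmeaInterpreter(string path) { _path = path; }

    public void Start()
    {
        foreach (var line in System.IO.File.ReadAllLines(_path))
        {
            var handler = SentenceReceived;
            if (handler != null)
                handler(this, new NmeaSentenceEventArgs(line));
        }
    }
}

In the tests, the PositioningService is then constructed against this fake rather than the real 3rd-party interpreter, so the same "strange" NMEA file can be replayed on every run.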
I would then suggest mocking the other external references. The call to the database can just be a dummy object with a counter that is incremented each time the database call is made. You could then write a test with an NMEA file that should result in a database call, and at the end of the unit test check that dummy object to see whether the call occurred.
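That counting dummy is essentially a hand-rolled test spy. A minimal sketch, assuming a hypothetical IPositionStore interface for the database call and an assumed constructor on PositioningService (none of these names are from the original project):

using NUnit.Framework;

public interface IPositionStore
{
    void SavePosition(double latitude, double longitude);
}

// Test-only spy: counts calls instead of touching a real database.
public class CountingPositionStore : IPositionStore
{
    public int SaveCount { get; private set; }
    public void SavePosition(double latitude, double longitude) { SaveCount++; }
}

[TestFixture]
public class PositioningServiceTests
{
    [Test]
    public void Replaying_the_nmea_file_results_in_a_database_call()
    {
        var store = new CountingPositionStore();
        var interpreter = new FileNmeaInterpreter("strange-route.nmea");

        // The constructor signature is assumed here purely for illustration.
        var service = new PositioningService(interpreter, store);

        interpreter.Start();

        Assert.That(store.SaveCount, Is.GreaterThan(0));
    }
}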
We did all of the above with horrible MSTest, but you could use any testing framework (I recommend NUnit). I'm not sure if there are options to run the tests specifically on the device. We ran all of our tests on the desktop, as we'd split the code nicely so that device-specific code was isolated and could easily be replaced with desktop equivalents.
Obviously the only problem with unit tests is that they don't test the hardware.
I would recommend (depending on the size of the project and the team) to do BOTH types of testing but to place the larger emphasis on the unit tests (as they are easier to run/manage).


Testing Framework vs Testing tool

I am new to testing. Commonly used terms like framework and tool confuse me a lot. Can anyone please explain the difference between a framework like STAF [Software Testing Automation Framework] and a tool like Selenium?
Also, how do you select a tool for a particular framework? What criteria are used for selection?
Brief explanations are welcome!
Tool:
Simply put, a tool is a piece of software. In the case of test automation, tools are software that let you automate your tests on an application. There are many test automation tools that you can choose from depending on your requirements. Some examples are Selenium, UFT, Visual Studio CUIT, Jamo Solutions Meux Test, T-Plan Robot, Telerik Test Studio etc.
Often, you'll have to write tests in the tools using a supported programming language. For instance, testers using UFT need to code in VBScript, while those using Visual Studio can code in both VB and C#. However, some testing tools (like Telerik Test Studio) let you write script-less tests, where your tests consist of a bunch of easily understandable keywords rather than code.
Framework:
The most popular test automation tools like Selenium and Visual Studio provide all the basic features you require to build your own tests. However, they do not provide ready-made features (like Reporting and Exception Handling) for testing. This requires the creation of a 'Framework' which is nothing but a collection of code written using a tool of one's choice that makes testing an application easy. Simply put, a framework is what you create with a tool (or a collection of tools) to test your application.
A typical framework consists of two parts: test scripts and function libraries. Test Scripts are the pieces of code that need to be executed to perform actions on the application under test (AUT). Function Libraries are classes consisting of important functions that are called by your test scripts. These can include timing functions, reporting/logging functions, exception handling functions, data communication functions etc.
You can also use an external database to pass test data to your test scripts during run-time instead of hard-coding it in your test scripts. Frameworks that employ external databases are called data-driven frameworks. The external database can be of your choice, be it a SQL Server, an XML file or a simple Excel spreadsheet. Data-driven frameworks make use of APIs or include custom-made classes that let you communicate with the database to transfer data.
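As a small illustration of the data-driven idea, a framework built on NUnit could pull its test data from an external file at run-time. The file name, its columns and the FakeAuth stand-in below are all invented for the sketch:

using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class LoginDataDrivenTests
{
    // Reads "username,password,expectedResult" rows from an external file,
    // so test data can change without touching the test code.
    private static IEnumerable<TestCaseData> LoginCases()
    {
        foreach (var line in File.ReadAllLines("login-cases.csv"))
        {
            var parts = line.Split(',');
            yield return new TestCaseData(parts[0], parts[1], bool.Parse(parts[2]));
        }
    }

    [TestCaseSource("LoginCases")]
    public void Login_behaves_as_expected(string user, string password, bool shouldSucceed)
    {
        // FakeAuth stands in for the application under test.
        bool succeeded = FakeAuth.TryLogin(user, password);
        Assert.AreEqual(shouldSucceed, succeeded);
    }

    private static class FakeAuth
    {
        public static bool TryLogin(string user, string password)
        {
            return user == "admin" && password == "secret";
        }
    }
}

The same pattern works with a SQL Server table or an Excel sheet as the source; only the LoginCases() reader changes.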
Another type of framework is the keyword-driven framework. These frameworks are used in long-term test automation projects that require scripting of thousands of test cases. The main objective of these frameworks is to reduce the time taken to script a test case by reusing code that has already been written. They often include very strong function libraries which enable scripting of test cases using just predefined keywords. For example, common actions on an application, like login and logout, are performed by single lines of code like:
Actions.Login();
and
Actions.Logout();
where Actions is a Function Library that consists of the Login() and Logout() functions. This massively reduces the script size and the long-term maintenance requirements of the test script, among other benefits.
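Behind such keywords, the function library is usually just a thin class that hides the tool-level details. A rough sketch, using Selenium's .NET bindings as the underlying tool (the locators are invented, and the library is written here as an instance class rather than the static calls shown above):

using OpenQA.Selenium;

// Function library: each method is a "keyword" that test scripts can call
// without knowing anything about locators or the underlying tool.
public class Actions
{
    private readonly IWebDriver _driver;

    public Actions(IWebDriver driver) { _driver = driver; }

    public void Login(string user, string password)
    {
        _driver.FindElement(By.Id("username")).SendKeys(user);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("login")).Click();
    }

    public void Logout()
    {
        _driver.FindElement(By.Id("logout")).Click();
    }
}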
Of course, you can either build a test automation framework and use it for your own application or create a generic test automation framework and make it available to the testing community for everyone to use, which is what STAF is.
Selection of testing tools:
To address your second question, there is no straightforward answer to it. There are a number of criteria that can affect your decision. But in the end, it is all about your requirements and the requirements of your AUT.
If it is a Windows desktop app, you can use Coded UI Tests in Visual Studio.
If it is a web application, you can use Selenium, UFT, Visual Studio or Test Studio.
If it is a mobile app, you can use Appium, Jamo Solutions Meux Test or T-Plan Robot.
If you want to test your mobile app over a large number of devices and platforms, you can use cloud-based tools like Sauce Labs, Perfecto Mobile or Device Anywhere.
If you are short on budget, you'll be better off using open source tools over commercial tools, and so on.
Application Testing is a huge industry now and there is no dearth of testing tools available in the market. You will find the perfect tool for you if you know what you want and do some research on Google.
I will try to answer with what I believe people normally use these terms to mean; let's start with the simpler term: a tool.
A tool like Selenium is what actually does the automation. It has an API that will work for pretty much anything it covers (in this case websites), but it knows nothing about how the website you want to test works. This means it deals with low-level constructs such as elements on a page and clicks.
A framework normally just wraps a tool to make tests easier to write by encoding knowledge of your application; a standard example is login.
Say you want a test that checks that when you enter a correct username and password you get access to the application. Using just Selenium it would look something like:
driver.findElement(By.id("username").sendKeys("MyUsername");
driver.findElement(By.id("password").sendKeys("password123");
driver.findElement(By.id("login").click();
That's pretty simple, but as you can guess, login is going to be used a lot across your tests, so it makes sense to move it into a place that makes it easier to reuse (both from a less-code standpoint and for maintainability). This is where a framework comes into play; with Selenium it will normally be page objects (see here).
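A minimal page-object sketch for that same login flow, written here with Selenium's .NET bindings; the element IDs are the ones from the snippet above, everything else is invented:

using OpenQA.Selenium;

// Page object: the test talks to LoginPage, and only this class knows
// which elements the login screen is made of.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver) { _driver = driver; }

    public HomePage LoginAs(string user, string password)
    {
        _driver.FindElement(By.Id("username")).SendKeys(user);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("login")).Click();
        return new HomePage(_driver); // navigation is modelled by returning the next page
    }
}

public class HomePage
{
    private readonly IWebDriver _driver;

    public HomePage(IWebDriver driver) { _driver = driver; }

    public bool IsLoggedIn()
    {
        return _driver.FindElements(By.Id("logout")).Count > 0;
    }
}

The test then reads as new LoginPage(driver).LoginAs("MyUsername", "password123"), and when a locator changes you only fix it in one place.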
Based on my understanding:
TOOLS
We "USE" tools to meet our objective (can be own self or your small groups of team).
Example: We use Selenium IDE as a tools helps us to automate some repetitive steps to do certain verification during our smoke test.
FRAMEWORK
We "DESIGN" a framework to meet the organization mission.
Things to consider when we design the framework include:
Maintainability
Reusability
Data-driven support
Reporting
Scheduled runs through CI tools like Jenkins
Example: We design a test automation framework using WebDriver + Java + TestNG + Ant to meet the objective of assessing the stability of our current code base. The tests are triggered and run by Jenkins on a daily basis, and an SSRS report is captured each time a run finishes. Stakeholders can review the daily code stability report whenever they need it.
Hope that can help you :D

Portable, tool-independent test automation

Our codebase contains code in multiple languages, ranging from Python and C# to MATLAB and LaTeX. Currently we have unit tests in each individual language (using language-specific frameworks). This makes test automation cumbersome, especially collecting and checking all the different reports.
I am therefore looking for a test automation tool that
is portable (at least Windows + Linux)
is language-independent
can be extended (custom reporting, additional languages, ...)
Ideally the tool would connect via plugins to the existing tool-specific frameworks like Python's unittest, C#'s NUnit, etc.
Is there such a tool? If not, how do you handle test scenarios like this?
Perhaps you're looking for something like the Test Anything Protocol?
The Test Anything Protocol (TAP) is a protocol to allow communication between unit tests and a test harness. It allows individual tests (TAP producers) to communicate test results to the testing harness in a language-agnostic way.
It doesn't avoid the need for language-specific tests, but it does make sure that your test results are all in a uniform format, so that you can build cross-language reports. It's also fairly simple, so if your language of choice doesn't have a TAP producer, it is not very difficult to write one (and maybe share it back to the world).
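To give a feel for how simple the protocol is, here is a rough sketch of a hand-rolled TAP producer; the two checks are placeholders:

using System;

// Emits Test Anything Protocol output: a plan line ("1..N") followed by
// one "ok"/"not ok" line per test, which any TAP harness can consume.
public static class TapDemo
{
    public static void Main()
    {
        var results = new[]
        {
            new { Name = "addition works", Passed = 1 + 1 == 2 },
            new { Name = "string comparison is case-sensitive", Passed = !"a".Equals("A") }
        };

        Console.WriteLine("1..{0}", results.Length);
        for (int i = 0; i < results.Length; i++)
        {
            Console.WriteLine("{0} {1} - {2}",
                results[i].Passed ? "ok" : "not ok", i + 1, results[i].Name);
        }
    }
}

This prints "1..2" followed by one "ok" line per check; a failing check would print "not ok" instead, and the harness aggregates those lines across languages.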
If you go with TAP as already suggested, then a complete test infrastructure around that is Tapper.
The central server assumes some external Linux tools when you need its automation layer, but besides that everything else is language- and OS-independent. It can receive TAP reports regardless of how they were started: manual tests, crontab-driven, or the built-in automation which can set up machines from scratch or start tests via ssh.
See above link for more info, especially have a look at the linked presentations.
Summary:
TAP as protocol for writing tests
Tapper for the infrastructure

test a monitoring tool

Please guide me: how do I test a monitoring tool?
Due to the nature of the project (a program that monitors another target program), we cannot predict the input (the activity of the target program), so what is the best way to test it?
I don't know exactly how your monitoring tool works, but I assume that it somehow communicates with some 'target' applications, retrieves data from them and displays a report.
I think your problem is that you cannot control the input to your monitoring tool in order to test that it displays correct data, because that input depends on your target apps.
My suggestion is to create a mock application. This app must have the same interface as your real target app, so your tool can communicate with it. Of course, the implementation will be different: implement it so that every parameter your tool checks is configurable by you. That way, you will know exactly what the input to your tool will be.
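A small sketch of that idea, assuming (purely for illustration) that the monitoring tool polls its targets through some interface for metrics; every name here is hypothetical:

// The contract the monitoring tool uses to talk to a target application.
public interface IMonitoredTarget
{
    int GetQueueLength();
    bool IsResponding();
}

// Mock target: implements the same interface as the real application,
// but every value it reports is set by the test, so the input to the
// monitoring tool is completely under your control.
public class MockTarget : IMonitoredTarget
{
    public int QueueLengthToReport { get; set; }
    public bool RespondingToReport { get; set; }

    public int GetQueueLength() { return QueueLengthToReport; }
    public bool IsResponding() { return RespondingToReport; }
}

// In a test you would then do something like:
//   var target = new MockTarget { QueueLengthToReport = 5000, RespondingToReport = false };
//   monitor.Attach(target);   // attach/observe via whatever mechanism your tool actually uses
//   ...and assert that the tool raises the right alert or displays the right report.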

Code Coverage tool for BlackBerry

I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests.
I've tried using Cobertura for J2ME, but after days of wrestling with it I failed to get any results from it. (I believe that the instrumentation is undone by the RAPC compilation.) And despite this message, the project seems to be dead.
I've looked at JInjector but the project seems very incomplete. There is little (if any) documentation and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail.
I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE.
I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects.
Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them?
If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?
I can't speak for Cobertura or JInjector, because I don't know how they collect test coverage probe data.
What is critical is how this data is captured (does it need Java runtime support only available in standard Java VMs?) and how it is exported to the test coverage display/report generation tools.
Our SD Java Test Coverage tool instruments your source code; at runtime this produces an array of native Java booleans representing the coverage data, without the need for any special VM support. Normally, as your application exits, this array is exported directly to a file (used by the test coverage display mechanism) by a TCVDump method provided with the test coverage tool.
Java (and the other programming languages used) in embedded systems often requires custom methods to extract the test coverage data. You might need to code a special dump procedure (in Java) to write out that boolean array to an accessible place. Our experience with building such custom dump procedures is that they are generally pretty simple (a few dozen lines); the real trick is deciding how/where to put the data so that it can easily be moved to the target file. Mostly this is just a peculiar pair of copies: the first copies the boolean array to some staging location, and the second writes the staged data into the destination file. (The standard TCVdump method is provided in source form to enable this kind of customization.)
While I haven't specifically looked at BlackBerry, if you can write the data anywhere, you can pretty much be assured you can achieve this. We've had success with other embedded hand-set systems, such as Symbian, doing this.
If you want a complete overview of how to generally instrument code for test coverage following this strategy, see this paper: Branch Coverage for Arbitrary Languages Made Easy
I was actively involved with JInjector while working at Google. We were able to use it to successfully obtain code coverage for BlackBerry applications. The application lifecycle for BlackBerry apps is less predictable than J2ME, and we found we had to tweak the application code to ensure the coverage data was gathered. I didn't personally work on the BlackBerry apps; several other engineers did. I'd hoped we'd create an example BlackBerry application and make it available on the JInjector site, but events and life got in the way.
If you would be willing to provide a sample BlackBerry app with some unit tests, I'd be willing to spend a few hours trying to help you get code coverage working. I'm not actively working with either J2ME or BlackBerry (I'm currently working on Android apps when I have time to experiment with mobile), so I'm quite rusty. I have a day job that doesn't involve much mobile test automation; however, I continue to work on ways to improve test automation for mobile apps, e.g. http://code.google.com/p/mwta/downloads/list for Android test automation.
I'm julianharty at gmail.com

Books or Articles on Using NUnit to Test Entire Features

Are there any books or articles that show you how to use NUnit to test entire features of a program? Is there a name for this type of testing?
This is different from the typical use of NUnit for unit testing, where you test individual classes. It is similar to acceptance testing, except that it is written by the developer to check that the program does what the developer understands the customer to want. I don't need it to be readable by non-programmers or to produce a readable specification for non-programmers.
The problem I am having is keeping this feature testing code maintainable. I need help in organizing my feature testing code. I also need help organizing the program code to be drivable in this way. I am having a hard time being able to issue commands to the program while still having good code design.
Currently I have a class called Program with a single public method called Run. With every test I start at the beginning of the program, like the user would, and then get to the point in the program where a particular feature is available. I then use that feature in some way and verify it did what I wanted. I have a class called Commands that exposes the different features of the program as methods. An instance of the Commands object is passed to the program, and it eventually gets passed to every Form class. The forms subscribe to the events on the Commands class that are raised by its methods (one matching event per method). They subscribe by pointing each event at the method that is called when the corresponding part of the user interface is used, thus allowing the entire program to be driven by my tests. If you call a method on the Commands object for an event that is currently not subscribed to, a FeatureMissingException is thrown.
All of this works, but I don't like the Commands class. It is getting too large, with too many responsibilities (every feature of the program). The Commands class is also a dependency magnet (all the Form classes have an instance of it but only subscribe to the events that represent features that can be activated through their UI).
It's called integration testing. Integration tests are much more difficult to automate, and are very often done by hand. Many simpler tests can still be done using NUnit, though - you don't have to do anything special; just don't use mocks (as you should be doing for unit tests), so you can test how the modules actually fit together.
Context/specification is a good way of organizing these tests.
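In NUnit, a context/specification organization often looks something like the sketch below: one fixture per context, with the setup establishing that context and each test stating a single observable outcome. All the names and the tiny stand-in classes are invented so the example is self-contained:

using NUnit.Framework;

// Minimal stand-ins for the application under test, just so the sketch compiles.
public class Order { public System.DateTime CardExpiry { get; set; } }
public class OrderResult { public bool Accepted { get; set; } public string Reason { get; set; } }
public class OrderProcessor
{
    public OrderResult Submit(Order order)
    {
        if (order.CardExpiry < System.DateTime.Now)
            return new OrderResult { Accepted = false, Reason = "Card expired" };
        return new OrderResult { Accepted = true };
    }
}

// One fixture per context; each test is one observation about that context.
[TestFixture]
public class When_an_order_is_submitted_with_an_expired_card
{
    private OrderResult _result;

    [SetUp]
    public void EstablishContext()
    {
        var processor = new OrderProcessor();
        _result = processor.Submit(new Order { CardExpiry = new System.DateTime(2000, 1, 1) });
    }

    [Test]
    public void The_order_is_rejected()
    {
        Assert.IsFalse(_result.Accepted);
    }

    [Test]
    public void The_customer_is_told_why()
    {
        Assert.AreEqual("Card expired", _result.Reason);
    }
}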
What you want to do is integration testing, as the other answer suggests. This allows you to do functional/feature testing. The most common frameworks for this are StoryQ and SpecFlow. They let you develop your tests in a BDD style, and the testing can mostly be automated against the spec that you want.
Tools like Selenium allow you to do functional testing in a browser, doing what the end user would do. All of these can be driven with NUnit, since NUnit is purely a framework for running tests, be they unit tests or large functional tests.
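With SpecFlow, for example, the feature is written in plain-language Gherkin and bound to C# step definitions, roughly like the sketch below; the scenario text and step bodies are invented, and in a real project the When step would drive your application (e.g. through a Commands facade or page objects) rather than set a local flag:

// Feature file (Gherkin), e.g. Login.feature:
//   Scenario: Valid user can log in
//     Given a registered user "MyUsername" with password "password123"
//     When they log in
//     Then they should see the dashboard

using TechTalk.SpecFlow;
using NUnit.Framework;

[Binding]
public class LoginSteps
{
    private string _user;
    private string _password;
    private bool _loggedIn;

    [Given(@"a registered user ""(.*)"" with password ""(.*)""")]
    public void GivenARegisteredUser(string user, string password)
    {
        _user = user;
        _password = password;
    }

    [When(@"they log in")]
    public void WhenTheyLogIn()
    {
        // Placeholder: call into the application under test here.
        _loggedIn = _user == "MyUsername" && _password == "password123";
    }

    [Then(@"they should see the dashboard")]
    public void ThenTheyShouldSeeTheDashboard()
    {
        Assert.IsTrue(_loggedIn);
    }
}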