J2ME coverage tools - testing

I need to estimate the code coverage of a test set.
The tests are run on a J2ME application, on a physical device.
MIDP 2.1, CLDC 1.1 and JSR-75 FileConnection are available.
As J2ME is (roughly) a subset of J2SE, tools that rely on java.io.File (like those listed in the only answer so far...) cannot be used.
This is mainly to identify pieces of code the tests do not touch at all.
It would also be nice to be able to combine the report data arbitrarily afterwards, so I can see how much a new test actually increases coverage.
Are there any alternatives to Cobertura4j2me?

There are lots of Java code coverage tools. Many of them rely on JVM features that are not available on embedded systems because of space limitations.
One that uses only an additional boolean array in which to hold the coverage data can be found at
http://www.semanticdesigns.com/Products/TestCoverage/JavaTestCoverage.html
You have to code an additional routine that dumps that array out of your embedded device into a file on a PC, but that's generally a pretty easy task (e.g., several hours' work, done once).
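Since you mention JSR-75 is available, one workable approach for that dump routine is to write the array to the memory card with a FileConnection and copy the resulting file to a PC afterwards. Here is a rough, hedged sketch; the array name coverageFlags and the file URL are assumptions, since the real array depends on what the instrumentation tool generates and the file-system roots depend on your device.

    // Sketch only: writes an instrumentation boolean array to the memory card
    // via JSR-75 so it can later be copied to a PC. Names and paths are assumptions.
    import java.io.IOException;
    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.file.FileConnection;

    public final class CoverageFileDump {

        public static void dump(boolean[] coverageFlags) throws IOException {
            FileConnection fc = (FileConnection)
                    Connector.open("file:///SDCard/coverage.dat", Connector.READ_WRITE);
            try {
                if (!fc.exists()) {
                    fc.create();
                }
                OutputStream out = fc.openOutputStream();
                try {
                    // One byte per probe: 1 = executed, 0 = never reached.
                    for (int i = 0; i < coverageFlags.length; i++) {
                        out.write(coverageFlags[i] ? 1 : 0);
                    }
                    out.flush();
                } finally {
                    out.close();
                }
            } finally {
                fc.close();
            }
        }
    }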

Here's a slew of alternatives.
http://java-source.net/open-source/code-coverage

Does anyone use Sikuli as a testing tool?

Hi, I have a Swing application to test, and Sikuli looks like a nice tool for the job, but I am a little worried about the size of the community, whether it is still being actively developed, and whether it is being used by other companies.
Do you use it?
For what?
Is it stable?
Is it the best tool for the job you needed?
I use it in my company, too.
It can be used quite easily for GUI tests that are not too complex.
Sikuli saw no development over the last year, but development is now picking up again.
Questions in the Sikuli FAQ section on Launchpad are answered quickly, although the community is not that big.
In my company, Sikuli is used for GUI testing that was previously done by human testers.
It saves some time, but not everything can be automated with Sikuli; for example, the OCR functionality is not dependable (though it will be updated from Tesseract 2.04 to 3 in the near future).
For my job it was the best tool: it is the only open-source (i.e. free) tool I found that provides screenshot-based automation, integrates with other systems such as CI servers, and is programmable with Java and Python, which makes unit testing with JUnit or PyUnit straightforward.
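For example, a minimal JUnit test driving an application through Sikuli's Java API might look like the sketch below; this assumes sikuli-script is on the classpath, and the image file names are placeholders for screenshots captured from your own application.

    // Hypothetical sketch of a screenshot-driven test using Sikuli's Java API.
    import org.junit.Test;
    import org.sikuli.script.FindFailed;
    import org.sikuli.script.Screen;

    public class LoginScreenTest {

        @Test
        public void loginButtonOpensMainWindow() throws FindFailed {
            Screen screen = new Screen();

            // Click the username field (matched by screenshot) and type a value.
            screen.click("images/username_field.png");
            screen.type("testuser");

            // Click the login button and wait up to 10 seconds for the main window.
            screen.click("images/login_button.png");
            screen.wait("images/main_window_title.png", 10);
        }
    }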
Hope I could help.
Yes we use it in-house for testing. It is actively supported. I have reported bugs in Sikuli and have had tickets and workarounds suggested within days with the bugs fixed in the next revision.
It is quite stable. The problems I have encountered typically come from not specifying images correctly and the program selecting an incorrect area of the screen.
One of our more unique uses was creating a set of automated bench tests for a legacy embedded system. The system was written in assembly and had no unit testing capabilities. It communicated with a custom legacy PC application. Rather than try to locate the PC source code, reverse engineer the design, and then write some meaningful bench tests, we created a number of Sikuli scripts to interface with the PC app. It saved weeks of development.
Yes, we use it for automating GUI tests. It's used mostly for old systems that were developed with no test-driven back end (i.e. no testing API).
We test some very complex tools, including a debugger, using Sikuli.
We tend not to use the Sikuli IDE though.

How to do code coverage on embedded

I am writing a project for a non-POSIX embedded system, so I cannot use the gcc option --coverage (I don't have read or write). What else can I do to produce gcov-like output? I do have an output function.
This is most easily done with a processor that has embedded trace, a board design that exposes the trace port, and a suitable hardware debugger with its associated software. For example, many Cortex-M based devices include ARM's embedded trace macrocell (ETM), and this is supported by Keil's uVision IDE and ULINK-Pro debugger to provide code coverage and instruction/source level trace as well as real-time profiling. Hardware trace has the advantage that it is non-intrusive - the code runs in real-time.
If you do not have the hardware support, you may have to resort to simulation. Many tool-chains include an instruction level simulator that will perform trace, code-coverage, and profiling, but you may have to create debug scripts or code stubs to simulate hardware to coerce the execution of all paths.
A third alternative is to build the code on a desktop platform with stubs to replace target hardware dependencies, and perform testing and code coverage on that. You have to trust that the target C compiler and the test system compiler both translate the source with identical semantics. The advantage here is that the debug tools available are often superior to those available to embedded systems. You can also test much of your code before any hardware is available, and in most cases execute code much faster, possibly allowing more extensive testing.
Not having a POSIX API does not preclude using GCC, it merely precludes using the GNU C library. On embedded systems without POSIX, alternative C libraries are used such as Newlib. Newlib has a system porting layer where I/O and basic heap management are implemented.
Disclaimer: The company (Rapita Systems) I work for provides a code coverage solution aimed at embedded applications.
Because embedded systems bring their own, special and widely varying requirements, the "best" solution for code coverage also varies widely.
Where you have trace-based devices, like ARM chips with ETM or NEXUS-enabled parts, you can perform coverage without instrumentation via debuggers.
Otherwise, you are most likely faced with an instrumentation-based solution:
For RAM-limited targets, a good approach is to write the instrumentation data to an I/O port.
Alternatively, you can record the instrumentation in a RAM buffer and use a wide variety of means to extract it from the target.
Of course, lots of different flavours of code coverage are also available: function, statement, decision/branch, MC/DC.
Our family of C/C++ test coverage tools instruments the source code, producing a program you compile with your embedded compiler that will collect test coverage data into a "small" data structure added to the program. This works with various dialects including ANSI, GCC, Microsoft and GreenHills.
You have to export that data structure from the embedded execution context to a file on a PC; this is often easy to do with a spare serial or parallel port and a small bit of custom code specific to your port. The tools then provide test coverage views and summaries from the resulting files.
So, you can use these tools to collect test coverage data from your embedded system, in most practical circumstances.
If your embedded target is supported by GCC-based cross-toolchains, you may find my blog post useful.
The main idea is that you compile your code with the appropriate gcov options and then build up the coverage information in memory (what would normally end up in .gcda files). You can then place appropriate breakpoints in GDB and dump this information over your debug link (serial, JTAG, whatever).
Have a look at my blog post - I describe things in great detail.

Code Coverage tool for BlackBerry

I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests.
I've tried using Cobertura for J2ME, but after days of wrestling with it I failed to get any results. (I believe the instrumentation is undone by the RAPC compilation.) And despite this message, the project seems to be dead.
I've looked at JInjector but the project seems very incomplete. There is little (if any) documentation and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail.
I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE.
I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects.
Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them?
If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?
I can't speak for Cobertura or JInjector, because I don't know how they collect test coverage probe data.
What is critical is how this data is captured (does it need Java runtime support only available in standard Java VMs?) and how it is exported to the test coverage display/report generation tools.
Our SD Java Test Coverage tool instruments your source code; at runtime this produces an array of native Java booleans representing the coverage data, without the need for any special VM support. Normally, as your application exits, a TCVdump method provided with the test coverage tool exports this array directly to a file, which is then used by the test coverage display mechanism.
Java (and the other programming languages used) in embedded systems often requires custom methods to extract the test coverage data. You might need to code a special dump procedure (in Java) to write out that boolean array to an accessible place. Our experience with building such custom dump procedures is that they are generally pretty simple (a few dozen lines); the real trick is deciding how/where to put the data so that it can be easily moved to the target file. Mostly this is just a peculiar pair of copies, the first of which copies the boolean array to some staging location, and the second of which writes the staged data into the destination file. (The standard TCVdump method is provided in source form to enable this kind of customization.)
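As a hedged illustration of that pair of copies on a handset where writing a local file is awkward, the sketch below stages a boolean coverage array into a byte buffer and streams it over a plain TCP socket (via the Generic Connection Framework) to a PC that saves it to disk. The array name, host and port are placeholders, and on BlackBerry the connection URL may need device-specific suffixes.

    // Sketch only: two-stage copy of coverage flags off the device over a socket.
    import java.io.IOException;
    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.SocketConnection;

    public final class CoverageSocketDump {

        public static void dump(boolean[] coverageFlags) {
            // First copy: stage the booleans into a compact byte buffer.
            byte[] staged = new byte[coverageFlags.length];
            for (int i = 0; i < coverageFlags.length; i++) {
                staged[i] = coverageFlags[i] ? (byte) 1 : (byte) 0;
            }

            // Second copy: stream the staged buffer to a PC that writes it to a file.
            SocketConnection conn = null;
            OutputStream out = null;
            try {
                conn = (SocketConnection) Connector.open("socket://192.168.0.10:9000");
                out = conn.openOutputStream();
                out.write(staged);
                out.flush();
            } catch (IOException e) {
                // A real harness would log or retry; a sketch just swallows the error.
            } finally {
                try { if (out != null) out.close(); } catch (IOException ignored) {}
                try { if (conn != null) conn.close(); } catch (IOException ignored) {}
            }
        }
    }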
While I haven't specifically looked at BlackBerry, if you can write the data anywhere, you can pretty much be assured you can achieve this. We've had success with other embedded hand-set systems, such as Symbian, doing this.
If you want a complete overview of how to generally instrument code for test coverage following this strategy, see this paper: Branch Coverage for Arbitrary Languages Made Easy
I was actively involved with JInjector while working at Google. We were able to use it to successfully obtain code coverage for BlackBerry applications. The application lifecycle for BlackBerry apps is less predictable than J2ME, and we found we had to tweak the application code to ensure the coverage data was gathered. I didn't personally work on the BlackBerry apps; several other engineers did. I'd hoped we'd create an example BlackBerry application and make it available on the JInjector site, but events and life got in the way.
If you would be willing to provide a sample blackberry apps with some unit tests, I'd be willing to spend a few hours trying to help you get the code coverage working. I'm not actively working with either J2ME or Blackberry (I'm currently working on Android apps when I have time to experiment with mobile) so I'm quite rusty. I have a day job that doesn't involve much mobile test automation, however I continue to work on ways to improve the test automation for mobile apps e.g. http://code.google.com/p/mwta/downloads/list for Android Test Automation.
I'm julianharty at gmail.com

Most appropriate platform independent development language

A project is looming whereby some code that I will be writing may be deployed on any hardware that potential clients happen to have. It's a business application that will be running 24/7, so I envisage that most of the host machines will be server-type boxes, but smaller clients might, for example, just have a simple PC.
A few more details about the code I will be writing:
There will be no GUI.
It will need to communicate with another bespoke 'black box' device over an Ethernet network.
It will need to communicate with a MySQL database somewhere on the network.
I don't have any performance concerns as a) the number of communications with the black box will be small, around 1 per second, and the amount of data exchanged will be tiny (around 1K each time), b) the number of read/writes with the database will be small, around 5 per minute, and again the amount of data exchanged will be tiny and c) the processing that needs to be performed is fairly simplistic.
Nothing I'm doing is very 'close to the metal' so I don't want to use languages that are too low level. Ease of development and ease of deployment are my main priorities.
I'm not expecting there to be a perfect solution so I can live with things like, for example, having to have slightly different configuration files for Windows machines than for Linux boxes etc. I would like to avoid having to compile the software for each host machine if possible though.
I would value your thoughts as to which development language you think is most suitable.
Cheers,
Jim
I'd go with a decent scripting language such as Python, Perl or Ruby personally. All of those have decent library support, can communicate easily with both local and remote MySQL databases and are pretty platform independent.
The first thing we need to know is what language skills you already have? This is likely to be a fairly big determiner of what choice you would ideally make.
If I was doing this I'd suggest Java for a couple of reasons:
It will run almost anywhere and meet the requirements you've outlined.
It's not an esoteric language, so there will be plenty of developers.
I already know how to program in it!
Probably the most extensive library ecosystem of any of the development platforms.
Also note that you could write it in another language on the JVM if you're more comfortable with Ruby or Python.
Sounds like Perl or Python would fit the bill perfectly. Which one you choose would depend on the expertise of the people building and supporting the system.
On the subject of scripting languages versus Java, I have been disappointed with developing command-line tools using Java. You can't directly execute them; you have to (1) compile them and (2) write a shell script to execute the JAR file, and this script may differ between platforms. I recommend Python because it runs anywhere and it's got a great SQL library, mysql-python. The library is ready to use on Windows and Linux. Python also has a lot less boilerplate; you'll write fewer lines of code to do the same thing.
EDIT: when I talked about JARs being executable or not, I was talking about whether they are directly executable by the OS. You can, of course, double-click on them to run them if your file manager is set up to do so. But when you're in a terminal window and you want to run a Java program, you have to type "java -jar myapp.jar" instead of the usual "./myapp.jar". In Python one just runs "./myapp.py" and doesn't have to worry about compiling or class paths.
If all platforms are standard PCs (or at least run Linux), then Python should be considered. You can compile it yourself if no package exists for your version. Also, you can strip the standard library easily from things that aren't available and which you don't need (sound support, for example).
Python doesn't need lots of resources, it's easy to learn and read.
If you know Perl, you can try that. If you don't use Perl on a daily basis, then don't. The Perl syntax is hard to remember and after a week, you'll wonder what the code did, even if you wrote it yourself.
Perl may be of help to you as it is available for many platforms and you can get almost any functionality by simply installing modules from CPAN.
Python or Java. They both are easy to deploy on both the server environments and the desktop environments you mention - i.e., Linux/Solaris and Windows.
Perl is also a nice choice, but it depends on how well you know Perl, how well other people that will maintain your code know Perl, and number of desktop users that are savvy enough to handle an install of the Windows Perl version(s).
As Java supports Python via Jython, I'd go with a JVM requirement myself, but I'd personally go with a Java application all the way for such a system you describe.
I would say use C or C++. They are platform independent, though you will have to compile for each platform.
Or use Java. That runs in a virtual machine, so it is truly cross-platform and not as low-level as C.

What is a good regression testing framework for software applications?

I'm looking for a regression test framework that I can add tests to. The tests could be any sort of binaries that poke an application.
This really depends on what you're trying to do, but one of the features of the new Test::Harness (disclaimer: I'm the original author and still a core developer) is that if your tests output TAP (the Test Anything Protocol), you can use Test::Harness to run test suites written in multiple languages. As a result, you don't have to worry about getting "locked in" to a particular language because that's all your testing software supports. In one of my talks on the subject, I even give an example of a test suite written in Perl, C, Ruby, and HTML (yes, HTML -- you'd have to see it).
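Because TAP is just lines of text on standard output, even a plain Java program can take part in such a suite. The following is a hypothetical sketch of a test that emits TAP so a TAP harness such as prove/Test::Harness can run it alongside tests written in other languages.

    // Sketch: a stand-alone Java test that speaks TAP on stdout.
    public class TapSmokeTest {
        public static void main(String[] args) {
            System.out.println("1..2");  // the plan: two test points follow

            boolean additionWorks = (2 + 2 == 4);
            System.out.println((additionWorks ? "ok" : "not ok") + " 1 - addition works");

            boolean comparisonWorks = "foo".equals("foo");
            System.out.println((comparisonWorks ? "ok" : "not ok") + " 2 - string comparison works");
        }
    }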
Just thought I would tell you guys what I ended up using...
QMTest: http://mentorembedded.github.io/qmtest/
I found QMTest to fulfill my needs. Its extensible framework allows you to write very flexible test classes. These test classes can then be instantiated into large test suites to do regression testing.
QMTest is also very forward-thinking: it allows for weak test dependencies and the creation of test resources. After a while of using QMTest, I started writing better-quality tests. However, like any other piece of complex software, it requires some time to learn and understand the concepts; the API is documented and the User Manual gives a good introduction. With some time in hand, I think QMTest is well worth it.
You did not indicate what language you are working in, but the xUnit family is available for a lot of different languages.
/Allan
It also depends heavily on what kind of application you're working on. For a command-line app, for example, it's probably easy enough to just create a shell script that calls it with a whole bunch of different options and compares its results to a previously known stable version, warning you if any of the output differs so that you can check whether the change is intentional or not.
If you want something more fancy, of course, you'll probably want some sort of dedicated testing framework.
I assume you are regression-testing a web application?
There are some tools in this kb article from Microsoft
And if I remember correctly, certain editions of Visual Studio also offer their own flavor of regression testing tools.
But if you just want a unit testing framework, the xUnit family does it pretty well.
Here's JUnit and NUnit.