I'm in charge of the automation of our builds, tests, etc. in my company. We are very much a multi-platform shop: we compile .NET code, Java for Android, and Xcode projects for iPhone applications. We run a build on every check-in. All of our automation is done with a combination of Jenkins, NAnt and Ant. We have a project coming up to enforce our code standards so that variable naming, indentation, etc. are consistent within each code base.
To this end, I'm looking to add code standard enforcement to the check-in policy. I would like either a pre-commit hook in SVN or a tool that runs during the check-in build and fails the build on a violation. The problem I am finding is that every tool (Checkstyle, StyleCop, etc.) is really designed for one language. I'd prefer not to have to maintain three separate tools. Is there a good multi-language tool that I can use for this purpose?
There's at least one such tool: Coverity. It is extremely powerful, expensive and slow.
That said, I personally would pick a tool for each language separately. You're running automated checks to discover errors, and you may find that tools focused on a single language uncover more errors, faster and more cheaply.
Also, you can significantly reduce costs by using the same tool in the headless build that developers can run rapidly or continuously in their IDE.
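To make the "fails the build on violation" idea concrete, here is a minimal sketch (not from the original answer) of a build step that shells out to the Checkstyle command-line jar and propagates a non-zero exit code; the jar path, ruleset path, and source directory are assumptions to adapt to your layout:

import java.io.IOException;

// Sketch: run Checkstyle as an external process during the check-in build
// and fail the build if it reports any violation.
public class StyleGate {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "java", "-jar", "tools/checkstyle.jar", // assumed jar location
                "-c", "config/checkstyle.xml",          // assumed ruleset
                "src/")                                 // assumed source root
                .inheritIO()                            // stream violations to the build log
                .start();
        int exit = p.waitFor();
        if (exit != 0) {
            System.err.println("Code standard violations found; failing the build.");
            System.exit(exit);                          // non-zero exit fails the CI step
        }
    }
}

The same pattern works for any per-language checker, which is one way to live with three tools behind a single build step.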
I am new to testing. The commonly used terms "framework" and "tool" confuse me a lot. Can anyone please explain the difference between a framework like STAF (Software Testing Automation Framework) and a tool like Selenium?
Also, how do I select a tool for a particular framework? What criteria are used for the selection?
Brief explanations are welcome!
Tool:
Simply put, a tool is a piece of software. In the case of test automation, tools are software that let you automate your tests of an application. There are many test automation tools that you can choose from depending on your requirements. Some examples are Selenium, UFT, Visual Studio Coded UI Tests, Jamo Solutions M-eux Test, T-Plan Robot, Telerik Test Studio, etc.
Often, you'll have to write tests in the tools using a supported programming language. For instance, testers using UFT need to code in VBScript, while those using Visual Studio can code in both VB.NET and C#. However, some testing tools (like Telerik Test Studio) let you write script-less tests, where your tests consist of a bunch of easily understandable keywords, not code.
Framework:
The most popular test automation tools, like Selenium and Visual Studio, provide all the basic features you require to build your own tests. However, they do not provide ready-made features (like reporting and exception handling) for testing. This requires the creation of a "framework", which is nothing but a collection of code, written using a tool of one's choice, that makes testing an application easy. Simply put, a framework is what you create with a tool (or a collection of tools) to test your application.
A typical framework consists of two parts: test scripts and function libraries. Test scripts are the pieces of code that are executed to perform actions on the application under test (AUT). Function libraries are classes consisting of important functions that are called by your test scripts. These can include timing functions, reporting/logging functions, exception-handling functions, data-communication functions, etc.
You can also use an external database to pass test data to your test scripts at run-time instead of hard-coding it in your test scripts. Frameworks that employ external databases are called data-driven frameworks. The external database can be of your choice, be it a SQL Server database, an XML file or a simple Excel spreadsheet. Data-driven frameworks make use of APIs or include custom-made classes that let you communicate with the database to transfer data.
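As a hedged illustration of the data-driven idea (the file name and column layout below are invented for the example), a framework class in Java might feed each row of an external file to the test logic:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Data-driven sketch: credentials come from an external file, not the script.
public class DataDrivenLoginTest {
    public static void main(String[] args) throws IOException {
        // testdata.csv is a placeholder; each line holds "username,password"
        try (BufferedReader reader = new BufferedReader(new FileReader("testdata.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                if (fields.length < 2) continue;    // skip malformed rows
                runLoginTest(fields[0], fields[1]); // one test run per data row
            }
        }
    }

    static void runLoginTest(String username, String password) {
        // The real steps against the AUT would go here.
        System.out.println("Running login test for user: " + username);
    }
}

Changing or adding test data then means editing the file, not the code.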
Another type of framework is the keyword-driven framework. These frameworks are used in long-term test automation projects that require the scripting of thousands of test cases. Their main objective is to reduce the time taken to script a test case by reusing code that has already been written. They often include very strong function libraries that enable the scripting of test cases using just predefined keywords. For example, common actions on an application, like login and logout, are performed by one-line calls like:
Actions.Login();
and
Actions.Logout();
where Actions is a function library that consists of the Login() and Logout() functions. This massively reduces the script size and the long-term maintenance requirements of the test scripts, among other benefits.
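For illustration only (not from the original answer), such an Actions library might wrap Selenium WebDriver calls like this; the element IDs and the static-driver design are assumptions, and the method names follow the example above:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical keyword library: each method hides several low-level steps.
public final class Actions {
    private static WebDriver driver; // assumed to be initialized by the framework

    public static void setDriver(WebDriver d) { driver = d; }

    public static void Login() {
        driver.findElement(By.id("username")).sendKeys("MyUsername"); // IDs are assumptions
        driver.findElement(By.id("password")).sendKeys("password123");
        driver.findElement(By.id("login")).click();
    }

    public static void Logout() {
        driver.findElement(By.id("logout")).click();
    }
}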
Of course, you can either build a test automation framework and use it for your own application or create a generic test automation framework and make it available to the testing community for everyone to use, which is what STAF is.
Selection of testing tools:
To address your second question: there is no straightforward answer to it. A number of criteria can affect your decision, but in the end it all comes down to your requirements and the requirements of your AUT.
If it is a Windows desktop app, you can use Coded UI Tests in Visual Studio.
If it is a web application, you can use Selenium, UFT, Visual Studio or Test Studio.
If it is a mobile app, you can use Appium, Jamo Solutions M-eux Test or T-Plan Robot.
If you want to test your mobile app over a large number of devices and platforms, you can use cloud-based tools like Sauce Labs, Perfecto Mobile or DeviceAnywhere.
If you are short on budget, you'll be better off using open-source tools over commercial tools, and so on.
Application testing is a huge industry now, and there is no dearth of testing tools available on the market. You will find the perfect tool for you if you know what you want and do some research on Google.
I will try to answer with what I believe people normally mean by these terms; let's start with the simpler term: a tool.
A tool like Selenium is what actually does the automation. It has an API that will work for pretty much anything it covers (in this case, websites), but it knows nothing about how the website you want to test works; this means it deals with low-level constructs such as elements on a page and clicks.
A framework normally just wraps a tool to make it easier to write a test by imparting knowledge of your application; a standard example is login.
Say you want a test that checks that when you enter a correct username and password you get access to the application. Using just Selenium, it would look something like:
driver.findElement(By.id("username")).sendKeys("MyUsername");
driver.findElement(By.id("password")).sendKeys("password123");
driver.findElement(By.id("login")).click();
That's pretty simple, but as you can guess, login is going to be used a lot across your tests, so it makes sense to move it into a place that makes it easier to reuse (both from a less-code standpoint and for maintainability). This is where a framework comes into play; with Selenium that will normally be page objects (see the sketch below).
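A minimal page-object sketch (an illustration under the same assumed element IDs, not code from the original post): the login knowledge moves into one class, so tests call a single method and only this class changes when the page does.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for the login page: tests reuse this instead of raw element lookups.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}

A test then reads new LoginPage(driver).loginAs("MyUsername", "password123"); and if the login page changes, only this class needs updating.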
Based on my understanding:
TOOLS
We "USE" tools to meet our objective (can be own self or your small groups of team).
Example: We use Selenium IDE as a tools helps us to automate some repetitive steps to do certain verification during our smoke test.
FRAMEWORK
We "DESIGN" a framework to meet the organization mission.
Things to consider when we design the framework including:
Maintainability
Reusability
Data Driven
Reporting
Scheduled runs through CI tools like Jenkins
Example: We design a test automation framework using WebDriver + Java + TestNG + Ant, with the objective of assessing the stability of our current code base. The tests are triggered and run by Jenkins on a daily basis, and an SSRS report is captured each time the test run finishes. Stakeholders can review the daily code-stability report anytime they need it.
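As a hedged sketch of that stack (the class, URL, and assertion below are invented for illustration), a minimal TestNG test that Jenkins could trigger daily through an Ant target:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

// Minimal TestNG smoke test; a CI job runs it on a schedule and archives the report.
public class SmokeTest {
    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new FirefoxDriver();
    }

    @Test
    public void homePageLoads() {
        driver.get("http://example.com"); // placeholder URL
        String title = driver.getTitle();
        Assert.assertTrue(title != null && title.length() > 0,
                "Home page should have a title");
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}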
Hope that can help you :D
Hi, I have a Swing application to test and I found Sikuli to be a nice tool for it, but I am a little worried about the size of the community, whether it is being continually developed, and whether it is being used by other companies.
Do you use it?
For what?
Is it stable?
Is it the best tool for the job you needed?
I use it in my company, too.
It can be used quite easily for not-too-complex GUI tests.
Sikuli was not developed over the last year, but development is now increasing again.
Questions in the Sikuli FAQ section on Launchpad are answered quickly, although the community is not that big.
In my company, Sikuli is used for GUI testing that was previously done by human testers.
It saves some time, but not everything is automatable with Sikuli; e.g., the OCR functionality is not dependable (though it will be updated from Tesseract 2.04 to 3 in the near future).
For my job it was the best tool because it is the only open-source (i.e. free) tool I found that provides screenshot-based automation, can be integrated with other systems like CI systems, and is programmable in Java and Python, which makes easy unit testing possible with JUnit or PyUnit.
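To illustrate that last point, here is a minimal sketch (not from the original answer; the image file names are placeholders) of a GUI check written as an ordinary JUnit test against Sikuli's Java API:

import org.junit.Test;
import org.sikuli.script.FindFailed;
import org.sikuli.script.Screen;

// Screenshot-based GUI check as a plain JUnit test.
public class LoginButtonTest {

    @Test
    public void clickingLoginOpensMainWindow() throws FindFailed {
        Screen screen = new Screen();
        screen.click("login-button.png");         // placeholder screenshot of the button
        // wait() throws FindFailed, failing the test, if the image never appears
        screen.wait("main-window-title.png", 10); // placeholder screenshot, 10 s timeout
    }
}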
Hope I could help.
Yes we use it in-house for testing. It is actively supported. I have reported bugs in Sikuli and have had tickets and workarounds suggested within days with the bugs fixed in the next revision.
It is quite stable. The problems I have encountered typically come from not specifying images correctly and the program selecting an incorrect area of the screen.
One of our more unique uses was creating a set of automated bench tests for a legacy embedded system. The system was written in assembly and had no unit testing capabilities. It communicated with a custom legacy PC application. Rather than try to locate the PC source code, reverse engineer the design, and then write some meaningful bench tests, we created a number of Sikuli scripts to interface with the PC app. It saved weeks of development.
Yes, we use it for automating GUI tests. It's used mostly for old systems that were developed with no test-driven back end (i.e. no testing API).
We test some very complex tools, including a debugger, using Sikuli.
We tend not to use the Sikuli IDE though.
I have been working on test automation for the last few months and have been using a tool named TestComplete. But I have noticed that the tool does not seem to matter much in the field of automation. The only thing you expect from an automation tool is its ability to spit out the recognition strings for the different controls used in the application under test.
Apart from this, you will always have to build an automation framework that serves your needs by writing code.
So my question is: is my thinking correct that automation tools do not matter much, in the sense that you can use any tool to get your automation running? Or do the tools really matter? (Please ignore the cost factor of the tools.) Also, if I need to learn a new automation tool, what do I concentrate on, and how do I go about learning it? In short, what exactly does "learning a tool" mean?
My 3 best reasons for choosing which tool to use:
It works. This is important; not all tools work in all scenarios, e.g. Flash, Silverlight, Adobe AIR, legacy apps with no automation support, etc.
Whole-team skills. This includes not only testers but also developers. Test automation shouldn't be an isolated effort; developers should collaborate on it too. This is far easier when dev and test are using the same language/platform.
Price. It doesn't have to be free (but it could be), but of course it's an important factor.
Personally, we use the same test runner as the one for the unit tests, along with extra third-party automation pieces that do the plumbing for you.
Some additional thoughts on why the tool is important:
Community - What's the user community like? Are there a lot of user-generated resources out there to help?
Support - (for vendor tools) What's customer support like? Do they fix problems quickly? Is it easy to find solutions to common problems?
Extensibility - Often in test automation, you'll need to roll your own or code work-arounds, if the tool does not support a particular type of object in your application. How easy is it to extend the product? What programming language does the tool use? What kind of support do you get from the IDE?
Another piece of advice: sometimes you'll need wrapper classes around certain frameworks. We were using WatiN, which was really good in its time, but it lacked Chrome support (Chrome had only a small market share then). The thing that killed WatiN for us was its inability to cope with new Firefox releases: Firefox 8 was out, and we had to run our tests on Firefox 3.6...
Selenium was the solution, but it had a totally different logic, and we already had more than a thousand tests.
So we had to create a wrapper class around Selenium to "fake" that it was WatiN. We had some issues, but we only had to rewrite some special cases... and not all the tests.
The point is, sometimes support for a framework simply ceases to exist. But an in-house framework that focuses on what the test actually does, instead of how it works, will save you in this situation.
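A hedged sketch of that idea (the original wrapper targeted WatiN's .NET API; the interface and names here are invented, in Java, for illustration): tests depend on a small browser interface, and only the adapter knows which tool sits underneath.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Tests program against this interface, never against a concrete tool.
interface Browser {
    void goTo(String url);
    void typeById(String id, String text);
    void clickById(String id);
}

// Adapter: the only place that knows Selenium is underneath.
class SeleniumBrowser implements Browser {
    private final WebDriver driver;

    SeleniumBrowser(WebDriver driver) {
        this.driver = driver;
    }

    public void goTo(String url) {
        driver.get(url);
    }

    public void typeById(String id, String text) {
        driver.findElement(By.id(id)).sendKeys(text);
    }

    public void clickById(String id) {
        driver.findElement(By.id(id)).click();
    }
}

If the underlying tool has to change again, only a new Browser implementation is needed; the thousand-plus tests stay untouched.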
A variety of test automation frameworks and test automation tools are already available on the market, so I would not recommend building your own test automation framework at all.
As far as the selection of automation tools is concerned, I would say it does matter, on the following basis:
Support: how much support you have when choosing an automation tool for your project.
Community: how big is the community using that tool, and how responsive is that community about sharing knowledge?
Pricing (proprietary or open source): last but not least is the pricing of the automation tool that you are planning to introduce in your project.
A QA team's expertise also matters sometimes; for example, whether your QA team has developer or semi-developer skills or is a non-technical QA team.
Regarding the automation framework, there are many automation frameworks already available on the market, so there is no need to reinvent the wheel; the selection of an automation framework mostly depends on your choice of scripting language.
For example, if you choose Python as your scripting language, you have the option of unittest, pytest, etc. as the automation framework.
In the case of Java, you have the option of JUnit or TestNG as the automation framework.
And so on, based on your selection of scripting language.
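For instance, a minimal JUnit 4 test in Java looks like this (a sketch; the class name and assertion are stand-ins for real test logic):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal JUnit 4 example: the framework discovers and runs @Test methods for you.
public class CalculatorTest {

    @Test
    public void additionWorks() {
        assertEquals(4, 2 + 2); // trivial check standing in for real test logic
    }
}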
Finally, when it comes to structuring your automation framework, it depends on many things, such as the following:
The nature of your project
Single product vs multiple products
and many more...
Check out an example of a multi-product automation project directory structure: https://github.com/pancht/python-selenium-framework
I hope this goes some way toward answering your question.
Thanks,
Panchdev Singh Chauhan
I just started a new Haskell project and wanted to set up a good testing workflow from the beginning. It seems like Haskell has a lot of excellent and unique testing tools and many different ways to integrate them.
I have looked into:
HUnit
QuickCheck
benchpress
HPC
complexity
Which all seem to work very well in their domains, but I'm looking for a comprehensive approach to testing and was wondering what has worked well for other people.
Getting unit testing, code coverage, and benchmarks right is mostly about picking the right tools.
test-framework provides a one-stop shop to run all your HUnit test cases and QuickCheck properties from one harness.
Code coverage is built into GHC in the form of the HPC tool.
Criterion provides some pretty great benchmarking machinery.
I'll use as a running example a package that I just started enabling with unit testing, code coverage, and benchmarks:
http://github.com/ekmett/speculation
You can integrate your tests and benchmarks directly into your cabal file by adding sections for them, and masking them behind flags so that they don't make it so that every user of your library has to have access to (and want to use for themselves) the exact version of the testing tools you've chosen.
http://github.com/ekmett/speculation/blob/master/speculation.cabal
Then, you can tell cabal how to run your test suite. As cabal test doesn't yet exist -- we have a student working on it for this year's Summer of Code! -- the best mechanism we have is cabal's user hook mechanism. This means switching to a 'Custom' build with cabal and setting up a testHook. An example of a testHook that runs a test program written with test-framework and then applies hpc to profile it can be found here:
http://github.com/ekmett/speculation/blob/master/Setup.lhs
And then you can use test-framework to bundle up QuickCheck and HUnit tests into one program:
http://github.com/ekmett/speculation/blob/master/Test.hs
The cabal file there is careful to turn on -fhpc to enable code coverage testing, and then the testHook in Setup.lhs manually runs hpc and writes its output into your dist dir.
For benchmarking, the story is a little more manual: there is no 'cabal benchmark' option. You could wire your benchmarks into your test hook, but I like to run them by hand, since Criterion has so many graphical reporting options. You can add your benchmarks to the cabal file as shown above, give them separate compilation flags, hide them behind a cabal flag, and then use Criterion to do all the heavy lifting:
http://github.com/ekmett/speculation/blob/master/Benchmark.hs
You can then run your benchmarks from the command line and get pop-up KDE windows with benchmark results, etc.
Since in practice you're living in cabal anyways while developing Haskell code, it makes a lot of sense to integrate your toolchain with it.
Edit: Cabal test support now does exist. See http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/developing-packages.html#test-suites
The approach advocated in RWH ch. 11 and in XMonad is approximately:
State all the properties of the system in QuickCheck.
Show test coverage with HPC.
Confirm space behavior with heap profiling.
Confirm thread/parallel behavior with ThreadScope.
Confirm microbenchmark behavior with Criterion.
Once your major invariants are established via QuickCheck, you can start refactoring, moving those tests into type invariants.
Practices to support your efforts:
Run a simplified QuickCheck regression on every commit.
Publish HPC coverage details.
The test-framework package is really awesome. You can easily integrate HUnit and QuickCheck tests, and get executables that run specified suites only, based on command-line flags, with multiple output targets.
Testing and profiling are different beasts, though. For profiling, I'd set up a separate executable that stresses just the section you want to profile, and look carefully at the results of profiling builds and runs (compiling with -prof -auto-all and running with the +RTS -p runtime flag).
For testing, I rely on HUnit and QuickCheck properties and use the Haskell Test Framework to collect all unit tests and all QuickCheck properties automatically.
Disclaimer: I'm the main developer of the Haskell Test Framework.
I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests.
I've tried using Cobertura for J2ME, but after days of wrestling with it I failed to get any results from it. (I believe that the instrumentation is undone by the RAPC compilation.) And despite this message, the project seems to be dead.
I've looked at JInjector but the project seems very incomplete. There is little (if any) documentation and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail.
I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE.
I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects.
Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them?
If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?
I can't speak for Cobertura or JInjector, because I don't know how they collect test coverage probe data.
What is critical is how this data is captured (does it need Java runtime support only available in standard Java VMs?) and how it is exported to the test coverage display/report-generation tools.
Our SD Java Test Coverage tool instruments your source code; at runtime this produces an array of native Java booleans representing the coverage data, without need for any special VM support. Normally, this array is exported directly to a file, used by the test coverage display mechanism, by a TCVDump method provided with the test coverage tool, as your application exits.
Java (and the other programming languages used) in embedded systems often requires custom methods to extract the test coverage data. You might need to code a special dump procedure (in Java) to write out that boolean array to an accessible place. Our experience with building such custom dump procedures is that they are generally pretty simple (a few dozen lines); the real trick is deciding how and where to put the data so that it can be easily moved to the target file. Mostly this is just a peculiar pair of copies: the first copies the boolean array to some staging location, and the second writes the staged data into the destination file. (The standard TCVdump method is provided in source form to enable this kind of customization.) A minimal sketch of such a dump procedure follows.
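This sketch is invented for illustration (the real TCVdump ships with the tool, and a real BlackBerry port would use the device's own I/O API): stage the boolean probe array, then write it to an accessible file.

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch of a custom coverage-dump procedure; the path and format are placeholders.
public class CoverageDump {

    public static void dump(boolean[] probes, String path) throws IOException {
        // First copy: stage the live array so it cannot change mid-write.
        boolean[] staged = new boolean[probes.length];
        System.arraycopy(probes, 0, staged, 0, probes.length);

        // Second copy: write the staged data to the destination file.
        DataOutputStream out = new DataOutputStream(new FileOutputStream(path));
        try {
            out.writeInt(staged.length);
            for (int i = 0; i < staged.length; i++) {
                out.writeBoolean(staged[i]);
            }
        } finally {
            out.close(); // try/finally rather than try-with-resources, for old JVMs
        }
    }
}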
While I haven't specifically looked at BlackBerry, if you can write the data anywhere, you can pretty much be assured you can achieve this. We've had success doing this with other embedded handset systems, such as Symbian.
If you want a complete overview of how to generally instrument code for test coverage following this strategy, see this paper: Branch Coverage for Arbitrary Languages Made Easy
I was actively involved with JInjector while working at Google. We were able to use it successfully to obtain code coverage for BlackBerry applications. The application lifecycle for BlackBerry apps is less predictable than J2ME's, and we found we had to tweak the application code to ensure the coverage data was gathered. I didn't personally work on the BlackBerry apps; several other engineers did. I'd hoped we'd create an example BlackBerry application and make it available on the JInjector site, but events and life got in the way.
If you would be willing to provide a sample BlackBerry app with some unit tests, I'd be willing to spend a few hours trying to help you get the code coverage working. I'm not actively working with either J2ME or BlackBerry (I currently work on Android apps when I have time to experiment with mobile), so I'm quite rusty. I have a day job that doesn't involve much mobile test automation; however, I continue to work on ways to improve test automation for mobile apps, e.g. http://code.google.com/p/mwta/downloads/list for Android test automation.
I'm julianharty at gmail.com