Automated testing of an application developed using wxWidgets

Is it possible to do automated testing of an application developed using wxWidgets on the Windows platform? If so, how do I do that?

wxWidgets uses native widgets, so any automated testing solution for Win32 programs would work with wxWidgets applications. However, in my experience the typical "point-and-click" tests are not that great in practice, and it's better to write the tests for the GUI code in the same way as you do for the rest of the program.
There are two problems that need to be solved when doing this, though. The first is how to trigger various actions in the GUI; this is addressed by wxUIActionSimulator. It is not perfect, but if your application has a reasonable keyboard interface (as it should), it should be enough.
The second problem is related to control flow in GUI applications: if some action results in a modal dialog being opened, the test can't continue until the dialog is closed by the user, which is unworkable for unattended tests. wxWidgets provides the (still somewhat experimental and not yet documented) wxTEST_DIALOG macro for dealing with this; its use is explained in this comment.
Combining these two approaches allows you to write tests for real-life GUI applications, and the tests are portable rather than limited to the Windows platform.

Related

Automate accessibility testing with NVDA screen reader

I am working on implementing accessibility (for visually impaired individuals) for one of our web applications. It needs to be ARIA compliant. Right now we are testing our changes with a screen reader manually.
For example, we have a tree control in our application. I open the NVDA screen reader and then navigate through my tree nodes. NVDA speaks out:
"Node XYZ expanded" (when I expand the XYZ node with the right arrow key)
"Node XYZ collapsed" (when I collapse the XYZ node with the left arrow key)
Along with the voice output, it also writes down this text.
But all of this is manual. Now we want to set up automated test cases for the same behaviour so that any regression bugs are caught by our test cases. Does any such tool exist that we can use to automate our test cases? Any direction will be helpful.
PS: Just for the sake of comparison: we have NUnit to write test cases for our C# application. After writing the test cases, we integrate them into our build process, and any breaking change is caught when we run the build. I am looking for something similar to test our ARIA compliance and the screen reader's behavior with our web application.
I don't know of any existing tools for testing screen readers; however, there are accessibility APIs that test websites and web applications.
axe-core from Deque Systems is widely used and well-supported.
I wrote a Python package that runs automated web accessibility tests using axe-core and Selenium.
While it isn't quite what you are looking for, it does cover about 60% of accessibility guidelines, including ARIA roles and attributes. It should help with determining screen reader usability.
You could integrate axe into C#, similar to my Python package and the Java package also created by Deque.
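To make that concrete, here is a minimal sketch of what an axe check driven from an NUnit test could look like in C#. It assumes Deque's axe-core Selenium bindings for .NET (e.g. the Deque.AxeCore.Selenium NuGet package) plus Selenium WebDriver and ChromeDriver; the exact namespace and builder API may differ between package versions, and the URL is a placeholder.

```csharp
// Hedged sketch: run axe-core against a page from an NUnit test.
// Assumes the Deque.AxeCore.Selenium NuGet package (names may vary by version),
// Selenium WebDriver, and a ChromeDriver on the PATH; the URL is a placeholder.
using Deque.AxeCore.Selenium;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class AccessibilityTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp() => _driver = new ChromeDriver();

    [TearDown]
    public void TearDown() => _driver.Quit();

    [Test]
    public void TreePage_HasNoAxeViolations()
    {
        _driver.Navigate().GoToUrl("https://example.com/tree-page"); // placeholder URL

        // Injects axe-core into the page and runs its rule set, which includes
        // checks for ARIA roles, states, and properties.
        var result = new AxeBuilder(_driver).Analyze();

        Assert.That(result.Violations, Is.Empty,
            "axe-core reported accessibility violations");
    }
}
```

This will not exercise the screen reader itself, but it does give you a build-breaking check for the ARIA markup the screen reader relies on.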
I hope this helps!
It sounds like you're already performing some pretty good manual accessibility testing against your web application, which no automated testing tool is going to be able to replicate completely. That said, if you're looking to take care of any low-hanging fruit with an automated solution, like Kimberly suggested, there are several automated accessibility testing tools out there that you can relatively easily integrate into your existing web application's testing framework that might help you.
One such tool is Continuum, which doesn't have a C#-based library offering at the moment, but could be used in a separate testing framework to be run against your web application after it has already been built. This may be preferable depending on your use case, as code linters for accessibility aren't perfect and are highly language-dependent, whereas testing the HTML of your web application more closely matches the screen reader use case you say you're trying to test for. You could even integrate Continuum into your existing CI/CD process to make sure your application is tested during development as opposed to afterwards, to reduce your manual accessibility testing load.
Continuum has a few sample projects to get you started, depending on your technologies of choice. Free versions are available at webaccessibility.com if you're interested. Most of them are Java- or JavaScript-based at the moment.
I appreciate this is quite an old question, but having explored this area a lot recently, I thought it was worth updating with the state of things as of 2023, as there is now some progress in this space.
Current tooling available at the time of writing (that I'm aware of; this may not be exhaustive):
guidepup - NodeJS automation for VoiceOver and NVDA supporting all keyboard commands and getters for spoken phrases (disclaimer: I’m the author).
auto-vo - CLI for navigating sequentially through a page with VoiceOver and reporting the spoken phrases, also exports a separate Node module for some interactions with VoiceOver.
screen-reader-reader - NodeJS automation for VoiceOver and NVDA for starting, stopping, and getting spoken phrases.
web-test-runner-voiceover - NodeJS plugins for @web/test-runner to automate VoiceOver testing.
nvda-testing-driver - .NET automation for NVDA supporting all keyboard commands and getters for spoken phrases.
assistive-webdriver - NodeJS implementation of a Webdriver server that allows remote testing of screen readers (e.g. NVDA, JAWS) running in a VM.
As stated in other answers, there are also a number of static analysis tools such as axe, as well as numerous browser extensions offering similar static analysis, and companies such as Assistiv Labs offering remote environments for interacting with screen readers manually (similar to SauceLabs/BrowserStack/etc., but for screen readers, magnification, etc.; no affiliation, and I haven't used their services so can't vouch for them, simply an observation).
Worth calling out that none of these cover the full range of a11y requirements; there is more to a11y than just screen readers. A combined, layered approach including automation, manual testing, and user testing is likely preferred.

Does anyone use Sikuli as a testing tool?

Hi, I have a Swing application to test, and I found Sikuli to be a nice tool for the job, but I am a little worried about the size of the community, whether it is being continually developed, and whether it is being used by other companies.
Do you use it?
For what?
Is it stable?
Is it the best tool for the job you needed?
I use it in my company, too.
It can be used quite easily for GUI tests that are not too complex.
Sikuli was not actively developed over the last year, but development is now picking up again.
Questions in the Sikuli FAQ section on Launchpad are answered quickly, although the community is not that big.
In my company, Sikuli is used for GUI testing which was previously done by human testers.
It saves some time, but not everything is automatable with Sikuli; e.g., the OCR functionality is not dependable (though it will be updated from Tesseract 2.04 to 3 in the near future).
For my job it was the best tool, because it is the only open-source (free) tool I found that provides screenshot-based automation, can be integrated with other systems such as CI systems, and is programmable in Java and Python, which makes unit testing with JUnit or PyUnit easy.
Hope I could help.
Yes we use it in-house for testing. It is actively supported. I have reported bugs in Sikuli and have had tickets and workarounds suggested within days with the bugs fixed in the next revision.
It is quite stable. The problems I have encountered typically come from not specifying images correctly and the program selecting an incorrect area of the screen.
One of our more unique uses was creating a set of automated bench tests for a legacy embedded system. The system was written in assembly and had no unit testing capabilities. It communicated with a custom legacy PC application. Rather than try to locate the PC source code, reverse engineer the design, and then write some meaningful bench tests, we created a number of Sikuli scripts to interface with the PC app. It saved weeks of development.
Yes, we use it for automating GUI tests. It's used mostly for old systems that were developed with no test-driven back end (i.e., no testing API).
We test some very complex tools, including a debugger, using Sikuli.
We tend not to use the Sikuli IDE though.

How to perform automated tests on GUI applications?

I'm just new to software testing...
I think GUI applications are pretty difficult to automate. Some testing involves interacting with particular GUI objects in a particular sequence (e.g., clicking buttons). The interface often changes from one window to another, and timing and synchronization sometimes also pose an issue (e.g., recording mouse clicks and replaying them may go wrong).
Is there any solution for testing such applications with less human labour? Thank you for sharing your experience.
Yes, GUI apps are indeed tough to automate. Regardless of the app's technology (Swing, web, WPF, iOS), you first have to focus on automating high-value tests. Moreover, test automation shouldn't happen at just the GUI level; it should be a mix of unit, integration, and functional (GUI) tests.
Are you working on a web app? If so, have a look at great open source tools like Watir or WebDriver. (I'll also pitch Telerik's Test Studio to you; however, for full disclosure I'm their evangelist for that tool.)
Desktop applications (or mobile) bring a lot of challenges to automation, and it's totally dependent on what platform you're working with. Test Studio supports WPF, but you can also look to other commercial and a few free tools. I don't know of any tools for Swing apps, but that lack of knowledge is due to me having been out of that domain for many years. (And maybe I'm so out of it that Swing's not even the normal Java GUI toolset...)
iOS and Android are tough ones to find reliable automation tools for. I know the Frank framework/API will work on iOS (Test Studio has a free recorder in the App Store), but I don't know of any other tools that reliably support the extraordinary matrix of Droid hardware and OS versions.
Regardless of your platform and toolset, you need to learn the basic approaches for dealing with GUI testing: focus on high-value tests, learn to avoid duplication through approaches like the Page Object pattern, and learn how to deal with synchronization/timing issues in your specific application.
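As a rough illustration of the Page Object pattern and of handling synchronization with explicit waits, here is a minimal sketch using Selenium WebDriver for C# (the Selenium.WebDriver and Selenium.Support packages). The LoginPage class, the element IDs, and the dashboard-header locator are hypothetical placeholders rather than any particular application's markup.

```csharp
// Minimal sketch of the Page Object pattern with an explicit wait for
// synchronization. LoginPage, the element IDs, and "dashboard-header"
// are illustrative placeholders, not a real application's markup.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class LoginPage
{
    private readonly IWebDriver _driver;
    private readonly WebDriverWait _wait;

    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
        _wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        _wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
    }

    // All locator knowledge lives here, so a UI change is fixed in one place
    // instead of in every test.
    private IWebElement UserName => _driver.FindElement(By.Id("username"));
    private IWebElement Password => _driver.FindElement(By.Id("password"));
    private IWebElement Submit   => _driver.FindElement(By.Id("login"));

    public void LogIn(string user, string password)
    {
        UserName.SendKeys(user);
        Password.SendKeys(password);
        Submit.Click();

        // Explicit wait instead of a fixed sleep: poll until the next page's
        // header is visible, which handles timing/synchronization issues.
        _wait.Until(d => d.FindElement(By.Id("dashboard-header")).Displayed);
    }
}
```

Tests then call LogIn(...) and assert on the resulting page, without any locators or sleeps of their own.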
It's a long haul, but if you work carefully it's totally worth it.
(And fun, too, IMO.)

Testing a desktop application

I need to open a .exe application and test all of its functions, UI, etc.
I was working with WatiN and NUnit for testing a web application, but now I think WatiN is useless for this. I found NUnitForms, but I don't think that will be enough.
I have to open the application and test all the windows, buttons, etc. that appear. The application also starts minimized in the taskbar and has a drop-down menu.
How can I handle it? Thanks!
I believe you are referring to a WinForms application. Please check the links below:
The Microsoft UI Automation Library - http://msdn.microsoft.com/en-us/site/cc163288
UI Automation with Windows PowerShell - http://msdn.microsoft.com/en-us/site/cc163301
Lightweight UI Test Automation with .NET - http://msdn.microsoft.com/en-us/site/cc163864
Ideally, you want as little code in the forms as possible. If you move the functionality to separate classes, those can easily be tested using NUnit. If you must test the forms directly, NUnitForms is a reasonable tool.
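To sketch that first suggestion, here is a hypothetical example of moving the logic into a presenter class that a plain NUnit test can cover without touching the form; the ILoginView and LoginPresenter names are made up for illustration.

```csharp
// Sketch of keeping logic out of the form so it can be tested with plain NUnit.
// ILoginView and LoginPresenter are hypothetical names used for illustration.
using NUnit.Framework;

public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowError(string message);
}

public class LoginPresenter
{
    private readonly ILoginView _view;
    public LoginPresenter(ILoginView view) => _view = view;

    public bool Validate()
    {
        if (string.IsNullOrEmpty(_view.UserName) || string.IsNullOrEmpty(_view.Password))
        {
            _view.ShowError("User name and password are required.");
            return false;
        }
        return true;
    }
}

[TestFixture]
public class LoginPresenterTests
{
    // A fake view stands in for the real form, so no UI is created in the test.
    private class FakeView : ILoginView
    {
        public string UserName { get; set; }
        public string Password { get; set; }
        public string LastError { get; private set; }
        public void ShowError(string message) => LastError = message;
    }

    [Test]
    public void Validate_FailsWhenUserNameIsEmpty()
    {
        var view = new FakeView { UserName = "", Password = "secret" };
        var presenter = new LoginPresenter(view);

        Assert.That(presenter.Validate(), Is.False);
        Assert.That(view.LastError, Is.Not.Null);
    }
}
```

The form then becomes a thin shell that implements ILoginView, and only the remaining UI glue needs a tool like NUnitForms.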

Tools for automating mouse and keyboard events sent to a Windows application

What tools are useful for automating clicking through a Windows Forms application? Is this even useful? I see the testers at my company doing this a great deal, and it seems like a waste of time.
Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success.
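For reference, here is a minimal, hedged sketch of what a White-based test can look like, assuming the TestStack.White NuGet package and NUnit; the executable path, window title, and AutomationId below are placeholders.

```csharp
// Hedged sketch of driving a Windows desktop app with TestStack.White.
// The path, window title, and AutomationId are placeholders.
using NUnit.Framework;
using TestStack.White;
using TestStack.White.UIItems;
using TestStack.White.UIItems.Finders;
using TestStack.White.UIItems.WindowItems;

[TestFixture]
public class SaveButtonSmokeTest
{
    [Test]
    public void SaveButton_IsEnabledAndClickable()
    {
        // Launch the application under test and attach to its main window.
        Application application = Application.Launch(@"C:\Path\To\MyApp.exe");
        try
        {
            Window window = application.GetWindow("MyApp");

            // Locate the control by AutomationId instead of screen coordinates,
            // so the test is resilient to layout changes.
            Button save = window.Get<Button>(SearchCriteria.ByAutomationId("saveButton"));

            Assert.That(save.Enabled, Is.True);
            save.Click();
        }
        finally
        {
            application.Close();
        }
    }
}
```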
Though they're mostly targeted at automating administration tasks or shortcuts for users, AutoHotkey and AutoIt let you automate nearly anything you want as far as mouse/keyboard interaction goes.
Some of the mouse stuff can get tricky when the only way to really tell it what you want to click is an X,Y coordinate, but for automating entirely arbitrary tasks on a Windows machine, it does the trick.
Like I said, they're not necessarily intended for testing purposes, so they're not instrumented for unit test conventions. However, I use them all of the time to automate stuff that isn't testing related.
You can do it programmatically via the Microsoft UI Automation API. There's an MSDN Magazine article about it.
It integrates well with unit test frameworks and is a better option than coordinate-based script runners, because you don't have to rewrite scripts when layouts change.
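As a rough sketch of the approach, here is what a call sequence can look like using the managed UI Automation client (System.Windows.Automation, referencing UIAutomationClient and UIAutomationTypes); the executable path, window title, and AutomationId are placeholders, and the fixed sleep is only for brevity.

```csharp
// Hedged sketch of clicking a button through the Microsoft UI Automation API.
// The path, window title, and AutomationId are placeholders; a real test
// should poll for the window instead of sleeping.
using System.Diagnostics;
using System.Threading;
using System.Windows.Automation;

class UiaClickExample
{
    static void Main()
    {
        Process.Start(@"C:\Path\To\MyApp.exe");   // placeholder path
        Thread.Sleep(2000);                        // crude wait for the main window

        // Locate the application's main window under the desktop root by title.
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "MyApp"));

        // Locate the button by AutomationId rather than screen coordinates.
        AutomationElement button = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, "saveButton"));

        // Click it through the Invoke control pattern.
        var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}
```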
There are a couple out there. They all hook into the Windows API to log item clicks and then reproduce them for testing.
We're now mostly web-based (using WatiN), but we used to use Mercury QuickTest.
Don't use QuickTest; it's awful, for a tremendously long list of reasons.
This is what I was looking for.