I've started a new role. I was a front-end web developer, but I've now been moved to testing web software, or more specifically, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use and how to piece it together.
The application under test has a web interface written in Java. I have documentation of the tests to run, but I've been wondering how to go about automating them.
I've been directed to Cucumber as one of the "languages" to help with the automation. I have done some research and came across a website with a synopsis of BDD tools and testing frameworks,
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. Selenium seems to be a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to tell me what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like it was all one package was SmartBear TestComplete, and then there is what seems to be another similar application by SmartBear called TestLeft, but I think I saw that they still use Cucumber for the BDD part. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, as the UI/browser dimensions can easily change from system to system. How do I go about writing automation that can handle this and still tie into a BDD methodology?
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definition
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
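As a minimal sketch of that wiring (assuming a recent cucumber-java and selenium-java on the classpath; the step text and locators are purely illustrative):

    import io.cucumber.java.en.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginSteps {
        // In a real suite the driver would be shared or injected, not created per class.
        private final WebDriver driver = new ChromeDriver();

        // Cucumber matches the plain-language step text to this method...
        @When("the user logs in")
        public void theUserLogsIn() {
            // ...and Selenium does the actual automation against the browser.
            driver.findElement(By.id("username")).sendKeys("test-user");
            driver.findElement(By.id("login-button")).click();
        }
    }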
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page objects in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
Different page objects could also be used for different systems.
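A page object along those lines might look something like this (a sketch; the class name and element ids are made up, and in practice the steps would call these methods rather than touching locators directly):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // One place that knows how refund capabilities are exercised in the UI.
    public class RefundPageObject {
        private final WebDriver driver;

        public RefundPageObject(WebDriver driver) {
            this.driver = driver;
        }

        public void refundToOriginalCard(String receiptNumber) {
            driver.findElement(By.id("receipt-number")).sendKeys(receiptNumber);
            driver.findElement(By.id("refund-to-original-card")).click();
        }

        public String confirmationMessage() {
            return driver.findElement(By.id("refund-confirmation")).getText();
        }
    }

If the way the UI exercises refunds changes, only this class changes; the scenario steps stay the same.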
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
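As a sketch, the starting point can be as small as this: a one-line scenario and a step definition that does nothing yet.

    Feature: Refunds
      Scenario: Refund to original card
        When Fred gets a refund to his original card

    import io.cucumber.java.en.When;

    public class RefundSteps {
        @When("Fred gets a refund to his original card")
        public void fredGetsARefundToHisOriginalCard() {
            // Intentionally empty: if this scenario runs and passes,
            // Cucumber is wired up correctly.
        }
    }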
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to get the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test ids to elements in your web page.
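For example (a sketch assuming JUnit 4 and a data-test-id attribute you add to the page yourself; the step text and expected message are made up):

    import static org.junit.Assert.assertEquals;

    import io.cucumber.java.en.Then;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class RefundOutcomeSteps {
        private final WebDriver driver;

        // Injected by a DI container such as PicoContainer in a real suite.
        public RefundOutcomeSteps(WebDriver driver) {
            this.driver = driver;
        }

        @Then("the refund confirmation is shown")
        public void refundConfirmationIsShown() {
            // A dedicated test id keeps the locator stable even when layout or styling changes.
            String text = driver.findElement(
                    By.cssSelector("[data-test-id='refund-confirmation']")).getText();
            assertEquals("Refund complete", text);
        }
    }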
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
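One simple way to inject different implementations, sketched here with made-up names (selection via a system property is just one option; your framework may offer better hooks):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical interface: each target system gets its own implementation.
    interface RefundPage {
        void refundToOriginalCard(String receiptNumber);
    }

    class DesktopRefundPage implements RefundPage {
        private final WebDriver driver;
        DesktopRefundPage(WebDriver driver) { this.driver = driver; }
        public void refundToOriginalCard(String receiptNumber) {
            driver.findElement(By.id("receipt-number")).sendKeys(receiptNumber);
            driver.findElement(By.id("refund-to-original-card")).click();
        }
    }

    class MobileRefundPage implements RefundPage {
        private final WebDriver driver;
        MobileRefundPage(WebDriver driver) { this.driver = driver; }
        public void refundToOriginalCard(String receiptNumber) {
            // The mobile UI might expose the same capability through different elements.
            driver.findElement(By.id("mobile-refund")).click();
        }
    }

    class Pages {
        // Selected once, e.g. via -Dtarget.system=mobile, instead of "if"s in every step.
        static RefundPage refundPage(WebDriver driver) {
            return "mobile".equals(System.getProperty("target.system"))
                    ? new MobileRefundPage(driver)
                    : new DesktopRefundPage(driver);
        }
    }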
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. The scenarios help people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, let me point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This rather long and old post might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Note that the versions are dated, but hopefully it can give you some ideas as to what to look for.
I am not an expert on test automation, but I am currently working in this area, so let me share some ideas and hope they help you at your current stage.
We have used Selenium + Cucumber + IntelliJ for testing a web application, and TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
For the web application, we added a test mode that exposes useful details about the product and the environment, and also lets us easily trigger events by clicking buttons and entering text into a test panel.
I hope these are helpful for you.
Related
Please let me know the difference between hand-written code and recorded scripts in automation testing tools like Coded UI or any other tools.
Regards,
Raj
By 'hand-written' I'll assume you mean manually coded...
I can see a few reasons. Coding experience is brilliant. It will be a worthwhile investment if you code your own tests, because you will learn a lot about the testing framework you are using (Coded UI, Selenium, etc.) and also the language you are using (Java, C#). Manually coding these tests, using built-in framework methods, will serve you well and give you much more knowledge than an automatic playback tool would.
Automatic playback tools can produce horrific code: code that is ugly and badly named, follows no best practices, and uses unreliable location methods.
Playback tools will simply use the most simple way to find an element. This is not always the best. A classic example is XPath.
Most notably, XPath is a powerful tool; it can get you any element you need (or at least, I've never found a situation where XPath couldn't be used), but playback tools will produce horrific XPath queries based purely on position. Let's take an example.
You've got a page that has 100 feed items. You want to verify after a particular action a feed item is shown on this page, but not only is it shown but it is the first one. You cannot use ID's etc, because the markup is badly made and so you must use XPath.
A playback tool might make a very odd XPath like: //div[1]/span[2]/table[1]/tbody[1]/tr[10]/td[2]/a[text()='Test'].
Looks weird, right?
This will work a few times, but what happens if the app gets another tr element shoved at the top of the table? Now tr[10] won't be the element you want; it'll be tr[11].
Through manual coding, you can account for this: you can put in logic to work around it, as the sketch below shows. Playback tools won't.
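For example, hand-coded Selenium (Java, assuming Selenium 4; the locators and class name are made up) can wait for the item to appear and anchor on something meaningful rather than a brittle row index:

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class FeedChecks {
        // Instead of tr[10], anchor on the link text itself,
        // then check that it really is the first item in the feed.
        public static boolean isFirstFeedItem(WebDriver driver, String title) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            wait.until(ExpectedConditions.presenceOfElementLocated(
                    By.xpath("//a[text()='" + title + "']")));
            WebElement first = driver.findElement(
                    By.xpath("(//table[@class='feed']//tr)[1]//a"));
            return title.equals(first.getText());
        }
    }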
I highly recommend coding these tests yourself. You do not need a few years experience to do this, you do not need any prior programming degrees. You need time.
Playback tools will also be limited in what they can do. You want to take a screenshot when a test fails? I highly doubt a playback tool will do this; you'll need to put in the logic yourself. However, this isn't hard to do, as the sketch below shows.
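With JUnit 4 and Selenium it is only a few lines (a sketch; where you store the file and how the driver is wired up are up to you):

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.StandardCopyOption;
    import org.junit.Rule;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    public abstract class ScreenshotOnFailure {
        protected WebDriver driver; // initialised by the concrete test class

        @Rule
        public TestWatcher screenshotWatcher = new TestWatcher() {
            @Override
            protected void failed(Throwable e, Description description) {
                try {
                    File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                    Files.copy(shot.toPath(),
                            new File(description.getMethodName() + ".png").toPath(),
                            StandardCopyOption.REPLACE_EXISTING);
                } catch (Exception ignored) {
                    // Never let a screenshot problem mask the real test failure.
                }
            }
        };
    }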
There might be a business reason too: playback tools can convert manual tests into automated tests faster, but they won't be reliable. You'll need time to dedicate to making them reliable and fast, time that would be better spent coding them yourself in the first place.
My software company never did BDD or even TDD before. Testing previously meant simply trying out the new software a few days before deployment.
Our recent project is about 70% done. We also use it as a playground for new technologies, tools and ways of development. My boss wanted me to switch to it to "test testing".
I tried out Selenium 2 and RSpec. Both are promising, but how do I catch up on months of development? Further problems are:
new language
never wrote a line of code by myself
huge parts are written by freelancer
lots of fancy hacking
no documentation at all besides some source comments and flow charts
All I was able to do was cover a whole process with Selenium. This proved quite painful (but still possible), since the software was obviously never meant to be tested this way. We have lots of dynamically generated IDs, fancy jQuery and much more. I don't even know how to get started with RSpec.
So, is it still possible to apply BDD to this project? Or should I run far away and never come back?
Before you start, have you asked your boss what he values from the testing? I'd clarify that first. The main benefits of system-level BDD are in the conversations with business stakeholders before the code is written. You won't be able to get this if all you're doing is wrapping existing code. At a unit level, the main benefits are in questioning the responsibilities of classes and their behavior, which, again, you won't be able to get. It might be useful for helping you understand the value of the application and each level of code, though.
If your boss just wants to try out BDD or TDD, it may be simpler to start a new project, or even grab an existing project from someone else to wrap some tests around. If he genuinely wants to experiment with BDD over legacy code, then it may be worth persisting with what you have - @Esko's book suggestion rocks. Putting higher-level system tests around the existing functionality can help you to avoid breaking things when you refactor the lower-level code (and you will need to do so, if you want to put tests in place). I additionally recommend Martin Fowler's "Refactoring".
RSpec is best suited to applying BDD at a unit level, as a variant of TDD. If you're looking to wrap automated tests around your whole environment, take a look at Cucumber. It makes reusing steps a lot simpler. You can then call Selenium directly from your steps.
I put together a page of links on BDD here, which I hope will help newcomers to understand it better. Best of luck.
Reading the book Working Effectively with Legacy Code might be helpful. There is also a short version in PDF form.
I have been reading a lot about test-driven development and decided that I want to give it a go on a small project. For reference, I am currently reading 'Growing Object-Oriented Software, Guided by Tests'.
I understand how to unit test my application and how to unit test certain parts of the UI as well, but I am struggling to set up end-to-end tests. For example, testing that a certain path through my whole application produces the correct output (this is my basic understanding of an end-to-end test).
It's not necessary to simulate click events, but it is necessary to have some sort of connection to the UI.
Am I right in thinking that I need a combination of "Logic" tests (test without launching the app), "Application" tests (test with launching the app) and the asynchronous functionality of something like GHUnit to accomplish this?
EDIT:
After reading some of the answers below, it sounds like I'm looking for functional end-to-end testing, but I think I should give an example of a test as I imagine it.
Start the application.
Call the login function with a test user's credentials. (Note: this doesn't necessarily need UI automation.)
Verify a label on the window says "Logging In...".
After successfully verifying the user, verify the label now says "Welcome, Adam!".
KIF sounds like it could work, as it has steps to check changes in UI elements, and it looks like there is a Mac OS X branch as well. I'm sure I could also write a small class that constantly polls the UI for changes I expect and times out after a certain time, but I'm wondering if this is the right way to go about it.
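The poll-until-timeout idea is language-agnostic; a minimal sketch in Java (the language used elsewhere on this page; the names are made up, and the same shape would translate directly to Objective-C):

    import java.util.function.Supplier;

    // Generic "poll the UI until a condition holds or a timeout expires" helper.
    public final class WaitFor {
        public static boolean condition(Supplier<Boolean> check, long timeoutMillis) {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                if (Boolean.TRUE.equals(check.get())) {
                    return true;
                }
                try {
                    Thread.sleep(100); // poll interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return false; // timed out
        }
    }

Usage would then look something like WaitFor.condition(() -> "Welcome, Adam!".equals(label.getText()), 5000).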
However, perhaps I am trying to take what I am reading in 'Growing Object-Oriented Software, Guided by Tests' and trying to apply it too literally to Cocoa.
Another UPDATE:
So I've been reading the advice so far, checked the various places linked to, and started to implement something while still referencing the book. I think what I'm really trying to get at is the test-driven development part. What stood out most in said book was that they described what they wanted to happen from a user's perspective first, with acceptance tests.
I realise that solid unit testing will be necessary as soon as I start writing methods, but I was keen to write some high-level acceptance tests first, using some of the UI. I have started to write my own application "driver" class, using some similar methods to the GHAsyncTestCase ideas to help me accomplish this. Does this sound correct/useful/necessary?
I really appreciate all the comments so far and they have definitely helped me work out in my own head what I'm trying to do and what various areas of testing there are. I will finish up this question soon, as it is getting rather large, so any final advice is welcome!
I think the key thing that I got from "Growing Object-Oriented Software" was to decouple as much as possible from the UI. Without code to look at it's harder to give suggestions, but given your revision I'd think about separating the "verify a label says..." bit from the UI. What is setting this message, and can you just test for that event?
The more you can decouple from the UI the more you can unit-test (quicker and easier) rather than integrating other frameworks or drivers of UI elements.
You might be interested in Square's KIF framework: http://corner.squareup.com/2011/07/ios-integration-testing.html
It looks really cool for integration/UI testing.
I believe you can use the Accessibility features to script the UI. I'd check the WWDC 2011 videos for one entitled "Design Patterns to Simplify Mac Accessibility". They did something similar in 2010.
Based on your response to @Norman, I guess you're looking for recommendations that span both functional end-to-end as well as UI-based end-to-end testing, but perhaps a UI automation framework might change your mind? Something intrusive like FoneMonkey might be helpful:
http://www.gorillalogic.com/fonemonkey
If that doesn't work for you, I'd be interested in knowing why, and what "gap" you perceive in such UI-driven tests versus code-based functional testing.
I am trying to become more familiar with test-driven approaches. One drawback for me is that a major part of my code is generated content for reporting (PDF documents, chart images). There is always a complex designer involved, and there is no easy test of correctness. No chance to test just fragments!
Do you know TDD practices for this situation?
Some applications or frameworks are just inherently unit-test-unfriendly, and there's really not a lot you can do about it.
I prefer to avoid such frameworks altogether, but if absolutely forced to deal with such issues, it can be helpful to extract all logic into a testable library, leaving only declarative code behind in the framework.
The question I ask myself in these situations is "how do I know I got it right"?
I've written a lot of code in my career, and almost all of it didn't work the first time. Almost every time I've gone back and changed code for a refactoring, feature change, performance, or bug fix, I've broken it again. TDD protects me from myself (thank goodness!).
In the case of generated code, I don't feel compelled to test the code. That is, I trust the code generator. However, I do want to test my inputs to the code generators. Exactly how to do that depends on the situation, but the general approach is to ask myself how I might be getting it wrong, and then to figure out how to verify that I got it right.
Maybe I write an automated test. Maybe I inspect something manually, but that's pretty risky. Maybe something else. It depends on the situation.
To put a slightly different spin on answers from Mark Seemann and Jay Bazuzi:
Your problem is that the reporting front-end produces a data format whose output you cannot easily inspect in the "verify" part of your tests.
The way to deal with this kind of problem is to:
Have some very high-level integration tests that superficially verify that your back-end code hooks correctly into your front-end code. I usually call those tests "smoke tests", as in "if I turn on the power and it smokes, it's bad".
Find a different way to test your back-end reporting code. Either test an intermediate output data structure (see the sketch below), or implement an alternate output front end that is more test-friendly: HTML, plain text, whatever.
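As a sketch of that second point: if the report is built from an intermediate data structure, the structure can be unit-tested without rendering anything (all names here are hypothetical):

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;

    public class ReportDataTest {
        // Hypothetical intermediate structure: totals per region, built by the
        // back end and merely rendered by the PDF/chart front end.
        static class ReportData {
            private final Map<String, Integer> totals = new HashMap<>();
            void addSale(String region, int amount) {
                totals.merge(region, amount, Integer::sum);
            }
            int totalFor(String region) {
                return totals.getOrDefault(region, 0);
            }
        }

        @Test
        public void totalsAreSummedPerRegion() {
            ReportData data = new ReportData();
            data.addSale("North", 100);
            data.addSale("North", 50);
            assertEquals(150, data.totalFor("North"));
        }
    }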
This is similar to the common problem of testing web apps: it is not possible to automatically test that "the page looks right". But it is sufficient to test that the words and numbers in the page data are correct (using a programmatic browser such as Mechanize and a page scraper), and to have a few superficial functional tests (with Selenium or Windmill) if the page is critically dependent on JavaScript.
You could try using a web service for your reporting data source and test that, but you are not going to have unit tests for the rendering. This is the exact same problem you have when testing views. Sure, you can use a web testing framework like Selenium, but you probably won't be practicing true TDD. You'll be creating tests after your code is done.
In short, use common sense. It probably does not make sense to attempt to test the rendering of a report. You can have manual test cases that a tester will have to go through by hand or simply check the reports yourself.
You might also want to check out "How Much Unit Test Coverage Do You Need? - The Testivus Answer"
You could use Acceptance Test Driven Development to replace the unit tests, and have validated reports for well-known data used as references.
However, this kind of test does not give as fine-grained a diagnostic as unit tests do; they usually only provide a PASS/FAIL result, and, should the reports change often, the references need to be regenerated and re-validated as well.
Consider extracting the text from the PDF and checking it. This won't give you formatting, however. Some PDF extraction programs can also pull out the images if the charts are in the PDF.
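With Apache PDFBox, for example, the extraction part is only a few lines (a sketch assuming PDFBox 2.x):

    import java.io.File;
    import java.io.IOException;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    public class PdfText {
        // Pulls the plain text out of a PDF report so assertions can be made on it.
        public static String extract(File pdf) throws IOException {
            try (PDDocument document = PDDocument.load(pdf)) {
                return new PDFTextStripper().getText(document);
            }
        }
    }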
Faced with this situation, I try two approaches.
The Golden Master approach. Generate the report once, check it yourself, then save it as the "golden master". Write an automated test to compare its output with the golden master, and fail when they differ (a code sketch follows at the end of this answer).
Automate the tests for the data, but check the format manually. I automate checks for the module that generates the report data, but to check the report format, I generate a report with hardcoded values and check the report by hand.
I strongly encourage you not to generate the full report just to check the correctness of the data on the report. When you want to check the report (not the data), then generate the report; when you want to check the data (not the format), then only generate the data.
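A golden-master comparison, as in the first approach above, can be as simple as this (a sketch; the file location and report pipeline are placeholders):

    import static org.junit.Assert.assertEquals;

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import org.junit.Test;

    public class GoldenMasterTest {
        @Test
        public void reportMatchesGoldenMaster() throws Exception {
            // The master was generated once, inspected by hand, then committed.
            Path master = Paths.get("src/test/resources/report.golden.txt");
            String expected = new String(Files.readAllBytes(master), StandardCharsets.UTF_8);

            String actual = generateReportText(); // hypothetical: your report pipeline

            assertEquals(expected, actual); // any difference fails the test
        }

        private String generateReportText() {
            // Placeholder for the real report generation with fixed input data.
            return "...";
        }
    }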
Recently I've been wondering: is it worth spending development time at all to write automated unit tests for web-based projects? It seems useless at some point, because such projects are oriented toward interaction with users/clients, so you cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown. Even regression testing can hardly be done. So I'm very eager to know the opinion of other experienced developers.
Selenium has a good web testing framework:
http://seleniumhq.org/
Telerik is also in the process of developing one for web app testing:
http://www.telerik.com/products/web-ui-test-studio.aspx
"You cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown."
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them; just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user-interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right on the application that automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a website I am working on. I recently switched out the data access layer and set up unit tests for my controllers and repositories, but not the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automating tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of Javascript, there have been a lot of JS testing frameworks that have come around the block recently for unit testing your Javascript.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if the UI changes a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier - fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Having something like that at the web tier is a nice to have run in a continuous integration process and doesn't require much in the form of maintenance.
Unit tests make sense in a TDD process. They do not have much value if you don't do test-first development. However, acceptance tests are a big thing for the quality of the software. I'd say that acceptance tests are the holy grail of development. Acceptance tests show whether the application satisfies the requirements. How do I know when to stop developing a feature? Only when all my acceptance tests pass. Automation of acceptance testing is a big thing, because I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run all the tests manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented with xUnit test frameworks, which creates some confusion here. If I create an acceptance test using PHPUnit or HttpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test. An acceptance test is one that shows whether the feature is working in accordance with the requirements. A unit test shows whether a class (or function) satisfies the developer's implementation idea. A unit test has no value for the client (user). An acceptance test has a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
Good frameworks for acceptance testing are:
Sahi (sahi.co.in)
Selenium
SimpleTest (it's a unit test framework for PHP, but it includes a browser object that can be used for acceptance testing)
However
You have mentioned that the website is all about user interaction, and thus test automation will not solve the whole problem of usability. For example: the testing framework shows that all tests pass, yet the user cannot see a form or link or other page element due to an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot, and a manual test would fail. (The sketch below shows the distinction in Selenium terms.)
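In Selenium the difference is a single call: an element can be present in the DOM yet not visible to the user. A minimal sketch (the locator is made up):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class VisibilityCheck {
        // Presence in the DOM is not the same as being visible:
        // an element under style="display:none" is found by findElement,
        // but isDisplayed() returns false for it.
        public static boolean userCanSeeLoginForm(WebDriver driver) {
            WebElement form = driver.findElement(By.id("login-form"));
            return form.isDisplayed();
        }
    }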
Thus, all web applications need manual testing. Automated tests can reduce the test workload drastically (80%), but manual tests remain significant for the quality of the resulting software.
As for unit testing and TDD: they improve code quality. They are beneficial to the developers and for the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but mind the time it will take. It usually takes about 3-6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (Docker + Cucumber); it's very basic and simple, so it's easy to understand and modify/improve. It lives in the web directory.
My solution: https://github.com/gyulaweber/hosting_tests