I don't even really know if the title is the best way to explain what I'm trying to do, but anyway...
We have a web app that is being ported to a number of DB backends via MDB2. Our unit tests are pretty lacking at the moment, but our internal users are pretty good at knowing what to test to see if things are broken.
What I'm imagining is a browser plug-in (I don't really care which browser it's for), or a similar system, that essentially takes every event from one window and 'mirrors' it in the other browser(s). The reason I'd like this is so that I can have various installations that use different DB backends, and have the user open a window/tab to each installation. From there, they should be able to 'work' in one window and have that work occur at the same time in each of the 'cloned' windows. They could then do some quick eyeballing of the information that comes back, without having to worry (very much) about timing differences and so on.
I know it's a big ask, but I figure if anyone knows of a solution, I'd find it here...
Any thoughts?
Do have a look at Selenium http://seleniumhq.org/ to automate the testing.
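As a rough sketch of how Selenium could stand in for the 'mirrored windows' idea: instead of replaying events live across browsers, you script the workflow once and run it against each installation, then compare the results. Everything below (URLs, element ids) is invented for illustration, and it assumes the Selenium WebDriver Java bindings:

```java
// Sketch: drive the same workflow against several installations that differ
// only in DB backend, then compare what comes back. The URLs and element
// ids below are made up.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class MultiBackendSmokeTest {
    public static void main(String[] args) {
        String[] installs = {
            "http://test-mysql.example.local/app",   // hypothetical installs
            "http://test-pgsql.example.local/app",
        };
        for (String base : installs) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get(base + "/report");
                // The same "work" is replayed against each backend; you can
                // eyeball or assert on the result instead of mirroring
                // live windows.
                String result = driver.findElement(By.id("report-total")).getText();
                System.out.println(base + " -> " + result);
            } finally {
                driver.quit();
            }
        }
    }
}
```

Because the same script runs against every backend, timing differences stop mattering: you compare captured output rather than live windows.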
I have been reading a lot about test-driven development and decided that I want to give it a go on a small project. For reference, I am currently reading 'Growing Object-Oriented Software, Guided by Tests'.
I understand how to unit test my application and how to unit test certain parts of the UI as well, but I am struggling to set up end-to-end tests. For example, testing that a certain path through my whole application produces the correct output (this is my basic understanding of an end-to-end test).
It's not necessary to simulate click events, but it is necessary to have some sort of connection to the UI.
Am I right in thinking that I need a combination of "Logic" tests (test without launching the app), "Application" tests (test with launching the app) and the asynchronous functionality of something like GHUnit to accomplish this?
EDIT:
After reading some of the answers below, it sounds like I'm looking for functional end-to-end testing, but I think I should give an example of a test as I imagine it.
Start the application.
Call the login function with a test user's credentials. (Note: this doesn't necessarily need UI automation.)
Verify a label on the window says "Logging In...".
After the user is successfully verified, verify the label now says "Welcome, Adam!".
KIF sounds like it could work, as it has steps to check changes in UI elements, and it looks like there is a Mac OS X branch as well. I'm sure I could also write a small class which constantly polls the UI for changes I expect and times out after a certain time, but I'm wondering if this is the right way to go about it.
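For what it's worth, that 'poll and time out' class can be tiny. A sketch of the idea, written in Java purely to show its shape (nothing here comes from GHUnit or KIF; all names are made up):

```java
// Hypothetical polling helper: re-evaluates a condition until it holds or
// the timeout elapses, then fails the test. The poll interval and names
// are illustrative, not from any framework.
import java.util.function.BooleanSupplier;

public final class WaitFor {
    public static void condition(BooleanSupplier check, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) return;      // expectation met
            try {
                Thread.sleep(50);                  // small poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        throw new AssertionError("Condition not met within " + timeoutMs + " ms");
    }
}
```

A test would then trigger the login and call something like WaitFor.condition(() -> "Welcome, Adam!".equals(window.labelText()), 5000), where window.labelText() is whatever hypothetical accessor your driver class exposes.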
However, perhaps I am trying to take what I am reading in 'Growing Object-Oriented Software, Guided by Tests' and trying to apply it too literally to Cocoa.
Another UPDATE:
So I've been reading the advice so far, checked the various places linked to, and started to implement something whilst still referencing the book. I think what I'm really trying to get at is the test-driven development part. What stood out most in the book was that they described what they wanted to happen from a user's perspective first, with acceptance tests.
I realise that solid unit testing will be necessary as soon as I start writing methods, but I was keen to write some high-level acceptance tests first, using some of the UI. I have started to write my own application "driver" class, using methods similar to the GHAsyncTestCase ideas to help me accomplish this. Does this sound correct/useful/necessary?
I really appreciate all the comments so far and they have definitely helped me work out in my own head what I'm trying to do and what various areas of testing there are. I will finish up this question soon, as it is getting rather large, so any final advice is welcome!
I think the key thing I got from "Growing Object-Oriented Software" was to decouple as much as possible from the UI. Without code to look at it's harder to give suggestions, but given your revision I'd try separating the "verify a label says..." bit from the UI. What is setting this message, and can you just test for that event?
The more you can decouple from the UI the more you can unit-test (quicker and easier) rather than integrating other frameworks or drivers of UI elements.
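As an illustration of that decoupling: whatever object decides the label text can be tested with no UI at all. The question is about Cocoa, so read this Java sketch as language-neutral pseudocode; every name in it is invented:

```java
// A presenter owns the status text; the test drives it directly, without
// launching the application or touching any UI widget. JUnit 4 assumed.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

class LoginPresenter {
    private String status = "";
    String status() { return status; }
    void beginLogin(String user) { status = "Logging In..."; }
    void loginSucceeded(String name) { status = "Welcome, " + name + "!"; }
}

public class LoginPresenterTest {
    @Test public void reportsProgressAndWelcome() {
        LoginPresenter p = new LoginPresenter();
        p.beginLogin("adam");
        assertEquals("Logging In...", p.status());
        p.loginSucceeded("Adam");
        assertEquals("Welcome, Adam!", p.status());
    }
}
```

The label in the real UI then only has to bind to the presenter's status, which you can cover with a much smaller number of end-to-end tests.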
You might be interested in Square's KIF framework: http://corner.squareup.com/2011/07/ios-integration-testing.html
It looks really cool for integration/UI testing.
I believe you can use the Accessibility features to script the UI. I'd check the WWDC 2011 videos for one entitled "Design Patterns to Simplify Mac Accessibility". They did something similar in 2010.
Based on your response to @Norman, I guess you're looking for recommendations that span both functional end-to-end and UI-based end-to-end testing, but perhaps a UI automation framework might change your mind? Something intrusive like FoneMonkey might be helpful:
http://www.gorillalogic.com/fonemonkey
If that doesn't work for you, I'd be interested to know why, and what "gap" you perceive in such UI-driven tests versus code-based functional testing.
Our testing system is pretty rudimentary: fire up a browser, see if it works. Recently our client found problems with our application where the number of users created a slow-down. The application is basically a huge Word document with people editing their own versions all at the same time. Part of the problem came from not knowing how to test multiple instances at the same time. My partner and I thought about how to test this; one idea was to rent out an internet cafe and hire students for an hour to bang on the app.
What are other ways that people have tried to emulate concurrency when testing their web-based applications? Most of the advice here is about specific methodologies; I'm asking how you test it to make sure that it works.
If you have never checked out Selenium, then you need to. It will allow you to do automated web testing through the browser. Ok, so first problem solved.
Now, ideally you could take that same script, load it onto a bunch of boxes, and run them all at once to get some sort of load testing, right? Luckily for you, someone has already figured this out, although it is a paid service: Browser Mob. But it looks like you were willing to spend a little money to do this anyway, and it would probably net you better, more repeatable results.
We usually answer the question "can the web application do more than one thing at a time" by using JMeter to produce a simulated HTTP load on the web server.
I find that it helps to distinguish several different types of testing: concurrency (what happens when two events in the system collide), capacity (what happens when there are many overlapping requests), volume (what happens as data accumulates in the system)...
A huge general slow-down, evidenced by response times that fall outside the SLA, is usually related to capacity problems (with contention as a common cause) or volume (many users, much data, and the system gets slower over time). The former usually requires some sort of multi-threaded request stream; the latter you can usually manage by preloading the volume and then measuring the response times experienced by a single user.
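For the multi-threaded request stream, a tool like JMeter is the usual answer, but a toy generator shows the shape of it. A minimal sketch using only the JDK (java.net.http, Java 11+); the URL, pool size, and request count are placeholders:

```java
// A minimal multi-threaded request stream: N workers hit one URL and
// report per-request latency. Purely illustrative; real load tests need
// ramp-up, think time, and proper statistics.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TinyLoadGenerator {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://test.example.local/app")).build();
        ExecutorService pool = Executors.newFixedThreadPool(20); // 20 "users"
        for (int i = 0; i < 200; i++) {                          // 200 requests
            pool.submit(() -> {
                long start = System.nanoTime();
                try {
                    HttpResponse<Void> r = client.send(
                            request, HttpResponse.BodyHandlers.discarding());
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(r.statusCode() + " in " + ms + " ms");
                } catch (Exception e) {
                    System.out.println("failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
    }
}
```

JMeter gives you the same thing with far more instrumentation; a toy generator like this is mainly useful for understanding what the tool is doing for you.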
I generally find that separating the load generator from the actual measurement/instrumentation is a good idea. That can be as simple as having a black box over there generating a typical load, while you sit here with a stopwatch measuring the responsiveness of a typical use case.
JMeter http://jmeter.apache.org/
I have a huge web application and I want to use a system which will help the developers keep track of areas (links) that have been visited and tested. I am not looking for a bug tracking system, etc. Just a system that will help me track pages that I have visited, etc.
Let me know if there is a tool to keep track of pages physically visited and signed off.
I don't know any out-of-the-box solution to do this.
If nothing comes up, you might be able to put something together using frames. You could have your site in the top frame and the tracking application in a tiny bottom frame. The bottom frame would constantly update itself and tell you whether the URL in the big frame is in the list of visited pages. You would need some server-side programming for that (e.g. PHP), and it would be some work, but nothing impossible.
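Purely as a sketch of the server-side piece (the answer suggests PHP; this version uses the JDK's built-in HttpServer instead, and the port and path are invented), the endpoint the bottom frame polls could look like:

```java
// Sketch of the tracking endpoint: it records every URL reported to it and
// answers whether that URL has been seen before. Built on the JDK's
// com.sun.net.httpserver; everything here is hypothetical.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class VisitedTracker {
    public static void main(String[] args) throws Exception {
        Set<String> visited = ConcurrentHashMap.newKeySet();
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        // The bottom frame polls /check?url=... with the big frame's URL.
        server.createContext("/check", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // "url=..."
            String url = query == null ? "" : query.replaceFirst("^url=", "");
            byte[] body = (visited.add(url) ? "NEW PAGE" : "already visited")
                    .getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```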
I am also not aware of any tools for this. However, without any tools, you can still have developers give you a snapshot of their browser history, which would give you this info. Alternatively, depending on the web server you are using, you can write simple scripts to scan through the access logs and generate a report of all the pages visited (uniquely and otherwise) and compare, if needed.
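For the access-log route, the 'simple script' can be very small indeed. A sketch in Java (assuming a common-log-format file named access.log; both assumptions are illustrative):

```java
// Pulls the request path out of each line of a common-log-format access
// log and prints the unique pages visited, sorted.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.TreeSet;

public class VisitedPages {
    public static void main(String[] args) throws IOException {
        TreeSet<String> pages = new TreeSet<>();
        for (String line : Files.readAllLines(Paths.get("access.log"))) {
            // Common log format: ... "GET /some/page HTTP/1.1" ...
            int quote = line.indexOf('"');
            if (quote < 0) continue;
            String[] request = line.substring(quote + 1).split(" ");
            if (request.length > 1) pages.add(request[1]);  // the path
        }
        pages.forEach(System.out::println);
    }
}
```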
If you just want to track where you've visited, you might consider pulling the data out of your browser history or, if you have access to them, the server logs.
But I would consider writing, recording, or somehow creating actual automated tests for your web app, especially given its size.
SimpleTest provides a now quite mature web tester. I've heard things about it integrating with Selenium. You can also use Watir, which I know runs with Ruby, and possibly more than that. Watir has the advantage that you can test multiple browsers. I'm sure there are plenty more tools if you do a few searches.
I've started a new role in my life. I was a front end web developer, but I've now been moved to testing web software, or more so, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use, and how to piece it together.
The code that is being used/written is in Java to write a web interface for the application to test. I have documentation of the tests to run, but I've been curious how to go about automating it.
I've been directed to Cucumber as one of the "languages" to help with the automation. I did some research and came across a web site with a synopsis of BDD tools and testing frameworks:
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. It seems that Selenium is a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to help describe what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like it was all one package was SmartBear TestComplete, and then there is what seems to be another similar application by SmartBear called TestLeft, but I think I saw that they still use Cucumber for the BDD side. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, as the UI/browser dimensions can easily change from system to system. How do I go about writing automation that can handle this and still tie into a BDD methodology?
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definition
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
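To make the wiring concrete: with Cucumber-JVM, a step definition is just an annotated method whose expression matches the step text. A minimal sketch (the step text and class names are invented):

```java
// Minimal Cucumber-JVM step definition: the @Given expression matches the
// natural-language step, so the tool routes that step to this method.
import io.cucumber.java.en.Given;

public class LoginSteps {
    @Given("Fred is logged in")
    public void fredIsLoggedIn() {
        // In a real suite this would delegate to a page object.
    }
}
```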
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page object in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
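A sketch of what such a page object might look like, assuming Selenium WebDriver underneath (all locators are invented):

```java
// Hypothetical page object for the refund flow: it owns the locators and
// the mechanics, so the step definitions can stay declarative.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class RefundPageObject {
    private final WebDriver driver;

    public RefundPageObject(WebDriver driver) {
        this.driver = driver;
    }

    public void refundToOriginalCard(String receiptId) {
        driver.findElement(By.id("receipt-input")).sendKeys(receiptId);
        driver.findElement(By.id("refund-original-card")).click();
    }

    public String confirmationText() {
        return driver.findElement(By.id("refund-confirmation")).getText();
    }
}
```

The step definition for "When Fred gets a refund to his original card" would then just call refundToOriginalCard(...), keeping the scenario itself declarative.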
Different page objects could also be used for different systems.
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to get the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test IDs to the elements in your web page.
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
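One way to sketch that injection in plain Java (no framework assumed; every name below is hypothetical):

```java
// Swapping page-object implementations per system, instead of branching
// with "if" inside the steps.
public interface RefundPage {
    void refundToOriginalCard(String receiptId);
}

class WebRefundPage implements RefundPage {
    public void refundToOriginalCard(String receiptId) {
        // drive the web UI, e.g. via Selenium
    }
}

class ApiRefundPage implements RefundPage {
    public void refundToOriginalCard(String receiptId) {
        // call the backend API directly, skipping the GUI
    }
}

// Step definitions depend only on RefundPage; a simple factory (or a
// Cucumber hook) picks the implementation for the system under test.
class Pages {
    static RefundPage refundPage() {
        return "web".equals(System.getProperty("target", "web"))
                ? new WebRefundPage() : new ApiRefundPage();
    }
}
```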
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. It helps people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, I will point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This, rather long and old post, might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Notice that the versions are dated, but hopefully it can give you some ideas as to what to look for.
I am not an expert on test automation, but I am currently working on this part, so let me share some ideas and hope they help you at your current stage.
We have used Selenium + Cucumber + IntelliJ for testing a web application, and TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
For the web application, we provided a test mode in the app itself, which allows us to get some useful details about the product and the environment, and also allows us to easily trigger events by clicking buttons and entering text into the test panel while in test mode.
I hope these are helpful for you.
What are the most common things to test in a new site?
For instance to prevent exploits by bots, malicious users, massive load, etc.?
And just as importantly, what tools and approaches should you use?
(Some stress-test tools are really expensive or hard to use; do you write your own? etc.)
Common exploits that should be checked for.
Edit: the reason for this question partially comes from being in the SO beta; however, please refrain from SO beta discussion. The SO beta got me thinking about my own site, and a good thing too. This is meant to be a checklist of things that I, you, or someone else hasn't thought of before.
Try and break your own site before someone else does. Your web site is basically a publicly accessible API that allows access to a database and other backend systems. Test the URLs as if they were any other API. I like to start by cataloging all URLs that have some sort of permanent effect on the state of the system - this is easy if you are doing Ruby on Rails development or trying to follow a RESTful design pattern. For each of those URLs, try the GET, POST, PUT and DELETE HTTP methods with different parameters, so that you can ensure you're only giving access to what you want to give access to.
This of course is in addition to the obvious: functional testing, load testing, SQL injection, XSS, etc.
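As a hedged illustration of the method-probing idea above, using only the JDK's HttpClient (Java 11+); the target URL is a placeholder:

```java
// Probes one state-changing URL with each HTTP method to check that only
// the intended ones are accepted. Expect 405 (or 401/403) for anything
// you did not mean to allow.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MethodProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI target = URI.create("http://test.example.local/orders/42");
        for (String method : new String[] {"GET", "POST", "PUT", "DELETE"}) {
            HttpRequest req = HttpRequest.newBuilder(target)
                    .method(method, HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<Void> resp =
                    client.send(req, HttpResponse.BodyHandlers.discarding());
            System.out.println(method + " -> " + resp.statusCode());
        }
    }
}
```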
Turn off javascript and make sure your site can still be navigated.
Even if you want to ignore the small but significant number of people who have it disabled, this will impact search engines as well.
YSlow can give you a quick analysis of different metrics.
What do friendly bots (e.g. Google) see? Check using Google Webmaster Tools.
Regarding tools for running functional tests of web pages, I've found Selenium IDE to be useful.
The Firefox plug-in (only compatible with Firefox 2 at the moment) lets you capture almost all web events, save them, and replay them in the same browser.
In conjunction with another Firefox add-on, Firebug (https://addons.mozilla.org/en-US/firefox/addon/1843), you can create some very powerful tests.
If you set up Selenium Remote Control, you can then convert the Selenium IDE tests into NUnit tests, which you can run automatically.
I use CruiseControl and run these web tests as part of a daily build.
The nice thing about using Selenium Remote Control is that it can run the same functional tests on multiple browsers and operating systems, something that you can't do with the IDE.
Although the web tests take ages to run, there is a version of Selenium called Selenium Grid that lets you use any old hardware you have spare to run the tests in parallel as part of a computing grid. I haven't tried this myself, but it sounds interesting.
All of the above is open source and free, which helped me convince management to use it :-)
For checking the cross-browser and cross-platform look of your site, browsershots.org is maybe the best free tool; it can save a lot of time and money.
There are separate stages for this one.
Firstly there's the technical testing, where you check all technical functionality:
SQL injections
Cross-site Scripting (XSS)
load times
stress levels
Then there's the phase where you have someone completely computer-illiterate sit down and ask them to find something. Not only does it show you where there are flaws in your navigational logic (I find that developers look at things very differently from 'other people'), but they're also guaranteed to find some way to break your site.