What is the difference between using HtmlUnitDriver and writing headless tests using Xvfb in Linux?

I am a novice in testing.
I am working on Linux.
I was reading about testing in headless mode and came across two things. One was Xvfb (X virtual framebuffer), which performs graphical operations in memory, so no output is displayed. I found the implementation details at this link: http://www.seleniumtests.com/2012/04/headless-tests-with-firefox-webdriver.html.
The other one that I came across was HtmlUnitDriver. This also does not open any browser while running the test. I wrote a basic sample code using HtmlUnitDriver and the assertions seem to work fine.
I understand that HtmlUnitDriver doesn't work too well with JavaScript. But apart from this, are there any major differences that would lead me to choose one over the other?
I am going to be testing a web application that does have some amount of JavaScript in it.
I am a novice in this field. So, any answers, suggestions, etc. will be appreciated.
Thank you in advance

From my experience with both approaches:
HtmlUnit will in most practical cases be faster than a real browser with xvfb -- simply because it doesn't spend time rendering the pages. (A data point: 17 secs. with HtmlUnitDriver vs. 62 secs. with FirefoxDriver for a specific test suite I'm using now.)
It is easier to run several tests concurrently -- and it consumes a lot less resources -- using HtmlUnit. This can be very important if you have a large number of tests and you need them to finish fast (e.g. you want to follow the 10-minute-build rule).
As you said, HtmlUnit has its own quirks with JavaScript and the DOM. No better or worse than any other browser (Firefox, Safari, IE, Chrome, ... -- they all have their own quirks), but it is the one browser none of your users actually run, so it is very questionable to spend time fixing bugs for it. I also find such bugs very difficult to diagnose, but that may be only my ignorance.
One advantage of real browsers + xvfb is that you can always run the exact same tests without xvfb and see what's going on -- possibly even use a console to run some JavaScript to diagnose issues. I sometimes feel quite blind when working with HtmlUnit, and because of the above-mentioned quirks you can't always use the exact same test code in both environments.
So, in summary, unless total test duration is important and you're ready to spend some time fighting HtmlUnit, it's just easier to go with a regular browser + xvfb.
I also like using xvnc, which has the added benefit of allowing you to connect to the screen of a running test and see what's going on (not sure whether you can do that with xvfb).
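For reference, a minimal sketch of how the choice usually shows up in test code, assuming the Java bindings; the URL and the title check are placeholders, not from the question. The point is that only the driver construction changes between the two approaches:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class HeadlessChoiceSketch {
    public static void main(String[] args) {
        // Option 1: pure in-memory browser, no X server needed at all.
        // The boolean enables HtmlUnit's JavaScript engine (off by default).
        WebDriver driver = new HtmlUnitDriver(true);

        // Option 2: a real Firefox; under Xvfb you just export DISPLAY to the
        // virtual display (e.g. :99) before running, and the test code is identical.
        // WebDriver driver = new FirefoxDriver();

        driver.get("http://example.com/login");   // placeholder URL
        String title = driver.getTitle();
        if (!title.contains("Login")) {            // placeholder assertion
            throw new AssertionError("Unexpected title: " + title);
        }
        driver.quit();
    }
}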

Related

Simple text-based full web-page regression testing

My job is to pick up and continue developing a PHP website for a small business client. The project has no testing code. I want to quickly establish at least very basic regression testing for the backend of the site.
I need to test the full contents of the web page character by character, and I must be able to see the diff of failed tests.
I need to be able to set up cookies and GET/POST data.
Every few days I update the local database from the production database. I would then like an overview of the failed tests so that I can very quickly update my test suites and have everything passing again.
Is using WatiN or Selenium a good idea? My local environment is Linux.
About Selenium (and only Selenium, as I don't know WatiN) - it can only do what you can do in your browser. It can click, type in fields, submit forms, take screenshots (that's a very good one), set up cookies (so yes to that one). You can always set up GET data through the URL. But I am not aware of any technique in Selenium that would allow you to set up POST data in any other way than navigating in a browser. Also, because the tests run in your browser, they are not particularly fast. E.g., on our product, a single thorough test with ~250 steps takes about 10 minutes on my computer to complete. Of course, you can always divide that between many computers using Selenium Grid. It's just more work.
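To make the cookie and GET-data part concrete, a small sketch with the Java WebDriver bindings; the domain, cookie name and URLs are placeholders, not taken from the question. POST data would still have to come from actually submitting a form in the browser, as noted above.
import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CookieAndGetSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();

        // A cookie can only be set for the domain that is currently loaded,
        // so open any page on the site first.
        driver.get("http://localhost/index.php");                 // placeholder
        driver.manage().addCookie(new Cookie("session_id", "abc123"));

        // GET data is simply part of the URL.
        driver.get("http://localhost/report.php?from=2013-01-01&to=2013-01-31");

        // Full page source, which you can write to a file and diff against a
        // stored snapshot for the character-by-character comparison.
        System.out.println(driver.getPageSource());
        driver.quit();
    }
}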
To conclude - I'd say yes, Selenium is good for your needs, as there are so many ways to write a good test in it that everyone finds their own style. It is good for quick checks and functionality affirmations, but also for full-scale tests, etc. But if you want to do some really advanced stuff, then that's a job for a long time. Selenium offers so much functionality in so many different ways that it is definitely a full-time job to understand it all and know how to use it.
Try Selenium-IDE for 20 minutes. It is just an addon for Firefox that can record your actions and then replay them. If you like what you see, go for it. If no, hire someone who will.
I'm not sure if I am too late here, but with regard to WatiN, it is only IE-based, so if you plan to use any other browser you are better off with Selenium WebDriver (though WatiN has some Firefox support). From what I have found (I have used both WatiN and Selenium), Selenium can achieve more low-level interactions (also see Selenium Grid), but really I think it depends on what you are looking to achieve and on personal preference. If you have time to write your own wrapper to interact with WatiN/Selenium, you will find the tests themselves are rather quick to run. Also, the beauty of automation is that once these tests are written you can run them and walk away while they complete.

Selenium vs. WebDriver, any obvious advantages?

I moved from Selenium RC to WebDriver nearly two years ago. But I have to say that I haven't felt there is any obvious advantage of WebDriver over RC.
Now I have 200+ test cases using the C# driver against a website. But when I run them all for regression testing, I usually get 150 passed and 50+ failed/errored. After running the failed test cases a second time, many of them pass; only a few turn out to be genuine issues with the test code.
As far as I can see, WebDriver sometimes performs really slowly, a situation I never ran into when I was using Selenium RC before.
As a result, I have started to doubt the necessity of moving from RC to WebDriver, because verifying errors and failures now takes much more of my time than before.
So my question is: are there any advantages of WebDriver over RC that make the move worth it? If so, can you please tell me? Also, tell me about the disadvantages.
Selenium RC injects JavaScript into the page to drive the interactions. WebDriver interacts directly with the browser. Injecting additional JavaScript has disadvantages, and the Selenium HQ site states it rather well:
While Selenium was a tremendous tool, it wasn’t without its drawbacks.
Because of its Javascript based automation engine and the security
limitations browsers apply to Javascript, different things became
impossible to do. To make things “worst”, webapps became more and more
powerful over time, using all sorts of special features new browsers
provide and making this restrictions more and more painful.
http://seleniumhq.org/docs/01_introducing_selenium.html#selenium-history
Another way to think about it: when you test a page that has had extra JS added to it, you're not really testing the original page; you're testing a modified page.
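A rough side-by-side sketch of what that difference looks like in the Java client APIs (the URL and element ids are placeholders):
import com.thoughtworks.selenium.DefaultSelenium;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RcVersusWebDriver {
    public static void main(String[] args) {
        // Selenium RC: commands go through the RC server and are executed as
        // JavaScript injected into the page under test.
        DefaultSelenium rc =
                new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
        rc.start();
        rc.open("/login");
        rc.type("id=username", "demo");
        rc.click("id=submit");
        rc.stop();

        // WebDriver: commands are sent to the browser's own automation
        // interface, with no injected JavaScript engine in between.
        WebDriver driver = new FirefoxDriver();
        driver.get("http://example.com/login");
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("submit")).click();
        driver.quit();
    }
}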
What @OCary said.
However, the new and more powerful WebDriver has some limitations, too. Because it is a work in progress, its behaviour changes slightly between versions. Also, not nearly all intended features have been implemented yet; it will take some more time for it to become stable, bug-free and fully developed. For example: the SafariDriver has only just landed, window controls are missing, you can't download files in a convenient way, etc.
But healthy development on WebDriver is better than today's non-existent development on Selenium 1 (it just won't get any better), right?
From my experience Selenium RC is the most stable and robust web testing framework I have used. I recently started evaluating WebDriver (aka Selenium 2) for the reason that everybody says it is the future of Selenium. So far I am not impressed. Simple things (like clicking a button) do not work consistently across different browsers and require different workarounds.
I realize that there are limitations in what you can do with JavaScript, but I would not want to sacrifice the stability of my tests to get over those limitations.

Has anyone had trouble with Selenium tests results being inconsistent?

I'm currently using Selenium RC and JUnit to test some basic login and registration scenarios. The problem is that my tests don't always give the same results. Sometimes running them will be fine and the tests pass. Other times, they'll get stuck at certain points during the login/registration process and time out. I've been trying to debug this for a long time, but with no permanent success.
Is Selenium being flaky and has anyone else had similar issues?
I have used Selenium for 3 years. Sometimes I run into strange situations, but usually it is my own mistake or a software problem. It's good practice to auto-stop the script or use the screenshot function to see the source of the problem.
Yes, when I use Ajax validation the results are inconsistent. I am testing using the Yii framework. I am generating random valid passwords, but from time to time Selenium goes too fast (!) for the Ajax validation to keep up.
If I slow down the speed to 100 it tends to work about four out of five times. Any slower than that and the tests are agonizingly slow.
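// Pause 100 ms between Selenium commands so the Ajax validation has time to fire.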
$this->setSpeed(100);

Selenium Issues

I have been using Selenium a lot lately (testing an ExtJs app) and while in many ways it is wonderful, there are three things that cause me a lot of grief:
Inability to directly click on elements other than buttons. This has led me to write a bunch of Robot code to move the mouse around. That works but is a bit fragile, plus if you touch your mouse while a test is running, you are screwed. I tried the Selenium forums to see if there was a better way, and got nowhere. I think (but am not sure) that this is a fundamental limitation of Selenium's JS injection technique.
Inability in many cases to control what the 'id' attribute gets set to. This happens inside ExtJS: some elements let you set it, some don't, and some do but the attribute ends up where you don't expect it. You end up having to use XPath in some cases. Using XPath with ExtJS is kind of horrible, as ExtJS creates massive levels of nested DIVs. You can also sometimes use CSS locators (which are also inconsistently controllable in ExtJS). (BTW, this is obviously not a Selenium problem per se.)
The time that Selenium takes to fire up FF is too long... way longer than a normal FF startup by a human, about 2 seconds per test, which translates into test runs that last several minutes, way too long.
I briefly looked at Watij, BadBoy and a couple of other web functional testing apps but none of them looked anywhere near as good as Selenium. (The way Selenium tests can be written in Java and run through jUnit is really, really sweet). There are also a few commercial alternatives but they are beyond my budget and there is no assurance that they would work any better anyway.
Any thoughts or suggestions appreciated.
About 3:
At startup, Selenium copies the Firefox profile to a temporary folder. If you don't specify a custom profile, Selenium probably uses the default profile, which is probably bloated with addons you don't need for the test. Start Firefox with '-p', create a new profile for Selenium, and copy it to a location you can point Selenium to. This should speed up the tests a bit.
Update:
Firefox Profile location / Windows: %APPDATA%\Mozilla\Firefox\Profiles
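As a sketch of the same idea in Java, assuming the selenium-server jar is on the classpath; the profile path is a placeholder. (The standalone server also accepts a -firefoxProfileTemplate command-line option for the same purpose.)
import java.io.File;
import org.openqa.selenium.server.RemoteControlConfiguration;
import org.openqa.selenium.server.SeleniumServer;

public class LeanProfileServer {
    public static void main(String[] args) throws Exception {
        RemoteControlConfiguration config = new RemoteControlConfiguration();
        // Placeholder path: a copy of the minimal profile created with "firefox -p".
        config.setFirefoxProfileTemplate(new File("/home/me/selenium-profile"));

        // Every new browser session now starts from a copy of the lean profile
        // instead of the bloated default one.
        SeleniumServer server = new SeleniumServer(config);
        server.start();
    }
}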
My thoughts:
There are some cases where click isn't enough, but you certainly shouldn't be limited to buttons. You might want to experiment further with your locators.
Why do you need to set the id attribute of elements? I have experience with testing ExtJS applications, and the problem is usually locating elements that have dynamically set ids. In my opinion this is an issue with ExtJS and not Selenium. Smart XPath techniques using contains, starts-with, and substring can make your locators much more reliable (see the locator sketch after these points). CSS locators are also often helpful, as you mentioned.
As amarsuperstar says, you don't need to start Firefox before every test. If you do, you might want to consider using the browserSessionReuse command to speed up launching the browser. Alternatively you can use Selenium Grid to run tests in parallel.
Finally, it's well worth looking into the WebDriver API that will be in the soon to be released alpha of Selenium 2. In my experience Firefox launch times are reduced, and commands such as click are much improved.
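To illustrate the locator techniques from the second point, a hedged sketch using the Selenium 1 Java client; the ext-comp- prefix and the x-btn / x-grid3 class names are typical of ExtJS 3 but are assumptions about your app, and the URL is a placeholder:
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class ExtJsLocatorSketch {
    public static void main(String[] args) {
        Selenium selenium =
                new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
        selenium.start();
        selenium.open("/app");

        // Match a button whose generated id only has a stable prefix.
        selenium.click("xpath=//div[starts-with(@id, 'ext-comp-') and contains(@class, 'x-btn')]//button");

        // Match a field by a stable fragment of its name instead of its id.
        selenium.type("xpath=//input[contains(@name, 'username')]", "demo");

        // CSS locators work too, e.g. a cell in an ExtJS grid row.
        selenium.click("css=div.x-grid3-row .x-grid3-cell-inner");

        selenium.stop();
    }
}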
I am not sure about the first two points, but as for the third, I don't think you need to start the browser for every test. You can use the Selenium server (a jar in the Selenium directory somewhere), then point your tests at it, e.g. localhost:6554, and it will only open the browser once.
With that, you can have the steps in your script be: start server -> run all tests -> terminate server, and you will only have one browser session across your tests.
My experiences (hopefully useful ;)
I've never had this problem, even with ExtJS. I have not used it with ExtJS 3.x, though. Is it possible that you are experiencing something as a result of your environment rather than Selenium?
UPDATE: As Dave Hunt reminded me, sometimes I've had to use mousedown/mouseup actions in lieu of "click" (see the sketch after these points).
I've found many clever ways to navigate using CSS locators (Selenium supports most of CSS 3). In addition, you can use XPath like xpath=id('myid')//div[@class='foo'] (the ID part is crucial).
I've also never experienced this. Perhaps you can give some details about your environment?
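A small sketch of the mousedown/mouseup fallback mentioned in the update above, again with the Selenium 1 Java client; the helper name and locator parameter are made up for illustration:
import com.thoughtworks.selenium.Selenium;

public class MouseEventFallback {
    // Some ExtJS widgets listen for raw mouse events rather than click, so
    // firing mouseOver/mouseDown/mouseUp on the same locator often works
    // where click() silently does nothing.
    static void clickViaMouseEvents(Selenium selenium, String locator) {
        selenium.mouseOver(locator);
        selenium.mouseDown(locator);
        selenium.mouseUp(locator);
    }
}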
Thanks for all the answers guys, I really appreciate it. I spent all day yesterday on this stuff and I wanted to add a couple of observations:
The way ExtJS lays stuff out can really make it hard to locate elements. For example, quite often the 'id' that you specified appears on an element that is 2-3-4 items in the DOM above the one you really want. The actual behaviour seems to vary greatly depending on the type of element. I have half a mind to write all this up for the benefit of future Se-ExtJs testers, as it all seems very trial-and-error and tedious. But ultimately it seems like it can be made to work nicely. And, of course, this is in no way a reflection on Se. Whether it reflects poorly on ExtJs, I'm not really sure, but it is amazing how many DIV tags even fairly simple projects create.
This is probably a 'well OBVIOUSLY' but to anyone else doing this, I'd recommend getting comfy with XPath. It seems a bit obtuse at first but after a few hours, as noted above, you can find almost anything with it, and almost always in ways that are not overly brittle.
Happy Holidays!
Selenium is one of the backends to the TestPlan testing framework and it can address a few of these issues for you.
We don't seem to have any problems here. Our front-end uses many different locator syntaxes to locate any elements on the screen. Though if your page is complicated and Selenium truly can't do it then ours won't help you either.
We base everything on XPath, so after a while you just get used to it. There are all sorts of shortcuts you can use in XPath that may help. In TestPlan scripts, however, you can also use variable expansion in XPaths, which makes them much easier to maintain.
TestPlan caches browser sessions when possible and unless otherwise requested. This helps the speed a little bit, but only so much, since normally you want a fresh session for each test anyway.
TestPlan

How to stress test a javascript-requiring Web App

A similar question was already asked (Performing a Stress Test on Web Application?), but I'd like to test a web application that prevents double-submits and takes some counter-XSRF actions, and therefore REQUIRES JavaScript to be evaluated.
Has anybody done stress tests with web apps that require (and use) JS, and do you have any experience to share?
JMeter wouldn't work, I guess...
Thanks!
Watir?
Watir is a simple open-source library for automating web browsers.
Watir drives browsers the same way people do. It clicks links, fills in forms, presses buttons. Watir also checks results, such as whether expected text appears on the page.
It drives Internet Explorer, but is also functional with Firefox (and Safari to some extent).
The problem with Watir and Selenium RC or any other full browser solution is that they need a full browser :P
Browsers are very expensive to run, often requiring 300MB or more of RAM. Multiply those requirements by even 100 and you need massive hardware. Fortunately, there is a solution: I recently started a company that does exactly what you're looking for.
Check out http://browsermob.com and you can run a limited test (up to 25 users) to get a feel for the app. Feel free to contact us if you have any questions at all!
One solution that may be worth pursuing is to run Selenium on Amazon EC2 to provide the scalability you need. There is a tutorial over at Selenium using a sample that ships with Selenium Grid. Small Windows machines are 12.5 cents an hour, meaning that a 500-machine test is going to cost $62.50 an hour.
PROS:
Selenium runs in a real browser meaning that your Javascript is executing as it would on a client
Low cost - trying to do this on your own hardware would cost significantly more
CONS:
You would have to establish network connectivity from Amazon to your application
The testers I work with use Bad Boy for load testing. I'm fairly certain you can test interactions that use javascript, so you should be able to test stuff like double-submits.
As far as your backend is concerned, it doesn't matter what triggers a request whether it's from JavaScript or a load testing tool as long as the request is valid.
You can create a bunch of fake requests that do lots of different things (hopefully representative of actual usage patterns) and slam your webserver with a load testing tool; a bare-bones sketch of the idea follows the list below.
There's a bunch out there:
JMeter
http_load
Grinder
httperf
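For completeness, a bare-bones sketch of that idea in Java, assuming a plain POST endpoint; the URL, payload and thread counts are made-up placeholders. The tools listed above do essentially this, plus ramp-up, assertions and reporting:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NaiveLoadTest {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 concurrent "users"
        for (int i = 0; i < 1000; i++) {                          // 1000 fake requests
            pool.submit(() -> {
                try {
                    URL url = new URL("http://localhost/submit");  // placeholder URL
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setRequestMethod("POST");
                    conn.setDoOutput(true);
                    byte[] body = "name=demo&token=abc123".getBytes(StandardCharsets.UTF_8);
                    try (OutputStream out = conn.getOutputStream()) {
                        out.write(body);
                    }
                    System.out.println(conn.getResponseCode());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}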
Because JMeter is not a browser, it won't interpret the JavaScript code on the page you are GETting:
JMeter does not process Javascript or applets embedded in HTML pages.
[JMeter Wiki]
So, what can you do? You can add WebDriver to a JMeter test and, by doing so, evaluate the web pages.
Web Driver Sampler automates the execution and collection of
Performance metrics on the Browser (client-side). A large part of
performance testing, up to this point, has been on the server side of
things. However, with the advancement of technology, HTML5, JS and CSS
improvements, more and more logic and behaviour have been pushed down
to the client. This adds to the overall perceived performance of
website/webapp, but this metric is not available in JMeter. Things
that add to the overall browser execution time may include:
Client-side Javascript execution - eg. AJAX, JS templates
CSS transforms - eg. 3D matrix transforms, animations
3rd party plugins - eg. Facebook like, Double click ads, site analytics, etc
All these things add to the overall browser execution time, and this
project aims to measure the time it takes to complete rendering all
this content.
Official guide: https://jmeter-plugins.org/wiki/WebDriverTutorial/
I've tried Badboy, which is OK. The big, fat, heavy tool is SilkTest. It requires a lot of programming to get up and running, but you can get something very solid done!
If you only need to stress-test requests from e.g. IIS log files, I have a custom-built tool. I will publish it on CodePlex very soon.
Selenium RC is another alternative.
Also related, check out my recent article on Ajaxian. I think it does a good job of explaining why real browsers do matter and why executing JavaScript is becoming more important for load testing.
http://ajaxian.com/archives/why-load-testing-ajax-is-hard
There's a new tool in this area called k6. It has a way to access the DOM, and I'm planning to try it in a project.
For the background story, you can visit this and this blog.
Maybe it will help.