On my last project, I created some test cases through Selenium, then automated them so they would run on every build launched from Hudson. It worked fantastically and was consistent for about a month.
Then the tests started failing, most often because of timing issues. After about two weeks of effort put in over the course of the next two months, it was decided to drop the Selenium tests. They should have been passing, but the web application's responses and timing varied so much that tests failed when they should have passed.
Did you have a similar experience? Is Selenium still a good tool to use for Web Application testing?
Selenium is a great tool for web testing, although it's important to make sure your tests are reliable. Timing issues are common, so I would suggest the following:
Make sure you set a sensible timeout value. I find between 1 and 2 minutes works well.
Don't have pauses in your tests - they are the main cause of timing issues. Instead, use the waitFor* commands; waitForCondition is particularly useful (see the sketch after this list).
Identify external calls that can cause timeouts and block that traffic from the machine running the tests. You can do this at the firewall level, or simply redirect the domain to localhost in your hosts file.
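To make that concrete, here is a minimal Java sketch using the Selenium RC client API from this era; the server host, URL and locators are hypothetical, and the 120000 ms bound matches the 2-minute timeout suggested above:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class WaitInsteadOfPause {
        public static void main(String[] args) {
            // Assumes a Selenium RC server on localhost:4444 (hypothetical setup).
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://app.example.com/");
            selenium.start();
            selenium.open("/orders");
            // Instead of a fixed pause, poll until the page reaches the state
            // the test needs, with an explicit upper bound of 120 seconds.
            selenium.waitForCondition(
                    "selenium.isElementPresent(\"id=order-table\")", "120000");
            selenium.click("id=view-order");
            selenium.waitForPageToLoad("120000");
            selenium.stop();
        }
    }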
Update:
You should also consider using Selenium Grid. It won't directly help with your timeouts, but it can provide a quicker feedback loop for your failures. If you're using TestNG to run your tests, you can get it to automatically rerun failures - this gives tests that fail due to timeouts a second chance, as in the sketch below.
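A minimal sketch of that TestNG retry hook; the class and test names are hypothetical, and the retry count is capped at one so real failures aren't masked:

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;
    import org.testng.annotations.Test;

    public class RetryOnce implements IRetryAnalyzer {
        private int attempts = 0;

        // TestNG calls this after each failure; returning true reruns the test.
        @Override
        public boolean retry(ITestResult result) {
            return attempts++ < 1; // one second chance for timeout victims
        }
    }

    class CheckoutTest {
        @Test(retryAnalyzer = RetryOnce.class)
        public void userCanCheckOut() {
            // ... Selenium steps that occasionally time out ...
        }
    }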
At my previous job we investigated using it as a test tool but found it too fussy to bother integrating into our process. Pretty much the same experience as you.
This was two or three years ago, though, around version 0.8 or so; I would expect it to have improved since then.
I've had a similar experience. We created a project that would bootstrap a Selenium proxy and run an automated suite of tests, but unfortunately it clashed with our build server in a huge way. There were too many browser inconsistencies and third-party dependencies for us to reliably add it to our build. It was also too slow for us, and added too much time to our builds.
Most of the errors we would run into would be timeouts.
We ended up keeping the project and use it for integration tests on major releases. The bootstrapping code that we used has proved invaluable in other areas as well.
Probably best run after a nightly build, when there's time for it. It, or WatiN, could be integrated with your build scripts.
Very much depends on your team, but if you have a small testing team this can be priceless for picking up some very obvious runtime issues.
I'd keep the scope modest and really use them for some sanity testing that at least each page can load.
I did have a similar experience with Selenium. We had a legacy system around which we built a sort of testing framework so that we could test the changes we were making. This worked great at the start, but eventually some of the earlier tests began to fail (or take too long to run), so we started to turn off more and more of the tests.
To fix some of the issues, we stopped Selenium from opening and closing a browser for each test; that is, the tests were broken up into blocks, and for each block of tests the browser would only be opened once. This reduced the time taken to run the tests from several hours to 30 minutes. (A rough sketch of the pattern follows.)
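A rough JUnit 4 sketch of that one-browser-per-block pattern, written against the WebDriver API for brevity (the original setup predates it); class names and URLs are made up:

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class OrderPagesBlockTest {
        private static WebDriver driver;

        // One browser for the whole block of tests, not one per test.
        @BeforeClass
        public static void openBrowserOnce() {
            driver = new FirefoxDriver();
        }

        @Test
        public void listsOrders() {
            driver.get("http://test.example.com/orders");
            // ... assertions ...
        }

        @Test
        public void showsOrderDetail() {
            driver.get("http://test.example.com/orders/1");
            // ... assertions ...
        }

        @AfterClass
        public static void closeBrowserOnce() {
            driver.quit();
        }
    }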
Despite the issues I think Selenium is a great tool for testing web-based applications. Many of the problems we experienced centered on the fact that the system we were testing was a legacy system. If you like test-driven development then Selenium fits in very well with that development practice.
EDIT:
Another good thing about Selenium is the ability to track which developer introduced an error, as well as where the error is (the source file). This makes life so much easier when it comes to fixing it.
We initially tried to use Selenium on our build machine, but the tests were very brittle and we found we spent a lot of time trying to keep old tests running when changes occurred to unrelated functionality accessed through the same set of pages. We were automating the tests through NUnit.
I would use selenium more as a customer acceptance and integration testing tool. I'd agree with using it for a nightly build on functionality that is stable.
At first glance, Selenium looks great. Unfortunately, as sometimes happens with open source projects, development rushes to implement new features instead of making the existing ones more stable.
I am a beginner in Cypress automation testing, and I have one point of confusion. When we add our automation scripts to a GitHub workflow that triggers when we push a commit, which environment should we write the tests for? The local environment at localhost, or the staging site of the project?
Could anyone please clear up this confusion about automation testing, how the tests should be written, and how we can add Cypress automation tests to GitHub CI/CD?
Thanks.
Ok, let me give this a shot. Of course, I do not know the exact setup of the project that you are working on, but let me give you some pointers, so you can decide for yourself what works best in your setting.
My answer is based on the assumption that you are building an automated regression test set in Cypress with the primary goal of preventing production incidents. In addition, it aims to save you tons of 'manual testing' for each release to production, because you want to make sure everything is still working properly.
First of all, you want your automated tests to run on a stable environment(*). If the environment is not stable, many tests will fail for many reasons, and those are usually not the right ones. You'll spend more time figuring out why your tests are failing, than actually catching issues with it. This makes a local, dev environment not really suited for the task, so I would not pick a localhost environment for this. Especially not when you have multiple developers working in your team, each with their own localhost.
A test environment is already a way more stable environment. You want your tests to only fail when you have an actual issue on your hands. As a rule of thumb, the 'higher' you go, the more stable.
Second, you want to catch the issues early in the game, so I would definitely make sure that the tests can run on the environment where all code comes together for the first time (in other words, the environment that has the master branch or whatever your team calls that branch). This is usually the test environment. In my projects, I initially build the set for this environment, and ideally, I run it daily. Your tests won't always pass here (bonus if they do), and that is OK... as long as you understand why they don't ;-)
Some things to keep in mind are integrations or connecting systems, and whether you need those for your tests to pass. In general, you don't want to be (too) dependent on (third-party) integrations for your test cases to go green. Sometimes, when those integrations are vital to the process that you need to test, it is inevitable. However, integrations are often not (fully) set up on test/lower environments. There are workarounds for this, like stubs, but let's not get into that now - that's a whole different topic.
Third, you want your tests to run on a production-like environment on the code exactly in the state that it goes to production. This is usually the acceptance, staging or pre-production environment, i.e. the last one before production. These environments often have all integrations in place and are often very similar to production. If you find an issue here, it's almost guaranteed that it is also an issue in production. This is IMO where you want to integrate your tests into your CI/CD pipeline. Ideally, your full automated set is in the pipeline, but in practice, you should only add the tests that are stable and robust, otherwise your production deployments will be blocked very often.
So, long story short, my advice: write your tests for your test environment, where you do your 'manual testing' (I hate that term BTW, all testing is manual... as if there is such a thing as 'manual coding') and run it early and often. Then put the stable ones in the pipeline of the production deployment. If you only have local, staging and production, it should be staging.
If your developers want to run the set on their local environments, they can still do that - you can share the tests with them or even better, they can take it from the repository and run it locally - but I don't think you should make it part of the deployment process always and everywhere. It will slow down your process massively.
You can work with environment variables to easily switch between the environments where you want to run your tests (a small sketch below): https://docs.cypress.io/guides/guides/environment-variables#Setting
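A minimal sketch of that switching idea; the URLs and the TARGET_ENV variable name are made up, and the config file shape matches newer Cypress versions (older ones use cypress.json):

    // cypress.config.js - choose the baseUrl from an environment name.
    const { defineConfig } = require('cypress')

    const urls = {
      test: 'https://test.example.com',       // hypothetical
      staging: 'https://staging.example.com', // hypothetical
    }

    module.exports = defineConfig({
      e2e: {
        // e.g. TARGET_ENV=staging npx cypress run
        baseUrl: urls[process.env.TARGET_ENV || 'test'],
      },
    })

A cy.visit('/login') in a spec then resolves against whichever baseUrl was selected, so the same tests run unchanged on every environment.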
I hope this helps. I'm looking forward to reading what others have to say about this, too.
Happy Testing!
Jackie
PS. I see that you also asked about how to add Cypress to your CI/CD pipeline. I think that should be a completely separate topic. It is also way too high level to answer. Maybe it's best to start here: https://docs.cypress.io/guides/continuous-integration/introduction#What-you-ll-learn
(*) I'm talking about a stable environment here, but this also includes stable code and even a stable application. If your application and code are at a very early stage, really ask yourself whether you already want to start automating your functional UI tests in Cypress - chances are that many things will change (many times) and you'll spend hours updating your tests. Maybe it is better to only think about the scenarios that you want to automate at that stage of the project.
I am new to the testing arena. I am working with a very heavy ExtJS application.
And I am looking for the best testing tool.
I came across a bunch of tools, but can't seem to make a decision.
1) Siesta
2) Jasmine
3) RiaTest
I want to be able to deploy these tests easily on a CI server.
Siesta and Jasmine can both be used with PhantomJS to automate the tests, but which one is better and easier to use?
As long as I can generate various clicks correctly and capture output, I'm cool.
Any help is appreciated.
Our company is moving from a Java-based client to an ExtJS web and mobile application. We use QTP/UFT for the Java automation, which is slow, buggy, expensive, and cannot get past the DOM easily, so I recently started investigating Siesta. It seems like a viable option in my book, but I admit I haven't checked out the other applications.
The initial setup with Siesta took longer than expected, but its event recorder makes it a gratifying transition. The recorder still requires debugging, though. I'm in QA and know how to script using Python, Bash, etc., but it's definitely a learning curve to transition from VBScript to ExtJS/Siesta JavaScript. They have an open source version and a free 45-day trial to check out.
I've read about HTML Robot and SmartBear. Here's a post on the Sencha forums that talks about different automation software. Sencha also plans to release some kind of automation involving SenchaCmd during SenchaCon 2015 this April 7 to 9.
You should pick a tool which covers your needs and improves your software quality.
Jasmine is good for unit tests without much GUI interaction; you should use it to test your domain logic (e.g. stores, models, ...). Jasmine can run in every environment - a simple server with a Node.js runtime is enough. (A tiny spec sketch follows.)
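A tiny Jasmine sketch of what such a domain-logic spec might look like; the store class and its data are entirely hypothetical:

    // Hypothetical spec: verify filtering logic on an Ext store.
    describe('OrderStore', function () {
      var store;

      beforeEach(function () {
        store = Ext.create('MyApp.store.Orders'); // assumed app class
        store.loadData([
          { id: 1, status: 'open' },
          { id: 2, status: 'closed' }
        ]);
      });

      it('filters down to open orders', function () {
        store.filter('status', 'open');
        expect(store.getCount()).toBe(1);
      });
    });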
For regression tests the choice is yours. Which tool are you comfortable with? Choosing a tool is one part; using it is another. RiaTest seems to be a Windows application - are you able to run it on your CI server?
Evaluate them with your dev team and then make a choice for the long run.
I am a novice in testing.
I am working on Linux.
I was reading about testing in headless mode and came across two things. One was Xvfb (the X virtual framebuffer), which performs graphical operations in memory, so no output is displayed. I found the implementation details at this link: http://www.seleniumtests.com/2012/04/headless-tests-with-firefox-webdriver.html
The other one that I came across was HtmlUnitDriver. This also does not open any browser while running the test. I wrote a basic sample code using HtmlUnitDriver and the assertions seem to work fine.
I understand that HtmlUnitDriver doesn't work too well with JavaScript. But apart from this, are there any major differences to choose one over the other?
I am going to be testing a web application that does have some amount of JavaScript in it.
I am a novice in this field. So, any answers, suggestions, etc. will be appreciated.
Thank you in advance
From my experience with both approaches:
HtmlUnit will in most practical cases be faster than a real browser with Xvfb, simply because it doesn't spend time rendering the pages. (A data point: 17 secs with HtmlUnitDriver vs. 62 secs with FirefoxDriver for a specific test suite I'm using now.)
It is easier to run several tests concurrently -- and it consumes a lot less resources -- using HtmlUnit. This can be very important if you have a large number of tests and you need them to finish fast (e.g. you want to follow the 10-minute-build rule).
As you said, HtmlUnit has its own quirks with JavaScript and the DOM. It is no better or worse than any other browser (Firefox, Safari, IE, Chrome, ... they all have their own quirks), but it is one whose bugs it is very questionable to spend time fixing. I also find such bugs very difficult to diagnose, but that may be only my ignorance.
One advantage of real browsers + Xvfb is that you can always run the exact same tests without Xvfb and see what's going on, possibly even using a console to run some JavaScript to diagnose issues. I sometimes feel quite blind when working with HtmlUnit, and because of the above-mentioned quirks you can't always use the exact same test code in both environments. (A small driver-switching sketch follows.)
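To make that trade-off concrete, a small Java sketch of running the same test against either driver; the URL is made up, and the boolean flag is just one way to switch:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    public class DriverSwitch {
        // The same test code runs against either driver, so you can rerun
        // under a real browser whatever failed mysteriously under HtmlUnit.
        static WebDriver createDriver(boolean headless) {
            return headless
                    ? new HtmlUnitDriver(true)  // true = enable JavaScript
                    : new FirefoxDriver();      // under Xvfb, or visibly to debug
        }

        public static void main(String[] args) {
            // Reads the JVM system property -Dheadless=true (false by default).
            WebDriver driver = createDriver(Boolean.getBoolean("headless"));
            driver.get("http://test.example.com/"); // hypothetical URL
            System.out.println(driver.getTitle());
            driver.quit();
        }
    }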
So, in summary: unless total test duration is important and you're ready to spend some time fighting HtmlUnit, it's just easier to go with a regular browser + Xvfb.
I also like using Xvnc, which has the added benefit of allowing you to connect to the screen of a running test and see what's going on (not sure whether you can do that with Xvfb).
I have been writing tests for my Ruby code for a while, but as a frontend developer I am obviously interested in bringing this into the code I write for the frontend. There are quite a few different options which I have been playing around with:
CasperJS
Capybara & Rspec
Jasmine
Cucumber or just Rspec
What are people using for testing? And further than that what do people test? Just JavaScript? Links? Forms? Hardcoded content?
Any thoughts would be greatly appreciated.
I had the same questions a few months ago and, after talking to many developers and doing a lot of research, this is what I found out. You should unit test your JavaScript, write a small set of UI integration tests and avoid record and playback testing tools. Let me explain that in more detail.
First, consider the test pyramid. This is an interesting analogy created by Mike Cohn that will help you decide which kind of testing you should be doing. At the bottom of the pyramid are the unit tests, which are solid and provide fast feedback. These should be the foundation of your test strategy and thus occupy the largest part of the pyramid. At the top, you have the UI tests. Those are the tests that interact with your UI directly, as Selenium does, for example. Although these tests might help you find bugs, they are more expensive and provide very slow feedback. Also, depending on the tool you use, they become very brittle, and you will end up spending more time maintaining these tests than writing actual production code. The service layer, in the middle, includes integration tests that do not require a UI. In Rails, for instance, you would test your REST interface directly instead of interacting with the DOM elements.
Now, back to your question. I found out that I could greatly reduce the number of bugs in my project, which is a web application written in Spring Roo (Java) with tons of JavaScript, simply by writing enough unit tests for the JS. In my application, there is a lot of logic written in JS, and that is the kind of thing that I am testing here. I am not concerned about how the page will actually look or whether the animations play as they should. I test whether the modules I write in JS execute the expected logic, whether element classes are correctly assigned, and whether error conditions are well handled. For these tests, I've been using Jasmine. This is a great tool. It is very easy to learn and has nice mocking capabilities, which are called spies. Jasmine-jQuery adds more great functionality if you are using jQuery. In particular, it allows you to specify fixtures, which are snippets of HTML code, so you don't have to manually mock the DOM (a small fixture sketch follows). I have integrated this tool with Maven, and these tests are part of my CI strategy.
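A small sketch of the fixture idea, assuming jasmine-jquery is loaded; the fixture file and the showAlert function under test are hypothetical:

    // spec/javascripts/fixtures/alert.html (hypothetical fixture):
    // <div id="alert" class="hidden">Saved!</div>

    describe('showAlert', function () {
      beforeEach(function () {
        // jasmine-jquery injects the HTML snippet into the DOM for this spec.
        loadFixtures('alert.html');
      });

      it('removes the hidden class from the alert box', function () {
        showAlert('#alert'); // hypothetical function under test
        expect($('#alert')).not.toHaveClass('hidden');
      });
    });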
You have to be careful with UI tests, especially if you rely on record/playback tools like Selenium. Since the UI changes often, these tests keep breaking, and you will spend a lot of time finding out whether the tests really failed or are just outdated. Also, they don't add as much value as unit tests. Since they need an integrated environment to run, you will most likely run them only after you have finished developing, when the cost of fixing things is higher.
For smoke/regression tests, however, UI tests are very useful. If you need to automate these, you should watch out for some dangers.
Write your tests, don't record them. Recorded tests usually rely on automatically generated XPaths that break with every little change you make to your code. I believe Cucumber is a good framework for writing these tests, and you can use it along with WebDriver to automate the browser interaction.
Code with tests in mind. In UI tests, you will have to make elements easier to find so you don't have to rely on complex XPaths; you will frequently add class and id attributes where you usually wouldn't (a selector sketch follows this list).
Don't write tests for every small corner case. These tests are expensive to write and take too long to run. You should focus on the cases that exercise most of your functionality. If you write too many tests at this level, you will probably retest functionality already covered by your unit tests (supposing you have written them).
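To illustrate the selector point, a hedged WebDriver (Java) fragment contrasting a recorded XPath with a deliberate id; both locators are made up:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    class SelectorStyle {
        void clickCheckout(WebDriver driver) {
            // Brittle: a recorder-generated XPath that any layout change breaks.
            // driver.findElement(By.xpath(
            //         "/html/body/div[2]/form/table/tbody/tr[3]/td[2]/input")).click();

            // Robust: an id added to the markup specifically for testability.
            driver.findElement(By.id("checkout-button")).click();
        }
    }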
In my current project I am using Spock and Geb to write the UI tests. I find these tools amazing. They are written in Groovy, which suits my Java project better.
There are lots of options and tools for that, but the choice depends on whether you have a web UI or a desktop app.
Judging from the tools you've mentioned, it's a web UI, so I would suggest Selenium (aka WebDriver): http://seleniumhq.org/docs/
It supports a variety of languages (Ruby is on the list). It can be run against a variety of browsers, and it's quite easy to use, with lots of tutorials and tips available.
Oh, and it's free, of course :)
Since this post gets a lot of likes, I thought I would post my own answer to my question, as I do write lots of tests now, and how you test the front end has moved on a lot.
In terms of FE testing, I spent a lot of time using Karma with Jasmine, although Karma will work nicely with other test suites like Mocha and QUnit. These are great, and Karma allows you to interface directly with browsers to run your tests. The downside is that as your test suite gets large it can become quite slow.
So recently I have moved to Jest, which is much faster, and if you're writing a React app, using Enzyme with snapshot testing gives you really good coverage (a tiny example follows). Speaking of coverage, Jest has Istanbul coverage built in and already set up, and its mocking is really simple to use. The downside is that it doesn't test in a browser; it uses something called jsdom, which is fast but has a few quirks. Personally I don't find this a big deal, particularly when I compile my code through webpack/Babel, which means cross-browser bugs are fairly few and far between, so it generally isn't an issue if you manually test anyway (and IMO you should).
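A tiny sketch of the Jest + Enzyme snapshot combination; the Greeting component is hypothetical, and Enzyme additionally needs an adapter matching your React version plus enzyme-to-json for readable snapshots:

    // Greeting.test.js - minimal snapshot spec for an assumed component.
    import React from 'react';
    import { shallow } from 'enzyme';
    import toJson from 'enzyme-to-json';
    import Greeting from './Greeting'; // hypothetical component under test

    test('renders a greeting for the given name', () => {
      const wrapper = shallow(<Greeting name="Ada" />);
      // The first run writes a snapshot file; later runs diff against it.
      expect(toJson(wrapper)).toMatchSnapshot();
    });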
In terms of working within the Rails stack, this is much easier now that the webpacker gem is available, and using npm and Node is generally much more accepted. I would recommend using nvm to manage your Node versions.
While this isn't strictly testing, I would also recommend using linting, as it picks up a lot of issues in your code. For JS I use ESLint with Prettier, and for SCSS/CSS I use stylelint.
In terms of what to test, I think the test pyramid Carlos talks about is still relevant; after all, the theory doesn't change, just the tools. I would also add: be practical about tests. I would always test, but to what level and coverage will depend on the project. It is important to manage your time and avoid spending hours or days testing a short-lifecycle project; on larger/longer-term projects, the benefits of a larger test suite are obviously greater.
Anyway, I hope that helps people who look at this question.
We develop several products and already have extensive unit tests and fully automated functional tests for them. The problem is that those tests don't run frequently - just manually by a developer, or just before shipping a new version.
I'm looking for "test execution manager" software which will allow:
defining test suites as a collection of my existing tests;
executing the test suites on multiple machines in our test lab;
collecting results and presenting them nicely;
preserving test execution history and results.
Most "testing solutions" I've found concentrate on "writing automated tests" (which we already have working) or closely integrate with other aspects of software development, like defining requirements and filing bugs (which we have and don't want to change).
Can anyone recommend a simple and flexible software to do the above without forcing specific development processes?
I thought about using (or abusing) Hudson CI for this. Hudson can already run tests, collect results and present them, both periodically and on code commit, but it was not designed for test suite definition. Any input from experienced Hudson users on this idea is appreciated.
First of all, our developers are not allowed to check in code without running the unit tests. We also run a CI server (Hudson), which builds after a commit and runs the unit tests. We are working on getting the functional tests implemented for the nightly builds.
You said your developers test the software? That is a bad thing. At least let a developer who is not familiar with the code test your app, otherwise you are likely to overlook some bugs whose existence was ruled out by the developer writing the code. Additionally, who writes the functional tests? The developers again? You should get your BAs to write them. Always remember: four eyes see more than two.
With all that said, I assume that the unit tests will always be run before code is checked into your SCM. The following is targeted primarily at the functional tests.
Simple solution:
You can always create scripts to bundle your tests (a batch or shell script that runs the individual tests).
Executing test suites is actually one of the purposes of Hudson.
Collecting and presenting results is what Hudson is for.
See above; this can be done with Hudson, without abusing it.
A good solution:
Did you look at tools like IBM Rational Quality Manager? Depending on the test tools you use, you might want a different test management tool; Oracle also offers one. Don't be mistaken: these tools can be fairly expensive and usually offer way more than you want to use. With a little help from Google you should find something that suits your needs. My keywords were "centralized test management".
In case you use FitNesse for your functional tests: you can define suites in FitNesse, and I think a suite can be part of a larger suite. FitNesse definitely keeps historic test data. The tests can be run from the command line, which enables you to run them from Ant or Maven (a sketch of a typical invocation follows).
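A hedged sketch of what such a command-line run might look like; the suite name and paths are made up, and you should check your FitNesse version's documentation for the exact flags:

    # Run a FitNesse suite headlessly and print a text summary.
    java -jar fitnesse-standalone.jar -d /path/to/wiki \
         -c "MyProduct.RegressionSuite?suite&format=text"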
If you use a unit test framework for your functional testing, you can also run the tests as part of a nightly build and schedule them using your CI server (Hudson, CruiseControl, ...).