Is there a way to stop Selenium from going to other pages? - selenium

I have a Selenium-based scraper. Occasionally one of its locators fails in a way that is seemingly undetectable to me, and it "finds" the wrong button. Usually this isn't a big deal, since I don't expect it to succeed 100% of the time, but the problem is that this particular button leads to a different session, thereby ending that entire scraping session.
I was wondering if there is a way to configure Selenium to temporarily not load any new pages and stay on the current page only, so that misclicks like this have no effect.
(Of course the ideal and "real" solution is to find a way to fix or detect the locator's mistake, but I would like to have this at least as a temporary measure, if it is possible.)
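As far as I know there is no Selenium setting that blocks page loads outright, but a cheap guard is to detect the unwanted navigation right after the click and back out of it. A minimal Java sketch, assuming driver is a live WebDriver and button is whatever WebElement the (possibly wrong) locator found:

    String urlBefore = driver.getCurrentUrl();
    button.click();
    if (!driver.getCurrentUrl().equals(urlBefore)) {
        // The click navigated somewhere unexpected; try to back out.
        driver.navigate().back();   // best effort -- may not restore session state
    }

If page loads are slow you may need a short wait before checking the URL, and back() cannot recover a server-side session that the click has already invalidated.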

Related

How to make a snapshot (not screenshot) with automation tooling like Selenium

I would like to automate some tasks with Selenium (though it could just as well be Puppeteer or Playwright). The problem is that I need to perform a couple of tasks on a specific page (let's say example.com/page/complex), but getting to that page takes quite some time. So, ideally, when I have performed the first task (which takes me to another page), I would like to go back to that specific page, restore its state, and start the next task. I cannot simply point Selenium back to example.com/page/complex, because the page will not have the correct state (for example, a popup will not be active).
So I imagine that if you could make a snapshot (a memory dump, for example), it would be possible to restore the page in the correct state. Is something like this possible with the automation tooling available?
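None of the mainstream tools expose a true page snapshot as far as I know (that would amount to a process memory dump), but you can persist the parts of the state that the page reads back, such as cookies and localStorage. A hedged Selenium/Java sketch follows; note that it will not restore pure DOM state like an open popup, which still has to be re-created by replaying the clicks:

    // Assumes driver is a live WebDriver; needs java.util.Set and
    // org.openqa.selenium.{Cookie, JavascriptExecutor}.
    Set<Cookie> cookies = driver.manage().getCookies();
    String storage = (String) ((JavascriptExecutor) driver)
            .executeScript("return JSON.stringify(window.localStorage);");

    // ...later, after navigating back to example.com/page/complex:
    for (Cookie c : cookies) {
        driver.manage().addCookie(c);
    }
    ((JavascriptExecutor) driver).executeScript(
            "var s = JSON.parse(arguments[0]);" +
            "for (var k in s) { window.localStorage.setItem(k, s[k]); }",
            storage);
    driver.navigate().refresh();   // let the page re-read the restored state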

Possible issue with running selenium tests on one machine concurrently

I have multiple similar sites (same layout, just different data), and each of them has a drop-down menu that appears on mouse-over (and disappears on mouse-out).
I am using Selenium 2 and WebDriver, and I have one Selenium test case that basically does the mouse-over and makes sure each of the links in the drop-down menu works.
I am using Selenium Grid, so I have a hub and a few test machines.
Because I have many sites (a few hundred) to test, I am thinking of making each machine run the test case against multiple sites in parallel.
My concern is that, since there can be only one active browser at a time, will it cause issues if WebDriver tries to perform Action.moveToElement() on multiple browsers at roughly the same time? Will only the active browser perform Action.moveToElement() properly while the other browsers fail? If there will be an issue, is there any workaround?
I have tried it using JUnitCore.runClasses(ParallelComputer.classes(), SomeClass1.class, SomeClass2.class, SomeClass3.class); it decreased the percentage of passing tests from 100% to about 67% when running three tests on one machine. Not good =/.
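For reference, the full shape of that call is below. One thing worth checking in a setup like this is that each test class creates and quits its own WebDriver; sharing one instance across threads is a classic source of exactly this kind of flakiness. The class names are the placeholders from the question:

    import org.junit.experimental.ParallelComputer;
    import org.junit.runner.JUnitCore;

    // Runs the listed test classes on parallel threads (methods within each
    // class still run sequentially). Each class should own its WebDriver.
    JUnitCore.runClasses(
            ParallelComputer.classes(),
            SomeClass1.class, SomeClass2.class, SomeClass3.class);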
The good part: Firefox actually can do it in parallel. If the FF instances are staggered so they don't do the same thing at the same time, it works better. Some of the failures happened during Firefox startup, so if you can minimize closing and opening windows, do so. But still, sometimes it just fails for no apparent reason.
If the time savings really matter to you, then go for it: log all failed tests and run them again after the first round, this time one at a time.
You could also solve this, depending on your ultimate goal in testing, by not using the Actions class with its mouse-movement click, but instead using WebDriver's findElement(...).click() or the JavaScript executor. That would probably be less contentious when running multiple windows at the same time. If the Actions class uses native calls when defining a mouse movement, such as "move to point", then with one browser on top of another I would guess it's possible that the target point could be masked by the other window. I am really not sure about this, just giving you another idea to try.
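A quick sketch of both alternatives (Java, with a placeholder locator):

    // Plain WebDriver click -- no synthesized mouse movement involved.
    WebElement link = driver.findElement(By.cssSelector("#menu a"));  // placeholder
    link.click();

    // Or click from JavaScript, which ignores native focus and window overlap.
    ((JavascriptExecutor) driver).executeScript("arguments[0].click();", link);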

Click does not always work in Selenium

I use Selenium with PHPUnit, and sometimes tests fail with an error condition that seems to be caused by the browser ignoring clickAndWait calls. The test execution passes the clickAndWait command without much delay (even if I set a large timeout), and the next assertion or element access fails; if I take a screenshot, it shows the previous page, as if the click command did not happen at all. This happens both with links and with submit buttons (both normal, no javascript: or similar trickery), non-deterministically. It seems to happen more often on certain controls than on others (many are not affected at all), and the frequency of failing tests is more or less constant in the short term but changes wildly in the long term (sometimes it is 1 in 100, sometimes 1 in 2). I am guessing it is influenced by some sort of server load, but I could not see any obvious correlation.
I work more with Selenium 2, but I have noticed this as well. In my case I suspect other system clicks were interfering with Selenium (purely speculation), since I ran the tests on my own machine.
The way I solved it was to send a press of the Return key instead. In most cases this is equivalent to a click, and in my experience it has made tests more stable.
A quick caveat is that this technique stopped working for me after version 2.3.0. I submitted a bug report about it if you want to take a look.
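In Selenium 2 / WebDriver terms the substitution looks roughly like this (Java shown; the locator is a placeholder):

    // Focus the control and press Return instead of clicking it.
    WebElement submit = driver.findElement(By.id("submit"));   // placeholder
    submit.sendKeys(Keys.RETURN);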

How to compare test website and live website

We have our production server running our website, and a test server that has exactly the same data but with code changes for some new functionality. The web app has over 500 pages.
Is there any program that can:
Log in to the test site,
Crawl through each page and save it as HTML, and
Compare it with the same page saved from the live site?
This way we can make sure that the new features we add to the test site will not break the live site when the code updates are applied to production.
I am currently trying to use the WinHTTrack website copier and then comparing the test and live folders with a code-comparison tool like Beyond Compare. This works OK, but a lot of files show up as changed simply because of the domain-name differences.
Looking forward to ideas / solutions for this problem.
Regards
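One way to cut down that domain-name noise before diffing, as a sketch: rewrite the test host to the live host in every mirrored file, so that only genuine content changes remain. The folder and host names below are assumptions (Java 11+):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class NormalizeDomains {
        public static void main(String[] args) throws IOException {
            Path mirror = Paths.get("test-mirror");   // the HTTrack output folder
            try (var files = Files.walk(mirror)) {
                files.filter(Files::isRegularFile).forEach(p -> {
                    try {
                        String html = Files.readString(p, StandardCharsets.UTF_8);
                        // Make the test copy look like the live copy before comparing.
                        Files.writeString(p,
                                html.replace("test.example.com", "www.example.com"));
                    } catch (IOException e) {
                        // Binary files (images etc.) won't decode as UTF-8; skip them.
                        System.err.println("Skipping " + p);
                    }
                });
            }
        }
    }

After a pass like this, a tool like Beyond Compare should only flag real differences.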
Have you looked at using Watir for this? It's not exactly what you are looking for, but it might allow you more granularity in your tests and ensure the site is functionally identical, rather than getting caught up on changing GUIDs, timestamps and all the other things that tend to change from day to day across any significantly sized website as part of its standard functionality.
Apparently you can't make consistent, reproducible builds in your project, can you? I would recommend moving towards that in the long run; it will save you a lot of headaches. That way you would know exactly what was deployed to which server and when, so there would be no more need to bend over backwards to recover the deployed sources like this...
I know this is not a direct solution to your problem... but maybe it is worth weighing whether you would save more in the long run by investing the effort in your build process now, instead of implementing this workaround (and then improving your build process anyway, because one day you will almost surely need to).
wget has a --convert-links option, and there are also options to preserve cookies that might let you crawl while logged in: http://drupal.org/node/118759#comment-664498
Use an offline downloader to fetch all the files to your computer from both sources, then compare the folder contents with a free tool like Total Commander.
EDIT
Or load both of your sources into a CVS repository and compare them there.

browser plugin to test a site's look when migrating

I'm thinking I need a browser plugin that does the following, and if it doesn't exist, it should. I may as well say FF for now, but it could be any browser.
The problem: when moving a website from one server to another, you need migration testing. It is a pain to click on every link by hand and compare it to the old host. You really need two machines, or you have to constantly thrash your hosts file.
The plugin:
It would allow you to specify an alternate hosts entry for a website; two entries would make it clear, one for live, one for test.
The plugin would crawl every link on the site, render each page in the browser, and save an image of the entire page.
It would then switch hosts and repeat, saving the images in a second folder. Since the rendering engines match, the images should match. We need to switch hosts (as in /etc/hosts) so that all absolute links stay the same for the site.
This last part could be in the plugin or external: now that we have two folders of identically named images, we run an image-diff program on the whole batch. A quick test would be a binary diff or hash, or we could get more sophisticated and determine how different each image is.
This would save so much time. So can it be done with existing tools, or do I need to go write it?
Have a look at Selenium; it allows you to script interactions with the browser and verify content.
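As a rough illustration of the screenshot half in Selenium (Java): the host names are placeholders, Apache Commons IO is assumed for the file copy, and it visits the two hosts directly rather than doing the hosts-file switching described above, which is fine as long as absolute links aren't an issue:

    import java.io.File;
    import java.io.IOException;
    import org.apache.commons.io.FileUtils;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class ScreenshotBothHosts {
        public static void main(String[] args) throws IOException {
            WebDriver driver = new FirefoxDriver();
            // One entry for live, one for test (placeholders).
            for (String host : new String[] {"live.example.com", "test.example.com"}) {
                driver.get("http://" + host + "/");
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                FileUtils.copyFile(shot, new File(host + ".png"));
            }
            driver.quit();
        }
    }

Hashing the two resulting files gives a quick first-pass comparison; anything that differs gets inspected by hand.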
That is overengineered. What kind of website is it? How big? Which framework (PHP, JSP, Rails, etc.)? Why not copy the website onto the new server and grep the code for specific ties to the old server?
I'd concentrate on why you think the site would differ between the two servers, and focus on testing those specific cases rather than the whole site. When a site is moved to a new machine, the issues are generally obvious from looking at a couple of pages.
Presumably they are both looking at the same data source, assuming there is a data source; otherwise a folder diff on the two installations would suffice. That being the case, it should be a simple task to identify which areas of the site are likely to be affected by a server migration.
Also, I personally wouldn't trust a machine matching two images to sign off a system as ready to go live. There just isn't a substitute for real human testing. Yes, it's time-consuming, but how important is your site?
Try http://www.browsercam.com/ - the free trial should allow you to specify the main page and follow links to take screenshots of the sub-pages automatically as well.